Hi All,
I'm not sure if this is the correct place since the forums were reorganised; mods, feel free to move it if not.
I am using AVFoundation to capture video and audio from an external device and write them to a ProRes-encoded .mov file.
I started with an application that only writes video buffers to a file, and this works fine, producing a ProRes-encoded .mov as expected.
However, when I add the audio AVAssetWriterInput to the AVAssetWriter, the file that gets created appears to be invalid.
I have checked the return values for each step of the operation and everything appears to be OK; it's not until I call finishWriting on the assetWriter at the very end that I get an error. Sadly, checking this error reveals only -11800 (AVErrorUnknown), which isn't much help.
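For what it's worth, this is roughly how I am inspecting the failure afterwards; the NSUnderlyingErrorKey lookup is just my own attempt to dig one level deeper, and so far it hasn't told me much:

    if (assetWriter.status == AVAssetWriterStatusFailed)
    {
        NSError* err = assetWriter.error;
        NSLog(@"finishWriting failed: %@ (code %ld)", err.localizedDescription, (long)err.code);
        // Occasionally the raw error that caused the failure is tucked into the underlying error.
        NSError* underlying = [err.userInfo objectForKey:NSUnderlyingErrorKey];
        if (underlying)
            NSLog(@"underlying error: %@", underlying);
    }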
Now, a few things to note:
- I am trying to create a file with multiple tracks of uncompressed audio, e.g. 48 kHz, 16-bit.
However, for the sake of simplicity my sample currently uses only 1 track, which means only 1 AVAssetWriterInput for the audio.
I am 95% sure I have specified the settings dictionary correctly, but if you're confident with that topic please have a quick look to confirm.
- I am a bit unsure of the correct CMTime values to use when I create my audio CMSampleBuffers. With video I am adding 1 frame at a time, at 25 fps,
so my CMTime values are created as: CMTimeMake(frameNo, 25)
but for the audio buffers I am using CMTimeMake(frameNo * 1920, 48000), because each CMSampleBuffer contains 1920 samples at an overall rate of 48 kHz
(1920 because there are 1920 unique audio samples for a single frame of video, i.e. PAL / 1080i50).
Do the CMTime values need to match across the 2 different AVAssetWriterInput instances (i.e. video and audio)? See the little timing check just after this list.
- The resulting .mov file appears to be the correct size, i.e. it is slightly bigger than the same file with video only.
- If I comment out ONLY the audio appendSampleBuffer: call, the resulting file is readable and fine - though it shows no audio tracks, despite me adding, starting and finishing the audio input (but obviously not writing any actual samples).
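For reference, here is the little sanity check I wrote to convince myself that the video and audio timestamps land on the same instants even though they use different timescales (this is my own scratch snippet, not part of the capture code):

    // Frame 1 of video at 25 fps vs. the first 1920 audio samples at 48 kHz.
    CMTime videoTime = CMTimeMake(1, 25);           // 1/25       = 0.04 s
    CMTime audioTime = CMTimeMake(1 * 1920, 48000); // 1920/48000 = 0.04 s
    // CMTimeCompare converts both to a common timescale before comparing.
    if (CMTimeCompare(videoTime, audioTime) == 0)
        NSLog(@"video and audio PTS describe the same instant");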
OK, so here is the code. Just ignore the stuff about decoding HANC buffers and getting the video; that's all specific to getting the data from the device in question.
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#include "BlueVelvetC.h" // the capture-card SDK header (your header name may differ)

int main(int argc, const char * argv[])
{
    @autoreleasepool
    {
        NSLog(@"Hello, World! - Welcome to the ProResCapture With Audio sample app.");
        OSStatus status;
        AudioStreamBasicDescription audioFormat = {0};
        CMAudioFormatDescriptionRef audioFormatDesc = NULL; // start as NULL so the CFRelease at the end is safe if we bail early
        // OK, so let's set up the BVC stuff first, and then we can do the actual capture and compress work.
        BLUEVELVETC_HANDLE pBVC = bfcFactory();
        if (pBVC)
        {
            BLUE_UINT32 inputVidMode = VID_FMT_INVALID;
            BLUE_UINT32 memFmt = MEM_FMT_ARGB;
            BLUE_UINT32 height = 0;
            BLUE_UINT32 width = 0;
            BLUE_UINT32 fpsRate = 0;
            BLUE_UINT32 bIs1001 = false;
            BLUE_UINT32 bIsProgressive = false;
            unsigned long ulUpdateType = UPD_FMT_FRAME;
            unsigned long ulFieldCount = 0;
            unsigned int numAudioChannels = 1; // 4;
            int numFramesToCapture = 200;
            bfcAttach(pBVC, 1); // just use the first card for this demo / sample
            bfcSetCardProperty32(pBVC, DEFAULT_VIDEO_INPUT_CHANNEL, BLUE_VIDEO_INPUT_CHANNEL_A); // use input channel A
            // Check for a valid input video signal.
            bfcQueryCardProperty32(pBVC, VIDEO_INPUT_SIGNAL_VIDEO_MODE, inputVidMode);
            if (inputVidMode < VID_FMT_INVALID)
            {
                // Configure the card for basic RGBA capture.
                // I have removed a bunch of code that configures the device...
                gBFBytes = (BLUE_UINT32*)bfAlloc(gGoldenSize);
                bool canAddVideoWriter = false;
                bool canAddAudioWriter = false;
                // Declare the vars for our various AVAsset elements.
                AVAssetWriter* assetWriter = nil;
                AVAssetWriterInput* assetWriterInputVideo = nil;
                AVAssetWriterInput* assetWriterAudioInput = nil;
                AVAssetWriterInputPixelBufferAdaptor* adaptor = nil;
                NSURL* localOutputURL = nil;
                NSError* localError = nil;
                // Create the file we are going to be writing to.
                localOutputURL = [NSURL URLWithString:@"file:///Volumes/RAID/bfProResCapture.mov"];
                // Possibly add some code to check whether the file already exists, and overwrite it if so...
                assetWriter = [[AVAssetWriter alloc] initWithURL:localOutputURL fileType:AVFileTypeQuickTimeMovie error:&localError];
                if (assetWriter)
                {
                    assetWriter.shouldOptimizeForNetworkUse = NO;
                    // Let's configure the audio and video settings for this writer...
                    {
                        // Video first.
                        // Add a video input: a dictionary with the settings we want, i.e. ProRes plus width and height.
                        NSMutableDictionary* videoSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                                                  AVVideoCodecAppleProRes422, AVVideoCodecKey,
                                                                  [NSNumber numberWithInt:width], AVVideoWidthKey,
                                                                  [NSNumber numberWithInt:height], AVVideoHeightKey,
                                                                  nil];
                        assetWriterInputVideo = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
                        adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterInputVideo
                                                                                                   sourcePixelBufferAttributes:nil];
                        canAddVideoWriter = [assetWriter canAddInput:assetWriterInputVideo];
                    }
                    { // Add an audio AVAssetWriterInput.
                        // A dictionary with the settings we want, i.e. uncompressed PCM audio, 16-bit little endian.
                        NSMutableDictionary* audioSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                                                  [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                                                  [NSNumber numberWithFloat:48000.0], AVSampleRateKey,
                                                                  [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                                                  [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                                                  [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                                                  [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                                                  [NSNumber numberWithUnsignedInteger:1], AVNumberOfChannelsKey,
                                                                  nil];
                        // Currently set up for 1 channel per track, which I think is what we want,
                        // though it looks like a single buffer might be able to carry multiple channels...?
                        audioFormat.mSampleRate = 48000.0;
                        audioFormat.mFormatID = kAudioFormatLinearPCM;
                        audioFormat.mFormatFlags = CalculateLPCMFlags(16, 16, false, false); // kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
                        audioFormat.mFramesPerPacket = 1;
                        audioFormat.mChannelsPerFrame = 1;
                        audioFormat.mBitsPerChannel = 16;
                        audioFormat.mBytesPerPacket = 2;
                        audioFormat.mBytesPerFrame = 2;
                        status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                                &audioFormat,
                                                                0,    // layout size
                                                                NULL, // no channel layout
                                                                0,    // magic cookie size
                                                                NULL, // no magic cookie
                                                                NULL, // no extensions
                                                                &audioFormatDesc);
                        assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
                        canAddAudioWriter = [assetWriter canAddInput:assetWriterAudioInput];
                        if (canAddAudioWriter)
                        {
                            assetWriterAudioInput.expectsMediaDataInRealTime = YES;
                            [assetWriter addInput:assetWriterAudioInput];
                        }
                        // Purely for debugging: peek at what the audio input thinks it is.
                        CMFormatDescriptionRef myFormatDesc = assetWriterAudioInput.sourceFormatHint;
                        NSString* medType = [assetWriterAudioInput mediaType];
                    }
                    if (canAddVideoWriter)
                    {
                        // Tell the asset writer to expect media in real time.
                        assetWriterInputVideo.expectsMediaDataInRealTime = YES;
                        // Add the input(s).
                        [assetWriter addInput:assetWriterInputVideo];
                        // Start writing the frames...
                        BOOL success = [assetWriter startWriting];
                        CMTime startTime = CMTimeMake(0, fpsRate);
                        [assetWriter startSessionAtSourceTime:kCMTimeZero];
                        // [assetWriter startSessionAtSourceTime:startTime];
                        if (success)
                        {
                            bfcVideoCaptureStart(pBVC);
                            // Wait 3 frames before trying to read frames off the card's FIFO.
                            bfcWaitVideoInputSync(pBVC, ulUpdateType, ulFieldCount);
                            bfcWaitVideoInputSync(pBVC, ulUpdateType, ulFieldCount);
                            bfcWaitVideoInputSync(pBVC, ulUpdateType, ulFieldCount);
                            // **** Possible enhancement: use a pixel buffer pool to manage multiple buffers at once...
                            CVPixelBufferRef buffer = NULL;
                            int kRecordingFPS = fpsRate;
                            bool frameAdded = false;
                            BLUE_UINT32 bufferID;
                            for (int i = 0; i < numFramesToCapture; i++)
                            {
                                printf("\n");
                                buffer = pixelBufferFromBFCard(pBVC, bufferID, width, height, memFmt); // get a CVPixelBufferRef from our device
                                // Spin until the video input can take more data (a real app should use
                                // requestMediaDataWhenReadyOnQueue:usingBlock: rather than busy-waiting).
                                while (!adaptor.assetWriterInput.readyForMoreMediaData)
                                {
                                    printf(" readyForMoreMediaData FAILED \n");
                                }
                                if (buffer)
                                {
                                    // Add video.
                                    printf("appending Frame %d ", i);
                                    CMTime frameTime = CMTimeMake(i, kRecordingFPS);
                                    frameAdded = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
                                    if (frameAdded)
                                        printf("VideoAdded.....");
                                    // Add audio.
                                    {
                                        int bufLen = 1920 * 2; // 1920 samples x 2 bytes per 16-bit mono sample
                                        // NB: this originally read 'new char(2002*4)', which allocates a SINGLE char;
                                        // we need an array of bufLen bytes (zero-initialised here, i.e. silence).
                                        char* pAudioSamples = new char[bufLen]();
                                        CMTime audioTimeStamp = CMTimeMake(i * 1920, 48000); // (frame * samples per frame) at the 48 kHz timescale
                                        CMBlockBufferRef blockBuf = NULL; // *********** MUST release these AFTER adding the samples to the assetWriter...
                                        CMSampleBufferRef cmBuf = NULL;
                                        // Create a block buffer wrapping our samples, for adding to the audio input.
                                        status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                                                    (void*)pAudioSamples,
                                                                                    bufLen,
                                                                                    kCFAllocatorNull, // the block buffer does NOT take ownership of pAudioSamples
                                                                                    NULL,
                                                                                    0,
                                                                                    bufLen,
                                                                                    0,
                                                                                    &blockBuf);
                                        if (status != noErr)
                                        {
                                            NSLog(@"CMBlockBufferCreateWithMemoryBlock error");
                                        }
                                        // numSamples was originally 1 here; for interleaved LPCM it should be the number
                                        // of audio frames carried in the buffer, i.e. 1920, so it matches bufLen.
                                        status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                                                                 blockBuf,
                                                                                                 TRUE,
                                                                                                 NULL,
                                                                                                 NULL,
                                                                                                 audioFormatDesc,
                                                                                                 1920,
                                                                                                 audioTimeStamp,
                                                                                                 NULL,
                                                                                                 &cmBuf);
                                        if (status != noErr)
                                        {
                                            NSLog(@"CMAudioSampleBufferCreateWithPacketDescriptions error");
                                        }
                                        // Let's check whether the CMSampleBuffer is valid.
                                        bool bValid = CMSampleBufferIsValid(cmBuf);
                                        if (!bValid)
                                        {
                                            NSLog(@"Invalid buffer found!!! possible CMSampleBufferCreate error?");
                                        }
                                        if (!assetWriterAudioInput.readyForMoreMediaData)
                                        {
                                            printf(" readyForMoreMediaData FAILED - Had to drop a frame\n");
                                        }
                                        else
                                        {
                                            if (assetWriter.status == AVAssetWriterStatusWriting)
                                            {
                                                BOOL r = [assetWriterAudioInput appendSampleBuffer:cmBuf];
                                                if (!r)
                                                {
                                                    NSLog(@"appendSampleBuffer error");
                                                }
                                                else
                                                    success = true;
                                            }
                                            else
                                                printf("AssetWriter Not ready???!? \n");
                                        }
                                        if (blockBuf)
                                            CFRelease(blockBuf);
                                        if (cmBuf)
                                            CFRelease(cmBuf);
                                        // We passed kCFAllocatorNull above, so the block buffer never owned
                                        // pAudioSamples - free it ourselves.
                                        delete[] pAudioSamples;
                                    }
                                    if (success)
                                    {
                                        printf("Audio samples added..");
                                    }
                                    else
                                    {
                                        NSError* nsERR = [assetWriter error];
                                        NSLog(@"Problem adding audio samples: %@", nsERR);
                                    }
                                    printf("Success \n");
                                }
                                if (buffer)
                                {
                                    CVBufferRelease(buffer);
                                }
                            }
                        }
                        AVAssetWriterStatus sta = [assetWriter status];
                        // NB: ending the session at (numFramesToCapture - 1) trims the movie at the PTS of the last
                        // frame; numFramesToCapture would include the last frame's (and trailing audio's) full duration.
                        CMTime endTime = CMTimeMake(numFramesToCapture - 1, fpsRate);
                        // Finish the session.
                        bfcVideoCaptureStop(pBVC);
                        [assetWriterInputVideo markAsFinished];
                        [assetWriterAudioInput markAsFinished];
                        [assetWriter endSessionAtSourceTime:endTime];
                        bool finishedSuccessfully = [assetWriter finishWriting];
                        if (finishedSuccessfully)
                            NSLog(@"Writing file ended successfully");
                        else
                        {
                            NSLog(@"Writing file ended WITH ERRORS...");
                            sta = [assetWriter status];
                            if (sta != AVAssetWriterStatusCompleted)
                            {
                                NSError* nsERR = [assetWriter error];
                                NSLog(@"investigating the error: %@", nsERR);
                            }
                        }
                    }
                    else
                    {
                        NSLog(@"Unable to add the video AVAssetWriterInput to the AssetWriter, file will not be written - Exiting");
                    }
                    if (audioFormatDesc)
                        CFRelease(audioFormatDesc);
                }
                if (gBFBytes)
                    bfFree(gGoldenSize, gBFBytes);
                if (gBFHancBuffer)
                    bfFree(gHANC_SIZE, gBFHancBuffer);
            }
            else
            {
                NSLog(@"Unable to find a valid input signal - Exiting");
            }
            bfcDetach(pBVC);
            bfcDestroy(pBVC);
        }
    }
    return 0;
}
So, all up, I am getting pretty frustrated by this. 😟
I can make video-only work.
I can do all the setup and configuration for the audio but not append the buffers, and it works.
When I DO append the audio samples, the appendSampleBuffer call completes without error.
But when I try to finish/complete the file, I get an error from the [assetWriter finishWriting] call!
Of course, looking into the error is no help. 😟
So what am I doing wrong? Has anyone got any suggestions on how I can examine or diagnose the broken .mov file that is created? The snippet below is the quick check I have been running against the output so far.
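For reference (just my own scratch probe, not part of the capture app; the path matches the output URL used in the code above):

    NSURL* url = [NSURL fileURLWithPath:@"/Volumes/RAID/bfProResCapture.mov"];
    AVURLAsset* asset = [AVURLAsset URLAssetWithURL:url options:nil];
    NSLog(@"duration: %.2f s, %lu track(s)", CMTimeGetSeconds(asset.duration), (unsigned long)asset.tracks.count);
    for (AVAssetTrack* track in asset.tracks)
        NSLog(@"track %d: mediaType=%@", (int)track.trackID, track.mediaType);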
Any comments would be most appreciated,
Thanks,
James