AVAssetWriterInput appendSampleBuffer failed with error -12780

I'm trying to add a watermark to a recorded video. Appending sample buffers with AVAssetWriterInput's appendSampleBuffer: method fails, and when I inspect the AVAssetWriter's error property I get the following:

Error Domain=AVFoundationErrorDomain Code=-11800 "This operation cannot be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=This operation cannot be completed, NSUnderlyingError=0x302399a70 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}

As far as I can tell, -11800 indicates AVErrorUnknown. However, I haven't been able to find any information about the -12780 error code, which appears to be undocumented.

Thanks!

- (void)createNewVideoWithWaterMark:(NSURL *)videoPath

{

    NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,  NSUserDomainMask,YES);

    NSString *ourDocumentPath =[documentPaths objectAtIndex:0];

    NSString *resultPath = [ourDocumentPath stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.mp4",@"resultVideo"]];

    // Delete old videos

    [self removeFileWithUrl:resultPath];

    

    NSError *error;

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoPath options:nil];

    AVAssetReader *readerAsset = [AVAssetReader assetReaderWithAsset:asset error:&error];

    AVAssetWriter *writerAsset = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:resultPath] fileType:AVFileTypeMPEG4 error:&error];


    NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];

    NSDictionary *videoSetting = @{(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]};

 

    AVAssetReaderVideoCompositionOutput *videoOutput  = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:videoSetting];

    AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[self videoCompressSettings]];

    

    if ([readerAsset canAddOutput:videoOutput]) {

        NSDateFormatter *formater = [[NSDateFormatter alloc] init];

        [formater setDateFormat:@" yyyy-MM-dd HH:mm:ss "];

        NSString *waterMarkTime = [formater stringFromDate:[NSDate date]];

        

        videoOutput.videoComposition = [self fixedCompositionWithAsset:asset];

        [readerAsset addOutput:videoOutput];

    }

    

    if ([writerAsset canAddInput:videoInput]) {

        [writerAsset addInput:videoInput];

    }

 

    // audio

    NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];

    NSDictionary *audioSetting = @{AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM]};

    AVAssetReaderAudioMixOutput *audioOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:audioSetting];

    AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[self audioCompressSettings]];


    if ([readerAsset canAddOutput:audioOutput]) {

        [readerAsset addOutput:audioOutput];

    }


    if ([writerAsset canAddInput:audioInput]) {

        [writerAsset addInput:audioInput];

    }

    

    [writerAsset startWriting];

    [writerAsset startSessionAtSourceTime:kCMTimeZero];

    [readerAsset startReading];


    dispatch_queue_t videoQueue = dispatch_queue_create("Video Queue", DISPATCH_QUEUE_SERIAL);

    dispatch_queue_t audioQueue = dispatch_queue_create("Audio Queue", DISPATCH_QUEUE_SERIAL);

    dispatch_group_t group = dispatch_group_create();

    dispatch_group_enter(group);


    [videoInput requestMediaDataWhenReadyOnQueue:videoQueue usingBlock:^{

        while ([videoInput isReadyForMoreMediaData]) {

            CMSampleBufferRef sampleBuffer;

            if([readerAsset status] == AVAssetReaderStatusReading && (sampleBuffer = [videoOutput copyNextSampleBuffer])){

                BOOL result = [videoInput appendSampleBuffer:sampleBuffer];

                CFRelease(sampleBuffer);

                if(!result){

                    [readerAsset cancelReading];

                    break;

                }

            }else{

                [videoInput markAsFinished];

                dispatch_group_leave(group);

                break;

            }

        }

    }];

    

    dispatch_group_enter(group);

    

    [audioInput requestMediaDataWhenReadyOnQueue:audioQueue usingBlock:^{

        while ([audioInput isReadyForMoreMediaData]) {

            CMSampleBufferRef sampleBuffer;

            if ([readerAsset status] == AVAssetReaderStatusReading && (sampleBuffer = [audioOutput copyNextSampleBuffer])) {

                // AVAssetWriterStatusFailed (reason: Error Domain=AVFoundationErrorDomain Code=-11800 "This operation cannot be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=This operation cannot be completed, NSUnderlyingError=0x302399a70 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}})

                BOOL result = [audioInput appendSampleBuffer:sampleBuffer];

                CFRelease(sampleBuffer);

                if(!result){

                    [readerAsset cancelReading];

                    break;

                }

            } else {

                [audioInput markAsFinished];

                dispatch_group_leave(group);

                break;

            }

        }

    }];


    dispatch_group_notify(group, dispatch_get_main_queue(), ^{

        // Finalize the output file once both inputs have finished appending.

        [writerAsset finishWritingWithCompletionHandler:^{}];

    });

}


- (AVMutableVideoComposition *)fixedCompositionWithAsset:(AVAsset *)videoAsset

{

    AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    NSUInteger degress = 0;

    if (videoAssetTrack) {

        CGAffineTransform t = videoAssetTrack.preferredTransform;

       if (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) {

            // Portrait

            degress = 90;

        } else if(t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {

            // PortraitUpsideDown

            degress = 270;

        } else if(t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0) {

            // LandscapeRight

            degress = 0;

        } else if(t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0) {

            // LandscapeLeft

            degress = 180;

        }

   }

    

    CGAffineTransform translateToCenter;

    CGAffineTransform mixedTransform;

    

    if (degress == 90) {

        translateToCenter = CGAffineTransformMakeTranslation(videoAssetTrack.naturalSize.height, 0.0);

        mixedTransform = CGAffineTransformRotate(translateToCenter,M_PI_2);

    } else if(degress == 180) {

        translateToCenter = CGAffineTransformMakeTranslation(videoAssetTrack.naturalSize.width, videoAssetTrack.naturalSize.height);

        mixedTransform = CGAffineTransformRotate(translateToCenter,M_PI);

    } else if(degress == 270) {

        translateToCenter = CGAffineTransformMakeTranslation(0.0, videoAssetTrack.naturalSize.width);

        mixedTransform = CGAffineTransformRotate(translateToCenter,M_PI_2*3.0);

    } else {

        translateToCenter = CGAffineTransformMakeTranslation(0.0, videoAssetTrack.naturalSize.width);

        mixedTransform = CGAffineTransformRotate(translateToCenter,0);

    }

    

    AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];

    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);

    AVMutableVideoCompositionLayerInstruction *videolayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoAssetTrack];

    [videolayerInstruction setOpacity:0.0 atTime:videoAsset.duration];

    [videolayerInstruction setTransform:mixedTransform atTime:kCMTimeZero];

    mainInstruction.layerInstructions = [NSArray arrayWithObjects:videolayerInstruction,nil];

    AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];

    

    CGSize naturalSize = videoAssetTrack.naturalSize;

    float renderWidth, renderHeight;

    renderWidth = naturalSize.width;

    renderHeight = naturalSize.height;

    if (isnan(renderWidth) || renderWidth <= 0) {

        if (degress == 90 || degress == 270) {

            renderWidth = 1080.0;

        } else {

            renderWidth = 1920.0;

        }

    } else {

        if (degress == 90 || degress == 270) {

            renderWidth = naturalSize.height;

        }

    }

    

    if (isnan(renderHeight) || renderHeight <= 0) {

        if (degress == 90 || degress == 270) {

            renderHeight = 1920.0;

        } else {

            renderHeight = 1080.0;

        }

    } else {

        if (degress == 90 || degress == 270) {

            renderHeight = naturalSize.width;

        }

    }

    

    mainCompositionInst.renderSize = CGSizeMake(renderWidth, renderHeight);

    mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];

    mainCompositionInst.frameDuration = CMTimeMake(1, 30);


    CATextLayer *subtitleText = [[CATextLayer alloc] init];

    [subtitleText setFontSize:35];

    [subtitleText setFrame:CGRectMake(10, renderHeight-50, renderWidth, 50)];

    [subtitleText setString:@"content 12345"];

    subtitleText.contentsScale = [UIScreen mainScreen].scale;

    [subtitleText setForegroundColor:[[[UIColor whiteColor] colorWithAlphaComponent:0.8] CGColor]];

    

    CALayer *overlayLayer = [CALayer layer];

    [overlayLayer addSublayer:subtitleText];

    overlayLayer.frame = CGRectMake(0, 0, renderWidth, renderHeight);

    [overlayLayer setMasksToBounds:YES];

    

    CALayer *parentLayer = [CALayer layer];

    CALayer *videoLayer = [CALayer layer];

    parentLayer.frame = CGRectMake(0, 0, renderWidth, renderHeight);

    videoLayer.frame = CGRectMake(0, 0, renderWidth, renderHeight);

    [parentLayer addSublayer:videoLayer];

    [parentLayer addSublayer:overlayLayer];

    mainCompositionInst.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

    return mainCompositionInst;

}


- (NSDictionary *)audioCompressSettings

{

    AudioChannelLayout stereoChannelLayout = { .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,

        .mChannelBitmap = 0,

        .mNumberChannelDescriptions = 0};

    NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

    NSDictionary *audioCompressSettings = @{AVFormatIDKey:@(kAudioFormatMPEG4AAC),

                                            AVEncoderBitRateKey:@96000,

                                            AVSampleRateKey:@44100,

                                            AVChannelLayoutKey:channelLayoutAsData,

                                            AVNumberOfChannelsKey:@2};

    return  audioCompressSettings;

}
Answered by DTS Engineer in 821637022

Hello @HeYingJian,

Your issue stems from the topic mentioned in this TN3177: Understanding alternate audio track groups in movie files.

In short, iPhone 16 and iPhone 16 Pro can record a stereo and Spatial track for the same movie. It is incorrect to mix those tracks together. Your app is mixing them together with AVAssetReaderAudioMixOutput, and that is resulting in this issue.

-- Greg
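
If you want to confirm this is what's happening with your recordings, you can inspect the source asset's track groups and the enabled state of each audio track. The following is a minimal diagnostic sketch, not part of the original code or the answer; it assumes `asset` is the same AVURLAsset created in the question's method.

for (AVAssetTrackGroup *group in asset.trackGroups) {
    // A movie with alternate audio (e.g. stereo + Spatial) reports a group
    // containing both audio track IDs; only one of those tracks is enabled.
    NSLog(@"Alternate track group: %@", group.trackIDs);
}
for (AVAssetTrack *track in [asset tracksWithMediaType:AVMediaTypeAudio]) {
    NSLog(@"Audio track %d enabled=%d", (int)track.trackID, (int)track.enabled);
}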

Hello @HeYingJian,

Please provide a focused sample project that reproduces the error.

-- Greg


Can I solve this problem by changing AVAssetReaderAudioMixOutput to AVAssetReaderTrackOutput?

The important part to any solution here is to not mix the active and inactive audio tracks together. One way to do that is to write them to the destination asset as separate audio tracks (just as they are in the source asset).

You don't necessarily have to use AVAssetReaderTrackOutput, for example, you could use AVAssetReaderAudioMixOutput with just the active track. It's really up to you to decide what behavior you want your app to have here.

-- Greg
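
To illustrate the AVAssetReaderAudioMixOutput option Greg describes, here is a minimal sketch (one possible approach, not code from the answer) that passes only the enabled audio tracks to the mix output, so the inactive alternate track is never mixed in. It reuses the `asset` and `audioSetting` variables from the question's code.

// Read only the enabled (active) audio tracks; skip inactive alternates
// such as the Spatial Audio track recorded on iPhone 16 models.
NSMutableArray<AVAssetTrack *> *activeAudioTracks = [NSMutableArray array];
for (AVAssetTrack *track in [asset tracksWithMediaType:AVMediaTypeAudio]) {
    if (track.enabled) {
        [activeAudioTracks addObject:track];
    }
}
AVAssetReaderAudioMixOutput *audioOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:activeAudioTracks audioSettings:audioSetting];

Alternatively, you could create one AVAssetReaderTrackOutput and one AVAssetWriterInput per audio track, writing them to the destination as separate tracks, as the reply above suggests.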
