AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

AVFoundation Documentation

Posts under AVFoundation tag

362 Posts
Post not yet marked as solved
3 Replies
1.9k Views
Hi,

We are using AVPlayer for an FPL HLS stream. After migrating to iOS 15 (currently Beta 3), we have observed strange behavior during segment download:

- The player downloads the last video segment of the VOD HLS stream.
- The player downloads all audio segments.
- Playback starts playing from the end (the last segment).
- Playback needs to be restarted before all video segments start downloading.

Note that we do not see this behavior with older versions of iOS (14 and earlier).

m3u8 file

Thank you,
Post not yet marked as solved
3 Replies
1.5k Views
I ran into a strange problem. In a camera app using AVFoundation, I use the following code:

captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInUltraWideCamera, for: AVMediaType.video, position: .back)

Then:

let isAutoFocusSupported = captureDevice.isFocusModeSupported(.autoFocus)

isAutoFocusSupported should be true. On iPhone 13 Pro it is true, but on iPhone 13 / 13 mini it is false. Why?
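For reference, a minimal sketch (not from the original post) of how one might probe the ultra-wide camera's focus capabilities and fall back to a supported mode instead of assuming .autoFocus is available; the fallback choice here is an assumption, not an explanation of the device difference.

import AVFoundation

// Hypothetical helper: report which focus modes the back ultra-wide camera supports
// and pick a supported mode rather than assuming .autoFocus exists on every model.
func configureUltraWideFocus() {
    guard let device = AVCaptureDevice.default(.builtInUltraWideCamera,
                                               for: .video,
                                               position: .back) else {
        print("No ultra-wide camera on this device")
        return
    }

    print("autoFocus supported:", device.isFocusModeSupported(.autoFocus))
    print("continuousAutoFocus supported:", device.isFocusModeSupported(.continuousAutoFocus))

    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus
        } else if device.isFocusModeSupported(.autoFocus) {
            device.focusMode = .autoFocus
        } // otherwise leave the device's fixed-focus default in place
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}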
Post not yet marked as solved
15 Replies
5.4k Views
I have code that has worked for many years for writing ProRes files, and it is now failing on the new M1 Max MacBook. Specifically, if I construct buffers with the pixel type "kCVPixelFormatType_64ARGB", after a few frames of writing, the pixel buffer pool becomes nil. This code works just fine on non-Max processors (Intel and base M1, natively). Here's a sample main that demonstrates the problem. Am I doing something wrong here?

//  main.m
//  TestProresWriting
//
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        int timescale = 24;
        int width = 1920;
        int height = 1080;

        NSURL *url = [NSURL URLWithString:@"file:///Users/diftil/TempData/testfile.mov"];
        NSLog(@"Output file = %@", [url absoluteURL]);

        NSFileManager *fileManager = [NSFileManager defaultManager];
        NSError *error = nil;
        [fileManager removeItemAtURL:url error:&error];

        // Set up the writer
        AVAssetWriter *trackWriter = [[AVAssetWriter alloc] initWithURL:url
                                                                fileType:AVFileTypeQuickTimeMovie
                                                                   error:&error];

        // Set up the track
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecTypeAppleProRes4444, AVVideoCodecKey,
                                       [NSNumber numberWithInt:width], AVVideoWidthKey,
                                       [NSNumber numberWithInt:height], AVVideoHeightKey,
                                       nil];

        AVAssetWriterInput *track = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                       outputSettings:videoSettings];

        // Set up the adapter
        NSDictionary *attributes = [NSDictionary
                                    dictionaryWithObjects:
                                    [NSArray arrayWithObjects:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_64ARGB], // This pixel type causes problems on M1 Max, but works on everything else
                                     [NSNumber numberWithUnsignedInt:width], [NSNumber numberWithUnsignedInt:height],
                                     nil]
                                    forKeys:
                                    [NSArray arrayWithObjects:(NSString *)kCVPixelBufferPixelFormatTypeKey,
                                     (NSString*)kCVPixelBufferWidthKey, (NSString*)kCVPixelBufferHeightKey,
                                     nil]];
        /*
        NSDictionary *attributes = [NSDictionary
                                    dictionaryWithObjects:
                                    [NSArray arrayWithObjects:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], // This pixel type works on M1 Max
                                     [NSNumber numberWithUnsignedInt:width], [NSNumber numberWithUnsignedInt:height],
                                     nil]
                                    forKeys:
                                    [NSArray arrayWithObjects:(NSString *)kCVPixelBufferPixelFormatTypeKey,
                                     (NSString*)kCVPixelBufferWidthKey, (NSString*)kCVPixelBufferHeightKey,
                                     nil]];
        */

        AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor
                            assetWriterInputPixelBufferAdaptorWithAssetWriterInput:track
                            sourcePixelBufferAttributes:attributes];

        // Add the track and start writing
        [trackWriter addInput:track];
        [trackWriter startWriting];
        CMTime startTime = CMTimeMake(0, timescale);
        [trackWriter startSessionAtSourceTime:startTime];
        while(!track.readyForMoreMediaData);

        int frameTime = 0;
        CVPixelBufferRef frameBuffer = NULL;
        for (int i = 0; i < 100; i++)
        {
            NSLog(@"Frame %@", [NSString stringWithFormat:@"%d", i]);
            CVPixelBufferPoolRef PixelBufferPool = pixelBufferAdaptor.pixelBufferPool;
            if (PixelBufferPool == nil)
            {
                NSLog(@"PixelBufferPool is invalid.");
                exit(1);
            }
            CVReturn ret = CVPixelBufferPoolCreatePixelBuffer(nil, PixelBufferPool, &frameBuffer);
            if (ret != kCVReturnSuccess)
            {
                NSLog(@"Error creating framebuffer from pool");
                exit(1);
            }
            CVPixelBufferLockBaseAddress(frameBuffer, 0);
            // This is where we would put image data into the buffer.  Nothing right now.
            CVPixelBufferUnlockBaseAddress(frameBuffer, 0);
            while(!track.readyForMoreMediaData);
            CMTime presentationTime = CMTimeMake(frameTime+(i*timescale), timescale);
            BOOL result = [pixelBufferAdaptor appendPixelBuffer:frameBuffer
                                           withPresentationTime:presentationTime];
            if (result == NO)
            {
                NSLog(@"Error appending to track.");
                exit(1);
            }
            CVPixelBufferRelease(frameBuffer);
        }

        // Close everything
        if ( trackWriter.status == AVAssetWriterStatusWriting)
            [track markAsFinished];
        NSLog(@"Completed.");
    }
    return 0;
}
Post not yet marked as solved
3 Replies
1.6k Views
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode. The player node's output format needs to be something that the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports. So the following:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];
[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be some library way of doing this. Otherwise each developer will have to redefine it for each platform the code will run on (macOS, iOS, etc.) and keep it updated when it changes. I could not find any constant or function that can produce such a format, ASBD, or settings. The smartest way I could think of, which does not work:

AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                   2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];

Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
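For comparison, a minimal sketch (not from the original post) of one way to obtain a standard, mixer-friendly format via AVAudioFormat(standardFormatWithSampleRate:channels:) and convert into it with AVAudioConverter; the sample rate and channel count below are assumptions.

import AVFoundation

// Sketch: AVAudioFormat's "standard" format is deinterleaved Float32, which the
// engine's mixer nodes accept. The 44.1 kHz / 2-channel choice is an assumption.
func convertToStandardFormat(_ buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let standardFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2),
          let converter = AVAudioConverter(from: buffer.format, to: standardFormat) else {
        return nil
    }

    let ratio = standardFormat.sampleRate / buffer.format.sampleRate
    let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio)
    guard let output = AVAudioPCMBuffer(pcmFormat: standardFormat, frameCapacity: capacity) else {
        return nil
    }

    var fed = false
    var conversionError: NSError?
    let status = converter.convert(to: output, error: &conversionError) { _, outStatus in
        if fed {
            outStatus.pointee = .endOfStream
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return buffer
    }
    if status == .error || conversionError != nil { return nil }
    return output
}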
Post not yet marked as solved
2 Replies
1.4k Views
I am using AVFoundation for a live camera view. I can get my device from the current video input (of type AVCaptureDeviceInput) like this:

let device = videoInput.device

The device's active format has an isPortraitEffectSupported property. How can I turn the Portrait Effect on and off in the live camera view? I set up the camera like this:

private var videoInput: AVCaptureDeviceInput!
private let session = AVCaptureSession()
private(set) var isSessionRunning = false
private var renderingEnabled = true
private let videoDataOutput = AVCaptureVideoDataOutput()
private let photoOutput = AVCapturePhotoOutput()
private(set) var cameraPosition: AVCaptureDevice.Position = .front

func configureSession() {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.setupResult != .success {
            return
        }
        let defaultVideoDevice: AVCaptureDevice? = strongSelf.videoDeviceDiscoverySession.devices.first(where: { $0.position == strongSelf.cameraPosition })
        guard let videoDevice = defaultVideoDevice else {
            print("Could not find any video device")
            strongSelf.setupResult = .configurationFailed
            return
        }
        do {
            strongSelf.videoInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch {
            print("Could not create video device input: \(error)")
            strongSelf.setupResult = .configurationFailed
            return
        }
        strongSelf.session.beginConfiguration()
        strongSelf.session.sessionPreset = AVCaptureSession.Preset.photo
        // Add a video input.
        guard strongSelf.session.canAddInput(strongSelf.videoInput) else {
            print("Could not add video device input to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        strongSelf.session.addInput(strongSelf.videoInput)
        // Add a video data output
        if strongSelf.session.canAddOutput(strongSelf.videoDataOutput) {
            strongSelf.session.addOutput(strongSelf.videoDataOutput)
            strongSelf.videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
            strongSelf.videoDataOutput.setSampleBufferDelegate(self, queue: strongSelf.dataOutputQueue)
        } else {
            print("Could not add video data output to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        // Add photo output
        if strongSelf.session.canAddOutput(strongSelf.photoOutput) {
            strongSelf.session.addOutput(strongSelf.photoOutput)
            strongSelf.photoOutput.isHighResolutionCaptureEnabled = true
        } else {
            print("Could not add photo output to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        strongSelf.session.commitConfiguration()
    }
}

func prepareSession(completion: @escaping (SessionSetupResult) -> Void) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        switch strongSelf.setupResult {
        case .success:
            strongSelf.addObservers()
            if strongSelf.photoOutput.isDepthDataDeliverySupported {
                strongSelf.photoOutput.isDepthDataDeliveryEnabled = true
            }
            if let photoOrientation = AVCaptureVideoOrientation(interfaceOrientation: interfaceOrientation) {
                if let unwrappedPhotoOutputConnection = strongSelf.photoOutput.connection(with: .video) {
                    unwrappedPhotoOutputConnection.videoOrientation = photoOrientation
                }
            }
            strongSelf.dataOutputQueue.async {
                strongSelf.renderingEnabled = true
            }
            strongSelf.session.startRunning()
            strongSelf.isSessionRunning = strongSelf.session.isRunning
            strongSelf.mainQueue.async {
                strongSelf.previewView.videoPreviewLayer.session = strongSelf.session
            }
            completion(strongSelf.setupResult)
        default:
            completion(strongSelf.setupResult)
        }
    }
}

Then I set isPortraitEffectsMatteDeliveryEnabled like this:

func setPortraitAffectActive(_ state: Bool) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.photoOutput.isPortraitEffectsMatteDeliverySupported {
            strongSelf.photoOutput.isPortraitEffectsMatteDeliveryEnabled = state
        }
    }
}

However, I don't see any Portrait Effect in the live camera view! Any ideas why?
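A short sketch for comparison (not from the post, and assuming iOS 15 or later): the system Portrait video effect is toggled by the user in Control Center rather than set by the app, which can only check support on the active format and read or observe the current state. Note this is distinct from the photo output's Portrait Effects Matte delivery used above.

import AVFoundation

// Sketch, assuming iOS 15+: the app cannot enable the Portrait effect directly.
func logPortraitEffectState(for device: AVCaptureDevice) {
    if device.activeFormat.isPortraitEffectSupported {
        // Class-level state reflects the user's Control Center toggle and is
        // key-value observable on the AVCaptureDevice class.
        print("Portrait effect enabled (user setting):", AVCaptureDevice.isPortraitEffectEnabled)
        // Instance-level state reports whether the effect is currently applied
        // to this device's video stream.
        print("Portrait effect active on this device:", device.isPortraitEffectActive)
    } else {
        print("The active format does not support the Portrait effect")
    }
}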
Post marked as solved
2 Replies
1.6k Views
I'm trying to create a custom Quick Look preview on macOS. I've found the Quick Look Preview Extension target, which is brilliant, and does most of the 'heavy' lifting, but I've run into a few problems. I'm implementing a preview for MIDI files (which has been missing since 2009...) using AVMIDIPlayer. The player keeps playing when the file is no longer selected! What's the mechanism for fixing that? Some sort of check that the view exists..? I notice that the OS preview for audio files has a different interface for the Finder's preview column and for the QuickLook 'pop-up' window. Again, I can't see how you define different views for those two environments. Is there any documentation that's specifically "Mac"? I can only find iOS stuff. (Same for third-party tutorials.)
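One possible approach, sketched here rather than taken from the post: stop the AVMIDIPlayer when the preview view controller's view goes away, which happens when the selection changes. The class and member names are the usual Quick Look Preview Extension template names and are assumptions.

import Cocoa
import Quartz
import AVFoundation

// Sketch: a Quick Look Preview Extension view controller that stops MIDI
// playback when its view disappears. Soundbank handling is omitted.
class PreviewViewController: NSViewController, QLPreviewingController {

    private var player: AVMIDIPlayer?

    func preparePreviewOfFile(at url: URL, completionHandler handler: @escaping (Error?) -> Void) {
        do {
            let player = try AVMIDIPlayer(contentsOf: url, soundBankURL: nil)
            player.prepareToPlay()
            player.play(nil)
            self.player = player
            handler(nil)
        } catch {
            handler(error)
        }
    }

    // When the file is deselected, the preview view is removed; stopping here
    // keeps the player from continuing to play in the background.
    override func viewWillDisappear() {
        super.viewWillDisappear()
        player?.stop()
        player = nil
    }
}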
Post not yet marked as solved
12 Replies
5.8k Views
Setting a voice for AVSpeechSynthesizer leads to a heap buffer overflow. Turn on Address Sanitizer in Xcode 14 beta and run the following code. Is anybody else experiencing this problem, and is there any workaround?

let synthesizer = AVSpeechSynthesizer()
var synthVoice: AVSpeechSynthesisVoice?

func speak() {
    let voices = AVSpeechSynthesisVoice.speechVoices()

    for voice in voices {
        if voice.name == "Daniel" {    // select e.g. Daniel voice
            synthVoice = voice
        }
    }

    let utterance = AVSpeechUtterance(string: "Test 1 2 3")

    if let synthVoice = synthVoice {
        utterance.voice = synthVoice
    }

    synthesizer.speak(utterance) // AddressSanitizer: heap-buffer-overflow
}
Post marked as solved
3 Replies
2.0k Views
I set rectOfInterest on AVCaptureMetadataOutput to restrict scanning to a rectangular box. This always worked on iOS 15. After upgrading to iOS 16 it doesn't work: the scan area can only be in the middle of the screen, which is not what I want. Does anyone know how to fix this? Looking forward to your help~
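For reference, a minimal sketch (not from the original post) of the usual way to derive rectOfInterest from an on-screen rectangle; the scanBox rect and previewLayer are assumptions. rectOfInterest is expressed in metadata-output coordinates, so converting via the preview layer matters, and converting before the session has a live video connection can silently yield the default full-frame rect.

import AVFoundation

// Sketch: restrict metadata scanning to an on-screen rectangle.
// Call this after the session is running and the preview layer is laid out.
func restrictScanning(to scanBox: CGRect,
                      metadataOutput: AVCaptureMetadataOutput,
                      previewLayer: AVCaptureVideoPreviewLayer) {
    let converted = previewLayer.metadataOutputRectConverted(fromLayerRect: scanBox)
    metadataOutput.rectOfInterest = converted
}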
Post marked as solved
8 Replies
3.1k Views
Hello, We are developing a feature to shoot 48MP Apple ProRAW in our app, but we couldn't find any documentation on shooting 48MP Apple ProRAW. Is it possible to shoot 48MP Apple ProRAW with a third-party app? Any help is much appreciated. Thanks.
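A sketch of how this might be approached on iOS 16 (assumptions: an iPhone 14 Pro-class device, and `photoOutput`/`device` already part of a configured, running session); this is an illustration of the maxPhotoDimensions API rather than a confirmed recipe from documentation.

import AVFoundation

// Sketch: enable ProRAW, pick the largest photo dimensions the active format
// offers (48MP on supported devices), and build matching photo settings.
func configureFor48MPProRAW(photoOutput: AVCapturePhotoOutput,
                            device: AVCaptureDevice) -> AVCapturePhotoSettings? {
    guard photoOutput.isAppleProRAWSupported else { return nil }
    photoOutput.isAppleProRAWEnabled = true

    // Largest supported dimensions of the active format (48MP when available).
    guard let largest = device.activeFormat.supportedMaxPhotoDimensions
        .max(by: { $0.width * $0.height < $1.width * $1.height }) else { return nil }
    photoOutput.maxPhotoDimensions = largest

    // Request a ProRAW pixel format and carry the dimensions into the settings.
    guard let proRAWFormat = photoOutput.availableRawPhotoPixelFormatTypes
        .first(where: { AVCapturePhotoOutput.isAppleProRAWPixelFormat($0) }) else { return nil }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: proRAWFormat)
    settings.maxPhotoDimensions = largest
    return settings
}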
Post not yet marked as solved
6 Replies
1.6k Views
Hello. In iOS 16, I am facing a problem where the main thread deadlocks when the copyNextSampleBuffer method of AVAssetReaderOutput is called inside the requestMediaDataWhenReady block of AVAssetWriterInput. The main thread sometimes deadlocks, and its call stack looks like this:

Thread 1 Queue : com.apple.main-thread (serial)
#0 0x00000001ebd73b48 in mach_msg2_trap ()
#1 0x00000001ebd86008 in mach_msg2_internal ()
#2 0x00000001ebd86248 in mach_msg_overwrite ()
#3 0x00000001ebd7408c in mach_msg ()
#4 0x00000001b0059564 in CA::Render::Message::send_message() ()
#5 0x00000001b0235084 in CA::Render::Encoder::send_message(unsigned int, unsigned int, unsigned int*, unsigned long) ()
#6 0x00000001b0002274 in CA::Context::commit_transaction(CA::Transaction*, double, double*) ()
#7 0x00000001b0035148 in CA::Transaction::commit() ()
#8 0x00000001b001e4c8 in CA::Transaction::flush_as_runloop_observer(bool) ()
#9 0x00000001b1004870 in _UIApplicationFlushCATransaction ()
#10 0x00000001b1151c78 in _UIUpdateSequenceRun ()
#11 0x00000001b17958f8 in schedulerStepScheduledMainSection ()
#12 0x00000001b1794ac4 in runloopSourceCallback ()
#13 0x00000001aea32394 in __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ ()
#14 0x00000001aea3e76c in __CFRunLoopDoSource0 ()
#15 0x00000001ae9c2650 in __CFRunLoopDoSources0 ()
#16 0x00000001ae9d7fe8 in __CFRunLoopRun ()
#17 0x00000001ae9dd314 in CFRunLoopRunSpecific ()
#18 0x00000001e851f368 in GSEventRunModal ()
#19 0x00000001b0ea23e8 in -[UIApplication _run] ()
#20 0x00000001b0ea204c in UIApplicationMain ()
#21 0x000000010506f918 in main at /Users/user/Desktop/LINE/QA/line-ios/Talk/AppDelegate.swift:72
#22 0x00000001cd8d9960 in start ()

I tried delaying the logic that calls the markAsFinished and finishWriting methods of AVAssetWriter, but that did not solve the problem. Has anyone experienced the same issue? If so, please let me know what workaround solved it.
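Not a confirmed fix, but for comparison, a sketch of the common pattern that keeps copyNextSampleBuffer off the main thread by driving the whole transfer from requestMediaDataWhenReady on a private serial queue; the reader, writer, and their input/output here are assumed to be already configured and started.

import AVFoundation

// Sketch: pump samples from an AVAssetReaderOutput into an AVAssetWriterInput
// entirely on a background serial queue, never touching the main thread.
func pump(from readerOutput: AVAssetReaderOutput,
          to writerInput: AVAssetWriterInput,
          reader: AVAssetReader,
          writer: AVAssetWriter,
          completion: @escaping () -> Void) {
    let queue = DispatchQueue(label: "media.pump")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            if let sample = readerOutput.copyNextSampleBuffer() {
                if !writerInput.append(sample) {
                    // Writer failed; stop pulling more samples.
                    reader.cancelReading()
                    writerInput.markAsFinished()
                    writer.finishWriting(completionHandler: completion)
                    return
                }
            } else {
                // Reader is exhausted (or failed); finish the writer.
                writerInput.markAsFinished()
                writer.finishWriting(completionHandler: completion)
                return
            }
        }
    }
}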
Post not yet marked as solved
3 Replies
1.3k Views
Hi! I just created a new empty project with just an AVSpeechSynthesizer var, and when I try to play a TTS string, in the app console I see:

Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
Unable to list voice folder
[AXTTSCommon] MauiVocalizer: 11006 (Can't compile rule): regularExpression=\Oviedo(?=, (\x1b\pause=\d+\)?Florida)\b, message=unrecognized character follows , characterPosition=1
[AXTTSCommon] MauiVocalizer: 16038 (Resource load failed): component=ttt/re, uri=, contentType=application/x-vocalizer-rettt+text, lhError=88602000

Tested with Xcode Version 14.0.1 (14A400) on an iPhone 14 Pro device (iOS 16.0.3). Moreover, if Address Sanitizer is active, the app crashes:

ERROR: AddressSanitizer: heap-buffer-overflow on address 0x000103e5ae75 at pc 0x000100ada758 bp 0x00016ff6e030 sp 0x00016ff6d7e8
READ of size 3062 at 0x000103e5ae75 thread T11
#0 0x100ada754 in wrap_strlen+0x164 (/private/var/containers/Bundle/Application/F72885E1-54FA-4FB9-B32E-320C8A8770A2/TestTTS.app/Frameworks/libclang_rt.asan_ios_dynamic.dylib:arm64e+0x16754)
#1 0x235339d78 in IsThisUrlOrRealPath+0x30 (/System/Library/PrivateFrameworks/TextToSpeechMauiSupport.framework/TextToSpeechMauiSupport:arm64e+0x2fd78)
#2 0x235775994 in ve_ttsResourceLoad+0x16c (/System/Library/PrivateFrameworks/TextToSpeechMauiSupport.framework/TextToSpeechMauiSupport:arm64e+0x46b994)
Post not yet marked as solved
14 Replies
3.7k Views
Hi,

Our app is currently facing a crash that only happens on iPhones with iOS 16. Unfortunately, we could not reproduce this issue with either an Xcode build or an App Store build. According to third-party crash analytics, it happens mostly in the background, and they highlighted this as the issue:

Fatal Exception: NSInvalidArgumentException
*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil

And the stack trace shows:

AVFCore __22-[AVPlayer _addLayer:]_block_invoke
Pegasus __116-[PGPictureInPictureProxy _endDeactivatingPictureInPictureIfNeededWithAnimationType:stopReason:cleanupHandlerOrNil:]_block_invoke

We also attach the full crash log to give full context of the crash. Any idea why this crash happens, and is there anything I can do to prevent the crashes from happening again?

pegasus.crash
Post marked as solved
3 Replies
2.4k Views
Issue: I am supporting an iOS application that streams FairPlay DRM-protected content. On iOS 16 devices, I am seeing intermittent exceptions thrown when trying to process the CKC returned by the license server. The thrown exception is as follows:

-[AVContentKeyRequest processContentKeyResponse:] AVContentKeySession's keySystem is not same as that of keyResponse

This issue does not occur on older devices (we support iOS 13, 14, 15). I am unable to find documentation about this error, so any insight is appreciated.

High-level code overview:

Use the AVContentKeyRequest to request an application certificate
Use the returned cert to call makeStreamingContentKeyRequestData
Use the returned data to request the FairPlay license
Use the returned CKC to generate an AVContentKeyResponse (i.e. AVContentKeyResponse(fairPlayStreamingKeyResponseData:_))
Call processContentKeyResponse(_)
App crash/exception thrown when calling processContentKeyResponse

I am seeing other issues related to DRM and iOS 16, but those are specific to downloaded and offline content, which do not match my use case.
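For comparison, a condensed sketch of the flow described above (not the poster's actual code). The certificate and license fetch functions are placeholders; the relevant detail is that the AVContentKeyResponse is built for and processed on a request issued by a session created with the matching .fairPlayStreaming key system.

import AVFoundation

// Placeholders standing in for the app's networking layer.
func fetchApplicationCertificate() -> Data { Data() }
func fetchCKC(spc: Data) -> Data { Data() }

final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
    func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest) {
        let appCertificate = fetchApplicationCertificate()
        let contentID = "skd://example-asset".data(using: .utf8)!   // hypothetical identifier

        keyRequest.makeStreamingContentKeyRequestData(forApp: appCertificate,
                                                      contentIdentifier: contentID,
                                                      options: nil) { spcData, error in
            guard let spcData = spcData, error == nil else { return }
            let ckc = fetchCKC(spc: spcData)
            let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckc)
            keyRequest.processContentKeyResponse(response)
        }
    }
}

// The key system chosen here must match the kind of response processed above.
let keyDelegate = KeyDelegate()
let session = AVContentKeySession(keySystem: .fairPlayStreaming)
// session.setDelegate(keyDelegate, queue: DispatchQueue(label: "fps.keys"))
// session.addContentKeyRecipient(asset) for the AVURLAsset being played.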
Post not yet marked as solved
2 Replies
740 Views
I use HLS to play video in AVPlayer. My .m3u8 file has the following content:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA:TYPE=SUBTITLES,DEFAULT=YES,GROUP-ID="subs",NAME="English",LANGUAGE="eng",URI="some/path"
....

There are 3 subtitle options on screen: "Off", "Auto (Recommended)", and "English". By default the "Auto" option is chosen in the player, but no English subtitles are displayed (the phone language is English). I have tried setting the FORCED attribute to YES, but it does not help (the subtitle icon becomes hidden). I have 2 questions:

Is it possible to display subtitles when the user selects the "Auto" option?
What criteria are used to determine subtitle language and subtitle visibility?
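As a point of comparison (not from the original post), a sketch of selecting the English legible option programmatically instead of relying on automatic selection; it assumes iOS 15 or later for the async property loader.

import AVFoundation

// Sketch: explicitly pick the English subtitle option from the legible group.
func selectEnglishSubtitles(for playerItem: AVPlayerItem) async throws {
    let asset = playerItem.asset
    guard let group = try await asset.loadMediaSelectionGroup(for: .legible) else { return }

    let englishOptions = AVMediaSelectionGroup.mediaSelectionOptions(
        from: group.options,
        with: Locale(identifier: "en"))
    if let option = englishOptions.first {
        playerItem.select(option, in: group)
    }
}

Automatic ("Auto") selection follows the user's system subtitle preferences (Settings > Accessibility > Subtitles & Captioning), not just the phone's UI language, which is usually why non-forced subtitles stay hidden by default.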
Post not yet marked as solved
2 Replies
1.2k Views
We have a QR-scanner feature implemented in a web view (WKWebView). If it's dark, we want to light the QR code using the iPhone's flashlight. In general, the scanner works, but not together with the flashlight: if we turn the torch on, the camera preview disappears; if we turn the torch off, the camera preview appears again. Do you have any idea why this happens, and how can we sort it out? Thanks
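For reference, a minimal torch-toggle sketch (not from the original post). Note the assumption: the app, not WebKit, owns the capture device being configured; when the camera stream inside the web page is driven by WebKit, reconfiguring the same device from the app can interrupt that stream.

import AVFoundation

// Sketch: turn the back-camera torch on or off on a device the app controls.
func setTorch(_ on: Bool) {
    guard let device = AVCaptureDevice.default(for: .video), device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        if on, device.isTorchModeSupported(.on) {
            try device.setTorchModeOn(level: 1.0)
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}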
Post not yet marked as solved
1 Reply
1.5k Views
Hi, I would like to read in .mxf files using AVPlayer and also AVAssetReader, and I would like to write out .mxf files using AVAssetWriter. Should this be possible? Are there any examples of how to do this? I found the VTRegisterProfessionalVideoWorkflowVideoDecoders() call, but this did not seem to help. I would be grateful for any suggestions. Regards, Tom
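One possible angle, sketched rather than confirmed: VTRegisterProfessionalVideoWorkflowVideoDecoders() only registers additional decoders, while opening the MXF container also needs a format reader. The MediaToolbox call below is included as an assumption based on the same "professional video workflow" family and should be verified against the current macOS SDK before relying on it.

import AVFoundation
import VideoToolbox
import MediaToolbox

// Sketch (macOS): register the professional-workflow readers/decoders before
// creating the asset, then check whether AVFoundation considers it usable.
func openMXF(at url: URL) async throws -> AVURLAsset {
    MTRegisterProfessionalVideoWorkflowFormatReaders()   // assumption; verify availability
    VTRegisterProfessionalVideoWorkflowVideoDecoders()

    let asset = AVURLAsset(url: url)
    let (playable, readable) = try await asset.load(.isPlayable, .isReadable)
    print("playable: \(playable), readable: \(readable)")
    return asset
}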
Post not yet marked as solved
2 Replies
2.1k Views
I'm developing a media player for Mac (AppKit, not Catalyst) that plays local and remote content. I have AirPlay working with AVPlayer (with an AVRoutePickerView assigning the route), but while I get the metadata that I've set for MPNowPlayingInfoCenter on the AirPlay device (a TV in this case), I don't get any album art (though I do in the macOS Now Playing menu bar/Control Center applet). It looks like this (imgur link because I can't get it to upload in the forum): https://i.imgur.com/2JBIYCw.jpg

My code for setting the metadata:

NSImage *artwork = [currentTrack coverImage];
CGSize artworkSize = [artwork size];
MPMediaItemArtwork *mpArtwork = [[MPMediaItemArtwork alloc] initWithBoundsSize:artworkSize requestHandler:^NSImage * _Nonnull(CGSize size) {
    return artwork;
}];
[songInfo setObject: mpArtwork forKey:MPMediaItemPropertyArtwork];

I noticed that it doesn't resize, but it seems at least macOS doesn't care. I tried modifying the code to resize the artwork in the callback, but that also doesn't change anything. I noticed in the logs that I get a message about a missing entitlement:

2023-01-29 14:00:37.889346-0400 Submariner[42682:9794531] [Entitlements] MSVEntitlementUtilities - Process Submariner PID[42682] - Group: (null) - Entitlement: com.apple.mediaremote.external-artwork-validation - Entitled: NO - Error: (null)

...however, this seems to be a private entitlement and the only reference I can find to it is WebKit. Using it makes Launch Services very angry at me, and I presume it's a red herring.
Post not yet marked as solved
1 Reply
1k Views
Hello,

In one of my apps, I'm trying to modify the pixel buffer from a ProRAW capture to then write the modified DNG. This is what I try to do:

After capturing a ProRAW photo, I work in the delegate function:

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) { ... }

In here I can access the photo.pixelBuffer and get its base address:

guard let buffer = photo.pixelBuffer else { return }
CVPixelBufferLockBaseAddress(buffer, [])
let pixelFormat = CVPixelBufferGetPixelFormatType(buffer)

// I check that the pixel format corresponds with ProRAW. This is successful, the code enters the if block
if (pixelFormat == kCVPixelFormatType_64RGBALE) {

    guard let pointer = CVPixelBufferGetBaseAddress(buffer) else { return }

    // We have 16 bits per component, 4 components
    let count = CVPixelBufferGetWidth(buffer) * CVPixelBufferGetHeight(buffer) * 4
    let mutable = pointer.bindMemory(to: UInt16.self, capacity: count)

    // As a test, I want to replace all pixels with 65000 to get a white image
    let finalBufferArray: [Float] = Array.init(repeating: 65000, count: count)
    vDSP_vfixu16(finalBufferArray, 1, mutable, 1, vDSP_Length(finalBufferArray.count))

    // I create a vImage pixel buffer. Note that I'm referencing the photo.pixelBuffer to be sure that I modified the underlying pixelBuffer of the AVCapturePhoto object
    let imageBuffer = vImage.PixelBuffer<vImage.Interleaved16Ux4>(referencing: photo.pixelBuffer!, planeIndex: 0)

    // Inspect the CGImage
    let cgImageFormat = vImage_CGImageFormat(bitsPerComponent: 16,
                                             bitsPerPixel: 64,
                                             colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
                                             bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue | CGBitmapInfo.byteOrder16Little.rawValue))!
    let cgImage = imageBuffer.makeCGImage(cgImageFormat: cgImageFormat)!

    // I send the CGImage to the main view controller. This is successful, I can see a white image when rendering the CGImage into a UIImage. This lets me think that I successfully modified the photo.pixelBuffer
    firingFrameDelegate?.didSendCGImage(image: cgImage)
}

// Now I try to write data. Unfortunately, this does not work. The photo.fileDataRepresentation() writes the data corresponding to the original, unmodified pixelBuffer
if let photoData = photo.fileDataRepresentation() {
    // Sending the data to the view controller and rendering it in the UIImage displays the original photo, not the modified pixelBuffer
    firingFrameDelegate?.didSendData(data: photoData)
    thisPhotoData = photoData
}

CVPixelBufferUnlockBaseAddress(buffer, [])

The same happens if I try to write the data to disk. The DNG file displays the original photo and not the data corresponding to the modified photo.pixelBuffer. Do you know why this code does not work? Do you have any ideas on how I can modify the ProRAW pixel buffer so that I can write the modified buffer into a DNG file? My goal is to write a modified file, so I'm not sure I can use Core Image or vImage to output a ProRAW file.
Post not yet marked as solved
3 Replies
1.5k Views
AVPlayer was working on a previous version of Xcode, but when Xcode updated yesterday, AVPlayer stopped working in the simulator. Here is the error message:

[connection] nw_connection_add_timestamp_locked_on_nw_queue [C2] Hit maximum timestamp count, will start dropping events