Core Media


Efficiently process media samples and manage queues of media data using Core Media.

Posts under Core Media tag

36 Posts

Macro mode in AVCaptureDevice (custom camera)
Hi, I would like to use macro mode for the custom camera using AVCaptureDevice in my project. This feature might help to automatically adjust and switch between lenses to get a close-up, clear image. It looks like this feature is not available and there are no open APIs from Apple to achieve macro mode. Is there a way to get this functionality in the custom camera without losing image quality? Please let me know if this is possible. Thank you, Adil Thamarasseri
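As far as I know there is no explicit macro-mode API, but one direction worth exploring is to select a virtual multi-camera device and leave constituent-device switching on automatic, so the system can hop to the ultra-wide camera for close subjects. The sketch below assumes an iOS 15+ deployment target and a back-camera setup; it is an illustration, not a confirmed way to reproduce the built-in Camera app's macro behavior.

    import AVFoundation

    // Sketch: prefer a virtual multi-camera device so the system can switch
    // constituent cameras automatically for close-up subjects (iOS 15+ APIs).
    func makeCloseUpFriendlyDevice() -> AVCaptureDevice? {
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
            mediaType: .video,
            position: .back)
        guard let device = discovery.devices.first else { return nil }

        if device.isVirtualDevice {
            do {
                try device.lockForConfiguration()
                // Let the system choose the constituent camera (e.g. the ultra wide,
                // which focuses much closer) based on subject distance and zoom.
                device.setPrimaryConstituentDeviceSwitchingBehavior(.auto,
                                                                    restrictedSwitchingBehaviorConditions: [])
                device.unlockForConfiguration()
            } catch {
                print("Could not configure constituent switching: \(error)")
            }
        }
        // minimumFocusDistance (millimeters) hints how close the active camera can focus.
        print("Minimum focus distance: \(device.minimumFocusDistance) mm")
        return device
    }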
Replies: 1 · Boosts: 1 · Views: 97 · Activity: 4d
Debug MediaExtension plugin in system executable?
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation. My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e., the process hosting my plugin instance that is managing the MESampleBuffer protocol calls, for example). Is there a method to configure Xcode to attach to this background process for interactive debugging? Right now I have to use Console + print, which is not fun or productive. Does Apple have a working example of a MediaExtension anywhere? This is an exciting API that is very under-documented. I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused. Any assistance is highly appreciated!
Replies: 0 · Boosts: 0 · Views: 259 · Activity: Dec ’24
Selecting an appropriate AVCaptureDeviceFormat
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior, such that video can be recorded at a range of different resolutions. There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step. However, what I've observed is that there can be a range of available formats for a device at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I think I don't have a way to make use of that default logic? And it is undocumented? Does this mean that the only way to select a format is to implement a comparison function that considers all the different values of all the different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator? If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property. Are there any guarantees about what types of formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so, I could filter the formats down to only those with this setting without risking filtering out all of the supported formats. I suspect I may be missing something obvious. Any help would be greatly appreciated!
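One possible starting point, sketched under the assumption that a '420v' format at or above the target size exists (which, as the post notes, is not guaranteed): filter to '420v' formats that cover the requested dimensions and take the smallest, ignoring the other format properties for now.

    import AVFoundation
    import CoreMedia

    // Sketch: pick the smallest '420v' format that still covers the requested size.
    // Assumes a suitable '420v' format exists; fall back gracefully if it does not.
    func bestFormat(for device: AVCaptureDevice, target: CMVideoDimensions) -> AVCaptureDevice.Format? {
        let candidates = device.formats.filter { format in
            let desc = format.formatDescription
            guard CMFormatDescriptionGetMediaSubType(desc) == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange else {
                return false // '420v'
            }
            let dims = CMVideoFormatDescriptionGetDimensions(desc)
            return dims.width >= target.width && dims.height >= target.height
        }
        return candidates.min { a, b in
            let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
            let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
            return Int(da.width) * Int(da.height) < Int(db.width) * Int(db.height)
        }
    }

    // Applying it (the device must be locked while changing activeFormat):
    // try device.lockForConfiguration()
    // if let format = bestFormat(for: device, target: CMVideoDimensions(width: 1600, height: 900)) {
    //     device.activeFormat = format
    // }
    // device.unlockForConfiguration()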
Replies: 3 · Boosts: 0 · Views: 358 · Activity: Dec ’24
IOSurface with System Extensions
Hi All, I'm working on a camera system extension where the main app is supposed to transfer a video stream to the camera extension using IOSurface memory sharing. I have built a sample app that contains all the logic, but without a camera extension. So I'm essentially using IOSurface to render a video in one SwiftUI view and show the result in another SwiftUI view, just for testing purposes, and everything works fine so far. Now, when moving the receiver code to the camera extension, I'm having problems accessing the IOSurface via ID. I am sharing the IOSurface ID via UserDefaults. I know from the logs that the ID is correctly transferred. Here is the code that uses IOSurfaceLookup to get the IOSurface. It fails with the given message. The error message prints the surface ID, which is the correct one; I know this from the main app, where I get the ID and print it as well.

    private var surfaceId: Int = -1 {
        didSet {
            logger.info("surfaceId has changed")
            if surfaceId == -1 {
                stopReceivingFrames()
                ioSurface = nil
            } else {
                guard let surface = IOSurfaceLookup(IOSurfaceID(surfaceId)) else {
                    logger.error("failed to lookup IOSurface with ID: \(self.surfaceId)")
                    return
                }
                self.ioSurface = surface
                logger.info("surface set, now starting receiving frames")
                startReceivingFrames()
            }
        }
    }

My gut feeling says that this issue might be related to some missing entitlement or sandboxing. In general, I have a working camera extension. I'm just not able to render a video in the main app and send it over to the camera extension to overlay another webcam. Both the main app and the camera extension are in the same Xcode workspace and share the same App Group. In short, my actual questions are:
Is there any entitlement required for using IOSurface between an app and a camera system extension?
Is using IOSurface actually possible in system extensions?
Is there any specific setting/requirement that I need to handle to make this work?
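IOSurfaceLookup only resolves surfaces that are globally visible, which tends to be restricted in sandboxed processes, so the ID-based lookup may simply not be available inside the extension. If there is an XPC or NSXPC channel between the app and the extension (an assumption about this setup), the surface can be handed across as an XPC object instead; the IOSurface Objective-C class also conforms to NSSecureCoding and can be sent over an NSXPCConnection directly. A minimal sketch of the C-level XPC-object route:

    import IOSurface
    import XPC

    // Sketch (assumes some XPC channel between app and extension exists):
    // wrap the surface as an xpc_object_t on the sending side...
    func xpcRepresentation(of surface: IOSurfaceRef) -> xpc_object_t {
        return IOSurfaceCreateXPCObject(surface)
    }

    // ...and resolve it on the receiving side, instead of IOSurfaceLookup(ID).
    func surface(from xpcObject: xpc_object_t) -> IOSurfaceRef? {
        return IOSurfaceLookupFromXPCObject(xpcObject)
    }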
Replies: 0 · Boosts: 0 · Views: 265 · Activity: Nov ’24
Error Domain=NSOSStatusErrorDomain Code=-16384, -16155, -16512
I’ve built a custom media player using AVSampleBufferAudioRenderer and AVSampleBufferRenderSynchronizer, and overall, it works great! However, I’ve noticed some unusual logs popping up: Domain: NSOSStatusErrorDomain, error codes: -16384, -16155, -16512. The error -16512 keeps happening repeatedly for one of our users, preventing them from playing any media at all. I’ve searched around but can’t find any documentation explaining what these errors mean. Has anyone run into this issue or have any suggestions? Any help would be hugely appreciated! Thanks!
Replies: 1 · Boosts: 0 · Views: 420 · Activity: Dec ’24
AVAssetReader and AVAssetWriter: ideal settings
Hello, To create a test project, I want to understand how the video and audio settings would look for a destination video whose content comes from a source video. I obtained the output from the source video in the audio and video tracks as follows:

    let audioSettings = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2
    ] as [String: Any]
    var audioOutput = AVAssetReaderTrackOutput(track: audioTrack!, outputSettings: audioSettings)

    // Video
    let videoSettings = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey: videoTrack!.naturalSize.width,
        kCVPixelBufferHeightKey: videoTrack!.naturalSize.height
    ] as [String: Any]
    var videoOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: videoSettings)

With this, I'm obtaining the CMSampleBuffer using AVAssetReader.copyNextSampleBuffer. How can I add it to the destination video? Should I use a while loop, considering I already have the AVAssetWriter set up? Something like this:

    while let buffer = videoOutput.copyNextSampleBuffer() {
        if let imgBuffer = CMSampleBufferGetImageBuffer(buffer) {
            let frame = imgBuffer as CVPixelBuffer
            let time = CMSampleBufferGetPresentationTimeStamp(buffer)
            adaptor.append(frame, withPresentationTime: time)
        }
    }

Lastly, regarding the destination video: how should the AVAssetWriterInput for audio and the pixel buffer of the destination video be set up? Provide an example, something like:

    let audioSettings = […] as [String: Any]

Looking forward to your response.
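A minimal sketch of one possible writer-side setup that matches the reader outputs above (H.264 video plus AAC audio; the dimensions and bit rate are illustrative assumptions), together with a pull loop driven by requestMediaDataWhenReady rather than a free-running while loop:

    import AVFoundation

    // Sketch of writer inputs matching the PCM/BGRA reader outputs above.
    let videoWriterSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1920,   // illustrative; use the source track's naturalSize
        AVVideoHeightKey: 1080
    ]
    let audioWriterSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVEncoderBitRateKey: 128_000
    ]
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoWriterSettings)
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioWriterSettings)

    // Pull samples only when the input can accept them, instead of a free-running loop.
    func drive(_ input: AVAssetWriterInput, from output: AVAssetReaderTrackOutput, queue: DispatchQueue) {
        input.requestMediaDataWhenReady(on: queue) {
            while input.isReadyForMoreMediaData {
                if let sample = output.copyNextSampleBuffer() {
                    input.append(sample)
                } else {
                    input.markAsFinished()
                    break
                }
            }
        }
    }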
Replies: 5 · Boosts: 0 · Views: 306 · Activity: Dec ’24
Decode video frames in lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size in our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:

    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let input = request.sourceImage // <- this still has the video's original size
        // ...
    })
    composition.renderSize = CGSize(width: 1280, height: 720) // for example

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be way better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
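I am not aware of a way to make the composition handler itself receive downscaled source frames. For the export path (an assumption that this fits the workflow), one alternative is to read the track with AVAssetReader and request smaller pixel buffers in the output settings, so frames arrive already scaled; the exact target size and aspect handling are left to the caller:

    import AVFoundation

    // Sketch: ask the decoder for at-most-720p frames when reading for offline processing.
    // The fixed 1280x720 target is illustrative; compute it from the track to keep aspect ratio.
    func makeDownscaledVideoOutput(for track: AVAssetTrack) -> AVAssetReaderTrackOutput {
        let settings: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 1280,
            kCVPixelBufferHeightKey as String: 720
        ]
        return AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    }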
Replies: 0 · Boosts: 1 · Views: 292 · Activity: Nov ’24
Managing Excessive Memory Usage with AVAssetReader and AVAssetWriter
Hello, I'm a deaf-blind programmer. I'm experiencing memory issues in my app. Essentially, I'm writing a video. In this output video, I get content from two sources. The first source is an already recorded video of 18 seconds (just for testing). It will be shown at the beginning of the output video. The second source is an array with photos and another array with audio buffers from AVSpeechSynthesizer.write(). The photos will be added along with the audio buffers to the output video, right after adding the 18-second video. So, in the end, the output video should be: the 18-second video + the array of photos as video images and, for audio, the buffers from AVSpeechSynthesizer.write(). However, my app crashes as soon as I start the first process. I'm using AVAssetWriter to write the video and AVAssetReader to read the video. Below, I'll show the code where I get the CMSampleBuffer. I'd like an example of how to add the 18-second video to the beginning of the output video. It doesn't need to be a big piece of code. Here it is:

    // Variables
    var audioReaderBuffers = [CMSampleBuffer]()
    var videoReaderBuffers = [(frame: CVPixelBuffer, time: CMTime)]()

    // Get CMSampleBuffer of a video asset
    if let videoURL = videoURL {
        let videoAsset = AVAsset(url: videoURL)
        Task {
            let videoAssetTrack = try await videoAsset.loadTracks(withMediaType: .video).first!
            let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
            let reader = try AVAssetReader(asset: videoAsset)
            let videoSettings = [
                kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
                kCVPixelBufferWidthKey: videoAssetTrack.naturalSize.width,
                kCVPixelBufferHeightKey: videoAssetTrack.naturalSize.height
            ] as [String: Any]
            let readerVideoOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoSettings)
            let audioSettings = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2
            ] as [String: Any]
            let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)
            reader.add(readerVideoOutput)
            reader.add(readerAudioOutput)
            reader.startReading()

            // Video CMSampleBuffer
            while let sampleBuffer = readerVideoOutput.copyNextSampleBuffer() {
                autoreleasepool {
                    if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                        let pixBuf = imgBuffer as CVPixelBuffer
                        let pTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                        videoReaderBuffers.append((frame: pixBuf, time: pTime))
                    }
                }
            }
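The crash may simply be memory pressure from accumulating every decoded frame in arrays before writing. A minimal sketch of one alternative, streaming the 18-second intro straight into the writer as it is read; the writer, its video input, and the pixel-buffer adaptor are assumed to already exist and the session to have been started:

    import AVFoundation

    // Sketch: copy the intro video into the writer without buffering whole tracks
    // in memory. Assumes the AVAssetWriter has already called startWriting() and
    // startSession(atSourceTime: .zero), and that `input`/`adaptor` belong to it.
    func appendIntroVideo(from readerOutput: AVAssetReaderTrackOutput,
                          to adaptor: AVAssetWriterInputPixelBufferAdaptor,
                          input: AVAssetWriterInput,
                          queue: DispatchQueue,
                          completion: @escaping () -> Void) {
        var introDone = false
        input.requestMediaDataWhenReady(on: queue) {
            while !introDone, input.isReadyForMoreMediaData {
                guard let sample = readerOutput.copyNextSampleBuffer(),
                      let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else {
                    introDone = true
                    completion() // intro finished; append the photos + speech buffers next
                    return
                }
                adaptor.append(pixelBuffer,
                               withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sample))
            }
        }
    }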
Replies: 1 · Boosts: 0 · Views: 289 · Activity: Nov ’24
tvOS 18 playback issues on 4k
Observing 4K playback issues on tvOS 18. We are encountering HTTP 416 (Range Not Satisfiable) errors when the player attempts to request byte ranges that are outside the available data on the server. This leads to a fatal playback error: CoreMediaErrorDomain error -12939 - HTTP 416: Requested Range Not Satisfiable. Notably, there are no customizations or modifications to the standard AVPlayerViewController on the tvOS player. AVPlayer is trying to request a resource of length 739 bytes with an invalid byte range (739-) request. Since the request is not satisfiable, the server returns 416. Note this is limited to tvOS 18, and we are trying to understand why AVPlayer is making this invalid request on tvOS 18, resulting in a playback error.
Replies: 1 · Boosts: 0 · Views: 281 · Activity: Oct ’24
Writing video using AVAssetWriter, AVAssetReader, and AVSpeechSynthesizer
Hello, First, some version and software details: Software: iOS 18.1 Hardware: iPhone 14 Pro Max and later Xcode: 16.0 Summary: AVAssetReader is not concatenating a video at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen with AVSpeechSynthesizer reading out a text that I pasted above the "Generate Video" button. Details: Now, let's talk about the app. Basically, I’m developing an app that generates a video with the following features: My app will create an output video that is split into an opening scene followed by a fully blue screen. The opening scene will be taken from a video I choose from my gallery. I will read the opening video using AVAssetReader as usual. After the opening scene, I will use the content of a text read by AVSpeechSynthesizer.write(). After the opening scene, the synthesized audio will start playing while the blue screen is displayed. All of this is already defined in the attached project. Each project file has a comment at the beginning introducing its content. How to test: Write something in the field above the "Generate Video" button. For example, type "Hello, world!" Then, press the "Library" button and select a video from the gallery, about 30 seconds long. That’s it. Press the "Generate Video" button. The result I’ve experienced is a crash or failure to generate the video. Practical example of what I want to achieve: Suppose I record a 30-second video where I say, "I’m going to tell you the story of Snow White." Then, I paste the "Snow White" story into the field above the "Generate Video" button. The output video should contain me saying, "I’m going to tell you the story of Snow White." After that, the AVSpeechSynthesizer will read the story I pasted, while displaying a blue screen. I look forward to a solution. Thank you very much! convertToCMSampleBuffer.swift convertToPixelBuffer.swift createInputs.swift createVideo.swift test.swift saveVideo.swift TestApp.swift editingVideo.swift sampleReaderProvider.swift misc.swift sampleProvider.swift
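One small piece of this pipeline, sketched as an illustration: collecting the synthesizer's PCM buffers up front so they can later be converted and appended to the audio input. The zero-length buffer at the end is treated as the completion marker, and the caller is assumed to keep the synthesizer alive while it writes:

    import AVFoundation

    // Sketch: collect AVSpeechSynthesizer output as PCM buffers for later use.
    // The caller keeps `synthesizer` alive for the duration of the write.
    func collectSpeechBuffers(with synthesizer: AVSpeechSynthesizer,
                              text: String,
                              completion: @escaping ([AVAudioPCMBuffer]) -> Void) {
        let utterance = AVSpeechUtterance(string: text)
        var buffers: [AVAudioPCMBuffer] = []
        synthesizer.write(utterance) { buffer in
            guard let pcm = buffer as? AVAudioPCMBuffer else { return }
            if pcm.frameLength == 0 {
                completion(buffers) // a zero-length buffer marks the end of synthesis
            } else {
                buffers.append(pcm)
            }
        }
    }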
Replies: 8 · Boosts: 0 · Views: 716 · Activity: Nov ’24
AVAssetWriterInput -- inserting sample buffers with pauses in between not working
Hi, I'm trying to insert CMSampleBuffers into an AVAssetWriterInput that has been configured with expectsMediaDataInRealTime = false with pauses. That is, I insert fixed-length audio at specific (increasing and non-overlapping) time points with large gaps in between. E.g., 5 seconds of audio at t=3.0, 5 seconds of audio at t=12.0, etc. The first audio sample plays at t=3 in the final output video as expected. But then all the other samples are bunched up immediately after it instead of being scheduled at the correct time. Below is my code. I'm just loading the asset and then readjusting its timestamps to be correct in the absolute timeline. Why do they not get scheduled correctly when the timestamps and durations are definitely correct and non-overlapping?

    func addFrame(_ pixelBuffer: CVPixelBuffer) {
        guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == outputSize else { return }
        let frameTime = CMTimeMake(value: frameCount, timescale: frameRate)
        if videoInput?.isReadyForMoreMediaData == true {
            pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
            frameCount += 1
            currentTime = frameTime
        }
    }

    func addMP3AudioClip(_ audioData: Data) async throws {
        let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
        defer {
            try? FileManager.default.removeItem(at: tempURL)
        }
        try audioData.write(to: tempURL)
        let asset = AVAsset(url: tempURL)
        let duration = try await asset.load(.duration)
        let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!
        let audioReader = try AVAssetReader(asset: asset)
        let outputSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 2,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsNonInterleaved: false
        ]
        let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
        audioReader.add(audioReaderOutput)
        guard audioReader.startReading() else {
            throw NSError(domain: "AudioReaderError", code: 0, userInfo: [NSLocalizedDescriptionKey: "Failed to start reading audio"])
        }
        let baseInsertionTime = currentTime.convertScale(duration.timescale, method: .default) // Capture the current video time when the method is called
        print("Adding audio clip at \(baseInsertionTime.seconds) seconds, duration: \(duration.seconds) seconds")
        var audioTime = CMTime.zero
        var totalDuration: Double = 0
        while let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
            let bufferDuration = CMSampleBufferGetDuration(sampleBuffer)
            let adjustedBuffer = adjustTimeStamp(of: sampleBuffer, by: baseInsertionTime)
            while !audioInput!.isReadyForMoreMediaData {
                try await Task.sleep(nanoseconds: 100_000_000) // 0.1 second
            }
            audioInput!.append(adjustedBuffer)
            print("  Adjusted time: \(adjustedBuffer.presentationTimeStamp.seconds)")
            audioTime = CMTimeAdd(audioTime, bufferDuration)
            totalDuration += bufferDuration.seconds
        }
        print("Finished adding audio clip. Last sample at: \(CMTimeAdd(baseInsertionTime, audioTime).seconds) seconds")
        print("  totalDuration=\(totalDuration)")
    }

    private func adjustTimeStamp(of sampleBuffer: CMSampleBuffer, by timeOffset: CMTime) -> CMSampleBuffer {
        var count: CMItemCount = 0
        CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0, arrayToFill: nil, entriesNeededOut: &count)
        var timingInfo = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
        CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: &timingInfo, entriesNeededOut: nil)
        for i in 0..<count {
            timingInfo[i].presentationTimeStamp = CMTimeAdd(timingInfo[i].presentationTimeStamp, timeOffset)
            if timingInfo[i].decodeTimeStamp != .invalid {
                timingInfo[i].decodeTimeStamp = CMTimeAdd(timingInfo[i].decodeTimeStamp, timeOffset)
            } else {
                timingInfo[i].decodeTimeStamp = timingInfo[i].presentationTimeStamp
            }
        }
        var adjustedBuffer: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(allocator: nil, sampleBuffer: sampleBuffer, sampleTimingEntryCount: count, sampleTimingArray: &timingInfo, sampleBufferOut: &adjustedBuffer)
        return adjustedBuffer!
    }
Replies: 0 · Boosts: 0 · Views: 327 · Activity: Oct ’24
CMSampleBuffer: audio PCM to MP4 AAC
Hello, As explained in this link, AVAssetReaderTrackOutput.copyNextSampleBuffer() returns a CMSampleBuffer in linear PCM audio format. I want to place this audio buffer into an AVAssetWriterInput of type kAudioFormatMPEG4AAC, but I can't manage the conversion. Could you help me by providing an extension that returns a CMSampleBuffer converted from linear PCM audio format to kAudioFormatMPEG4AAC? Example:

    extension CMSampleBuffer {
        func fromPCMToAAC() -> CMSampleBuffer? {
            // Here, get a new AudioStreamBasicDescription, create a CMSampleBuffer and a CMBlockBuffer
        }
    }

I've tried multiple times but without success. Software: iOS 18.1, Xcode: 16.0. Thank you!
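If the buffers end up in an AVAssetWriterInput anyway, one option is to skip the manual conversion entirely: configure the audio input with AAC output settings and append the linear PCM sample buffers as-is, letting the writer do the encode. A minimal sketch (the bit rate is an illustrative assumption):

    import AVFoundation

    // Sketch: let AVAssetWriter do the PCM -> AAC encode instead of converting by hand.
    let aacSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVEncoderBitRateKey: 128_000
    ]
    let aacAudioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: aacSettings)
    // Append the linear PCM CMSampleBuffers from AVAssetReaderTrackOutput directly;
    // the writer compresses them to AAC on the way into the file.
    // aacAudioInput.append(pcmSampleBuffer)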
Replies: 1 · Boosts: 0 · Views: 434 · Activity: Oct ’24
AVUnknown error using Camera Extensions in AVCaptureSession
I have a Mac Catalyst video conferencing app that streams video using AVCaptureMultiCamSession. Everything has been working well for me in a variety of scenarios and hardware, but recently I got a report that virtual cameras / camera extensions do not seem to work - which I can reproduce 100% of the time by using something like OBS's virtual camera. FaceTime and Photo Booth work okay with these virtual cameras. Although my app can see and add the external AVCaptureDevice, I get an AVCaptureSessionRuntimeError posted when I start the session with a connection between the virtual camera and an AVCaptureVideoDataOutput (I don't get the error if I don't connect or add an output). The posted error is AVUnknown:

AVCaptureSessionRuntimeErrorNotification with Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x600001dcd680 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}

Which doesn't tell me too much. I do see some fig assertions just above in Console though:

<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:3964) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1591) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1418) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:3572) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:4518) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:483) - (err=-12780)

I've verified formats are sane (the usual 420v 1080p 30fps I have everywhere else) and data output functions and such, but I'm a bit stuck as to where to go from here. One thing that did stand out is that in the AVCamBarcode example I can see the virtual camera in that app's preview layer, but if I create an AVCaptureVideoDataOutput and add it to the session in that example, it fails in what looks like exactly the same way that my app does, with the same assertions. Does anyone have any advice? Thanks!
Replies: 4 · Boosts: 0 · Views: 520 · Activity: Sep ’24
Sound quality
I'm a musician/DJ. I jumped from a 14 Pro Max on iOS 17 to a 16 Pro Max on iOS 18.1 b4. For each audio source (music app, YT Music, etc.) I compared the same track/EQ/volume side by side. With the new device, overall it's a bit muffled, and it damps the music when highs and lows are mixed. Most noticeable when listening to high vocals and acoustic instruments. Drum and bass sound much like on an old Nokia. On the 14 Pro it's nothing like that. Thank you.
Replies: 1 · Boosts: 0 · Views: 304 · Activity: Oct ’24
Alternative for crashing API MPMediaItemArtwork
When setting the now playing info for playing media in MPNowPlayingInfoCenter, we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734). On iOS 17 this gave the warning that the completion handler was not run on the main thread. I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231 but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.

    .task {
        await MainActor.run {
            let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
            var nowPlayingInfo = [String: Any]()
            let image = NSImage(named: "image")!
            // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
            nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
                // Not on main thread here!
                return image
            })
            nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
        }
    }

I'm wondering if there is an alternative method to set the now playing artwork?
Replies: 3 · Boosts: 0 · Views: 519 · Activity: Oct ’24
CMIO Custom Properties Don't Work With NSNumber under macOS 12.x (but do under macOS 14.x)
I created a camera extension project that required interaction via custom properties. I originally coded it on macOS 14.x. In the camera extension code, I was receiving and returning NSNumber values as shown in the code at the bottom. The app was working perfectly until I went to test on macOS 12.7.6. Under that version of macOS, the custom properties weren't working at all. I also saw lines in the logs like this:

CMIO_DAL_CMIOExtension_Stream.mm:1165:GetPropertyData 50 wrong 4cc format for key 4cc_back_glob_0000

which was totally perplexing, because it was clear that the 4cc codes were fine under 13.x, especially given that the documentation for CMIOExtensionPropertyState clearly says that NSNumbers should be OK. On a hunch, I changed the code to use NSString instead of NSNumber, and it started working again under 12.7.6. So, my experience is that macOS 12.x doesn't allow you to use NSNumbers, but it's happy with NSStrings. Hope that saves someone else some time. And here is the code that worked on 14.x, but not on 12.x.

    // 4cc constant
    const CMIOExtensionProperty CMIOExtensionPropertyCustomPropertyData_BackgroundOption = @"4cc_back_glob_0000";

    - (nullable CMIOExtensionDeviceProperties *)devicePropertiesForProperties:(NSSet<CMIOExtensionProperty> *)properties error:(NSError * _Nullable *)outError {
        // doesn't work on macOS 12.x, works on 14.x
        CMIOExtensionDeviceProperties *deviceProperties = [CMIOExtensionDeviceProperties devicePropertiesWithDictionary:@{}];
        if ([properties containsObject:CMIOExtensionPropertyCustomPropertyData_BackgroundOption]) {
            NSNumber* nsBackgroundOption = [NSNumber numberWithUnsignedInt:(unsigned int)_backgroundOption];
            CMIOExtensionPropertyState* state = [CMIOExtensionPropertyState propertyStateWithValue:nsBackgroundOption];
            [deviceProperties setPropertyState:state forProperty:CMIOExtensionPropertyCustomPropertyData_BackgroundOption];
        }
        return deviceProperties;
    }

    - (BOOL)setDeviceProperties:(CMIOExtensionDeviceProperties *)deviceProperties error:(NSError * _Nullable *)outError {
        // doesn't work on macOS 12.x, works on 14.x
        NSDictionary* devicePropertiesDict = [deviceProperties propertiesDictionary];
        CMIOExtensionPropertyState* propState = nil;
        propState = [devicePropertiesDict objectForKey:CMIOExtensionPropertyCustomPropertyData_BackgroundOption];
        if (propState != NULL) {
            NSNumber* nsBackgroundOption = [propState value];
            if (nsBackgroundOption != NULL) {
                uint32_t newBackgroundOption = [nsBackgroundOption unsignedIntValue];
                if (newBackgroundOption != _backgroundOption) {
                    log_info(@"##### Set Background Option to %d", (int) newBackgroundOption);
                }
                _backgroundOption = newBackgroundOption;
            }
        }
        return YES;
    }
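For reference, the same string-based workaround sketched in Swift (the property name is reused from the post; the CMIOExtension Swift initializers are assumed to match their Objective-C counterparts):

    import CoreMediaIO
    import Foundation

    // Sketch of the NSString-based workaround described above, Swift flavor.
    let backgroundOptionProperty = CMIOExtensionProperty(rawValue: "4cc_back_glob_0000")

    // Publishing: encode the numeric option as a string.
    func propertyState(for backgroundOption: UInt32) -> CMIOExtensionPropertyState {
        return CMIOExtensionPropertyState(value: String(backgroundOption) as NSString)
    }

    // Consuming: parse the string back into a number.
    func backgroundOption(from state: CMIOExtensionPropertyState) -> UInt32? {
        guard let string = state.value as? String else { return nil }
        return UInt32(string)
    }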
Replies: 1 · Boosts: 0 · Views: 522 · Activity: Aug ’24
Configuration issues with CMBlockBuffer for AAC Audio
I am trying to achieve AAC playback. I have stripped off the ADTS header using a function. I am not being shown any errors by the Apple API, however I cannot hear any playback. Here is my ASBD. My sample is definitely 44.1 kHz and AAC_LC. Here is the file for reference: https://dl.espressif.com/dl/audio/ff-16b-2c-44100hz.aac Here are some relevant snippets of the code:

    AudioStreamBasicDescription desc = {0};
    desc.mSampleRate = 44100;              // Sample rate
    desc.mFormatID = kAudioFormatMPEG4AAC; // Format ID for AAC
    desc.mChannelsPerFrame = 2;            // Stereo audio
    desc.mFramesPerPacket = 1024;          // AAC typically uses 1024 frames per packet
    desc.mBitsPerChannel = 0;
    desc.mBytesPerPacket = 0;
    desc.mBytesPerFrame = 0;

    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                     &desc,
                                                     inlayout_size, // 32, corresponding to stereo
                                                     inlayout_buf,
                                                     kAudioFormatMPEG4AAC,
                                                     nil,
                                                     nil,
                                                     &_fmtDesc);

    const CMBlockBufferCustomBlockSource blockSource = {
        .version = kCMBlockBufferCustomBlockSourceVersion,
        .FreeBlock = customBlock_Free,
        .refCon = block,
    };

    OSStatus status;
    CMSampleBufferRef sampleBuffer;
    CMBlockBufferRef blockBuf;
    status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                (block->p_buffer), // memoryBlock
                                                (block->i_buffer), // blockLength
                                                kCFAllocatorNull,  // blockAllocator
                                                &blockSource,      // customBlockSource
                                                0,                 // offsetToData
                                                (block->i_buffer), // dataLength
                                                0,                 // flags
                                                &blockBuf);

    const CMSampleTimingInfo timeInfo = {
        .duration = kCMTimeInvalid,
        .presentationTimeStamp = CMTimeMake(_ptsSamples, _sampleRate),
        .decodeTimeStamp = kCMTimeInvalid,
    };

    status = CMSampleBufferCreateReady(kCFAllocatorDefault,
                                       blockBuf,        // dataBuffer
                                       _fmtDesc,        // formatDescription
                                       1024,            // numSamples
                                       1,               // numSampleTimingEntries
                                       &timeInfo,       // sampleTimingArray
                                       1,               // numSampleSizeEntries
                                       &_bytesPerFrame, // sampleSizeArray
                                       &sampleBuf);

The renderer then handles this sampleBuf, which is working correctly as I have tested it for other formats. I have verified the hex dump of the p_buffer and it matches with that of the .aac file having removed the ADTS header.
Here is an output example Hex dump of p_buffer(which is being passed to CMBlockBufferCreateWithMemoryBlock): 4CFE1DE0: 21 1A 8F 20 63 E7 FF FF 11 72 A3 20 C5 E3 B7 E9 4CFE1DF0: 42 F5 3D 9A D1 77 D2 F0 9A 00 00 B2 32 53 84 8C 4CFE1E00: E8 24 ED DF 23 04 3D CF A6 51 A8 D2 8F EE B3 FB 4CFE1E10: F4 CC 17 F9 7C 8B 75 06 29 8D D6 95 98 78 9D 87 4CFE1E20: 9C B4 9D 8E 2B 6C D2 90 D7 E3 C4 37 05 97 85 C1 4CFE1E30: F7 5E 7F D8 F3 DD 20 B5 73 31 C5 EC 3D 6F AC 5E 4CFE1E40: 45 AF CC 38 0D 5B 98 F5 F9 3B 3E D7 C3 8E 1B 38 4CFE1E50: F8 F1 9A 6F 96 05 15 CE 39 D6 2B 06 60 33 8A C4 4CFE1E60: EE 4F 6B B3 C9 CF F2 BF 3F B1 96 69 B9 62 34 62 4CFE1E70: CD 41 1C 08 CF 80 5F A4 60 BD 45 36 AC 66 00 40 4CFE1E80: 42 F6 95 F4 89 8A A2 24 11 01 74 08 82 33 94 D1 4CFE1E90: 0B 24 51 4A 55 28 06 21 78 85 D4 B5 13 49 1D AA 4CFE1EA0: 44 02 32 E9 42 61 8C 59 4A 65 96 4D BC BC AE D2 4CFE1EB0: F1 D0 00 00 D4 A2 F8 87 A0 FD C8 93 87 59 A2 CB 4CFE1EC0: BE B3 AB 49 C6 37 60 2B 50 26 D3 0C 1D 29 45 81 4CFE1ED0: D9 4E 62 5E 29 8E 27 19 75 FB 62 0B 3B C0 B9 E6 4CFE1EE0: EB A0 3F B8 D5 7E 77 90 C1 E2 9C D9 4E 5B 82 ED 4CFE1EF0: CF BC 55 1C 55 1B F2 DE CC B2 13 25 CB ED F5 B5 4CFE1F00: 6E F9 EF 38 DE 8C C4 38 C2 60 CF DA F3 F2 1F 80 4CFE1F10: C5 23 0C 3E 57 31 0D 5E EB 63 58 1A 28 38 7B B2 4CFE1F20: 0B F3 5B 33 96 59 55 44 4A 09 55 73 EC 94 A0 F3 4CFE1F30: FC F4 70 F9 76 FB FF 8D AD 13 01 30 05 C0 90 01 4CFE1F40: B2 37 27 24 44 B9 F0 24 4E C5 D4 25 D6 F7 20 4D 4CFE1F50: 39 92 5D 31 71 5B 4A B2 A4 C1 59 D4 42 60 1C 00 4CFE1F60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 i_buffer: 383 CMBlockBufferIsEmpty check: 0 Block not null in wrapBuffer CMBlockBufferCreateWithMemoryBlock status: 0 I did try multiple configurations, I am shown no errors by any log however I cannot hear playback. Please help me identify what is wrong here. I have used this as a reference, which seems to be based on previous Apple documentation https://github.com/UFOooX/iOSAACStreamPlayer
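One detail that stands out in the snippet above, which may or may not be the cause: the format description is created without an AudioSpecificConfig ("magic cookie"), and with the ADTS headers stripped there is no in-band configuration left for the decoder. A sketch of passing a cookie when building the format description (the cookie bytes themselves are assumed to come from the stream's ADTS header or container):

    import CoreMedia
    import AudioToolbox

    // Sketch: include the AudioSpecificConfig ("magic cookie") when building the
    // AAC format description, since raw AAC (ADTS stripped) carries no config in-band.
    // `audioSpecificConfig` is assumed to be the ASC bytes for 44.1 kHz stereo AAC-LC.
    func makeAACFormatDescription(asbd: AudioStreamBasicDescription,
                                  audioSpecificConfig: [UInt8]) -> CMAudioFormatDescription? {
        var asbd = asbd
        var formatDescription: CMAudioFormatDescription?
        let status = audioSpecificConfig.withUnsafeBytes { cookie in
            CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                           asbd: &asbd,
                                           layoutSize: 0,
                                           layout: nil,
                                           magicCookieSize: cookie.count,
                                           magicCookie: cookie.baseAddress,
                                           extensions: nil,
                                           formatDescriptionOut: &formatDescription)
        }
        return status == noErr ? formatDescription : nil
    }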
Replies: 1 · Boosts: 0 · Views: 448 · Activity: Aug ’24
Xcode Project with Breakpoints and .app Crashing?
I recently created a project, originally in Xcode 13.3 if I am not mistaken. Then I updated it to 13.4 and then updated to macOS 15.6 as well. Previously the project worked fine, but then I noticed that if I activate breakpoints, the project stops when I run it in Xcode, although it works fine like before without breakpoints activated. Also, when I build a .app of the project, the .app crashes exactly where the breakpoints previously stopped. I am very confused about how to continue and would like to get help from anyone regarding the issue. From what I can gather from the breakpoints and crash report, it's got something to do with UUID registration? AVFoundation? I am so confused. Please help! Thanks.
Replies: 0 · Boosts: 0 · Views: 388 · Activity: Aug ’24
Camera Extension: Video Freezes When Previewing With QuickTime Player
I am developing a camera extension as described in Creating a camera extension with Core Media I/O. To ensure this wasn't some issue with my code, I reverted to the sample code added when you choose File > New > Target > Camera Extension in Xcode; in other words, I am using the example code provided by Apple. I am able to install the camera extension and see it in QuickTime Player, where I choose it as the video input. In QuickTime Player I see the white line generated by the sample code moving up and down. For some period of time, it works. But eventually, the video in QuickTime Player freezes. The thing that's really weird is that if I add some NSLog() statements at the point in the code where it returns the newly created sample:

    [self->_streamSource.stream sendSampleBuffer:sbuf
                                   discontinuity:CMIOExtensionStreamDiscontinuityFlagNone
                          hostTimeInNanoseconds:(uint64_t)(CMTimeGetSeconds(timingInfo.presentationTimeStamp) * NSEC_PER_SEC)];

the samples are still being generated and sent to the stream. But apparently QuickTime Player has decided to stop consuming them. I thought maybe setting the discontinuity parameter to CMIOExtensionStreamDiscontinuityFlagTime or CMIOExtensionStreamDiscontinuityFlagSampleDropped when the delta time since the last sample was off by a tiny bit would help, but this did not improve the situation. Finally, could this have something to do with frequently installing and uninstalling my camera extension as part of the debugging and testing process? Thanks in advance for any advice you might have!
Replies: 0 · Boosts: 0 · Views: 517 · Activity: Aug ’24