Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Each post below is followed by its reply, boost, and view counts and the date of its latest activity.

How to change PHLivePhoto EXIF metadata
I have an app that allows the user to change a photo's EXIF metadata. To do this, I request a content editing input, get the full-size image, modify its properties, create a content editing output, write the output image to the rendered content URL, and then call performChanges on the PHPhotoLibrary, creating an asset change request for that asset and setting its content editing output. This works as expected for regular photos, but Live Photos get converted to regular photos.

To address this, I'm doing something similar by changing the properties of the .photo frame of the Live Photo. I detect when the content editing input has a Live Photo, create a Live Photo editing context, and set a frame processor that, when the frame type is .photo, returns the frame's image with its properties set to the updated properties. Then I create the content editing output and save the Live Photo to that output. It modifies the Live Photo successfully, but the metadata is not updated: if you get the full-size image again, the properties are the original properties, and if you inspect the EXIF metadata with an app like Metapho, it remains unchanged. What am I doing wrong here? Thanks!

let imageURL = contentEditingInput.fullSizeImageURL!
let inputImage = CIImage(contentsOf: imageURL, options: [.applyOrientationProperty: true])!
var metadata: [AnyHashable: Any] = inputImage.properties
// Edit the metadata as desired...

let editingContext = PHLivePhotoEditingContext(livePhotoEditingInput: contentEditingInput)!
editingContext.frameProcessor = { frame, error -> CIImage? in
    // Edit only the still photo
    if frame.type == .photo {
        return frame.image.settingProperties(metadata)
    }
    return frame.image
}

let contentEditingOutput = try await withCheckedThrowingContinuation { continuation in
    let editingOutput = PHContentEditingOutput(contentEditingInput: contentEditingInput)
    editingOutput.adjustmentData = adjustmentData
    editingContext.saveLivePhoto(to: editingOutput) { success, error in
        if success {
            continuation.resume(returning: editingOutput)
        } else {
            continuation.resume(throwing: error!)
        }
    }
}

try await PHPhotoLibrary.shared().performChanges {
    let request = PHAssetChangeRequest(for: asset)
    request.contentEditingOutput = contentEditingOutput
}
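For comparison, here is a minimal sketch of the regular-photo path described in the post above. It is only an illustration, not the poster's actual code: the helper name, the updatedProperties parameter, and the adjustment-data identifier are placeholders, and error handling is omitted.

import Photos
import CoreImage

func writeUpdatedMetadata(for asset: PHAsset, updatedProperties: [AnyHashable: Any]) {
    asset.requestContentEditingInput(with: PHContentEditingInputRequestOptions()) { input, _ in
        guard let input,
              let url = input.fullSizeImageURL,
              let image = CIImage(contentsOf: url, options: [.applyOrientationProperty: true]) else { return }

        let output = PHContentEditingOutput(contentEditingInput: input)
        output.adjustmentData = PHAdjustmentData(formatIdentifier: "com.example.metadata-edit",
                                                 formatVersion: "1",
                                                 data: Data())

        // Re-render the image with the updated properties and write it to the rendered content URL.
        let edited = image.settingProperties(updatedProperties)
        try? CIContext().writeJPEGRepresentation(of: edited,
                                                 to: output.renderedContentURL,
                                                 colorSpace: edited.colorSpace ?? CGColorSpaceCreateDeviceRGB())

        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetChangeRequest(for: asset)
            request.contentEditingOutput = output
        }, completionHandler: nil)
    }
}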
0 replies · 0 boosts · 521 views · Nov ’24
anyone have issues loading Photos in Europe (Germany, Netherlands)?
We have a new photo sharing app (https://photodare.ca). We've had no issues with photos loading in North America and the Caribbean, but so far two users (Germany, Netherlands) say they can't load photos even though they've confirmed that photo permissions are enabled. I can't reproduce this in Canada. Does anyone know about other permissions we need to set up for European countries, or is anyone in GDPR countries willing to try this for us? They were on iOS 17.6.1. Thanks either way.
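Not a confirmed cause, but one quick thing worth ruling out in cases like this is limited library access, which reports .limited rather than .authorized. A small hedged check:

import Photos

// Distinguish full access from the "Select Photos..." limited mode.
let status = PHPhotoLibrary.authorizationStatus(for: .readWrite)
switch status {
case .authorized:
    print("Full library access")
case .limited:
    print("Limited access: only user-selected photos are visible")
case .notDetermined, .denied, .restricted:
    print("No usable access yet: \(status)")
@unknown default:
    break
}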
0 replies · 0 boosts · 363 views · Oct ’24
Decode HLS livestream with VideoToolbox
Hi, I'm trying to decode an HLS livestream with VideoToolbox. The CMSampleBuffer is successfully created (OSStatus == noErr). When I enqueue the CMSampleBuffer to an AVSampleBufferDisplayLayer, the view isn't displaying anything and the status of the AVSampleBufferDisplayLayer is 1 (rendering). When I use a VTDecompressionSession to convert the CMSampleBuffer to a CVPixelBuffer, the VTDecompressionOutputCallback returns -8969 (bad data error). What do I need to fix in my code? Am I parsing the data from the segment incorrectly when building the CMSampleBuffer?

let segmentData = try await downloadSegment(from: segment.url)
let (sps, pps, idr) = try parseH264FromTSSegment(tsData: segmentData)
if self.formatDescription == nil {
    self.formatDescription = try CMFormatDescription(h264ParameterSets: [sps, pps])
}
if let sampleBuffer = try createSampleBuffer(from: idr, segment: segment) {
    try self.decodeSampleBuffer(sampleBuffer)
}

func parseH264FromTSSegment(tsData: Data) throws -> (sps: Data, pps: Data, idr: Data) {
    let tsSize = 188
    var pesData = Data()
    for i in stride(from: 0, to: tsData.count, by: tsSize) {
        let tsPacket = tsData.subdata(in: i..<min(i + tsSize, tsData.count))
        guard let payload = extractPayloadFromTSPacket(tsPacket) else { continue }
        pesData.append(payload)
    }
    let nalUnits = parseNalUnits(from: pesData)
    var sps: Data?
    var pps: Data?
    var idr: Data?
    for nalUnit in nalUnits {
        guard let firstByte = nalUnit.first else { continue }
        let nalType = firstByte & 0x1F
        switch nalType {
        case 7: // SPS
            sps = nalUnit
        case 8: // PPS
            pps = nalUnit
        case 5: // IDR
            idr = nalUnit
        default:
            break
        }
        if sps != nil, pps != nil, idr != nil {
            break
        }
    }
    guard let validSPS = sps, let validPPS = pps, let validIDR = idr else {
        throw NSError()
    }
    return (validSPS, validPPS, validIDR)
}

func extractPayloadFromTSPacket(_ tsPacket: Data) -> Data? {
    let syncByte: UInt8 = 0x47
    guard tsPacket.count == 188, tsPacket[0] == syncByte else {
        return nil
    }
    let payloadStart = (tsPacket[1] & 0x40) != 0
    let adaptationFieldControl = (tsPacket[3] & 0x30) >> 4
    var payloadOffset = 4
    if adaptationFieldControl == 2 || adaptationFieldControl == 3 {
        let adaptationFieldLength = Int(tsPacket[4])
        payloadOffset += 1 + adaptationFieldLength
    }
    guard adaptationFieldControl == 1 || adaptationFieldControl == 3 else {
        return nil
    }
    let payload = tsPacket.subdata(in: payloadOffset..<tsPacket.count)
    return payloadStart ? payload : nil
}

func parseNalUnits(from h264Data: Data) -> [Data] {
    let startCode = Data([0x00, 0x00, 0x00, 0x01])
    var nalUnits: [Data] = []
    var searchRange = h264Data.startIndex..<h264Data.endIndex
    while let range = h264Data.range(of: startCode, options: [], in: searchRange) {
        let nextStart = h264Data.range(of: startCode, options: [], in: range.upperBound..<h264Data.endIndex)?.lowerBound ?? h264Data.endIndex
        let nalUnit = h264Data.subdata(in: range.upperBound..<nextStart)
        nalUnits.append(nalUnit)
        searchRange = nextStart..<h264Data.endIndex
    }
    return nalUnits
}

private func createSampleBuffer(from data: Data, segment: HLSSegment) throws -> CMSampleBuffer? {
    var blockBuffer: CMBlockBuffer?
    let alignedData = UnsafeMutableRawPointer.allocate(byteCount: data.count, alignment: MemoryLayout<UInt8>.alignment)
    data.copyBytes(to: alignedData.assumingMemoryBound(to: UInt8.self), count: data.count)
    let blockStatus = CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault,
        memoryBlock: alignedData,
        blockLength: data.count,
        blockAllocator: nil,
        customBlockSource: nil,
        offsetToData: 0,
        dataLength: data.count,
        flags: 0,
        blockBufferOut: &blockBuffer
    )
    guard blockStatus == kCMBlockBufferNoErr, let validBlockBuffer = blockBuffer else {
        alignedData.deallocate()
        throw NSError()
    }
    var sampleBuffer: CMSampleBuffer?
    var timing = [calculateTiming(for: segment)]
    var sampleSizes = [data.count]
    let sampleStatus = CMSampleBufferCreate(
        allocator: kCFAllocatorDefault,
        dataBuffer: validBlockBuffer,
        dataReady: true,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: formatDescription,
        sampleCount: 1,
        sampleTimingEntryCount: 1,
        sampleTimingArray: &timing,
        sampleSizeEntryCount: sampleSizes.count,
        sampleSizeArray: &sampleSizes,
        sampleBufferOut: &sampleBuffer
    )
    guard sampleStatus == noErr else {
        alignedData.deallocate()
        throw NSError()
    }
    return sampleBuffer
}

private func decodeSampleBuffer(_ sampleBuffer: CMSampleBuffer) throws {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        throw NSError()
    }
    if decompressionSession == nil {
        try setupDecompressionSession(formatDescription: formatDescription)
    }
    guard let session = decompressionSession else {
        throw NSError()
    }
    let flags: VTDecodeFrameFlags = [._EnableAsynchronousDecompression, ._EnableTemporalProcessing]
    var flagOut = VTDecodeInfoFlags()
    let status = VTDecompressionSessionDecodeFrame(
        session,
        sampleBuffer: sampleBuffer,
        flags: flags,
        frameRefcon: nil,
        infoFlagsOut: nil)
    if status != noErr {
        throw NSError()
    }
}

private func setupDecompressionSession(formatDescription: CMFormatDescription) throws {
    self.formatDescription = formatDescription
    if let session = decompressionSession {
        VTDecompressionSessionInvalidate(session)
        self.decompressionSession = nil
    }
    var decompressionSession: VTDecompressionSession?
    var callback = VTDecompressionOutputCallbackRecord(
        decompressionOutputCallback: decompressionOutputCallback,
        decompressionOutputRefCon: Unmanaged.passUnretained(self).toOpaque())
    let status = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: formatDescription,
        decoderSpecification: nil,
        imageBufferAttributes: nil,
        outputCallback: &callback,
        decompressionSessionOut: &decompressionSession
    )
    if status != noErr {
        throw NSError()
    }
    self.decompressionSession = decompressionSession
}

let decompressionOutputCallback: VTDecompressionOutputCallback = { (
    decompressionOutputRefCon,
    sourceFrameRefCon,
    status,
    infoFlags,
    imageBuffer,
    presentationTimeStamp,
    presentationDuration
) in
    guard status == noErr else {
        print("Callback: \(status)")
        return
    }
    if let imageBuffer = imageBuffer {
    }
}
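One common cause of a bad-data decode error when the format description was created from H.264 parameter sets is feeding the decoder Annex B or raw NAL units instead of AVCC-style samples, i.e. each NAL unit prefixed with its length rather than a start code. This is only a hedged sketch of that conversion, assuming a 4-byte NAL unit length header, not a confirmed diagnosis of the code above:

import Foundation

// Prefix a single raw NAL unit (no start code) with a 4-byte big-endian length,
// which is the layout a format description with a 4-byte NAL header length expects.
func lengthPrefixed(_ nalUnit: Data) -> Data {
    var length = UInt32(nalUnit.count).bigEndian
    var avccData = Data(bytes: &length, count: MemoryLayout<UInt32>.size)
    avccData.append(nalUnit)
    return avccData
}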
1 reply · 0 boosts · 641 views · Dec ’24
iOS 18.2 crash on CGImageDestinationFinalize
My app reports a lot of crashes from iOS 18.2 users. I have been able to narrow down the issue to this line of code:

CGImageDestinationFinalize(imageDestination)

The error is Thread 93: EXC_BAD_ACCESS (code=1, address=0x146318000), but I have no idea why this suddenly started to crash. Here is the code of the function:

private func estimateSizeUsingThumbnailMethod(fromImageURL url: URL, imageSettings: ImageSettings) -> (Int, Int) {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions),
          let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let imageWidth = imageProperties[kCGImagePropertyPixelWidth] as? CGFloat,
          let imageHeight = imageProperties[kCGImagePropertyPixelHeight] as? CGFloat else {
        return (0, 0)
    }
    let maxImageSize = max(imageWidth, imageHeight)
    let thumbMaxSize = min(2400, maxImageSize) // Use original size if possible, but not if larger than 2400; in that case we'll extrapolate from the thumbnail

    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: thumbMaxSize as CFNumber,
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        DLog("CGImage thumb creation error")
        return (0, 0)
    }

    let data = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(data, UTType.jpeg.identifier as CFString, 1, nil) else {
        DLog("CGImage destination creation error")
        return (0, 0)
    }

    let destinationProperties = [
        kCGImageDestinationLossyCompressionQuality: imageSettings.quality.compressionRatio() // Set the JPEG compression ratio
    ] as CFDictionary

    CGImageDestinationAddImage(imageDestination, cgImage, destinationProperties)
    CGImageDestinationFinalize(imageDestination) // <----- CRASHES HERE with EXC_BAD_ACCESS
    ...
}

So far, I'm stuck. Any idea that could help would be greatly appreciated, as I'm scared that this crash will propagate in the official release of 18.2.
1 reply · 0 boosts · 774 views · Nov ’24
Issues connecting a DSLR camera using the ImageCaptureCore framework
Hi Apple engineers, my app uses the ImageCaptureCore framework to connect to DSLR cameras. Before iOS 18 this worked, but after I upgraded my iPhone and iPad the app can no longer connect to a DSLR camera. If I open Settings -> Privacy & Security -> Files and Folders, my app isn't listed there, yet I'm certain it worked before iOS 18. Other developers have the same problem: https://forums.developer.apple.com/forums/thread/756960 and https://developer.apple.com/forums/thread/765768. I also found a way to reproduce the problem on iOS 18: reset all settings. Can you help me with this problem, or tell me how to use the API properly? Looking forward to your reply. Thank you very much.
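For context, the minimal ImageCaptureCore discovery setup in question is roughly as follows. This is a generic sketch (the class name and log strings are placeholders), not a fix for the iOS 18 permission issue described above.

import ImageCaptureCore

final class CameraBrowser: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start()   // Connected cameras should trigger the delegate callbacks below.
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        print("Found device: \(device.name ?? "unknown")")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreComing: Bool) {
        print("Removed device: \(device.name ?? "unknown")")
    }
}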
2 replies · 0 boosts · 446 views · Oct ’24
Debug MediaExtension plugin in system executable?
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation. My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e. the process hosting my plugin instance and managing the MESampleBuffer protocol calls, for example). Is there a way to configure Xcode to attach interactively to this background process for debugging? Right now I have to use Console + print, which is not fun or productive. Does Apple have a working example of a MediaExtension anywhere? This is an exciting API that is very under-documented. I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused. Any assistance is highly appreciated!
0 replies · 0 boosts · 493 views · Dec ’24
AirPlay and Screen Mirroring
I cannot mirror or extend my screen from a Mac mini M2 to a 10th-generation iPad. Whenever I click "mirror or extend screen", my Mac's external display shows "no signal", refreshes, and comes back, while my iPad locks and the mirroring or extending fails. I can, however, mirror my iPad screen to the Mac mini M2. Everything was working earlier; suddenly it isn't.
1 reply · 0 boosts · 650 views · Oct ’24
Progress tracking does not work after the download has been paused and resumed
Hello, I recently started integrating HLS downloads into my application using AVAssetDownloadTask and AVAssetDownloadConfiguration. I took an example from the documentation as a basis, with only one small difference: the minimum target for my application is iOS 16, so I replaced urlSession(_:assetDownloadTask:willDownloadTo:) with urlSession(_:assetDownloadTask:didFinishDownloadingTo:). I encountered the following issue: after pausing a download and resuming it later, progress is no longer reported as expected. Could you please help me with this? What are the right approaches to implementing pause and progress tracking?

Some details:

I used devices with iOS 16.0.2 and 17.6.1 for testing. The example contained no code that pauses the download and resumes it, so I used suspend and resume for this.

I have also tried to track download progress using two different approaches:

1. Using task.progress.observe(\.fractionCompleted) { ... }, which was presented in the example. In this scenario, after a pause, the observation callback is only called once more, when the download has completed, even though data is being successfully downloaded over the network.

2. Using urlSession(_:assetDownloadTask:didLoad:totalTimeRangesLoaded:timeRangeExpectedToLoad:) and calculating progress as totalTimeRangesLoaded.reduce(0.0) { $0 + CMTimeGetSeconds($1.timeRangeValue.duration) / CMTimeGetSeconds(timeRangeExpectedToLoad.duration) }. In this scenario, I noticed that the calculated value does not always increase; there are sometimes outliers. Example of logs: 68%, 69%, 70%, 72%, 63%, 65%, 66%, 69%, 70%, 71%, 72%. These fluctuations are most easily reproduced when I resume the download after a pause, but they sometimes occur spontaneously. It's worth mentioning that this delegate method is marked as deprecated, perhaps for this reason.

In both cases the download itself succeeds; the problem is with progress reporting only. The full version of the code can be found here.
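As a generic workaround for the fluctuating values in the second approach, the reported progress can be clamped so it never decreases. This is only a hedged sketch of that idea (the type and method names are placeholders), not a fix for the underlying reporting issue:

import AVFoundation

// Reports download progress that never goes backwards, even if the raw calculation fluctuates.
final class MonotonicProgressReporter {
    private var lastReported: Double = 0

    func update(loadedTimeRanges: [NSValue], expectedTimeRange: CMTimeRange) -> Double {
        let loadedSeconds = loadedTimeRanges.reduce(0.0) { $0 + CMTimeGetSeconds($1.timeRangeValue.duration) }
        let fraction = loadedSeconds / CMTimeGetSeconds(expectedTimeRange.duration)
        lastReported = max(lastReported, min(fraction, 1.0))
        return lastReported
    }
}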
0 replies · 0 boosts · 321 views · Sep ’24
How to Load Stereoscopic Video Using AVFoundation?
I'm currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I'm using the AVFoundation framework, specifically AVURLAsset, but I'm having trouble determining how to correctly load and handle stereoscopic videos. I would like to know how to do this. Any guidance or code snippets would be greatly appreciated; I'm not understanding the Apple developer videos very well... Thank you in advance for your help! Best, Lau
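A hedged starting point for detection, assuming iOS 17 or later and an MV-HEVC asset: AVFoundation exposes a media characteristic for stereo multiview video, so you can at least check whether an asset carries a stereoscopic track before deciding how to play it. The function name is a placeholder.

import AVFoundation

// Returns true if the asset has at least one track carrying stereo multiview (spatial) video.
func containsStereoVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let stereoTracks = try await asset.loadTracks(withMediaCharacteristic: .containsStereoMultiviewVideo)
    return !stereoTracks.isEmpty
}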
1 reply · 0 boosts · 550 views · Oct ’24
HLS CMAF/fMP4 CENC CBCS pattern encryption
Hello, I'm writing a program to create CMAF-compliant HLS files with encryption. I have a copy of ISO/IEC 23001-7:2023 and am attempting to follow the spec. I am using the 1:9 pattern encryption with CBCS, so for every 16 bytes of encrypted NAL unit data (of types 1 and 5), there are 144 bytes of clear data.

When testing my output in Safari with 'identity' keys (see "Quickly Diagnosing Content Key and IV Issues"), Safari requests the identity key from my test server and the first few bytes of the CMAF renditions, but it will not play, and the console gives no clues to the error. I am setting the subsample bytes-of-clear/protected data in the senc boxes.

What I'm not sure of is whether HLS/Safari/iOS acknowledges the senc/saiz/saio boxes of the MP4. Other third-party packagers, such as Bento4, suggest that they do not: "those clients ignore the explicit encryption layout metadata found in saio/saiz boxes, and instead rely purely on the video slice header size to determine the portions of the sample that is encrypted". So now I'm fairly sure I need to determine the video slice header size and apply the protected blocks from that point on.

My question is, is that all there is to it? And is there a better way to debug my output? mediastreamvalidator will only work against unencrypted variants (which I'm outputting okay). Thanks in advance!
0 replies · 0 boosts · 605 views · Dec ’24
AirPlay fails when app is also installed on the receiver
We have a universal iOS/tvOS app that also supports iOS App on Mac. In our AVPlayer-based video player we support AirPlay with AVRouteDetector and AVRoutePickerView. We play HLS streams. When we try to AirPlay from an iOS device to an Apple TV or a Mac that has our app installed, it doesn't work. The receiver is marked as active in the route picker UI but the video doesn't show up on the receiver and playback stops. When our app isn't installed on the receiver device, everything works as expected. Has anyone encountered the same issue? Any solutions available for this?
0 replies · 0 boosts · 563 views · Nov ’24
AVAudioUnitTimePitch: speeding up introduces artifacts
For an upcoming update of one of my apps, I'm facing an issue: the .rate parameter of an AVAudioUnitTimePitch allows me to slow down an audio track without any issues. Setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch. However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts ("fluttering") in the output, which is not so nice (even at .overlap = 32). Intuitively, I'd have thought that speeding up the file should produce fewer artifacts than slowing it down? I've tried different sample rates (44.1 kHz and 48 kHz) with the same result. Grateful for any input on this 🙏
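For reference, a minimal engine graph for experimenting with these parameters might look like the following. It is a generic sketch (the file name is a placeholder), not the poster's setup:

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 1.15    // values > 1 speed playback up
timePitch.overlap = 32   // higher overlap costs more CPU but can reduce artifacts

engine.attach(player)
engine.attach(timePitch)

do {
    let file = try AVAudioFile(forReading: URL(fileURLWithPath: "example.m4a"))
    engine.connect(player, to: timePitch, format: file.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
} catch {
    print("Audio setup failed: \(error)")
}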
0 replies · 0 boosts · 404 views · Nov ’24
Ford Puma Sync 3 problems with iOS 18
Good afternoon, since I installed iOS 18 on my iPhone 15 Pro I have had problems using Apple CarPlay with my Ford Puma with Sync 3. More specifically, problems with audio commands, selecting audio tracks, Bluetooth, etc. Are you aware of this? Thanks a lot. Regards, Alberto
0 replies · 0 boosts · 383 views · Oct ’24
aumi AUv3 with AVAudioEngine connectMIDI to multiple nodes
Hi! I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know whether it is possible to route the MIDI to different outputs inside the render process in the AUv3. I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all the AudioNodes connected to it. I can't seem to find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
3 replies · 0 boosts · 582 views · Dec ’24
iOS 18 Voicemail feature
I am facing an issue with the voicemail feature. When someone calls me, the call goes to voicemail only if I tap the voicemail icon. If I don't respond to the incoming call at all, the caller isn't given the option to record a voicemail. I have tried switching voicemail off and on. It works on other devices (iPhone 14 and 15) belonging to my family members, just not for me. How can I resolve this?
1 reply · 0 boosts · 630 views · Oct ’24
MPRemoteCommand play conflicting with .playPause gesture
We develop a video playback app on Apple TV which has the two following features:

Its content browsing screen has a gesture recognizer installed for presses on the Play/Pause Siri Remote button in order to directly launch playback. The gesture recognizer is attached to the content browsing UIViewController's view.

It presents its own custom playback UI with an AVPlayerLayer for the video and supports MPNowPlayingSession in order to publish current playback information and respond to remote commands. It also supports switching between fullscreen and Picture in Picture playback.

Both features work fine: playback is launched when pressing the Play/Pause Siri Remote button and, during playback, the playback info is properly advertised on other devices and remote commands are triggered as expected.

However, when pressing the Play/Pause Siri Remote button while the video is playing in PiP, the "pause" remote command is sometimes triggered instead of the .playPause gesture recognizer. The issue may not occur the first time, but it does for subsequent Play/Pause presses. Navigating a bit in the app UI seems to help prevent the issue from occurring.

Finally, the issue only occurs if the video is playing. If the video is paused, the Play/Pause Siri Remote button gesture is always recognized instead of the remote command.

Please note that, before using MPNowPlayingSession (and the corresponding MPRemoteCommandCenter), the app used the default MPRemoteCommandCenter to support remote commands and the issue did not occur. We can't reproduce this issue with the Apple TV app, so there's probably something we are not doing right. Does anyone have any clue?
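For reference, the press gesture setup described in the first feature is typically along these lines. This is a generic sketch (the class name and selector are placeholders), not the app's actual code:

import UIKit

final class BrowsingViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Recognize presses on the Play/Pause button of the Siri Remote.
        let playPause = UITapGestureRecognizer(target: self, action: #selector(handlePlayPause))
        playPause.allowedPressTypes = [NSNumber(value: UIPress.PressType.playPause.rawValue)]
        view.addGestureRecognizer(playPause)
    }

    @objc private func handlePlayPause() {
        // Launch playback directly from the browsing screen.
    }
}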
0 replies · 0 boosts · 589 views · Nov ’24
FxPlug 4.3 & Window
1. In the FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When using NSWindow, you cannot set [Window setLevel:NSFloatingWindowLevel].
3. How can I keep the window in front of Final Cut Pro without affecting the normal use of Final Cut Pro?
1 reply · 0 boosts · 413 views · Oct ’24
Why does recommendedVideoSettings sometimes have no recommended settings?
I am talking about AVCaptureVideoDataOutput.recommendedVideoSettings. I found that it sometimes returns nil. Here are my test results:

HEVC .mov with activeColorSpace sRGB: 60 FPS -> ok, 120 FPS -> ok
HEVC .mov with activeColorSpace displayP3_HLG: 60 FPS -> nil, 120 FPS -> nil
H.264 .mov: 30 FPS -> ok, 60 FPS -> nil, 120 FPS -> nil

So, if no recommended settings are given and the behavior isn't documented, how is a developer supposed to use this API?
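A common defensive pattern is simply to fall back to manual settings when the recommendation is nil. This is a hedged sketch (the fallback dimensions and codec are arbitrary examples), not an explanation of why nil is returned in the cases above:

import AVFoundation

func videoSettings(for output: AVCaptureVideoDataOutput) -> [String: Any] {
    // Prefer the capture output's recommendation when one is available.
    if let recommended = output.recommendedVideoSettings(forVideoCodecType: .hevc,
                                                         assetWriterOutputFileType: .mov) {
        return recommended
    }
    // Otherwise fall back to explicit settings.
    return [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 1080
    ]
}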
0 replies · 0 boosts · 378 views · Nov ’24