Hello Devs!
Does anyone have an idea whether it is feasible to override the native camera on Apple devices?
So if a user has an app called "xyz" installed, when they open the native camera and a QR code is detected, we display a popup asking if they want to continue with "xyz".
Thanks
I’m experiencing an unusual audio issue with AirPods on macOS Sequoia while developing VoIP applications like Zoom and FaceTime.
When AirPods are connected, the other party’s voice sometimes sounds unnaturally stretched (approximately twice as long).
This problem can be temporarily fixed by switching the sound output settings from AirPods to speakers and then back to AirPods.
From our analysis, the issue appears to be related to the sample rate provided by AudioObjectGetPropertyData.
Here’s what we’ve observed:
When the issue occurs, the AudioStreamBasicDescription.sampleRate for AirPods is reported as 48000.
Under normal conditions, it’s reported as 24000.
It seems like the system is mistakenly returning a sample rate that doesn’t match the AirPods’ actual settings, perhaps defaulting to a system speaker value.
Once the output setting is toggled, the correct sampleRate (24000) is retrieved.
This discrepancy causes our application to transmit the audio stream at 48000, leading to the distorted playback.
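For illustration, here is a minimal sketch of the kind of query involved (it uses kAudioDevicePropertyNominalSampleRate as a stand-in for the stream-format query we actually perform, and assumes the AirPods' AudioObjectID is already known):
import CoreAudio

// Minimal sketch, assuming deviceID was obtained elsewhere (e.g. from
// kAudioHardwarePropertyDefaultOutputDevice). Reads the device's nominal sample
// rate, which is where we see 48000 instead of the expected 24000 for AirPods.
func nominalSampleRate(for deviceID: AudioObjectID) -> Float64? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var sampleRate: Float64 = 0
    var size = UInt32(MemoryLayout<Float64>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &sampleRate)
    return status == noErr ? sampleRate : nil
}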
Has anyone encountered a similar issue or knows how to resolve it?
My iPhone 15 Plus suddenly turns black and a loading icon keeps spinning. Then it turns back on and I can use it again; it only lasts a few seconds.
I have updated to iOS 18.1 beta; could this be the issue? Is my phone broken?
I have tried restarting my phone.
Topic:
Media Technologies
SubTopic:
General
Hi,
I have an iOS app and we are using FairPlay DRM to play videos. In the iOS app we allow offline download of the videos, and hence we request a persistent FairPlay license. In the iOS app everything is working fine.
Now we have taken the same app and built it for macOS with Mac Catalyst. In the Mac Catalyst app we are not able to play the video and get error code -42650.
We are able to get the persistent license from the server, but when we play the video with that license we get the error. Below are the logs:
2024-12-06 22:05:48.911266+0530 0x4dffe2 Default 0x0 85505 0 teachonline: (MediaToolbox) [com.apple.coremedia:] <<<< FigPKDKeyManager >>>> keyManager_processOfflineKeyInternal: 0x600000322000 160D4519-C60B-4FD0-B69A-20B2A4597017 created decrypt context:0x0 with offline key; updated offline key:0x0 err:-42650
2024-12-06 22:05:48.911369+0530 0x4dffe2 Default 0x0 85505 0 teachonline: (MediaToolbox) [com.apple.coremedia:player] <<<< FigStreamPlayer >>>> fpfs_ensureDecryptorHasStarted: [0x7fc44e4dc520|P/NW] <0x7fc44fa44000|I/SRA.01>: track 1 latching decryptorFailure -42650
85505 0 teachonline: (MediaToolbox) [com.apple.coremedia:player] <<<< FigStreamPlayer >>>> fpfs_StopPlayingItem: [0x7fc44e4dc520|P/NW] <0x7fc44fa44000|I/SRA.01>: Pausing, err=Error Domain=CoreMediaErrorDomain Code=-42650 "(null)"
I have copied only the lines which have errors. You can download the full logs from https://drive.google.com/file/d/1feb9pKZERUr--PMt6m-6IrO_mDvoFbjO/view?usp=sharing
Can you please help me fix the issue?
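For context, the persistent key is created in our AVContentKeySessionDelegate, roughly like the simplified sketch below (the certificate, key identifier, server round trip, and storage call are placeholders, not our real code):
import AVFoundation

// Simplified sketch only: appCertificate, keyIdentifier, requestCKC and store are
// placeholders standing in for our certificate, asset key ID, server round trip and storage.
final class KeyDelegate: NSObject, AVContentKeySessionDelegate {

    func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest) {
        // Ask for a persistable request so the key can be stored for offline playback.
        try? keyRequest.respondByRequestingPersistableContentKeyRequestAndReturnError()
    }

    func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVPersistableContentKeyRequest) {
        keyRequest.makeStreamingContentKeyRequestData(forApp: appCertificate,
                                                      contentIdentifier: keyIdentifier,
                                                      options: nil) { spc, error in
            guard let spc = spc, let ckc = self.requestCKC(spc) else {
                keyRequest.processContentKeyResponseError(error ?? NSError(domain: "KeyDelegate", code: -1))
                return
            }
            do {
                // Convert the CKC into a persistable key, store it, then hand it back to the session.
                let persistentKey = try keyRequest.persistableContentKey(fromKeyVendorResponse: ckc, options: nil)
                self.store(persistentKey)
                keyRequest.processContentKeyResponse(
                    AVContentKeyResponse(fairPlayStreamingKeyResponseData: persistentKey))
            } catch {
                keyRequest.processContentKeyResponseError(error)
            }
        }
    }

    // Placeholders, not our real implementation.
    private var appCertificate: Data { Data() }
    private var keyIdentifier: Data { Data() }
    private func requestCKC(_ spc: Data) -> Data? { nil }
    private func store(_ key: Data) {}
}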
I've seen the Multiview feature on tvOS that displays a small grid icon when available. However, I can only find this functionality in visionOS, via AVMultiviewManager. Does this feature go by a different name on tvOS?
Relevant Links:
https://www.reddit.com/r/appletv/comments/12opy5f/handson_with_the_new_multiview_split_screen/
https://www.pocket-lint.com/how-to-use-multiview-apple-tv/#:~:text=You'll%20see%20a%20grid,running%20at%20the%20same%20time.
I am creating an AVComposition and using it with an AVPlayer. The player works fine and doesn't consume much memory when I do not set playerItem.videoComposition. Here is the code that works without excessive memory usage:
func configurePlayer(composition: AVMutableComposition, videoComposition: AVVideoComposition) {
    player.pause()
    player.replaceCurrentItem(with: nil)
    let playerItem = AVPlayerItem(asset: composition)
    player.replaceCurrentItem(with: playerItem)
    player.play()
}
However, when I add playerItem.videoComposition = videoComposition, as in the code below, the memory usage becomes excessive:
func configurePlayer(composition: AVMutableComposition, videoComposition: AVVideoComposition) {
    player.pause()
    player.replaceCurrentItem(with: nil)
    let playerItem = AVPlayerItem(asset: composition)
    playerItem.videoComposition = videoComposition
    player.replaceCurrentItem(with: playerItem)
    player.play()
}
Issue Details:
The memory usage seems to depend on the number of video tracks in the composition, rather than their duration. For instance, two videos of 30 minutes each consume less memory than 40 videos of just 2 seconds each.
The excessive memory usage is showing up in the Other Processes section of Xcode's debug panel.
For reference, 42 videos, each less than 30 seconds, are using around 1.4 GB of memory.
I'm struggling to understand why adding videoComposition causes such high memory consumption, especially since it happens even when no layer instructions are applied. Any insights on how to address this would be greatly appreciated.
I initially thought the problem might be due to having too many layer instructions in the video composition, but this doesn't seem to be the case. Even when I set a videoComposition without any layer instructions, the memory consumption remains high.
I explored several methods to trigger a 35mm camera connected via USB:
1- ICCameraDevice: Unable to make it work with Canon cameras (details).
2- Canon's EDSDK: Works but is complex to implement.
3- gPhoto2 (command-line): Simple to use but requires gPhoto2 to be installed.
In your opinion, what is the most efficient way to trigger and download images via USB from Canon cameras?
I'm building a camera app using SwiftUI and UIKit (with UIViewControllerRepresentable). My app can already capture photos, but I also want to implement an important feature: applying my custom image filter to the image, both for the live preview in the camera and when the image is saved to the photo library (like Photographic Styles in the default Apple Camera app).
My image filter must be pretty advanced because I'm a photographer and I'm trying to achieve the same colours as I have with my custom image preset in Lightroom. I want to control image parameters such as the basics (exposure, contrast, shadows, etc.), tone curves for each channel (Red, Green and Blue channels separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple and Magenta), colour grading and more.
Currently I'm struggling with the implementation of this. I tried to create a custom image filter using Metal (it works with saturation), but I'm not sure if it is the best approach. I need help and recommendations on how developers implement this complex thing in their apps (what technologies I should use, etc.).
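One common approach (offered only as a sketch, not necessarily the best fit) is to run each camera frame through a Core Image pipeline, where the built-in filters already cover many Lightroom-style controls and fully custom maths can be added later as Metal-backed CIKernels or a CIColorCube lookup table:
import CoreImage
import CoreImage.CIFilterBuiltins

// A minimal sketch, assuming `input` is the CIImage for one camera frame (e.g. built from the
// CVPixelBuffer in captureOutput(_:didOutput:from:)). The filter values are arbitrary examples.
func applyCustomLook(to input: CIImage) -> CIImage {
    // Basic adjustments: contrast and saturation.
    let basic = CIFilter.colorControls()
    basic.inputImage = input
    basic.contrast = 1.05
    basic.saturation = 1.1

    // Exposure.
    let exposure = CIFilter.exposureAdjust()
    exposure.inputImage = basic.outputImage
    exposure.ev = 0.3

    // A luminance tone curve; per-channel curves, HSL and colour grading would need
    // separate kernels or a CIColorCube lookup table generated from the Lightroom preset.
    let curve = CIFilter.toneCurve()
    curve.inputImage = exposure.outputImage
    curve.point0 = CGPoint(x: 0.00, y: 0.02)
    curve.point1 = CGPoint(x: 0.25, y: 0.22)
    curve.point2 = CGPoint(x: 0.50, y: 0.50)
    curve.point3 = CGPoint(x: 0.75, y: 0.78)
    curve.point4 = CGPoint(x: 1.00, y: 0.98)

    return curve.outputImage ?? input
}
Rendering the result into both the preview layer and the saved photo through a single Metal-backed CIContext keeps the live preview and the exported image consistent.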
Hello!
I am having trouble setting start times for songs when using the ApplicationMusicPlayer.
When I initialize a new MusicPlayer.Queue.Entry using the following constructor, I am seeing strange results:
init(
    _ playableMusicItem: PlayableMusicItem,
    startTime: TimeInterval? = nil,
    endTime: TimeInterval? = nil
)
It appears that any value I provide for startTime is also applied to the endTime. For example:
MusicPlayer.Queue.Entry(playable, startTime: TimeInterval(30), endTime: TimeInterval(183))
provides the following console output:
MusicPlayer.Queue.Entry(id: "3D6A3DA3-595E-4657-8DBA-DDD245BBB7EF", transientItem: PlayableMusicItem, startTime: 30.0, endTime: 30.0)
I have also tried setting the endTime to nil with the same result. Does anyone have any experience setting start times for songs using the MusicKit ApplicationMusicPlayer?
Any feedback is greatly appreciated!
Hi everyone,
I am working on a 3D reconstruction project.
Recently I have been able to retrieve the intrinsics from the two cameras on the back of my iPhone.
One consideration is that I want this app to run even if there is no LiDAR, as long as there are at least two cameras on the back. If there is a LiDAR, that is something I plan to work on later in the course of the project.
I am using an AVCaptureSession with two AVCaptureDevices:
builtInWideAngleCamera
builtInUltraWideCamera
The intrinsic matrices seem to be correct. However, when I retrieve the extrinsics, e.g., builtInWideAngleCamera w.r.t. builtInUltraWideCamera, the matrix I get looks like this:
Extrinsic Matrix (Ultra-Wide to Wide):
[0.9999968, 0.0008149305, -0.0023960583, 0.0]
[-0.0008256607, 0.9999896, -0.0044807075, 0.0]
[0.002392382, 0.0044826716, 0.99998707, 0.0]
[-14.277955, -8.135408e-10, -0.3359985, 0.0]
The extrinsic matrix is of the form [R | t]. The rotational part seems to be correct, but the translational vector is ALL ZEROS, which would suggest that the cameras are physically overlapped; also, the last element is not 1 (homogeneous coordinates).
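For reference, I retrieve the matrix roughly like this (a simplified sketch; the devices are assumed to come from an AVCaptureDevice.DiscoverySession):
import AVFoundation
import simd

// Simplified sketch. The Data returned by extrinsicMatrix(from:to:) wraps a matrix_float4x3,
// i.e. a 3x4 [R | t] matrix stored as simd columns.
func extrinsics(from fromDevice: AVCaptureDevice, to toDevice: AVCaptureDevice) -> matrix_float4x3? {
    guard let data = AVCaptureDevice.extrinsicMatrix(from: fromDevice, to: toDevice) else { return nil }
    return data.withUnsafeBytes { $0.load(as: matrix_float4x3.self) }
}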
Has anyone encountered this 'issue' before?
Is there a flaw in my reasoning or something I might be missing?
Any comments are very much appreciated.
I have a photo editing app which uses a simple Metal render to display CIFilter output images. It works just fine in Swift 5, but in Swift 6 it crashes on starting the Metal command buffer, with an error in the queue com.Metal.CompletionQueueDispatch (serial).
The crash occurs before I can debug it. I changed the command buffer to report errors by setting MTLCommandBufferDescriptor.errorOptions = .encoderExecutionStatus.
No luck with getting insight into the source of the crash.
Likewise, the error is happening before any of the usual Metal debug tools are enabled.
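For reference, the error-reporting setup I added looks roughly like this (a simplified sketch, not my exact render code):
import Metal

// Minimal sketch, assuming an existing MTLCommandQueue. With .encoderExecutionStatus a failed
// buffer's error carries per-encoder execution info in its userInfo describing which encoder faulted.
func makeDebugCommandBuffer(queue: MTLCommandQueue) -> MTLCommandBuffer? {
    let descriptor = MTLCommandBufferDescriptor()
    descriptor.errorOptions = .encoderExecutionStatus
    let buffer = queue.makeCommandBuffer(descriptor: descriptor)
    buffer?.addCompletedHandler { completed in
        if let error = completed.error {
            print("Command buffer failed:", error)
        }
    }
    return buffer
}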
The Metal render works just fine in Swift 5, and also works fine with almost all of the Swift compiler's upcoming feature flags set to Yes. (The "Default Internal Imports" flag is still No; the number of compile errors with that setting is absolutely scary, but that's another topic.)
Do you have any suggestions on debugging, or ideas on why the Metal library is crashing in Swift 6?
Everything is current release versions and hardware.
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example, if the audio's frame position is >= some point and less than some other point, trigger some code.
So I'm looking at - (void)installTapOnBus:(AVAudioNodeBus)bus bufferSize:(AVAudioFrameCount)bufferSize format:(AVAudioFormat * __nullable)format block:(AVAudioNodeTapBlock)tapBlock;
Now I have the frame positions calculated (predetermined before the audio is scheduled; I have already made all the necessary computations). So I just need to fire code at certain points during playback:
[playerNode installTapOnBus:bus
bufferSize:bufferSize
format:format
block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
//Inspect current audio here and fire...
}];
[playerNode scheduleBuffer:fullbuffer
atTime:startTime
options:0
completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType)
{
// some code is here, not important to this question.
}];
The problem I'm having is figuring out at what point in the full buffer I'm at within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not my entire audio buffer I scheduled).
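One possibility I'm considering (sketched below; the frame range and callback are placeholders) is converting the tap's node time into the player's own timeline with playerTime(forNodeTime:):
import AVFoundation

// A sketch, not my scheduling code: playerTime(forNodeTime:) maps the node time handed to the tap
// onto the player's timeline, which starts at 0 when the node begins playing, so sampleTime can be
// compared against the precomputed trigger frames.
func installFrameTriggerTap(on playerNode: AVAudioPlayerNode,
                            triggerRange: Range<AVAudioFramePosition>,
                            onTrigger: @escaping () -> Void) {
    playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { _, when in
        guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
        if triggerRange.contains(playerTime.sampleTime) {
            onTrigger()
        }
    }
}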
Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results but I'd rather avoid using a timer if possible and use sample time.
Topic:
Media Technologies
SubTopic:
Audio
Tags:
AVAudioNode
AVAudioSession
AVAudioEngine
AVFoundation
I'd like to find out: Can backgrounded apps record audio?
In the past as I recall, I found that backgrounded apps were pretty restricted and couldn't do much of anything.
However I'm not familiar with the current state of affairs.
With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone?
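My current understanding (which I'd like confirmed) is that it also requires the audio background mode plus an active record-capable session; a minimal sketch of that setup, with my assumptions noted in the comments:
import AVFoundation

// Minimal sketch, assuming the target's Info.plist also declares "audio" under UIBackgroundModes;
// without that entry, recording stops when the app is backgrounded.
func configureForBackgroundRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)
    // An AVAudioRecorder or AVAudioEngine started while this session is active can then keep
    // recording in the background, subject to the user's microphone permission.
}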
Thanks.
Topic:
Media Technologies
SubTopic:
Audio
Description
As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes ‘glitchy’ audio and increased latency.
Additionally, when this issue occurs AVAudioSession.setPreferredIOBufferDuration continues to return ‘true’ and no error is produced.
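For reference, the request/verify pattern looks roughly like this sketch (the category and mode are assumptions about the example project; the durations are the values from the console logs in the steps below):
import AVFoundation

// Sketch only; assumptions noted above.
func configureLowLatencySession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement)
    try session.setPreferredIOBufferDuration(0.005805)   // requested duration
    try session.setActive(true)
    // With Sound Recognition or Vocal Shortcuts enabled this reports ~0.023220 instead.
    print("actual IO buffer duration:", session.ioBufferDuration)
}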
Steps to Reproduce:
Enable Vocal Shortcuts on a device running iOS 18. Enable at least one shortcut (e.g. Control Center).
Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug)
Build and install the example project
Attach a headset and launch the application
Observe console logs showing
a requested buffer size of 0.005805 (256 samples @ 48k)
an actual buffer size of 0.023220 (1104 samples @48k - this is regularly the resulting buffer size in all of our tests)
Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback)
Launch the application
Observe
Same result from step 4
Mismatched hardware buffer size of 1104 and recorded frame count of 1024
Mismatched playbackCount and recordCount
Quit the app and disable vocal shortcuts
Launch the app
Observe IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior)
Expected results:
Requested IOBufferDuration is respected or AVAudioSession returns false or error is produced
Input and output buffer sizes match
Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
Hello
Our application backs up the user's photos to a backend.
When retrieving the asset data from the Photo Library, we set the flag isNetworkAccessAllowed to true to get the assets that might be optimized in iCloud.
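For context, the request looks roughly like the following sketch (simplified; in the real code the resulting data is streamed to our backend):
import Photos

// Simplified sketch of the per-asset request.
func requestOriginalData(for asset: PHAsset, completion: @escaping (Data?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true   // allow fetching iCloud-optimized originals
    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
        completion(data)
    }
}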
In the application logs, we can see the message below, and it shows as coming from com.apple.photos.backend (PhotoKit)
Missing prefetched properties for PHAssetAdjustmentProperties on <PHAsset: 0x160b1ec00> BCF5688F-F7A7-4196-AFC7-A84E8BD95F3E/L0/001 mediaType=1/0, sourceType=1, (5601x3734), creationDate=2022-01-24 23:36:05 +0000, location=0, hidden=0, favorite=0, adjusted=0 . Fetching on demand on the main queue, which may degrade performance.
In particular, the message says 'Fetching on demand on the main queue', but I'm not sure whether that means that PhotoKit will fetch on the main queue or that our application is requesting the data on the main queue.
Could anyone clarify?
Thanks
I'm trying to add metadata every second during video capture in the Swift sample app "AVMultiCamPiP": a simple string that changes every second, with a write function triggered by a Timer. I can't get it to work; no matter how I arrange it, it always ends up with the error "Cannot create a new metadata adaptor with an asset writer input that has already started writing".
This is the setup section:
// Add a metadata input
let assetWriterMetaDataInput = AVAssetWriterInput(mediaType: .metadata, outputSettings: nil, sourceFormatHint: AVTimedMetadataGroup().copyFormatDescription())
assetWriterMetaDataInput.expectsMediaDataInRealTime = true
assetWriter.add(assetWriterMetaDataInput)
self.assetWriterMetaDataInput = assetWriterMetaDataInput
This is the timed metadata creation which gets triggered every second:
let newNoteMetadataItem = AVMutableMetadataItem()
newNoteMetadataItem.value = "Some string" as (NSCopying & NSObjectProtocol)?
let metadataItemGroup = AVTimedMetadataGroup.init(items: [newNoteMetadataItem], timeRange: CMTimeRangeMake( start: CMClockGetTime( CMClockGetHostTimeClock() ), duration: CMTime.invalid ))
movieRecorder?.recordMetaData(meta: metadataItemGroup)
This function is supposed to add the metadata to the track:
func recordMetaData(meta: AVTimedMetadataGroup) {
    guard isRecording,
          let assetWriter = assetWriter,
          assetWriter.status == .writing,
          let input = assetWriterMetaDataInput,
          input.isReadyForMoreMediaData else {
        return
    }
    let metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
    metadataAdaptor.append(meta)
}
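Based on the error text, it seems the adaptor has to be created before the writer starts writing; a hedged sketch of that arrangement (the class and property names are mine, not the sample's):
import AVFoundation

// A sketch only: the adaptor is created once, right after the input is added and before
// assetWriter.startWriting(), and then reused for every append.
final class MetadataRecorder {
    private let metadataInput: AVAssetWriterInput
    private let metadataAdaptor: AVAssetWriterInputMetadataAdaptor

    init(assetWriter: AVAssetWriter) {
        let input = AVAssetWriterInput(mediaType: .metadata,
                                       outputSettings: nil,
                                       sourceFormatHint: AVTimedMetadataGroup().copyFormatDescription())
        input.expectsMediaDataInRealTime = true
        assetWriter.add(input)
        metadataInput = input
        // Created before startWriting() is called, which is what the error message demands.
        metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
    }

    func record(_ group: AVTimedMetadataGroup) {
        guard metadataInput.isReadyForMoreMediaData else { return }
        metadataAdaptor.append(group)
    }
}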
I have an older code example in Objective-C which works OK, but it uses "AVCaptureMetadataInput appendTimedMetadataGroup" and writes to an identifier called "quickTimeMetadataLocationNote". I'd like to do something similar in the above Swift code.
All suggestions are appreciated!
I have a low-latency HLS setup with fragmented MP4. When I try to validate it with the mediastreamvalidator tool, it gives the following error:
Detail: '(null)' is not a valid URL
Source: media playlist URL - segment URL in that playlist
What does that error mean?
Topic:
Media Technologies
SubTopic:
Streaming
In our app we have implemented an AVAssetResourceLoaderDelegate to handle encrypted downloaded files. We have it working on all iOS versions, but we are seeing issues on iOS 15 (15.8.3) with large files (> 1 GB). So far we have seen two cases where either the load method on the AVURLAsset fails early and throws an unknown error code, or it starts requesting more data than the device has available RAM. The CPU usage is almost always over 100%, even after pausing playback. The memory issue can happen even though the player has successfully started playback.
When running on devices with iOS 16 and above, we set isEntireLengthAvailableOnDemand to true on the AVAssetResourceLoadingContentInformationRequest. This seems to be key to solving the issue on those devices that support it. If we set the property to false, we see the same memory issue as on iOS 15.
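For reference, the content-information part of our delegate looks roughly like this sketch (simplified; decryption and data delivery are omitted, and the parameter names are illustrative):
import AVFoundation

// Sketch of the content-information handling only; contentType and totalLength stand in
// for our stored file metadata.
func fill(_ infoRequest: AVAssetResourceLoadingContentInformationRequest,
          contentType: String, totalLength: Int64) {
    infoRequest.contentType = contentType
    infoRequest.contentLength = totalLength
    infoRequest.isByteRangeAccessSupported = true
    if #available(iOS 16.0, *) {
        // Without this we see the same runaway memory use as on iOS 15.
        infoRequest.isEntireLengthAvailableOnDemand = true
    }
}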
So we have a solution for iOS 16 and upwards but are at a loss for how to handle iOS 15. Is there something we have overlooked or is it in fact an issue with that iOS version?
Hello,
I'm writing a program to create CMAF compliant HLS files, with encryption.
I have a copy of ISO_IEC_23001-7_2023 to attempt to follow the spec.
I am following the 1:9 pattern encryption using CBCS, so for every 16 bytes of encrypted NAL unit data (of types 1 and 5), there are 144 bytes of clear data.
When testing my output in Safari with 'identity' keys (per "Quickly Diagnosing Content Key and IV Issues"), Safari will request the identity key from my test server and the first few bytes of the CMAF renditions, but will not play, and the console gives away no clues to the error.
I am setting the subsample bytes-of-clear/protected data in the senc boxes. What I'm not sure of is whether HLS/Safari/iOS acknowledges the senc/saiz/saio boxes of the MP4. Other third-party packagers, such as Bento4, suggest that they do not:
"those clients ignore the explicit encryption layout metadata found in saio/saiz boxes, and instead rely purely on the video slice header size to determine the portions of the sample that is encrypted"
So now I'm fairly sure I need to decipher the video slice header size, and apply the protected blocks from that point on.
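As a toy illustration of that plan (purely the 1:9 arithmetic described above; it assumes the offset where protection begins is already known):
// Toy illustration of the 1:9 CBCS pattern described above, applied from an assumed offset where
// protection begins: each 160-byte stride has its first 16 bytes encrypted and the rest left clear.
func encryptedRanges(protectedRegion: Range<Int>) -> [Range<Int>] {
    var ranges: [Range<Int>] = []
    var offset = protectedRegion.lowerBound
    while offset + 16 <= protectedRegion.upperBound {
        ranges.append(offset ..< offset + 16)   // 1 crypt block
        offset += 160                           // skip the 9 clear blocks that follow
    }
    return ranges
}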
My question is, is that all there is to it? And is there a better way to debug my output? mediastreamvalidator will only work against unencrypted variants (which I'm outputting okay).
Thanks in advance!
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
FairPlay Streaming
HTTP Live Streaming
AVFoundation
I cannot find any documentation re: isPrivacySensitiveAlbum. I've granted my app access to all photos. Not sure what else to try.
Code that triggers the crash:
let options = PHFetchOptions()
options.fetchLimit = 1
let assetColl = PHAssetCollection.fetchAssetCollections(withLocalIdentifiers: [localId], options: options)
if assetColl.count > 0 {
    if let asset = PHAsset.fetchKeyAssets(in: assetColl.firstObject!, options: options) {
stack trace from here on
2023-04-15 06:34:41.628537-0700 DPF[33615:6484880] -[PHCollectionList isPrivacySensitiveAlbum]: unrecognized selector sent to instance 0x7ff09232aec0
2023-04-15 06:34:41.632378-0700 DPF[33615:6484880] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[PHCollectionList isPrivacySensitiveAlbum]: unrecognized selector sent to instance 0x7ff09232aec0'
*** First throw call stack:
(
0 CoreFoundation 0x00007ff80045478b __exceptionPreprocess + 242
1 libobjc.A.dylib 0x00007ff80004db73 objc_exception_throw + 48
2 CoreFoundation 0x00007ff8004638c4 +[NSObject(NSObject) instanceMethodSignatureForSelector:] + 0
3 CoreFoundation 0x00007ff800458c66 ___forwarding___ + 1443
4 CoreFoundation 0x00007ff80045ae08 _CF_forwarding_prep_0 + 120
5 Photos 0x00007ff80b8480e1 +[PHAsset fetchKeyAssetsInAssetCollection:options:] + 86
6 DPF 0x0000000100791029 $s3DPF16AlbumListFetcherV22loadKeyImageForLocalIdySo7UIImageCSgSSYaFTY0_ + 569