Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Audio Muted After Building Unity App to iOS Device – Only Resetting Device Settings Fixes It
Hi, I'm facing an issue with my Unity-based app when deploying it to the AVP (Apple Vision Pro). Often, after building and running the app on the device, the audio gets muted, and I couldn't find any setting that lets me unmute it. The only solution I've found is to reset the device settings, which makes the audio work again. Here are a few things I've noticed:
- The sound works fine after I reset my device's settings.
- I haven't changed any sound or audio settings on the device before or after deploying the app.
- The issue doesn't always occur immediately, but when it does, resetting settings seems to be the only fix.
Could there be something in the AVP audio configuration that causes this problem? I'd appreciate any advice or suggestions. Thanks!
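As a first diagnostic (a minimal sketch, assuming the mute is related to the app's AVAudioSession state rather than a device-level setting, which is unconfirmed), one could re-assert a playback session from a native plugin once the Unity engine is running; reassertAudioSession is a hypothetical helper name:

    import AVFoundation

    // Hypothetical helper, callable from a Unity native plugin:
    // re-assert a playback audio session in case it was deactivated or reconfigured.
    func reassertAudioSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback, mode: .default)
            try session.setActive(true)
            print("Audio session active, output volume: \(session.outputVolume)")
        } catch {
            print("Failed to activate audio session: \(error)")
        }
    }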
0 replies · 0 boosts · 281 views · Sep ’24
Camera USB
I am new to using Swift for a Mac application. I am trying to control an external UVC-compliant camera's focus and other capabilities. However, I'm having trouble with this and don't know where to start. I have downloaded an application from the App Store that can control the focus and other capabilities. I've tried IOKit, but this seems complicated and does not return any capabilities or control the camera. I also tried AVFoundation and was able to open the camera, but the following code did not work for me, as device.isFocusPointOfInterestSupported returns false (and without that check the app crashes).

    @IBAction func focusChanged(_ sender: NSSlider) {
        do {
            guard let device = videoDevice else { return }
            try device.lockForConfiguration()
            // Check if focus mode and point of interest are supported
            if device.isFocusModeSupported(.locked) {
                device.focusMode = .locked
            }
            if device.isFocusPointOfInterestSupported {
                // Map the slider value (0.0 to 1.0) to the focus point's X coordinate
                let focusX = CGFloat(sender.doubleValue)
                let focusPoint = CGPoint(x: focusX, y: 0.5) // Y is typically 0.5 (centered vertically)
                device.focusPointOfInterest = focusPoint
            } else {
                print("Focus point of interest is not supported on this device.")
            }
            device.unlockForConfiguration()
            // Log focus settings
            print("Focus point: \(device.focusPointOfInterest)")
            print("Focus mode: \(device.focusMode.rawValue)")
        } catch {
            print("Error adjusting focus: \(error)")
        }
    }

Any help or advice is much appreciated.
0 replies · 0 boosts · 440 views · Jan ’25
Ford Puma Sync 3 problems with iOS 18
Good afternoon. Since I installed iOS 18 on my iPhone 15 Pro, I have had problems using Apple CarPlay with my Ford Puma with Sync 3. More specifically, problems with audio commands, selecting audio tracks, Bluetooth, etc. Are you aware of this? Thanks a lot. Regards, Alberto
0 replies · 0 boosts · 383 views · Oct ’24
Is it possible to retrieve EXIF metadata from PHAsset without downloading photos (even if offloaded to iCloud Photo Library)?
The iOS (official) Photos app can display some EXIF-related metadata (e.g. camera and lens info, ISO, shutter speed, F-number) even when photos are offloaded to iCloud and the device is not connected to the internet (e.g. airplane mode). However, with the Photos framework, we need to download photos to retrieve that metadata (which means it will not work in airplane mode). I tried the following methods, but none of them worked when photos were offloaded to iCloud and the device was in airplane mode:
- Requesting data with PHImageManager.default().requestImageDataAndOrientation. Result: it does not return Data if the photo is not stored locally on the device, even with options.deliveryMode = .fastFormat.
- Converting PHAsset#localIdentifier to an AssetsLibrary.framework URL (assets-library://asset/...). (I am aware that AssetsLibrary.framework is deprecated, but this was just a test.) Result: if PHImageManager does not return Data, ALAsset#defaultRepresentation().metadata() returns an empty NSDictionary.
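For reference, a minimal sketch of the first attempt described above, using the Photos framework call the post names (requestLocalImageData is an illustrative helper; the option values are the ones mentioned in the post):

    import Photos

    // Attempt: request image data for an asset without allowing network access,
    // to see whether EXIF metadata is available locally.
    func requestLocalImageData(for asset: PHAsset) {
        let options = PHImageRequestOptions()
        options.deliveryMode = .fastFormat
        options.isNetworkAccessAllowed = false // stay "offline", as in airplane mode
        PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, uti, _, _ in
            // data is nil when the original has been offloaded to iCloud
            print("Received \(data?.count ?? 0) bytes, UTI: \(uti ?? "nil")")
        }
    }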
0 replies · 1 boost · 583 views · Nov ’24
Time Limit
Hi Team, when we select the Screen Time option and a screen limit exists for the day, we have the option to tap Ignore Limit → Ignore Limit for Today. After selecting this option, if we are watching videos in a third-party app (e.g. YouTube), the app hangs; after tapping the play button multiple times the video starts to play, but there is no sound. The sound only comes back after tapping the previous or forward button. Kindly look into this issue and address it. Thanks, Pemkumar S
0 replies · 0 boosts · 325 views · Sep ’24
Torch Freezes Ultra-Wide Camera When Switching Between Wide & Ultra-Wide Lenses (AVFoundation Bug?)
I'm developing an iOS app using AVFoundation for real-time video capture and object detection. While implementing torch functionality with camera switching (between the Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.

Issue:
- If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes.
- If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes.
- The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.

Code snippet:

    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        guard let self = self else { return }

        let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
        let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
        let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"

        guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
            DispatchQueue.main.async {
                self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
            }
            return
        }

        do {
            let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
            self.videoCapture.captureSession.beginConfiguration()

            if isSwitchingToUltraWide && self.isFlashlightOn {
                self.forceEnableTorchThroughWide()
            }

            if let currentInput = currentInput {
                self.videoCapture.captureSession.removeInput(currentInput)
            }

            let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
            self.videoCapture.captureSession.addInput(videoInput)
            self.videoCapture.captureSession.commitConfiguration()
            self.videoCapture.updateVideoOrientation()

            DispatchQueue.main.async {
                if let barButton = sender as? UIBarButtonItem {
                    barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                    barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
                }
                print("Switched to \(cameraName) camera.")
            }

            self.isUsingFisheyeCamera.toggle()
        } catch {
            DispatchQueue.main.async {
                self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
            }
        }
    }

Expected Behavior:
- The torch should work when Ultra-Wide is active, just like in the iPhone Camera app.
- The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
- AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.

Questions & Help Needed:
- Is this a known issue with AVFoundation?
- How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
- What's the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?

Info:
- Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
- iOS version: iOS 17.3 / iOS 18.0
- Xcode version: 16.2
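For comparison, a minimal sketch of direct torch control with the standard AVCaptureDevice API (setTorch is an illustrative helper; whether the Ultra-Wide device itself reports hasTorch == true is device-dependent and not confirmed here):

    import AVFoundation

    // Toggle the torch on a given capture device, guarding on support.
    // Note (assumption): on some models the Ultra-Wide camera reports
    // hasTorch == false, so the torch may only be drivable via the Wide device.
    func setTorch(_ on: Bool, for device: AVCaptureDevice) {
        guard device.hasTorch, device.isTorchAvailable else {
            print("Torch not available on \(device.localizedName)")
            return
        }
        do {
            try device.lockForConfiguration()
            if on {
                try device.setTorchModeOn(level: AVCaptureDevice.maxAvailableTorchLevel)
            } else {
                device.torchMode = .off
            }
            device.unlockForConfiguration()
        } catch {
            print("Torch configuration failed: \(error)")
        }
    }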
0 replies · 0 boosts · 437 views · Feb ’25
MPRemoteCommand play conflicting with .playPause gesture
We develop a video playback app on Apple TV which has the following two features:
- Its content browsing screen installs a gesture recognizer for presses on the PlayPause Siri Remote button in order to directly launch playback. The gesture recognizer is attached to the content browsing UIViewController's view.
- It presents its own custom playback UI with an AVPlayerLayer for the video and supports MPNowPlayingSession in order to publish current playback information and respond to remote commands. It also supports switching between fullscreen and Picture in Picture playback.
Both features work fine, i.e. playback is launched when pressing the PlayPause Siri Remote button and, during playback, the playback info is properly advertised on other devices and remote commands are triggered as expected. However, when pressing the PlayPause Siri Remote button while the video is playing in PiP, the "pause" remote command is sometimes triggered instead of the .playPause gesture recognizer. The issue may not occur the first time, but it does for subsequent PlayPause presses. Navigating a bit in the app UI seems to help prevent the issue from occurring. Finally, the issue only occurs if the video is playing: if the video is paused, the PlayPause Siri Remote button gesture is always recognized instead of the remote command. Please note that before using MPNowPlayingSession (and the corresponding MPRemoteCommandCenter), the app used the default MPRemoteCommandCenter to support remote commands, and the issue did not occur. We cannot reproduce this issue with the Apple TV app, so there's probably something we are not doing right. Does anyone have a clue?
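For context, a minimal sketch of the press gesture recognizer from the first feature above (standard tvOS API; BrowseViewController and playPausePressed are illustrative names):

    import UIKit

    final class BrowseViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            // Recognize presses on the Siri Remote PlayPause button to launch playback directly.
            let recognizer = UITapGestureRecognizer(target: self, action: #selector(playPausePressed))
            recognizer.allowedPressTypes = [NSNumber(value: UIPress.PressType.playPause.rawValue)]
            view.addGestureRecognizer(recognizer)
        }

        @objc private func playPausePressed() {
            // Launch playback of the focused item (app-specific, illustrative).
        }
    }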
0 replies · 0 boosts · 588 views · Nov ’24
High-bitrate video streaming in AVPlayer: sometimes the audio disappears
Hello, I used AVPlayer in my project to play network movies. Most movies play normally, but I found that the sound sometimes disappears when I play one specific 4K network video stream. The video continues playing, but the audio stops after the video has played for a while. If I pause the player and then resume, the sound comes back but disappears again after several seconds.
Checking the AVPlayerItem status:
- isPlaybackLikelyToKeepUp == true
- isPlaybackBufferEmpty == false
- player.volume > 0
According to the values above, the problem does not seem to be caused by an empty playback buffer or a volume issue. I am quite confused by this situation.
Movie information:
- Video — Format: AVC (Advanced Video Codec), profile High L5.1, Codec ID avc1, variable bit rate 100.0 Mb/s, 3840 x 2160 pixels, display aspect ratio 16:9, constant frame rate 29.970 (30000/1001) FPS
- Audio — Format: AAC LC (Advanced Audio Codec Low Complexity), Codec ID mp4a-40-2, duration 5 min 19 s, constant bit rate 192 kb/s (nominal 48.0 kb/s), 2 channels (L R), sampling rate 48.0 kHz, frame rate 46.875 FPS (1024 SPF)
Does anyone know if AVPlayer has limitations when playing high-bitrate movie streams, and are there any solutions?
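As an additional diagnostic, a minimal sketch (standard AVFoundation API; it narrows the cause down rather than fixing it) that inspects the audio track state and the item's error log when the sound drops out:

    import AVFoundation

    // Inspect the current item when audio drops out: is the audio track
    // still enabled, and has the player logged any streaming errors?
    func dumpAudioDiagnostics(for player: AVPlayer) {
        guard let item = player.currentItem else { return }
        for track in item.tracks where track.assetTrack?.mediaType == .audio {
            print("Audio track enabled: \(track.isEnabled)")
        }
        if let errorLog = item.errorLog() {
            for event in errorLog.events {
                print("Error \(event.errorStatusCode): \(event.errorComment ?? "")")
            }
        }
    }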
0 replies · 0 boosts · 547 views · Nov ’24
No charts genres for some storefronts
Hello, this is about the Get Catalog Top Charts Genres endpoint: GET https://api.music.apple.com/v1/catalog/{storefront}/genres. I noticed that for some storefronts, no genre is returned. You can try with the following storefront values: France (fr), Poland (pl), Kyrgyzstan (kg), Uzbekistan (uz), Turkmenistan (tm). Is that a bug, or is it on purpose? Thank you.
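For reproduction, a minimal sketch of the request against the endpoint cited above (the developer token is a placeholder the caller must supply):

    import Foundation

    // Fetch top charts genres for a storefront; some storefronts (e.g. "fr")
    // reportedly return an empty list.
    func fetchGenres(storefront: String, developerToken: String) async throws {
        var request = URLRequest(url: URL(string: "https://api.music.apple.com/v1/catalog/\(storefront)/genres")!)
        request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
        let (data, _) = try await URLSession.shared.data(for: request)
        print(String(data: data, encoding: .utf8) ?? "")
    }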
0 replies · 0 boosts · 417 views · Dec ’24
Help with CoreAudio Input level monitoring
I have spent the past two weeks diving into CoreAudio and have seemingly run into a wall. For my initial test, I am simply trying to create an AUGraph for monitoring input levels from a user-chosen audio input device (multi-channel in my case). I was not able to find any way to monitor input levels of a single AUHAL input device, so I decided to create a simple AUGraph for input level monitoring. The graph looks like:

    [AUHAL Input Device] -> [B1] -> [MatrixMixerAU] -> [B2] -> [AUHAL Output Device]

where B1 is an audio stream consisting of all the input channels available from the input device. The MatrixMixer has metering mode turned on, and level meters are read from each submix of the MatrixMixer using kMatrixMixerParam_PostAveragePower. B2 is a stereo (2-channel) stream from the MatrixMixerAU to the default audio device; however, since I don't really want to pass audio through to an actual output, I have the volume muted on the MatrixMixerAU output channel. I tried using a GenericOutputAU instead of the default system output; however, the GenericOutputAU never seems to pull data from the ring buffer (the graph's renderProc is never called if a GenericOutputAU is used instead of the AUHAL default output device). I have not been able to get this simple graph to work. I do not see any errors when creating and initializing the graph, and I have verified that the inputProc is being called to fill the ring buffer, but when I read the level of the MatrixMixer, the levels are always -758 (silence). I have posted my demo project on GitHub in hopes I can find someone with CoreAudio expertise to help with this problem. I am willing to move this to DTS code-level support if there is someone in DTS with CoreAudio experience.
Notes:
- My app is not sandboxed in this test.
- I have tried with and without hardened runtime, with Audio Input checked.
- The multichannel audio device I am using for testing is the Audient iD14 USB-C audio device. It supports 12 input channels and 6 output channels. All input channels have been tested and are working in Ableton Live and Logic Pro.
- Of particular interest: I can't even get the Apple CAPlayThrough demo to work on my system. I see no errors when creating the graph, but all I hear is silence.
- The MatrixMixerTest from the Apple documentation archives does work, but note that that demo does not use audio input devices; it reads audio into the graph from an audio file.
Link to GitHub project page.
Diagram of AUGraph for initial test (code that is on GitHub).
Once I get audio input level metering to work, my plan is to implement something like Phase 2 below, with the purpose of capturing a stereo input stream, mixing to mono, and sending it to lowpass, bandpass, and highpass AUs, again using the MatrixMixer for level monitoring of the levels out of each filter. I have no plans for audio passthrough (sending actual audio out to devices); I am simply monitoring input levels.
Diagram of ultimate scope: rendering audio levels of a stereo-to-mono stream after passing through various filters.
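For reference, a minimal sketch of the metering read described above. kAudioUnitProperty_MeteringMode and kMatrixMixerParam_PostAveragePower are real CoreAudio symbols, but the scope/element addressing shown is an assumption that may need adjusting to the actual channel layout:

    import AudioToolbox

    // Enable metering on the matrix mixer, then poll post-average power.
    func enableMeteringAndRead(mixer: AudioUnit, outputChannel: AudioUnitElement) {
        var metering: UInt32 = 1
        AudioUnitSetProperty(mixer, kAudioUnitProperty_MeteringMode,
                             kAudioUnitScope_Global, 0,
                             &metering, UInt32(MemoryLayout<UInt32>.size))

        var powerDB: AudioUnitParameterValue = 0
        // Assumption: read the per-channel level on the output scope.
        AudioUnitGetParameter(mixer, kMatrixMixerParam_PostAveragePower,
                              kAudioUnitScope_Output, outputChannel, &powerDB)
        print("Post-average power: \(powerDB) dB") // very large negative values mean silence
    }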
0 replies · 0 boosts · 373 views · Nov ’24
FireWire video OUTPUT to hardware 2024
Looking to output DV video to my JVC SR-VS30 video deck. I used to be able to do this, but with most FireWire stuff being deprecated, I'm not sure how to go about it. I found this old developer sample code that seems to do exactly what I'd like. Surely this could be rolled or updated for current macOS? https://developer.apple.com/library/archive/samplecode/SimpleVideoOut/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000809-Intro-DontLinkElementID_2
0 replies · 0 boosts · 336 views · Nov ’24
AVAudioEngine change playback rate in real time (AVAudioUnitVarispeed is non real time)
I am using AVAudioEngine to play back samples in an iOS game. I would like to change the playback rate of a sample in real time. When using AVAudioUnitVarispeed for changing the playback rate, it creates stutters in the game, as it isn't processed in real time (as stated here: AVAudioUnitTimeEffect). The other option I found to change the rate is to use an AVAudioEnvironmentNode and change the rate of the AVAudioPlayerNode. That works without creating stutters, but it limits the valid values for the rate to 0.5 through 2.0 (I need rates higher than 2.0). See here: AVAudio3DMixing. Are there any other ways to play back a sample with rate control in real time?
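For reference, a minimal sketch of the environment-node approach the post describes (standard AVAudioEngine API; the mono connection format follows the AVAudio3DMixing mono-source convention and is an assumption here):

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let environment = AVAudioEnvironmentNode()

    engine.attach(player)
    engine.attach(environment)
    // AVAudio3DMixing properties (including rate) apply to mono sources
    // feeding an AVAudioEnvironmentNode (assumption based on the 3D-mixing docs).
    let mono = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
    engine.connect(player, to: environment, format: mono)
    engine.connect(environment, to: engine.mainMixerNode, format: nil)

    player.rate = 1.5 // clamped to 0.5...2.0, the limitation described above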
0 replies · 0 boosts · 317 views · Oct ’24
Swift iOS CallKit audio resource contention
I noticed the following behavior with CallKit when receiving a VoIP push notification:
- When the app is in the foreground and a CallKit incoming-call banner appears, pressing the answer button directly causes the speaker indicator in the CallKit interface to turn on. However, the audio is not actually activated (the iPhone's orange microphone indicator does not light up).
- In the same foreground scenario, if I expand the CallKit banner before answering the call, the speaker indicator does not turn on, but the orange microphone indicator does light up, and audio works as expected.
- When the app is in the background or not running, the incoming-call banner works as expected: I can answer the call directly without expanding the banner, the speaker does not turn on automatically, and the orange microphone indicator lights up as it should.
Why is there a difference in behavior between answering directly from the banner versus expanding it first when the app is in the foreground? Is there a way to ensure consistent audio activation behavior across these scenarios?
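For context, a minimal sketch of the usual CallKit audio sequencing (standard CallKit/AVFoundation API: configure the session when answering, start audio only in didActivate; whether this resolves the banner inconsistency is not confirmed):

    import CallKit
    import AVFoundation

    final class CallManager: NSObject, CXProviderDelegate {
        func providerDidReset(_ provider: CXProvider) {}

        func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
            // Configure (but do not activate) the session before fulfilling;
            // CallKit owns session activation.
            let session = AVAudioSession.sharedInstance()
            try? session.setCategory(.playAndRecord, mode: .voiceChat)
            action.fulfill()
        }

        func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
            // Only start audio I/O here, once CallKit has activated the session.
            print("Audio session activated by CallKit")
        }
    }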
0 replies · 0 boosts · 370 views · Dec ’24
MusicKit: How do we search by title or name only?
I can't find any way to search for a song by title only. You can search for songs, but any term you provide appears to be applied to all metadata associated with the song. Look at the largely nonsensical results when I search for a song with the letters "de": in many cases, that string doesn't appear anywhere. I used MusicCatalogSearchRequest(term: searchTerm, types: [Song.self]). Likewise, it stands to reason that people want to search for artist and album names using text strings. How do we do that?
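As a workaround sketch: MusicCatalogSearchRequest is the API named in the post, but the client-side title filter is my assumption, not a documented search qualifier:

    import MusicKit

    // Search the catalog, then keep only songs whose title contains the term.
    func searchSongsByTitle(_ term: String) async throws -> [Song] {
        var request = MusicCatalogSearchRequest(term: term, types: [Song.self])
        request.limit = 25
        let response = try await request.response()
        return response.songs.filter { $0.title.localizedCaseInsensitiveContains(term) }
    }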
0 replies · 0 boosts · 411 views · Nov ’24
AVContentKeySession reuse
Context: We develop an iOS/Apple TV app that plays HLS+FairPlay live streams (custom playback UI), some of which use the same FairPlay content key ID. All FairPlay content keys are requested from the same content key server.
Implementation: Despite Apple documentation warning not to reuse AVContentKeySessions, we use only one AVContentKeySession for all channels, which allows the system to reuse the content key when a content key ID is met again. As seen in another thread, people seem to think this is OK.
Issue: When reusing the AVContentKeySession and the user quickly tunes channels multiple times (up to 2 or 3 times per second using gestures), an inconsistency may occur where the content key request for a previous stream is sent to the delegate after a new stream is already being prepared and its AVURLAsset has already been assigned as the content key session's AVContentKeyRecipient. Note that the previous content key recipient is removed before the new one is added. Crashes have also been reported to us (though I haven't experienced one myself) when performing multiple channel tunings, which makes us think that the AVContentKeySession should definitely not be reused.
Note: On the other hand, if a new AVContentKeySession is used for each stream, the system systematically requests a content key even if previous streams have used the same content key ID. In this case, neither the crash nor the inconsistency is observed, but it dramatically increases the number of calls to the content key server.
Questions: Should AVContentKeySessions definitely not be reused? Otherwise, how should the inconsistency issue described above be handled?
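For reference, a minimal sketch of the per-stream alternative discussed in the note above (standard AVContentKeySession API; the delegate and queue are placeholders):

    import AVFoundation

    // One AVContentKeySession per stream: no key reuse across streams,
    // but no recipient mix-up when tuning channels quickly.
    func makeKeySession(for asset: AVURLAsset,
                        delegate: AVContentKeySessionDelegate,
                        queue: DispatchQueue) -> AVContentKeySession {
        let session = AVContentKeySession(keySystem: .fairPlayStreaming)
        session.setDelegate(delegate, queue: queue)
        session.addContentKeyRecipient(asset)
        return session
    }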
0 replies · 0 boosts · 387 views · Feb ’25
How can I use the iPhone TrueDepth front camera to detect whether a captured depth map of a face is a true 3D face or a spoofed 2D image?
I'm trying to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I have checked the following questions but still can't find a proper working solution. I trained a Core ML model using 22,000 depth human-face images and 22,000 non-human-face (objects, food, etc.) images. The accuracy of the model is very low. When testing with flat 2D images shown on a smartphone screen, I found that I still get a depth map. Even though the image is flat, why does it produce a depth map for the person shown in the flat 2D picture, so that the model thinks it is a real face instead of a spoofed one? I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map: https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth My next approach was to use the NCNN framework to implement anti-spoofing using the model from the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using the Objective-C++ wrapper for C++, since the sample was only available as an Android app. I tested it by feeding an 80x80 UIImage in an OpenCV matrix format, and its accuracy is lower than the Android one. How can I solve this problem?
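On the depth-versus-disparity point, a minimal sketch of the conversion check using standard AVFoundation API (and, as an observation rather than a confirmed explanation: a TrueDepth map of a photographed screen is still a valid depth map of a flat surface, which may be what the model is seeing):

    import AVFoundation

    // Ensure captured AVDepthData is true depth (meters), not disparity.
    func depthFloat32(from photo: AVCapturePhoto) -> AVDepthData? {
        guard var depthData = photo.depthData else { return nil }
        if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
            depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        }
        return depthData // a flat 2D photo of a face yields a near-planar map
    }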
0 replies · 0 boosts · 601 views · Nov ’24
How to write RGB & Depth Frames Without Losing Synchronization
I'm currently working on a project where I capture both depth frames and RGB frames using AVCaptureDataOutputSynchronizer. Depth frames are stored as raw binary data, and RGB frames are saved with AVAssetWriter. The issue I'm facing is that AVAssetWriter enforces a fixed frame rate, meaning it adds or discards frames to maintain that rate (as I understand it). This causes a desynchronization between the depth and RGB frames, which is a problem because I need each depth frame to be exactly matched with the corresponding RGB frame as it was captured. How can I ensure that the RGB frames are saved without AVAssetWriter modifying the frame count?
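A minimal sketch of appending frames with their source timestamps (standard AVAssetWriter API; my understanding, offered as an assumption to verify, is that the writer preserves the presentation timestamps you append rather than enforcing a fixed rate):

    import AVFoundation

    // Append video frames with their original presentation timestamps so the
    // written track keeps the capture timing (no fixed-rate resampling).
    final class RGBWriter {
        private let writer: AVAssetWriter
        private let input: AVAssetWriterInput
        private var started = false

        init(url: URL) throws {
            writer = try AVAssetWriter(outputURL: url, fileType: .mov)
            // nil outputSettings means pass-through; for uncompressed frames,
            // real compression settings would be needed (assumption to adapt).
            input = AVAssetWriterInput(mediaType: .video, outputSettings: nil)
            input.expectsMediaDataInRealTime = true
            writer.add(input)
        }

        func append(_ sampleBuffer: CMSampleBuffer) {
            let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            if !started {
                writer.startWriting()
                writer.startSession(atSourceTime: pts) // anchor to the first frame's PTS
                started = true
            }
            if input.isReadyForMoreMediaData {
                input.append(sampleBuffer)
            }
        }
    }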
0 replies · 0 boosts · 336 views · Feb ’25