Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.


Use Vision framework to detect a graph in Swift
I would like to offer functionality where the user aims the camera at a graph (including axes and scales), and the app detects the graph and replicates it from the image. I have the whole camera setup finished: an AVCaptureSession, VNDetectContoursRequest, VNImageRequestHandler, and so on. However, I now get many, many results, so I guess I need to tell the image-processing pipeline what I am looking for, i.e., filter the VNContoursObservations. I think I first need to detect two perpendicular lines (the two axes). How do I do that? If I don't see them, I can simply ignore that input and wait for the next VNContoursObservation. Once I have found the axes of the graph, I will need to find the curve (the graph itself) that I need to scan. Any tips on how I can find that curve and turn it into a bunch of coordinates? Thanks! Wouter
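A minimal sketch of one way to filter the observations, assuming contour detection traces around thin strokes, so an axis shows up as a contour whose bounding box is much longer than it is wide (the 0.5/0.05 thresholds are guesses to tune on real footage):

```
import Vision

func findAxes(in observation: VNContoursObservation) -> (xAxis: VNContour, yAxis: VNContour)? {
    var horizontal: VNContour?
    var vertical: VNContour?

    for contour in observation.topLevelContours {
        let pts = contour.normalizedPoints
        guard pts.count > 1 else { continue }
        let xs = pts.map { $0.x }
        let ys = pts.map { $0.y }
        let width = xs.max()! - xs.min()!
        let height = ys.max()! - ys.min()!

        // An axis stroke traced by contour detection is a long, thin box.
        if width > 0.5 && height < 0.05 { horizontal = contour }   // x-axis candidate
        if height > 0.5 && width < 0.05 { vertical = contour }     // y-axis candidate
    }

    // Accept the frame only when both axes are present; otherwise the
    // caller can skip this observation and wait for the next one.
    if let x = horizontal, let y = vertical { return (x, y) }
    return nil
}
```

Once both axes are found, one option for the curve is to collect the normalizedPoints of the remaining long contour lying inside the region the axes span, and map them to data coordinates using the axes' origin and extents.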
Replies: 1 · Boosts: 0 · Views: 315 · Oct ’24
Inquiry About Background Volume Button Event in iOS App Development
I'm working on an iOS app for a client, and I have a question regarding a specific feature we're looking to implement. We want the app to respond to a user pressing the volume button three times while the app is in the background. The goal is to allow users to discreetly trigger a safety feature without drawing attention, particularly in situations where they may be in danger or at risk. This feature is critical for the app, as it could potentially help protect users in emergency situations. However, I haven't found much information on whether iOS allows background listening for volume-button presses, so I would greatly appreciate your insights on the following:
Is it possible to listen for volume-button presses while the app is in the background, or are there system-level restrictions that prevent this?
If it's not directly possible, are there any special provisions, APIs, or entitlements that can be requested from Apple to enable this functionality?
If this feature is not supported, are there alternative approaches to achieve a similar discreet activation mechanism?
If this requires special permission or a formal process, could you guide me on how to proceed?
I understand that user privacy and security are priorities for iOS, and I want to ensure that any implementation fully complies with Apple's guidelines. Thanks in advance for your help!
Replies: 1 · Boosts: 0 · Views: 195 · Oct ’24
Noise occurs when playing on iOS 18.0 device + AirPods Pro 2
When we tested the audio quality of our VoIP app, we found that when an iOS 18.0 device plays audio through AirPods Pro 2, we hear noise similar to peak clipping and distortion, especially when the source audio is loud and high-pitched. Here is the device information we tested:
Model: iPhone 16 Pro Max, iPhone 15 Pro
System version: iOS 18.0 (22A3354)
Bluetooth headset model: AirPods Pro 2
Bluetooth firmware version: 6F8
We tested multiple apps (including phone calls, FaceTime, Zoom, WeChat, and Tencent Meeting), and they all had the same noise problem. We also observed two things:
If we use the same iOS 18 device with HUAWEI FreeBuds Pro or FreeBuds 2, there is no such noise problem.
If we use an iOS 17 device with the same AirPods Pro 2, there is no such noise problem.
Therefore, we suspect a compatibility problem between iOS 18.0 and AirPods firmware 6F8. The firmware version of our AirPods Pro 2 is 6F8, released on June 26, while iOS 18.0 was released on September 16; perhaps they are not fully compatible. We hope a subsequent firmware update can fix this problem.
Replies: 1 · Boosts: 0 · Views: 273 · Oct ’24
"Files and Folders" permission for app keeps getting denied, even from Settings
Hi Apple Engineers, My app uses the ImageCaptureCore framework to communicate with an external DSLR camera. When I connect my device to a camera, I call requestContentsAuthorization(completion:) to request Access Files on Connected Cameras. When I tap "OK" in the resulting dialog, the content-authorization status stays "Denied", even after I enable the "Files and Folders" permission in the Privacy & Security settings. When I switch the permission on, the switch immediately turns itself back off. You can see a reproduction in this Google Drive video: https://drive.google.com/file/d/15B-R5TONgMWg8qFiYUGK0hTy62dsVGUX/view?usp=sharing The issue persists even after:
I uninstall and reinstall the app
"Reset Location & Privacy"
"Reset All Settings"
I attached the sysdiagnose files in this Google Drive file: https://drive.google.com/file/d/11lovl_xC95AKXQTkZ1_e6UbEgS5md0Z3/view?usp=sharing
I first experienced this issue while researching ImageCaptureCore's API: I called resetContentsAuthorizationWithCompletion:, and since then my permission requests keep getting denied as described above :( Another developer is experiencing the same issue: https://forums.developer.apple.com/forums/thread/756960 . There is a simple sample project there, and it reproduces in my case. Could you help me get my app granted the "Files and Folders" permission when using ImageCaptureCore? Could this be a system bug?
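For context, a bare-bones sketch of the authorization flow being described (API names recalled from the ImageCaptureCore headers; double-check them against your SDK version):

```
import ImageCaptureCore

let browser = ICDeviceBrowser()

func requestCameraFilesAccess() {
    if browser.contentsAuthorizationStatus == .authorized {
        browser.start()   // begin discovering connected cameras
    } else {
        browser.requestContentsAuthorization { status in
            // In the bug described above, this stays .denied even after
            // the permission is re-enabled in Settings.
            print("Contents authorization:", status)
        }
    }
}
```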
Replies: 1 · Boosts: 1 · Views: 242 · Oct ’24
Handling YOLOv8 Object Detection at 60 FPS with the UltraWide Camera on iOS: Frame Processing Query
I am developing an iOS app that uses YOLOv8 for object detection and aims to detect objects at 60 FPS using the UltraWide camera. My goal is to process every frame within captureOutput and use the detected data (such as coordinates) for each one. I have a question about how background-thread processing behaves in this scenario. Does the size of the YOLO model (n, s, m, etc.), or the weight of the operations inside captureOutput, affect the number of frames that can be successfully processed? Specifically, I would like to know whether all frames will be processed sequentially, with delays accumulating due to heavy processing in the background, or whether some frames will be dropped and not processed at all. Any insights on how to handle this would be greatly appreciated. Thank you!
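For what it's worth, AVCaptureVideoDataOutput answers part of this directly: with alwaysDiscardsLateVideoFrames left at its default of true, frames that arrive while the delegate is still busy are dropped rather than queued, and the drops are reported through a separate delegate callback. A minimal sketch (the queue label and the inference call are placeholders):

```
import AVFoundation

final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")   // serial queue

    func attach(to session: AVCaptureSession) {
        // true (the default) means heavy inference lowers the effective
        // FPS by dropping frames instead of building up latency.
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // runYOLO(on: sampleBuffer)   // placeholder for your Core ML inference
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Called for every frame skipped because processing fell behind.
        print("Dropped a frame")
    }
}
```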
Replies: 2 · Boosts: 0 · Views: 382 · Oct ’24
Compatibility Between ARKit and Optical Zoom
Hello, I am a developer currently working on an AR application using ARKit. I aim to implement a zoom feature that allows users to enlarge and reduce objects within the AR scene while simultaneously measuring the distance to those objects. Specifically, I want to incorporate optical zoom to provide a more natural and precise user experience. I have considered several approaches and would appreciate your advice on the most effective methods.
Approaches being considered:
Using UIPinchGestureRecognizer to adjust the camera's field of view
Modifying the scale property of an SCNNode to enlarge/reduce specific objects
Leveraging AVFoundation to control the camera's optical zoom
Questions:
Compatibility between ARKit and optical zoom: Is it feasible to control the camera's optical zoom using AVFoundation while utilizing ARKit's features? What should be considered when integrating these two frameworks?
Integrating object distance measurement with zoom: What is the most effective approach to measuring and displaying the distance to an object in real time when a user zooms in on it?
User experience: Do you have any UI/UX design tips for making optical zoom feel natural and intuitive? For example, how can visual feedback for zoom actions and distance measurements be presented effectively?
Performance optimization: What strategies can minimize potential performance issues when implementing both optical zoom and distance measurement simultaneously?
Example code and references: Could you share any sample code, tutorials, or resources that demonstrate the combined use of ARKit and AVFoundation, ideally integrating optical zoom with distance measurement?
Thank you.
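On the first question, one possibly useful data point: since iOS 16, ARKit can expose the AVCaptureDevice it drives, so videoZoomFactor can be adjusted without leaving ARKit. A hedged sketch (how zooming interacts with tracking quality and distance measurement is untested here):

```
import ARKit
import AVFoundation

func applyPinchZoom(session: ARSession, pinchScale: CGFloat) {
    // Assumes the session is already running an ARWorldTrackingConfiguration.
    // iOS 16+: ARKit exposes the capture device behind the primary camera.
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
        return
    }
    do {
        try device.lockForConfiguration()
        // Clamp the requested zoom to what the hardware supports.
        let zoom = min(max(device.videoZoomFactor * pinchScale,
                           device.minAvailableVideoZoomFactor),
                       device.maxAvailableVideoZoomFactor)
        device.videoZoomFactor = zoom
        device.unlockForConfiguration()
    } catch {
        print("Zoom configuration failed: \(error)")
    }
}
```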
Replies: 1 · Boosts: 0 · Views: 249 · Oct ’24
Get View Full HDR state from Settings > Photos to properly set preferredImageDynamicRange in editing extension
I'm updating my Photo Editing Extension to support HDR. To do this I set imageView.preferredImageDynamicRange = .high. But the option to view HDR photos in their complete dynamic range can be turned off in Settings > Photos. When that setting is off and you open a photo and tap the edit button, the photo does not appear in the full range, as expected; but when you select my app from More > Extensions, it does appear in the complete dynamic range, unexpectedly. I need to set imageView.preferredImageDynamicRange = .standard when View Full HDR is off, but I don't see any way to read that setting from my PHContentEditingController.
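I'm not aware of a public API that exposes the View Full HDR switch. One admittedly imperfect proxy is the screen's EDR headroom, which only tells you whether HDR rendering is possible on the display at all, not what the user chose in Photos; a heuristic sketch:

```
import UIKit

// Heuristic only: EDR headroom reports display capability, not the
// Settings > Photos > View Full HDR preference.
func preferredDynamicRange(for view: UIView) -> UIImage.DynamicRange {
    let screen = view.window?.windowScene?.screen ?? UIScreen.main
    // potentialEDRHeadroom > 1 means the display can exceed SDR at all;
    // currentEDRHeadroom additionally varies with brightness and state.
    return screen.potentialEDRHeadroom > 1.0 ? .high : .standard
}
```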
Replies: 1 · Boosts: 0 · Views: 343 · Oct ’24
Toggling AVMusicTrack isMuted
Hi! I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents. If I toggle the isMuted property of a track, it mutes instantly when changed to true. However, after setting isMuted back to false, the events only trigger on the next pass of the loop, not immediately. Is this intended behaviour, and is there some way to get the events to trigger immediately after toggling isMuted back to false?
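For reference, a sketch of the toggle in question (sequencer setup elided; assumes a sequencer already loaded with looping tracks):

```
import AVFoundation

func setTrackMuted(_ muted: Bool, at index: Int, in sequencer: AVAudioSequencer) {
    guard sequencer.tracks.indices.contains(index) else { return }
    sequencer.tracks[index].isMuted = muted
    // Muting takes effect immediately; unmuting appears to wait for the
    // next loop pass — the behaviour this post asks about.
}
```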
Replies: 1 · Boosts: 0 · Views: 241 · Oct ’24
What format for writeHEIFRepresentation preserves HDR?
In the WWDC 24 session "Use HDR for dynamic image experiences in your app" it's noted this is how you save edits for Adaptive HDR:
SDR + HDR: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])
SDR + Gain: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])
This won't compile because the format argument is missing. What format should be used? In the WWDC 23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use. If relevant, I'm editing photos from the user's photo library, so the image was probably (but not necessarily) taken on an iPhone. Thanks!
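A version that compiles, with the format chosen as an assumption rather than a confirmed answer: the SDR base image of an Adaptive HDR file is 8-bit, so .RGBA8 seems plausible (.RGB10 would be the choice for a 10-bit HEIF). sdrImage, gainImage, and url are the placeholders from the post:

```
import CoreImage

func saveAdaptiveHDR(sdrImage: CIImage, gainImage: CIImage, to url: URL) throws {
    let context = CIContext()
    let p3Space = CGColorSpace(name: CGColorSpace.displayP3)!
    // format is a guess: 8-bit RGBA for the SDR base plus a gain map.
    try context.writeHEIFRepresentation(of: sdrImage,
                                        to: url,
                                        format: .RGBA8,
                                        colorSpace: p3Space,
                                        options: [.hdrGainMapImage: gainImage])
}
```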
Replies: 1 · Boosts: 0 · Views: 288 · Oct ’24
Audio Interruption Not Being Intercepted in AVAudioSession with Classification
Hi everyone, I'm experiencing an issue where audio interruptions (e.g., phone calls) are not being intercepted while running sound classification in an app that uses AVAudioSession. Classification works fine, but interruptions aren't handled, even though I've followed Apple's guidelines on handling audio interruptions [1_Document]. The classification was initially based on [2_Classifier], where it worked perfectly. However, when I adopted classification in a more camera-focused app based on [3_Cam], the interruption behavior stopped working: the classification setup still functions, but audio interruptions are never triggered. The listener is invoked before starting sound analysis, as suggested in [2_Classifier]:
startListeningForAudioSessionInterruptions()
try startAnalyzing([(request, observer)])
FYI, the one change I have made for classification is the following, which works fine in all cases:
// try audioSession.setCategory(.record, mode: .default)
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
I suspect the issue might be related to the AVAudioSession configuration or to how the app handles recording and playback together. Is there anything else I should check related to AVAudioSession? Are there additional APIs I could use to pre-check or better handle audio interruptions? Any suggestions or guidance would be greatly appreciated!
Platform: Swift 5, Xcode 16, iOS 18.
References: [1] Document, [2] Classifier, [3] Cam
Best Regards
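For comparison, this is the baseline observer I would expect to fire on an incoming call (standard AVAudioSession APIs, not the helper names from the referenced samples). If it never fires in the camera-focused app, it may be worth checking whether the capture pipeline is reconfiguring the session, e.g. AVCaptureSession's automaticallyConfiguresApplicationAudioSession:

```
import AVFoundation

final class InterruptionWatcher {
    init() {
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(handleInterruption(_:)),
            name: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance())
    }

    @objc private func handleInterruption(_ note: Notification) {
        guard let info = note.userInfo,
              let raw = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }

        switch type {
        case .began:
            // Pause analysis; the system has taken the audio hardware.
            print("Interruption began")
        case .ended:
            // Optionally resume, after checking the .shouldResume option.
            print("Interruption ended")
        @unknown default:
            break
        }
    }
}
```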
Replies: 1 · Boosts: 0 · Views: 248 · Oct ’24
iOS 18 Apple CarPlay Audio Errors
After updating to the iOS 18.1 beta, I have a lot of issues with Apple CarPlay:
Audio quality is really bad (it plays through the voice channel instead of the media channel).
Sometimes audio plays through the phone speakers even though CarPlay is connected via cable.
It does not work well with the car's steering-wheel controls, such as volume up/down and the skip button.
After receiving a phone call, the audio quality returns to normal, but when my phone screen locks, it goes bad again.
I hope these problems can be fixed soon, as I love to play music while driving.
Replies: 1 · Boosts: 0 · Views: 258 · Oct ’24
Writing video using AVAssetWriter, AVAssetReader, and AVSpeechSynthesizer
Hello, First, some version and software details:
Software: iOS 18.1
Hardware: iPhone 14 Pro Max and later
Xcode: 16.0
Summary: AVAssetReader is not concatenating a video at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen with AVSpeechSynthesizer reading out a text that I pasted above the "Generate Video" button.
Details: Basically, I'm developing an app that generates a video as follows:
The app creates an output video that is split into an opening scene followed by a fully blue screen.
The opening scene is taken from a video I choose from my gallery, read with AVAssetReader as usual.
After the opening scene, the audio synthesized from the pasted text with AVSpeechSynthesizer.write() starts playing while the blue screen is displayed.
All of this is already defined in the attached project; each project file has a comment at the beginning introducing its content.
How to test: Type something in the field above the "Generate Video" button, for example "Hello, world!". Then press the "Library" button and select a video from the gallery, about 30 seconds long. Finally, press the "Generate Video" button. The result I get is a crash or a failure to generate the video.
Practical example of what I want to achieve: Suppose I record a 30-second video where I say, "I'm going to tell you the story of Snow White," and then paste the "Snow White" story into the field above the "Generate Video" button. The output video should show me saying, "I'm going to tell you the story of Snow White," after which AVSpeechSynthesizer reads the story I pasted while displaying a blue screen.
I look forward to a solution. Thank you very much!
Attached files: convertToCMSampleBuffer.swift convertToPixelBuffer.swift createInputs.swift createVideo.swift test.swift saveVideo.swift TestApp.swift editingVideo.swift sampleReaderProvider.swift misc.swift sampleProvider.swift
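Since the attached project isn't visible here, a minimal sketch of just the synthesis step for context: AVSpeechSynthesizer.write(_:toBufferCallback:) delivers AVAudioPCMBuffers that can be converted to CMSampleBuffers and appended to the AVAssetWriter's audio input (the names below are placeholders, not the project's API):

```
import AVFoundation

// Keep the synthesizer alive for the whole write, or no buffers arrive.
let speechSynthesizer = AVSpeechSynthesizer()

func synthesize(text: String, onBuffer: @escaping (AVAudioPCMBuffer) -> Void) {
    let utterance = AVSpeechUtterance(string: text)
    speechSynthesizer.write(utterance) { buffer in
        guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else {
            return   // a zero-length buffer signals the end of the stream
        }
        onBuffer(pcm)   // convert to CMSampleBuffer and append to the writer
    }
}
```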
Replies: 8 · Boosts: 0 · Views: 582 · Oct ’24
SoundRecognition causes Input/Output callbacks to have varying Buffer sizes and introduces Glitching
Hello, We have noticed an issue where Sound Recognition causes glitching with our Audio Unit setup in Smule: input and output frame sizes become inconsistent, and the input frame size no longer matches [AVAudioSession sharedInstance].IOBufferDuration. My best guess is that Sound Recognition influences the input frame size but not the output frame size. To reproduce, use the example app here: https://github.com/MarkoGill/SoundRecognitionBug
Hardware/OS:
iPhone 14 Pro on iOS 18 → experiences the problem
iPhone 11 on iOS 18 → experiences the problem
iPhone 15 on iOS 18 → does not experience the problem
Reproduction steps:
Enable Sound Recognition (Settings > Accessibility > Sound Recognition > On)
Enable a sound for detection (Sounds > Dog > On)
Open the example app with a headset (it routes input to output)
Notice that glitching occurs
Check the logs; record and playback buffer sizes vary
Example log:
AU input sample rate: 48000.000000
AU output sample rate: 48000.000000
hardware sample rate: 48000.000000
hardware buffer size: 1104.000000
updated record frame counts: 1024
updated playback frame counts: 1104
Note: if you disable Sound Recognition and restart the app, playback behaves correctly.
Replies: 4 · Boosts: 1 · Views: 551 · Oct ’24
Distorted Audio When Recording External Mics With AVCaptureSession and AVAssetWriter
I'm working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB. For this, I'm using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example). The problem occurs after my app has successfully completed its first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app). The problem: noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:
Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets
It is easy to reproduce. The problem occurs even if I don't configure the AVAssetWriter manually and instead let it receive its audioSettings from an AVOutputSettingsAssistant preset. I'm running macOS 15.0 (24A335). I've filed a feedback report including a demo project → FB15333298 🎟️ I would greatly appreciate any help! Have a great day, Martin
Replies: 5 · Boosts: 0 · Views: 384 · Oct ’24
PHPickerViewControllerDelegate didCancel
Hello. In my app I have selection of photos and videos as well as selection of PDFs, so I use PHPickerViewController for picking photos and videos and UIDocumentPickerViewController for picking documents. I found that there is no documentPickerWasCancelled equivalent in the PHPickerViewController delegate. When a user presses Cancel, the delegate's picker function fires, the dialog is dismissed, and the system returns an empty selection. But when I swipe the dialog down, no event is generated, so my app can't tell whether the user selected a photo or cancelled the dialog. UIDocumentPickerViewController has no such problem, as it has documentPickerWasCancelled as a separate function. Is there any way to work around this?
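One workaround worth testing (it is not PHPicker-specific): observe the interactive swipe-down through UIAdaptivePresentationControllerDelegate, which fires for dismissals that never reach the picker delegate. A sketch:

```
import PhotosUI

final class PickerPresenter: NSObject, PHPickerViewControllerDelegate,
                             UIAdaptivePresentationControllerDelegate {
    func present(from viewController: UIViewController) {
        var config = PHPickerConfiguration()
        config.filter = .any(of: [.images, .videos])
        let picker = PHPickerViewController(configuration: config)
        picker.delegate = self
        // Hook the sheet's presentation controller to catch swipe-down.
        picker.presentationController?.delegate = self
        viewController.present(picker, animated: true)
    }

    func picker(_ picker: PHPickerViewController,
                didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)
        // results.isEmpty distinguishes Cancel from an actual selection.
    }

    // Fires only for the interactive swipe-down dismissal.
    func presentationControllerDidDismiss(_ presentationController: UIPresentationController) {
        print("Picker dismissed by swipe")
    }
}
```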
Replies: 1 · Boosts: 0 · Views: 197 · Oct ’24
'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
Before iOS 18, I was able to access the video AVAsset through its URL using the method below, but starting from iOS 18, the following error appears: 'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'

```
[[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                options:videoOptions
                                          resultHandler:^(AVAsset *_Nullable avAsset,
                                                          AVAudioMix *_Nullable audioMix,
                                                          NSDictionary *_Nullable info) {
    if ([avAsset isKindOfClass:[AVURLAsset class]]) {
        AVURLAsset *urlAsset = (AVURLAsset *)avAsset;
        NSURL *videoURL = urlAsset.URL;
        mediaInfo[@"path"] = videoURL.absoluteString;
    } else {
        // Failed to get the video asset
        completion(nil);
    }
}];
```
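A possible workaround (an untested assumption, shown in Swift for brevity): instead of persisting the sandboxed AVURLAsset URL, copy the video into your own container while access is granted and reference that copy:

```
import Photos

func exportVideo(from asset: PHAsset, completion: @escaping (URL?) -> Void) {
    // Pick the video resource backing the PHAsset.
    guard let resource = PHAssetResource.assetResources(for: asset)
        .first(where: { $0.type == .video }) else {
        completion(nil)
        return
    }
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(resource.originalFilename)
    try? FileManager.default.removeItem(at: destination)
    // Copy the file while Photos access is still valid.
    PHAssetResourceManager.default().writeData(for: resource,
                                               toFile: destination,
                                               options: nil) { error in
        completion(error == nil ? destination : nil)
    }
}
```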
Replies: 2 · Boosts: 0 · Views: 393 · Oct ’24