Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under the Media Technologies topic. Each entry below lists its reply count, boost count, view count, and latest activity.

How to generate thumbnails for protected content using AVAssetImageGenerator?
I have a FairPlay-encrypted HLS stream playing in an AVPlayer, and I want to generate scrubbing thumbnails using AVAssetImageGenerator. I am able to generate thumbnails for clear streams, but I get errors for protected content. How can I generate thumbnails for protected content?

func getImageThumbnail(forTime: CMTime) {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    generator.cancelAllCGImageGeneration()
    generator.generateCGImagesAsynchronously(forTimes: [NSValue(time: forTime)]) { [weak self] requestedTime, image, actualTime, result, error in
        if let error = error {
            print("Error generating thumbnail: \(error.localizedDescription)")
            return
        }
        if let image = image {
            DispatchQueue.main.async {
                // Convert the CGImage to JPEG data and show it in the scrubbing image view.
                let data = UIImage(cgImage: image).jpegData(compressionQuality: 1.0)
                self?.playerImg.image = UIImage(data: data!)
            }
        }
    }
}
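For reference, a minimal sketch of one direction sometimes suggested for this situation: hand AVAssetImageGenerator the same AVURLAsset that already has the FairPlay content-key session attached, rather than a freshly created asset with no key pipeline. The names streamURL, keyDelegate, and keyQueue are placeholders for whatever the app already uses for playback; whether the generator can decode protected samples at all may still depend on the key-delivery setup.

import AVFoundation

// Placeholder names: streamURL, keyDelegate, keyQueue stand in for the app's existing FairPlay plumbing.
let asset = AVURLAsset(url: streamURL)
let keySession = AVContentKeySession(keySystem: .fairPlayStreaming)
keySession.setDelegate(keyDelegate, queue: keyQueue)
keySession.addContentKeyRecipient(asset)            // register the asset before generating images

let generator = AVAssetImageGenerator(asset: asset) // reuse the keyed asset, not a fresh AVURLAsset
generator.appliesPreferredTrackTransform = true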
1 reply · 0 boosts · 762 views · Sep ’24
CIFilter chain failing to render parts of output
I’ve built an iOS camera app that applies many CIFilters to an image captured by the camera. Some of my users have reported that, on occasion, the images have large blank areas, see below: Frustratingly, I can’t reproduce this myself! Does anyone know what could be causing it, is it a memory issue? I haven’t posted the code as there’s a lot to look over and I’m not sure it would help diagnose it. Thanks for any pointers.
1 reply · 0 boosts · 652 views · Sep ’24
Why is isReadyForMoreMediaData sometimes false?
Hi, I'm recording videos with AVAssetWriter. The capture fps (camera output fps) is fine, but the final video's fps is lower, because AVAssetWriterInput.isReadyForMoreMediaData is sometimes false. Yes, I have read the documentation many times; it says to set expectsMediaDataInRealTime to true, and so on. I have been tortured by this problem for a long time. How can I debug it? Any advice?
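For context, a small sketch of the pattern the documentation describes for real-time sources: mark the input as real-time when it is created, and when it reports it is not ready, drop that buffer instead of blocking the capture callback. The function and parameter names here are placeholders, not the poster's code.

import AVFoundation

// At setup time (placeholder name `input`):
// input.expectsMediaDataInRealTime = true

func appendIfReady(_ sampleBuffer: CMSampleBuffer,
                   to input: AVAssetWriterInput,
                   writer: AVAssetWriter) {
    guard writer.status == .writing else { return }
    if input.isReadyForMoreMediaData {
        input.append(sampleBuffer)
    } else {
        // For live capture the usual choice is to drop the frame rather than block
        // the camera callback; blocking stalls delivery and lowers the output fps further.
    }
}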
2 replies · 0 boosts · 623 views · Sep ’24
MIDIFlushOutput does not cancel events scheduled over network
Calling MIDIFlushOutput on a network endpoint is not cancelling events scheduled with future timestamps -- they continue to send. e.g.

func send(eventLists: [MIDIEventList]) {
    let outputPortRef = ...
    let networkDestination = ...
    for var eventList in eventLists {
        MIDISendEventList(outputPortRef, networkDestination.objectRef, &eventList)
    }
}
...
MIDIFlushOutput(networkDestination.objectRef)

I'm seeing that MIDIFlushOutput does successfully cancel scheduled events on a non-network endpoint. How can I clear all scheduled outgoing events over a MIDI Network connection?
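One diagnostic that might be worth trying (a sketch, not a confirmed fix): the Core MIDI header comment for MIDIFlushOutput appears to allow a null/zero destination, in which case the flush applies to all destinations. Comparing that against flushing the specific endpoint could show whether the problem is with the network endpoint's ref or with the network session's scheduling.

import CoreMIDI

// Hypothetical diagnostic: ask Core MIDI to unschedule pending events for all destinations.
let status = MIDIFlushOutput(0)
print("MIDIFlushOutput(0) returned \(status)")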
2 replies · 2 boosts · 608 views · Sep ’24
AVPlayer.replaceCurrentItem(with:) "Incorrect actor executor assumption" runtime crash when building for iOS 18
Hi there, I have some code that's been working fine for the last few versions of iOS, macOS, and the other platforms, and now causes a runtime crash on iOS 18/macOS 15.

I have an actor called Player which is basically a big wrapper around an AVPlayer. It all gets compiled down to a framework, and my clients use it by dropping it into their video player app code. It handles everything they need to talk to our media infrastructure, and it handles telemetry. It has its own property called avplayer, which is an AVPlayer created in init(). It has a function called load(_ avPlayerItem: AVPlayerItem) which clients use to load a new video into the player. The offending code (which used to work!) looks like this:

Task { @MainActor in
    avplayer.replaceCurrentItem(with: avPlayerItem)
}

No warnings in Xcode. When you run it, it crashes on iOS 18 and macOS 15 with this error in the debugger:

Incorrect actor executor assumption

I thought, "Okay, maybe replaceCurrentItem has changed and doesn't need to be on the main actor anymore," but even if you call this outside of a main-actor-scoped task:

avplayer.replaceCurrentItem(with: avPlayerItem)

...it still crashes the exact same way. Does anyone have any ideas? I'm under some heavy pressure here to get this working and I don't even know where to start. Big thanks in advance.
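For comparison, one restructuring that sidesteps the executor mismatch, sketched under the assumption that the wrapper can be main-actor isolated rather than being its own actor: if the AVPlayer is only ever touched from the main actor anyway, isolating the whole type to @MainActor removes the inner Task hop and the cross-executor property access entirely.

import AVFoundation

// Sketch under the assumption that main-actor isolation is acceptable for the wrapper.
@MainActor
final class Player {
    private let avplayer = AVPlayer()

    func load(_ avPlayerItem: AVPlayerItem) {
        avplayer.replaceCurrentItem(with: avPlayerItem)
    }
}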
4 replies · 0 boosts · 850 views · Sep ’24
Capture Extension Icon
Hello, I seem to be having an issue assigning my Capture Extension an icon. It works fine using a system icon, for example: Image(systemName: "star") But it fails when I use my custom icon, such as: Image(uiImage: UIImage(named: "widget-icon")!) The "widget-icon" is located in both my Assets collection and the widget folder for good measure, and yet, my Widget always has a "?" icon. I am able to use "widget-icon" just fine for other Lock Screen widgets, but it is not working for the Camera Extension Widget. Any thoughts? Thank you for your help!
2 replies · 0 boosts · 486 views · Sep ’24
Using non-modular libraries in an audio-unit
I've got a bunch of Audio Units I've been developing over the last few months - in Xcode 14, based on the Audio Unit template that ships in Xcode. The DSP heavy lifting is all done in Rust libraries, which I build for Intel and Apple Silicon, merge using lipo, and build XCFrameworks from. There are several of these - one with the DSP code, and several others used from the UI - a mix of SwiftUI and Cocoa. Getting the integration of Rust, Objective-C, C++ and Swift working and automated took a few weeks (my first foray into C++ since the 1990s), but it has been solid, reliable and working well - everything loads in Logic Pro and GarageBand and works.

But now I'm attempting the (woefully underdocumented) process of making 13 audio unit projects able to be loaded in-process - which means moving all of the actual code into a framework. And therein lies the problem: none of the xcframeworks are modular, which leads to the dreaded "Include of non-modular header inside framework module". Imported directly into the app extension project with "Allow Non-modular Includes in Framework Modules", it works fine - but in a framework this seems to be verboten.

So, the obvious solution would be to add a module map to each xcframework, and then, poof, they're modular. BUT... due to a peculiar limitation of Xcode's build system that I've spent days searching for a workaround for, it is only possible to have ONE xcframework containing a module.modulemap file in any project. More than that and xcodebuild will try to clobber them with each other and the build will fail. And there appears to be NO WAY to name the file anything other than module.modulemap inside an xcframework and have it be detected. So I cannot modularize my frameworks, because there is more than one of them.

How the heck does one work around this? A custom module map file somewhere (that the build should find and understand applies to four xcframeworks - how?)? Something else? I've seen one dreadful workaround - https://medium.com/@florentmorin/integrating-swift-framework-with-non-modular-libraries-d18098049e18 - given that I'm generating a lot of the C and Objective-C code for the audio in Rust, I suppose I could write a tool that parses the header files and generates Objective-C code that imports each framework and declares one method for every single Rust call. But it seems to me there has to be a correct way to do this without jumping through such hoops. Thoughts?
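For reference, a minimal module map of the kind being described, placed next to the headers inside one xcframework slice; the module name and header file are hypothetical placeholders for whatever each Rust xcframework actually exposes.

// module.modulemap (hypothetical names)
module MyRustDSP {
    header "my_rust_dsp.h"
    export *
}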
2 replies · 0 boosts · 662 views · Sep ’24
Moving Photos
How can it be that you still don't have the option to move photos into an album instead of just copying them? This is a bad joke, right? The entire Photos app is absolutely untidy and a nightmare for people who like order. I want car photos in the car folder and vacation photos in the vacation folder, without them being visible in the Recents folder. Can't be that difficult???
3 replies · 0 boosts · 336 views · Sep ’24
tvOS 18.0 update harmed passthrough of multichannel audio for previously compatible hardware
tvOS 18 doesn't provide passthrough of multichannel audio for streaming apps offering content where it is promoted as available. This is true for devices on which the functionality existed before the 18.0 tvOS update. What's more, the 18.1 public beta did not resolve the issue. All streaming apps appear to be affected. Notably, Home Sharing does not appear to be affected, and continues to provide multichannel audio as it did before the 18.0 update.
1 reply · 2 boosts · 846 views · Sep ’24
Silent output MP4 when using AVAssetReader
Hello, I am trying to create an MP4 by obtaining the content from another source MP4. The source MP4 would be read with AVAssetReader and the output written with AVAssetWriter. I wanted to do partial tests: first, I placed only the video in the output MP4. Now, I am trying to place only the audio in the output MP4. I even managed to get the output MP4 to have the same length (in seconds) as the source MP4. But the problem is simple: the output MP4 is simply silent. Naturally, I want it to have audio. Below are two excerpts from the source code, reading and writing. Note: the variable videoURL is from the class where the function writeVideo() is located. Its assignment happens in another scope, already debugged.

Snippet:

let semaphore = DispatchSemaphore(value: 0)

func writeVideo() {
    var audioReaderBuffers = [CMSampleBuffer]()

    // File directory
    url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("*****/output.mp4")
    guard let url = url else { return }
    try FileManager.default.createDirectory(at: url.deletingLastPathComponent(), withIntermediateDirectories: true)
    if FileManager.default.fileExists(atPath: url.path()) {
        try FileManager.default.removeItem(at: url)
    }

    if let videoURL = videoURL {
        let videoAsset = AVAsset(url: videoURL)
        Task {
            let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
            let reader = try AVAssetReader(asset: videoAsset)
            let audioSettings = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2
            ] as [String : Any]
            let audioOutputTest = try await audioTrack.getAudioSettings()
            let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)
            reader.add(readerAudioOutput)
            reader.startReading()

            while let sampleBuffer = readerAudioOutput.copyNextSampleBuffer() {
                audioReaderBuffers.append(sampleBuffer)
            }
            semaphore.signal()
        }
    }
    semaphore.wait()

    let audioInput = createAudioInput(sampleBuffer: audioReaderBuffers[0])
    let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
    assetWriter.add(audioInput)
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: .zero)

    for (index, buffer) in audioReaderBuffers.enumerated() {
        while !audioInput.isReadyForMoreMediaData {
            usleep(1000)
        }
        audioInput.append(buffer)
    }

    assetWriter.finishWriting {
        switch assetWriter.status {
        case .completed:
            print("Operation completed successfully: \(url.absoluteString)")
        case .failed:
            if let error = assetWriter.error {
                print("Error description: \(error.localizedDescription)")
            } else {
                print("Error not found.")
            }
        default:
            print("Error not found.")
        }
    }
}

Below is the createAudioInput method:

func createAudioInput(sampleBuffer: CMSampleBuffer) -> AVAssetWriterInput {
    let audioSettings = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 48000,
        AVEncoderBitRatePerChannelKey: 64000,
        AVNumberOfChannelsKey: 1
    ] as [String : Any]
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: sampleBuffer.formatDescription)
    audioInput.expectsMediaDataInRealTime = false
    return audioInput
}

I await your help, please.
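One diagnostic worth running in code shaped like this (a sketch using the post's own variable names): confirm that the reader actually delivered audio before suspecting the writer. copyNextSampleBuffer() returns nil both when reading completes and when it fails, so the reader's status has to be checked explicitly after the loop.

// Diagnostic sketch, placed right after the while loop (post's own names assumed):
print("collected \(audioReaderBuffers.count) audio buffers")
if reader.status == .failed {
    print("reader failed: \(String(describing: reader.error))")
}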
0 replies · 0 boosts · 566 views · Sep ’24
iOS 18 arm64 simulator disables audio output with unknown "AudioConverterService" error
Hello, I'm getting an unknown, never-before-seen error at application launch when running my iOS SpriteKit game on the iOS 18 arm64 simulator from Xcode 16.0 (16A242d) —

AudioConverterOOP.cpp:847 Failed to prepare AudioConverterService: -302

This occurs on all iOS 18 simulator devices, between application(_:didFinishLaunchingWithOptions:) and the first applicationDidBecomeActive(_:) — the SKScene object may have already been initialized by SpriteKit, but the scene's didMove(to:) method hasn't been called yet. Also, note that the error message is being emitted from a secondary (non-main) thread, obviously not created by the app. After the error occurs, no SKScene is able to play audio — this had never occurred on iOS versions prior to 18, neither on physical devices nor on the simulator. Has anyone seen anything like this on a physical device running 18? Unfortunately, at the moment I cannot test on an 18 device myself, only on the simulator... Thank you, D.
2 replies · 0 boosts · 859 views · Sep ’24
RPScreenRecorder startCapture issues on generated file
Hello all, this is my first post on the developer forums. I am developing an app that records the screen of my app, using AVAssetWriter and RPScreenRecorder's startCapture. Everything is working as it should in most cases. There are some seemingly random times where the generated file is only a few KB and is corrupted. There seems to be no pattern to the device or iOS version; it can happen on various phones and iOS versions. The steps I have followed to create the file are:

Configuring the AVAssetWriter:

videoAssetWriter = try? AVAssetWriter(outputURL: url!, fileType: AVFileType.mp4)

let size = UIScreen.main.bounds.size
let width = (Int(size.width / 4)) * 4
let height = (Int(size.height / 4)) * 4
let videoOutputSettings: Dictionary<String, Any> = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: width,
    AVVideoHeightKey: height
]
videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
videoInput?.expectsMediaDataInRealTime = true
guard let videoInput = videoInput else { return }
if videoAssetWriter?.canAdd(videoInput) ?? false {
    videoAssetWriter?.add(videoInput)
}

let audioInputsettings = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputsettings)
audioInput?.expectsMediaDataInRealTime = true
guard let audioInput = audioInput else { return }
if videoAssetWriter?.canAdd(audioInput) ?? false {
    videoAssetWriter?.add(audioInput)
}

The urlForVideo() method returns the URL in the documents directory, after appending and creating the folders needed. This part seems to be working as it should, as the directories are created and the video file exists in them.

Starting the recording:

if RPScreenRecorder.shared().isRecording { return }
RPScreenRecorder.shared().startCapture(handler: { [weak self] sample, bufferType, error in
    if let error = error {
        onError?(error.localizedDescription)
    } else {
        if (!RPScreenRecorder.shared().isMicrophoneEnabled) {
            RPScreenRecorder.shared().stopCapture { error in
                if let error = error {
                    return
                }
            }
            onError?("Microphone was not enabled")
        } else {
            succesCompletion?()
            succesCompletion = nil
            self?.processSampleBuffer(sample, with: bufferType)
        }
    }
}) { error in
    if let error = error {
        onError?(error.localizedDescription)
    }
}

Processing the sample buffers:

guard CMSampleBufferDataIsReady(sampleBuffer) else { return }
DispatchQueue.main.async { [weak self] in
    switch sampleBufferType {
    case .video:
        self?.handleVideoBaffer(sampleBuffer)
    case .audioMic:
        self?.add(sample: sampleBuffer, to: self?.audioInput)
    default:
        break
    }
}

// The add function from above
fileprivate func add(sample: CMSampleBuffer, to writerInput: AVAssetWriterInput?) {
    if writerInput?.isReadyForMoreMediaData ?? false {
        writerInput?.append(sample)
    }
}

// The handleVideoBaffer function from above
fileprivate func handleVideoBaffer(_ sampleBuffer: CMSampleBuffer) {
    if self.videoAssetWriter?.status == AVAssetWriter.Status.unknown {
        self.videoAssetWriter?.startWriting()
        self.videoAssetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    } else {
        if (self.videoInput?.isReadyForMoreMediaData) ?? false {
            if self.videoAssetWriter?.status == AVAssetWriter.Status.writing {
                self.videoInput?.append(sampleBuffer)
            }
        }
    }
}

Finally, stopping the recording:

func stopRecording(completion: @escaping (URL?, URL?, Error?) -> Void) {
    RPScreenRecorder.shared().stopCapture { error in
        if let error = error {
            completion(nil, nil, error)
            return
        }
        self.finish { videoURL, _ in
            completion(videoURL, nil, nil)
        }
    }
}

// The finish function mentioned above
fileprivate func finish(completion: @escaping (URL?, URL?) -> Void) {
    let dispatchGroup = DispatchGroup()
    dispatchGroup.enter()
    finishRecordVideo {
        dispatchGroup.leave()
    }
    dispatchGroup.notify(queue: .main) {
        print("Finish with url: \(String(describing: self.urlForVideo()))")
        completion(self.urlForVideo(), nil)
    }
}

// The finishRecordVideo function mentioned above
fileprivate func finishRecordVideo(completion: @escaping () -> Void) {
    videoInput?.markAsFinished()
    audioInput?.markAsFinished()
    videoAssetWriter?.finishWriting {
        if let writer = self.videoAssetWriter {
            if writer.status == .completed {
                completion()
            } else if writer.status == .failed {
                // Print the error to find out what went wrong
                if let error = writer.error {
                    print("Video asset writing failed with error: \(error.localizedDescription). Url: \(writer.outputURL.path)")
                } else {
                    print("Video asset writing failed, but no error description available.")
                }
                completion()
            } else {
                completion()
            }
        }
    }
}

What could be the reason for the corrupted files? This has never happened on my own devices, so there is no way to debug it in Xcode, and no errors show up in the logs. Can you spot any issues in the code that could cause this? Do you have any suggestions on the problem at hand? Thanks
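One guard that could be worth considering in code shaped like this (a sketch using the post's own names, not a confirmed diagnosis): because buffers are forwarded to the main queue asynchronously, an .audioMic buffer can reach add(sample:to:) before the first video buffer has started the writer session, and appends attempted around that boundary are a plausible source of tiny, unplayable files. Gating every append on the writer actually being in the .writing state rules that out.

// Sketch: only append once startWriting()/startSession(atSourceTime:) have run,
// i.e. after the first video buffer has been seen.
fileprivate func add(sample: CMSampleBuffer, to writerInput: AVAssetWriterInput?) {
    guard videoAssetWriter?.status == .writing,
          writerInput?.isReadyForMoreMediaData == true else { return }
    writerInput?.append(sample)
}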
0 replies · 0 boosts · 580 views · Sep ’24
Understanding the number of input channels in Core Audio
Hello everyone, I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device using Audio Units. On my MacBook, the default audio input is mono. But when I write a piece of code to capture audio using AUHAL, I discover that I need to provide an AudioBufferList with two channels, not one. Also, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, and not with 20 channels.

To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on which way I'm probing, I'm getting different results. When I probe the stream format, I'm told there is 1 channel. But when I probe the input audio unit, I'm told there are 2 input channels. Here's my program to demonstrate the issue:

// InputDeviceChannels.m
// Compile with:
//   clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
//   Device Name: MacBook Pro Microphone
//   Number of Channels (Stream Format): 1
//   Number of Elements (Element Count): 2

#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>

void printDeviceInfo(AudioUnit audioUnit) {
    UInt32 size;
    OSStatus err;

    AudioStreamBasicDescription streamFormat;
    size = sizeof(streamFormat);
    err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &streamFormat, &size);
    if (err != noErr) {
        printf("Error getting stream format\n");
        exit(1);
    }
    int numChannels = streamFormat.mChannelsPerFrame;

    UInt32 elementCount;
    size = sizeof(elementCount);
    err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &elementCount, &size);
    if (err != noErr) {
        printf("Error getting element count\n");
        exit(1);
    }

    printf("Number of Channels (Stream Format): %d\n", numChannels);
    printf("Number of Elements (Element Count): %d\n", elementCount);
}

void printDeviceName(AudioDeviceID deviceID) {
    UInt32 size;
    OSStatus err;
    CFStringRef deviceName = NULL;
    size = sizeof(deviceName);
    err = AudioObjectGetPropertyData(
        deviceID,
        &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                      kAudioObjectPropertyScopeGlobal,
                                      kAudioObjectPropertyElementMain},
        0, NULL, &size, &deviceName);
    if (err != noErr) {
        printf("Error getting device name\n");
        exit(1);
    }

    char deviceNameStr[256];
    if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr), kCFStringEncodingUTF8)) {
        printf("Error converting device name to C string\n");
        exit(1);
    }
    CFRelease(deviceName);
    printf("Device Name: %s\n", deviceNameStr);
}

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        OSStatus err;

        // Get the default input device ID
        AudioDeviceID input_device_id = kAudioObjectUnknown;
        {
            UInt32 property_size = sizeof(input_device_id);
            AudioObjectPropertyAddress input_device_property = {
                kAudioHardwarePropertyDefaultInputDevice,
                kAudioObjectPropertyScopeGlobal,
                kAudioObjectPropertyElementMain,
            };
            err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL, &property_size, &input_device_id);
            if (err != noErr || input_device_id == kAudioObjectUnknown) {
                printf("Error getting default input device ID\n");
                exit(1);
            }
        }

        // Print the device name using the input device ID
        printDeviceName(input_device_id);

        // Open audio unit for the input device
        AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0};
        AudioComponent component = AudioComponentFindNext(NULL, &desc);
        AudioUnit audioUnit;
        err = AudioComponentInstanceNew(component, &audioUnit);
        if (err != noErr) {
            printf("Error creating AudioUnit\n");
            exit(1);
        }

        // Enable IO for input on the AudioUnit and disable output
        UInt32 enableInput = 1;
        UInt32 disableOutput = 0;
        err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &enableInput, sizeof(enableInput));
        if (err != noErr) {
            printf("Error enabling input on AudioUnit\n");
            exit(1);
        }
        err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &disableOutput, sizeof(disableOutput));
        if (err != noErr) {
            printf("Error disabling output on AudioUnit\n");
            exit(1);
        }

        // Set the current device to the input device
        err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
        if (err != noErr) {
            printf("Error setting device for AudioUnit\n");
            exit(1);
        }

        // Initialize AudioUnit
        err = AudioUnitInitialize(audioUnit);
        if (err != noErr) {
            printf("Error initializing AudioUnit\n");
            exit(1);
        }

        // Print device info
        printDeviceInfo(audioUnit);

        // Clean up
        AudioUnitUninitialize(audioUnit);
        AudioComponentInstanceDispose(audioUnit);
    }
    return 0;
}

It prints:

Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2

I tried to set the number of channels to 1 on the input unit, but it didn't change anything. After calling setNumberOfChannels(1, audioUnit), I'm still getting the same output.

Note 1: I know that I can ignore one channel, etc, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.

Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can't make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus.

How should I understand this? Is it that the audio unit is an abstraction layer and automatically converts mono input into stereo input? Can I ask AUHAL to provide me the same number of input channels that the audio device has?
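As a point of comparison, a small Swift sketch of asking the device itself (rather than AUHAL) how many input channels it exposes, via kAudioDevicePropertyStreamConfiguration. Note that the element count of 2 reported above is most likely the number of AUHAL elements (buses), not a channel count; this query is the usual way to get the hardware's actual input channel total. The function name is hypothetical.

import CoreAudio

// Hypothetical helper: total input channels a device exposes, summed across its input streams.
func inputChannelCount(of deviceID: AudioDeviceID) -> Int {
    var addr = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioDevicePropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var size: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &addr, 0, nil, &size) == noErr else { return 0 }
    let raw = UnsafeMutableRawPointer.allocate(byteCount: Int(size),
                                               alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { raw.deallocate() }
    guard AudioObjectGetPropertyData(deviceID, &addr, 0, nil, &size, raw) == noErr else { return 0 }
    let list = UnsafeMutableAudioBufferListPointer(raw.assumingMemoryBound(to: AudioBufferList.self))
    return list.reduce(0) { $0 + Int($1.mNumberChannels) }
}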
1 reply · 0 boosts · 1k views · Sep ’24
Launching an app with Camera Control
I've just received my iPhone 16 Pro to develop some of the Camera Control features. I am trying to set up my app to be launched from a button press, and from my research in the documentation this is only possible if I develop a LockedCameraCaptureExtension. Is this correct? My app is written in React Native, so building an extension would require me to re-create the entire UI in Swift, which just isn't possible with my resources. Ideally I could build a simple extension that requires authentication to open the app, but I'm not sure that will work: "The app extension terminates shortly after launch if it doesn't have an active camera view that uses AVCaptureEventInteraction to handle events from the hardware buttons, or if access to the camera hasn't been requested." This is a bit frustrating for something as simple as just opening an app. Thanks, Alex
1 reply · 0 boosts · 1.1k views · Sep ’24
CarPlay issue after iOS 18 update
After upgrading to iOS 18, CarPlay with a 2023 Lexus and an iPhone 15 Pro Max shows multiple issues:
• speakers reduced to mono sound (going back to normal after some minutes and then reducing again)
• no speaker sound at all
• touching / moving the phone while driving results in "on and off" sound
No reboot / shutdown helps. No cable connection works. @Apple: do you test your software professionally, or is this outsourced to the community? It doesn't look at all like a professional approach. Please solve this dangerous (traffic!) and annoying issue ASAP! Thanks - Torsten
1 reply · 0 boosts · 728 views · Sep ’24
AVFoundation error when making a window full screen
I am working on a macOS app that uses AVFoundation to record the screen. During a recording, if I make a window full screen, AVFoundation stops capturing screen frames (or captures them at a very slow rate). In my logs I get the following error: Error Domain=AVFoundationErrorDomain Code=-11844 Note that I have had instances where I could not reproduce the error, but they were rare. The screen recording sometimes resumes normally if I switch desktops or minimize the full-screen window. Has anyone ever run across a similar issue, or does anyone know how to fix it?
1 reply · 0 boosts · 371 views · Sep ’24
Bug: Duplicate audio playback in QuickLook with .reality files in iOS 18
I'm experiencing an issue with QuickLook in iOS 18 where .reality files with audio playback are affected. When I open a .reality file that includes audio, the audio track plays twice: once from the moment the file is opened, and again from the start of the animation. This results in duplicate audio playback. I've tested this issue on multiple devices running iOS 16, 17, and 18, and the problem only occurs on iOS 18. I've tried restarting the devices and checking for software updates, but the issue persists. Steps to reproduce: Open a .reality file with audio playback in QuickLook on an iOS 18 device. Observe the audio playback. Expected result: The audio track should play only once, from the start of the animation. Actual result: The audio track plays twice, once from the moment the file is opened and again from the start of the animation. Devices and iOS versions: I've tested this issue on an iPhone 12 Pro and iPhone 13 Pro running iOS 18, an iPhone 13 running iOS 16, and an iPhone 11 Pro running iOS 17.
5 replies · 5 boosts · 607 views · Sep ’24