Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation


AVAssetExportSession failed to export audio
I am trying to use AVAssetExportSession to export audio from a video, but every time I try it the export fails and I don't know why. This is the code:

```swift
import AVFoundation

protocol AudioExtractionProtocol {
    func extractAudio(from fileUrl: URL, to outputUrl: URL)
}

final class AudioExtraction {
    private var avAsset: AVAsset?
    private var avAssetExportSession: AVAssetExportSession?

    init() {}
}

// MARK: - AudioExtraction conforms to AudioExtractionProtocol
extension AudioExtraction: AudioExtractionProtocol {
    func extractAudio(from fileUrl: URL, to outputUrl: URL) {
        createAVAsset(for: fileUrl)
        createAVAssetExportSession(for: outputUrl)
        exportAudio()
    }
}

// MARK: - Private Methods
extension AudioExtraction {
    private func createAVAsset(for fileUrl: URL) {
        avAsset = AVAsset(url: fileUrl)
    }

    private func createAVAssetExportSession(for outputUrl: URL) {
        guard let avAsset else { return }
        avAssetExportSession = AVAssetExportSession(asset: avAsset, presetName: AVAssetExportPresetAppleM4A)
        avAssetExportSession?.outputURL = outputUrl
    }

    private func exportAudio() {
        guard let avAssetExportSession else { return }
        print("I am here \n")
        avAssetExportSession.exportAsynchronously {
            if avAssetExportSession.status == .failed {
                print("\(avAssetExportSession.status)\n")
            }
        }
    }
}

func test_AudioExtraction_extractAudioAndWriteItToFile() {
    let videoUrl = URL(string: "https://storage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4")!
    let audioExtraction: AudioExtractionProtocol = AudioExtraction()
    audioExtraction.extractAudio(from: videoUrl, to: FileMangerTest.audioFile)
    FileMangerTest.tearDown()
}

class FileMangerTest {
    private static let fileManger = FileManager.default

    private static var directoryUrl: URL {
        fileManger.urls(for: .cachesDirectory, in: .userDomainMask).first!
    }

    static var audioFile: URL {
        directoryUrl.appendingPathComponent("audio", conformingTo: .mpeg4Audio)
    }

    static func tearDown() {
        try? fileManger.removeItem(at: audioFile)
    }

    static func contant(at url: URL) -> Data? {
        return fileManger.contents(atPath: url.absoluteString)
    }
}
```
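Note that the code above never sets an output file type and never inspects the session's error property, which is usually the quickest way to see why an export failed. A minimal diagnostic sketch, assuming a session configured as in the post:

```swift
import AVFoundation

// Hypothetical helper: run the export and print why it failed, if it did.
// Assumes `session` was created with AVAssetExportPresetAppleM4A and already has outputURL set.
func runExport(_ session: AVAssetExportSession) {
    session.outputFileType = .m4a  // the original code relies on the preset alone
    session.exportAsynchronously {
        switch session.status {
        case .completed:
            print("Exported to \(session.outputURL?.path ?? "?")")
        case .failed, .cancelled:
            // The underlying error is far more informative than the raw status value.
            print("Export failed: \(String(describing: session.error))")
        default:
            break
        }
    }
}
```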
0 replies · 0 boosts · 347 views · Jul ’24
API to get AVSpeechSynthesisVoices that are available to download (via Settings.app) but not yet downloaded to a device
Here is the use case: I have a language learning app that uses AVSpeechSynthesizer ❤️. When a user listens to a phrase with AVSpeechSynthesizer using an AVSpeechSynthesisVoice whose AVSpeechSynthesisVoiceQuality is default, it sounds much, much worse than a voice with enhanced or premium quality, which really hurts usability. There appears to be no API for the app to know whether there are enhanced or premium voices available to download (via Settings.app) but not yet downloaded to the device. The only API I could find is AVSpeechSynthesisVoice.speechVoices(), which returns all voices available on the device, but not the full list of voices available for download. So the app cannot know if it should inform the user: "hey, this voice you're listening to is much lower quality than the enhanced or premium version, go to Settings and download it". Any ideas? Do I need to send in an enhancement request via Feedback Assistant? Thank you for helping my users' ears and helping them turn speech synthesis voice quality up to 11 when it's available to them with just a couple of extra taps! (I suppose the best workaround is to display a warning every time the user is using a default-quality voice; I wonder what percentage of voices have enhanced or premium versions...)
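A sketch of the workaround suggested at the end of the post (my own code, not an Apple-provided API): since speechVoices() only reports installed voices, the app can at least detect when nothing better than a default-quality voice is installed for the learner's language and show a "check Settings" hint.

```swift
import AVFoundation

// Returns true when no enhanced/premium voice for `language` is installed yet.
// Assumption: a downloadable-but-uninstalled voice cannot be detected from the app,
// and .premium requires iOS 16 or later.
func shouldSuggestVoiceDownload(for language: String) -> Bool {
    let voices = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == language }
    let hasBetterVoice = voices.contains { $0.quality == .enhanced || $0.quality == .premium }
    return !voices.isEmpty && !hasBetterVoice
}

// Usage: if shouldSuggestVoiceDownload(for: "en-US") { /* show a hint pointing to Settings */ }
```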
2 replies · 0 boosts · 665 views · Jul ’24
Voice recorder app recording in dual mono instead of stereo
Hi y'all, after getting mono recording working, I want to differentiate my app from the standard Voice Memos by allowing stereo recording. I followed this tutorial (https://developer.apple.com/documentation/avfaudio/capturing_stereo_audio_from_built-in_microphones) to get my voice recorder to record stereo audio. However, when I look at the waveform in Audacity, both channels are the same. If I look at the file info after sharing it, it says the file is in stereo. I don't exactly know what's going on here. What I suspect is happening is that the recorder is only using one microphone. Here is the relevant part of my recorder:

```swift
// MARK: - Initialization
override init() {
    super.init()
    do {
        try configureAudioSession()
        try enableBuiltInMicrophone()
        try setupAudioRecorder()
    } catch {
        // If any errors occur during initialization,
        // terminate the app with a fatalError.
        fatalError("Error: \(error)")
    }
}

// MARK: - Audio Session and Recorder Configuration
private func enableBuiltInMicrophone() throws {
    let audioSession = AVAudioSession.sharedInstance()
    let availableInputs = audioSession.availableInputs

    guard let builtInMicInput = availableInputs?.first(where: { $0.portType == .builtInMic }) else {
        throw Errors.NoBuiltInMic
    }

    do {
        try audioSession.setPreferredInput(builtInMicInput)
    } catch {
        throw Errors.UnableToSetBuiltInMicrophone
    }
}

private func configureAudioSession() throws {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.record, mode: .default, options: [.allowBluetooth])
        try audioSession.setActive(true)
    } catch {
        throw Errors.FailedToInitSessionError
    }
}

private func setupAudioRecorder() throws {
    let date = Date()
    let dateFormatter = DateFormatter()
    dateFormatter.locale = Locale(identifier: "en_US_POSIX")
    dateFormatter.dateFormat = "yyyy-MM-dd, HH:mm:ss"
    let timestamp = dateFormatter.string(from: date)

    self.recording = Recording(name: timestamp)

    guard let fileURL = recording?.returnURL() else {
        fatalError("Failed to create file URL")
    }
    self.currentURL = fileURL
    print("Recording URL: \(fileURL)")

    do {
        let audioSettings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVLinearPCMIsNonInterleaved: false,
            AVSampleRateKey: 44_100.0,
            AVNumberOfChannelsKey: isStereoSupported ? 2 : 1,
            AVLinearPCMBitDepthKey: 16,
            AVEncoderAudioQualityKey: AVAudioQuality.max.rawValue
        ]
        audioRecorder = try AVAudioRecorder(url: fileURL, settings: audioSettings)
    } catch {
        throw Errors.UnableToCreateAudioRecorder
    }

    audioRecorder.delegate = self
    audioRecorder.prepareToRecord()
}

// MARK: - Update orientation
public func updateOrientation(
    withDataSourceOrientation orientation: AVAudioSession.Orientation = .front,
    interfaceOrientation: UIInterfaceOrientation
) async throws {
    let session = AVAudioSession.sharedInstance()
    guard let preferredInput = session.preferredInput,
          let dataSources = preferredInput.dataSources,
          let newDataSource = dataSources.first(where: { $0.orientation == orientation }),
          let supportedPolarPatterns = newDataSource.supportedPolarPatterns else {
        return
    }

    isStereoSupported = supportedPolarPatterns.contains(.stereo)

    if isStereoSupported {
        try newDataSource.setPreferredPolarPattern(.stereo)
    }

    try preferredInput.setPreferredDataSource(newDataSource)
    try session.setPreferredInputOrientation(interfaceOrientation.inputOrientation)
}
```

Here is the relevant part of my SwiftUI view:

```swift
RecordView()
    .onAppear {
        Task {
            if await AVAudioApplication.requestRecordPermission() {
                // The user grants access. Present recording interface.
                print("Permission granted")
            } else {
                // The user denies access. Present a message that indicates
                // that they can change their permission settings in the
                // Privacy & Security section of the Settings app.
                model.showAlert.toggle()
            }
            try await recorder.updateOrientation(interfaceOrientation: deviceOrientation)
        }
    }
    .onReceive(NotificationCenter.default.publisher(for: UIDevice.orientationDidChangeNotification)) { _ in
        if let windowScene = UIApplication.shared.connectedScenes.first as? UIWindowScene,
           let orientation = windowScene.windows.first?.windowScene?.interfaceOrientation {
            deviceOrientation = orientation
            Task {
                do {
                    try await recorder.updateOrientation(interfaceOrientation: deviceOrientation)
                } catch {
                    throw Errors.UnableToUpdateOrientation
                }
            }
        }
    }
```

Here is the full repo: https://github.com/aabagdi/MemoMan/tree/MemoManStereo
Thanks for any leads!
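One thing worth checking, as a suggestion of mine rather than part of the tutorial: whether the session was actually granted two input channels. A dual-mono file is what you typically get when the hardware is still configured for a single input channel even though the output format says stereo.

```swift
import AVFoundation

// Ask for two input channels and log what the session actually granted.
// Assumption: this runs after the category, preferred input, data source and
// polar pattern have been configured as in the recorder code above.
func verifyStereoInput() throws {
    let session = AVAudioSession.sharedInstance()
    let wanted = min(2, session.maximumInputNumberOfChannels)
    try session.setPreferredInputNumberOfChannels(wanted)
    print("preferred channels: \(session.preferredInputNumberOfChannels), " +
          "actual channels: \(session.inputNumberOfChannels)") // 1 here would explain a dual-mono file
}
```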
1 reply · 0 boosts · 476 views · Jul ’24
Input location of AVAudioSession differs between iPhones
The input location reported by AVAudioSession is different between devices when I use the speaker.

```swift
try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
try session.overrideOutputAudioPort(.speaker)
try session.setActive(true)

let route = session.currentRoute
route.inputs.forEach { input in
    print(input.selectedDataSource?.location)
}
```

On an iPhone 11 (iOS 17.5.1) this prints AVAudioSessionLocation: Lower; on an iPhone 7 Plus (iOS 15.8.2) it prints AVAudioSessionLocation: Upper. What causes this difference in behavior?
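For context (my note, not from the post): the selected data source reflects which physical microphone the system picked for the current route, and as the post itself shows, different iPhone models place and name those microphones differently. Listing every data source of the active input makes the hardware differences visible:

```swift
import AVFoundation

// Print every data source the current input exposes, with its location and orientation.
let session = AVAudioSession.sharedInstance()
if let input = session.currentRoute.inputs.first {
    for source in input.dataSources ?? [] {
        print(source.dataSourceName,
              source.location?.rawValue ?? "-",
              source.orientation?.rawValue ?? "-")
    }
}
```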
0 replies · 0 boosts · 376 views · Jul ’24
AVAudioSessionErrorCodeCannotInterruptOthers
We check AVAudioSessionInterruptionOptionShouldResume to decide whether to restore audio playback after an interruption. The app has been live for a long time and playback has resumed normally. Recently, however, we have received a lot of user feedback asking why audio does not resume. Following up on this feedback, we found that some apps occupy the audio session without actually playing audio. For example, when a user was in the WeChat app and had just sent a voice message, we received the notification to resume playback, and WeChat was not playing audio either, yet our attempt to resume failed with AVAudioSessionErrorCodeCannotInterruptOthers. We reported this to the WeChat team and that case was fixed. But some users still report the problem, and we do not know which app is holding the audio session, so we do not know where to start troubleshooting. We pay close attention to user feedback and hope someone can help us solve this user experience problem.
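A small diagnostic sketch (my own, not from the post) that can help narrow down reports like this: before reactivating the session, log whether another app currently claims audio, and log the exact activation error.

```swift
import AVFoundation

// Attempt to resume playback after an interruption, logging enough detail
// to tell whether another app is still holding the audio session.
func resumePlaybackAfterInterruption() {
    let session = AVAudioSession.sharedInstance()
    print("other audio playing: \(session.isOtherAudioPlaying), " +
          "should be silenced: \(session.secondaryAudioShouldBeSilencedHint)")
    do {
        try session.setActive(true)
        // Restart the player here.
    } catch {
        // AVAudioSession.ErrorCode.cannotInterruptOthers lands here.
        print("setActive failed: \(error)")
    }
}
```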
0 replies · 0 boosts · 354 views · Jul ’24
Exporting Audio with Scaled Segments Causes copyNextSampleBuffer to Hang
I am trying to export an AVMutableComposition with a single audio track. This track has a scaled AVCompositionTrackSegment to simulate speed changes up to 20x. I need to use the AVAssetWriter and AVAssetReader classes for this task. When I scale the source duration up to a maximum of 5x, everything works fine. However, when I scale it to higher speeds, such as 20x, the app hangs on the copyNextSampleBuffer method. I'm not sure why this is happening or how to prevent it. This also often happens if the exported audio track has segments with different speeds. (The duration of the audio file in the example is 47 seconds.) Example code:

```swift
class Export {
    func startExport() {
        let inputURL = Bundle.main.url(forResource: "Piano", withExtension: ".m4a")!
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let outputURL = documentsDirectory.appendingPathComponent("Piano20x.m4a")
        try? FileManager.default.removeItem(at: outputURL)
        print("Output URL: \(outputURL)")

        changeAudioSpeed(inputURL: inputURL, outputURL: outputURL, speed: 20)
    }

    func changeAudioSpeed(inputURL: URL, outputURL: URL, speed: Float) {
        let urlAsset = AVAsset(url: inputURL)
        guard let assetTrack = urlAsset.tracks(withMediaType: .audio).first else { return }

        let composition = AVMutableComposition()
        let compositionAudioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
        do {
            try compositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetTrack.timeRange.duration), of: assetTrack, at: .zero)
        } catch {
            print("Failed to insert audio track: \(error)")
            return
        }

        let scaledDuration = CMTimeMultiplyByFloat64(assetTrack.timeRange.duration, multiplier: Double(1.0 / speed))
        compositionAudioTrack?.scaleTimeRange(CMTimeRangeMake(start: .zero, duration: assetTrack.timeRange.duration), toDuration: scaledDuration)
        print("Scaled audio from \(assetTrack.timeRange.duration.seconds)sec to \(scaledDuration.seconds) sec")
        compositionAudioTrack?.segments

        do {
            let compositionAudioTracks = composition.tracks(withMediaType: .audio)
            let assetReader = try AVAssetReader(asset: composition)
            let audioSettings: [String: Any] = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2,
                AVLinearPCMBitDepthKey: 16,
                AVLinearPCMIsBigEndianKey: false,
                AVLinearPCMIsFloatKey: false,
                AVLinearPCMIsNonInterleaved: false
            ]
            let readerOutput = AVAssetReaderAudioMixOutput(audioTracks: compositionAudioTracks, audioSettings: audioSettings)
            assetReader.add(readerOutput)

            let assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: .m4a)
            let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
            assetWriter.add(writerInput)

            assetReader.startReading()
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: .zero)

            let conversionQueue = DispatchQueue(label: "ConversionQueue")
            writerInput.requestMediaDataWhenReady(on: conversionQueue) {
                while writerInput.isReadyForMoreMediaData {
                    if let sampleBuffer = readerOutput.copyNextSampleBuffer() { // App hangs here!!!
                        writerInput.append(sampleBuffer)
                    } else {
                        writerInput.markAsFinished()
                        assetWriter.finishWriting {
                            print("Export completed successfully")
                        }
                        break
                    }
                }
            }
        } catch {
            print("Failed with error: \(error)")
        }
    }
}
```
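A defensive tweak to the request loop (my suggestion; it makes failures diagnosable, though it will not by itself unblock a copyNextSampleBuffer call that never returns): check the reader's status when it stops delivering buffers, and cancel the writer if the reader failed.

```swift
// Sketch of a more defensive media-data loop, using the same reader/writer
// objects as in the code above (assumption: they are captured by this closure).
writerInput.requestMediaDataWhenReady(on: conversionQueue) {
    while writerInput.isReadyForMoreMediaData {
        guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
            if assetReader.status == .failed {
                print("Reader failed: \(String(describing: assetReader.error))")
                writerInput.markAsFinished()
                assetWriter.cancelWriting()
            } else {
                writerInput.markAsFinished()
                assetWriter.finishWriting { print("Export completed") }
            }
            return
        }
        if !writerInput.append(sampleBuffer) {
            print("Writer failed: \(String(describing: assetWriter.error))")
            return
        }
    }
}
```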
1 reply · 0 boosts · 351 views · Jul ’24
Getting the amount of audio loaded in an AVPlayer
I have an AVPlayer loading an MP3 from a URL. While it is playing, how can I tell how much of the MP3 has actually been downloaded so far? I have tried using item.loadedTimeRanges, but it does not seem to be accurate. I get a few notifications, but they usually stop around 80 seconds in and don't even keep up with the current position of the player. Any ideas? Also, is there any way to get the total duration of the audio? All the methods I've tried return NaN.
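A sketch of both reads (assuming `item` is the AVPlayerItem currently playing the MP3): loadedTimeRanges only reflects what the player has chosen to buffer so far, and duration reads as NaN until the value has actually been loaded, which may be why the methods tried so far return NaN.

```swift
import AVFoundation

// Log how far the item is buffered and its total duration.
func inspectBuffering(of item: AVPlayerItem) async throws {
    if let range = item.loadedTimeRanges.first?.timeRangeValue {
        let bufferedEnd = CMTimeGetSeconds(CMTimeAdd(range.start, range.duration))
        print("buffered up to \(bufferedEnd) s")
    }
    let duration = try await item.asset.load(.duration) // waits until the value is available
    print("total duration: \(CMTimeGetSeconds(duration)) s")
}
```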
0 replies · 0 boosts · 416 views · Jun ’24
Trouble Getting RealityKit audio to play
I can't figure out how to get audio from my RealityKitContentBundle to play on Vision Pro... I have a scene in Reality Composer Pro called "WinterVivarium" which contains a 3D model of a tree, a particle emitter, a ChannelAudio entity, and an audio file (m4a) with 30 minutes of nature sounds. The 3D model and particle emitter load up just fine on my device, but I'm getting an error when I try to load the audio... Swift file below. When I run the app and this file gets called it throws the following error:

"Error loading winter vivarium model and/or audio: The operation couldn’t be completed. (RealityKit.__REAsset.LoadError error 2.)"

ChatGPT tells me error code 2 likely means "file not found" but I'm not sure on that one... Please help!

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct WinterVivarium: View {
    @State private var angle: Angle = .degrees(0)

    var body: some View {
        RealityView { content in
            let audioFilePath = "/Root/back-yard-feb-7am.m4a"
            let audioEntity = Entity()
            do {
                let entity = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
                content.add(entity)
                let resource = try await AudioFileResource.load(named: audioFilePath,
                                                                from: "WinterVivarium.usda",
                                                                in: RealityKitContent.RealityKitContentBundle)
                let audioController = audioEntity.playAudio(resource)
            } catch {
                print("Error loading winter vivarium model and/or audio: \(error.localizedDescription)")
            }
        }
    }
}

#Preview {
    WinterVivarium()
}
```
5 replies · 0 boosts · 1.5k views · Feb ’24
Bluetooth audio becomes choppy on iOS with entitlement error but works just fine on MacCatalyst
I am converting some old Objective-C code deployed on iOS 12 to Swift in a WKWebView app. I'm also developing the app for Mac via Mac Catalyst. The issue I'm experiencing relates to a programmable learning robot that is programmed via block coding; the app facilitates the reads and writes back and forth. The audio works over an A2DP connection the user sets up manually in their settings, while the actual movement of the robot is controlled via a BLE connection. Currently the code works as intended on Mac Catalyst, while on iPhone the audio being sent back to the robot is very choppy and sometimes doesn't play at all.

I apologize for the length of this, but there is a bit to unpack here. First, I know there have been a few threads posted about this issue: this one, which seems similar but went unsolved (https://forums.developer.apple.com/forums/thread/740354), as well as this one where Apple says it is "log noise" (https://forums.developer.apple.com/forums/thread/742739). However, I find it hard to believe that this is just log noise in this case. Mac Catalyst uses a legacy header file for WebKit, and I'm wondering if that could be part of the issue here. I have enabled everything relating to Bluetooth in my Info.plist as the developer documents say, and in my App Sandbox settings for Mac Catalyst I have the Bluetooth permission set as well. Here are snippets of my read and write functions:

```swift
func readFunction(session: String) {
    // Wait if we are still waiting to hear from the robot
    if self.serialRxBuf == "" {
        self.emptyReadCount += 1
    }
    if (!self.serialRxWaiting) {
        return
    }
    // Make sure we are waiting for the correct session
    if (Int(session) != self.serialRxSession) {
        return
    }
    self.serialRxWaiting = false
    self.serialRxSession += 1
    let buf = self.serialRxBuf
    self.serialRxBuf = ""
    print("sending Read: \(buf)")
    self.MainWebView.evaluateJavaScript("""
        if (serialPort.onRead) {
            serialPort.onRead("\(buf)");
        }
        serialPort.onRead = null;
        """, completionHandler: nil)
}

// ----- Write function for javascript bluetooth interface -----
func writeFunction(buf: String) -> Bool {
    emptyReadCount = 0
    if ((self.blePeripheral == nil) || (self.bleCharacteristic == nil) || self.blePeripheral?.state != .connected) {
        print("write result: bad state, peripheral, or connection ")
        // in case we receive an error that will freeze the react side, safely navigate and clear bluetooth information.
        if MainWebView.canGoBack {
            MainWebView.reload()
            showDisconnectedAlert()
            self.centralManager = nil // we will just start over next time
            self.blePeripheral = nil
            self.bleCharacteristic = nil
            self.connectACD2Failed()
            return false
        }
        return false
    }

    var data = Data()
    var byteStr = ""
    for i in stride(from: 0, to: buf.count, by: 2) {
        let startIndex = buf.index(buf.startIndex, offsetBy: i)
        let endIndex = buf.index(startIndex, offsetBy: 2)
        byteStr = String(buf[startIndex..<endIndex])
        let byte = UInt8(byteStr, radix: 16)!
        data.append(byte)
    }

    guard let connectedCharacteristic = self.bleCharacteristic else {
        print("write result: Failure to assign bleCharacteristic")
        return false
    }
    print("sending bleWrite: \(String(describing: data))")
    self.blePeripheral.writeValue(data, for: connectedCharacteristic, type: .withoutResponse)
    print("write result: True")
    return true
}
```

Here is what the log looks like when running on Mac Catalyst, which works just fine:

```
sending bleWrite: 20 bytes
write result: True
sending Read:
sending Read: 55AA55AA0B0040469EE6000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read:
sending Read:
sending Read: 55AA55AA0B0040469EE6000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read: 55AA55AA0B0040469EE6000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read: 55AA55AA0B0040EDCB09000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read:
sending Read: 55AA55AA0B00407A7B96000000000000000000ED
sending bleWrite: 20 bytes
write result: True
Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)}>
0x12c0380e0 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'WebKit Media Playback' for process with PID=36540, error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)}
```

and here is the log from when we are running the code on iPhone (trying to save space here). I apologize for the length of this post; however, submitting a test project to Apple Developer Support just isn't possible with the device that's in use. Any help at all is appreciated. I've looked at every permission, entitlement, and background-processing setting, and tried every solution I could find, to no avail.
0 replies · 0 boosts · 573 views · Jun ’24
ERROR 1852797029
I have a new iPhone 15 Pro and new USB-C earphones; both are 3 days old. Since the first day I have been getting error 1852797029. I can listen to music in Apple Music for a while, but if I stop it and some time passes without resuming playback, I get this error when I resume and have to close the app and disconnect and reconnect the earphones. It's very annoying, and I'm frustrated that this has been happening from day one with both devices completely new. Does anyone have a solution other than disconnecting and reconnecting the earphones?
1 reply · 0 boosts · 858 views · Jun ’24
Disabling the Ability to use recorder while using my app
I have an application based on a video streaming service. A critical requirement is that these videos must be secured end to end and cannot be pirated. The current problem is that anyone can use a voice recorder to capture my videos' audio, which defeats the purpose of the app. My question is: is there any way to disable voice recording apps (or the microphone) and detect if someone tries to record while streaming videos from my app? Thanks in advance.
0 replies · 1 boost · 371 views · Jun ’24
Music is not played through BLE-connected headphones on iPhone
I have created a demo iOS app to create a BLE connection with nearby headphones. I am able to connect to the headphones successfully through my demo app, and I can also see in the iPhone Bluetooth settings that the headphones are connected. But when I play music from Spotify/YouTube, the music is not played through the headphones; it still uses the iPhone speakers. First I scan for surrounding Bluetooth devices through CBCentralManager and then connect to one of the found devices:

```swift
cBCenteralManager.scanForPeripherals(withServices: nil, options: nil)
```

For connecting:

```swift
cBCenteralManager.connect(peripheral, options: nil)
```

Do I need to make any code changes when connecting via BLE? I expect that when I connect to the headphones via my demo app, and the same connection is visible in the iPhone Bluetooth settings, playing music on Spotify/YouTube should come out of the headphones and not the iPhone speakers.
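For context (my note, not from the post): a CoreBluetooth (BLE) connection does not carry audio. Media playback is routed over the Bluetooth Classic A2DP profile, which the system manages itself once the headphones are paired, so an app-level BLE connection will not move Spotify/YouTube audio off the speaker. You can at least confirm where the system is routing playback:

```swift
import AVFoundation

// Print the current audio output route; headphones connected for media playback
// show up as a .bluetoothA2DP port.
let session = AVAudioSession.sharedInstance()
for output in session.currentRoute.outputs {
    print(output.portType.rawValue, "-", output.portName)
}
```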
2 replies · 0 boosts · 439 views · Jun ’24
Thread safety of AudioObject APIs
Are the AudioObject APIs (such as AudioObjectGetPropertyData, AudioObjectSetPropertyData, etc.) thread-safe? Meaning, for the same AudioObjectID, is it safe to do things like:

- Get a property in one thread while setting the same property in another thread
- Set the same property in two different threads
- Add and remove property listeners in different threads

Put differently, is there any internal synchronization or mutex for this kind of usage, or is the burden on the caller? I was unable to find any documentation either way, which makes me think that the APIs are not thread-safe.
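Absent a documented guarantee, one conservative pattern is to funnel every property call for a given object through a single serial queue so the question becomes moot. A minimal sketch (the queue and helper are my own; it reads the default output device purely as an example):

```swift
import CoreAudio
import Foundation

// Serialize all AudioObject property access on one queue, since the docs don't
// state whether the HAL synchronizes these calls internally.
let audioObjectQueue = DispatchQueue(label: "audio-object-properties")

func defaultOutputDevice() -> AudioDeviceID? {
    audioObjectQueue.sync {
        var address = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var deviceID = AudioDeviceID(kAudioObjectUnknown)
        var size = UInt32(MemoryLayout<AudioDeviceID>.size)
        let status = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                                &address, 0, nil, &size, &deviceID)
        return status == noErr ? deviceID : nil
    }
}
```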
0 replies · 2 boosts · 475 views · Jun ’24
When switching ImmersiveSpace, background music played by AVAudioPlayer can no longer be heard
Steps:
1. Launch the application.
2. ImmersiveSpace1 is opened and an animation of a 3D object plays.
3. When the animation finishes, ImmersiveSpace1 is dismissed and ImmersiveSpace2 is opened.

Expected:
The background music starts playing when ImmersiveSpace1 is opened and continues playing after ImmersiveSpace2 is opened.

Actual:
The background music plays when ImmersiveSpace1 is opened and stops when ImmersiveSpace2 is opened.

Environment:
The problem occurs on an actual device (visionOS 2); it does not occur in the simulator.
Xcode: Version 15.2 (15C500b)

Log (output on the device when ImmersiveSpace2 is opened; nothing is output in the simulator):
AVAudioSession_iOS.mm:2223 Server returned an error from destroySession:. Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 39 named com.apple.audio.AudioSession was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 39 named com.apple.audio.AudioSession was invalidated from this process.}
0 replies · 2 boosts · 363 views · Jun ’24
AVPlayer with multiple audio tracks plays audio differently at start
Hi, I'm trying to play multiple video/audio files with AVPlayer using AVMutableComposition. Each video/audio file can play simultaneously, so I put each one in its own track. I use only local files.

```swift
let second = CMTime(seconds: 1, preferredTimescale: 1000)
let duration = CMTimeRange(start: .zero, duration: second)

var currentTime = CMTime.zero

for _ in 0...4 {
    let mutableTrack = composition.addMutableTrack(
        withMediaType: .audio,
        preferredTrackID: kCMPersistentTrackID_Invalid
    )
    try mutableTrack?.insertTimeRange(
        duration,
        of: audioAssetTrack,
        at: currentTime
    )
    currentTime = currentTime + second
}
```

When I add many audio tracks (maybe more than 5), the first part sounds a little different from the original when playback starts; it seems like the beginning of the audio is skipped. But when I add only two tracks, AVPlayer plays the same as the original file.

```swift
avPlayer.play()
```

How can I fix it? Why do audio tracks that have nothing to play at the start still affect playback? Please let me know.
1 reply · 2 boosts · 1.1k views · Dec ’23
Audio transition using MPMusicPlayerApplicationController
Hi. I saw that in the iOS 18 beta there is a transition property on MusicKit's ApplicationMusicPlayer. However, in my app I am using MPMusicPlayerApplicationController because I want to play Apple Music songs, local songs, and podcasts, and I didn't find an analogous property on MPMusicPlayerApplicationController to specify transitions between songs. Am I missing something? Thanks, Dirk
0 replies · 0 boosts · 450 views · Jun ’24