AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under AVAudioSession tag

83 Posts

Why is AVAudioRecorder creating corrupt files?
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:

func startRecording() {
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    } catch {
        recording = false
        recordingFinished(success: false)
    }
}

The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8 kHz sample rate:

AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50
from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 1 ch, 8000 Hz, Int16

A subsequent attempt to load the file into AVAudioPlayer results in:

MP4_BoxParser.cpp:1089 DataSource read failed
MP4AudioFile.cpp:4365 MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105 OpenFromDataSource failed
AudioFileObject.cpp:80 Open failed

But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method. I've also tried constructing the recorder with

let audioFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil {
    print("Audio format failed.")
} else {
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...

with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:

AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50
from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 1 ch, 44100 Hz, Int32
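One detail the snippet doesn't show is any AVAudioSession setup, and the "from: 0 ch" in the log is consistent with a session that isn't configured for input (the default .soloAmbient category doesn't support recording). A minimal sketch, assuming no session configuration happens elsewhere and that NSMicrophoneUsageDescription is already in Info.plist with record permission granted:

import AVFoundation

// Configure a record-capable session before creating/starting AVAudioRecorder.
func configureSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
}

// Hypothetical usage at the top of startRecording(), before AVAudioRecorder(url:settings:):
// try configureSessionForRecording()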
1 reply · 0 boosts · 425 views · Mar ’24
How do you control the volume of ApplicationMusicPlayer?
I need to duck the audio coming from ApplicationMusicPlayer while playing a local file using AVAudioPlayer. I've tried using the duckOthers option as follows, but it doesn't work:

let appAudioSession = AVAudioSession.sharedInstance()
do {
    try appAudioSession.setCategory(.playAndRecord, mode: .default, options: .duckOthers)

Maybe this is because there's one session for the entire app, and ApplicationMusicPlayer is using it? This is a fairly critical problem for my application, since Music content is always much louder than locally recorded content. Any insight appreciated.
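Worth noting: .duckOthers lowers the volume of audio belonging to other apps, and only takes effect when your session (re)activates; the duck is released on deactivation. Since ApplicationMusicPlayer plays on behalf of your own app, it may not be treated as "other" audio at all, which would explain the option having no effect here. A sketch of the cross-app ducking pattern, assuming playback only (no recording):

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, options: [.duckOthers])
    try session.setActive(true)   // other apps' audio ducks at activation
    // ... play the local file with AVAudioPlayer ...
    try session.setActive(false, options: .notifyOthersOnDeactivation) // duck released
} catch {
    print("Session error: \(error)")
}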
0 replies · 0 boosts · 289 views · Mar ’24
Using mixToTelephonyUplink to allow speech synthesizer to be audible during a phone call
I'd like to allow the speech synthesizer to play on the device speaker while simultaneously mixing with a phone call. I've worked with a number of different configurations, but I'm unable to find one that achieves what I'm after, or that allows mixing with a phone call at all. There is a flag, mixToTelephonyUplink, that seems to suggest at least some mixing with a phone call is possible using the speech synthesizer, but I'm currently unable to find almost any documentation about this flag beyond the basic API docs. I've had some luck at least getting the synthesizer to always play to the speaker with the following audio session configuration, but the sound is never mixed with a phone call. Instead, it is ducked and muted while the phone call takes place. I've tried quite a few configuration combinations for the category and overrides, but nothing seems to work quite as I'd expect it to.

synthesizer.mixToTelephonyUplink = true
try? audioSession.setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .defaultToSpeaker])
try? audioSession.setActive(true, options: [])
try? audioSession.overrideOutputAudioPort(.speaker)

Is there some kind of documentation for this that's off the beaten path that I'm somehow missing? I'm going to continue with guess-and-check, but I'm starting to think this flag, and the functionality it implies, was never fully implemented.
1 reply · 0 boosts · 1k views · Mar ’24
AVAudioSession error code: AVAudioSessionErrorCodeCannotInterruptOthers
Background
When I receive the interruption-began notification (interruption type AVAudioSessionInterruptionTypeBegan), I pause the music. When I receive the interruption-ended notification (interruption type AVAudioSessionInterruptionTypeEnded), I resume playing. However, sometimes I get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684).

Some solutions
I searched Stack Overflow; there are some similar questions, but the solutions offered there are not very satisfying, because I don't want my app to mix with others, and, once again, it all works most of the time. My app already uses remote control events, so that doesn't solve anything.

Questions
1. Has anyone else encountered this problem?
2. Can this problem be solved, and how?
3. In addition, I noticed there's a property named otherAudioPlaying on AVAudioSession that tells us another app is playing; the question is whether we can know which app is playing.
0 replies · 0 boosts · 281 views · Mar ’24
AVAudioEngine: Is there a way to play audio at full volume while having an active input tap?
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

filePlayerNode   ---> engine.mainMixerNode
bufferPlayerNode ---> engine.mainMixerNode
engine.mainMixerNode ---> engine.outputNode
// bufferPlayer.scheduleBuffer() is called on its own queue

The input works fine, since the buffers can be collected into a file and play back correctly, and the recognizer also works fine; but when I try to play the live audio by sending the buffer to the bufferPlayer on this or another device, the buffer audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse. I've tried experimenting with the AVAudioSession category options, having separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.

The ability to both play and record simultaneously is a basic feature of phones; when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring). Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
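A hedged guess at the low volume: with .playAndRecord and no options, output routes to the receiver (earpiece), which is far quieter than the speaker. Routing to the speaker and letting the system's voice processing handle echo from the simultaneous tap often fixes exactly this symptom. A sketch, assuming the session object from the post:

// .voiceChat enables voice-processing I/O (echo cancellation); .defaultToSpeaker
// keeps output on the built-in speaker instead of the receiver.
try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker, .allowBluetooth])
try session.setActive(true, options: .notifyOthersOnDeactivation)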
1 reply · 0 boosts · 516 views · Mar ’24
iOS CallKit system screen: no audio if I wait for connection before calling CXAnswerCallAction.fulfill()
I'm trying to integrate CallKit into a Flutter app that uses WebRTC for calls, and I have an issue with taking calls on the locked screen. CXAnswerCallAction requires the action.fulfill() method to be called after the connection is established. Here is a piece of code without waiting for establishment of the connection:

guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
    action.fail()
    return
}
call.data.isAccepted = true
self.answerCall = call
self.callManager?.updateCall(call)
sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
    self.configureAudioSession()
}
action.fulfill()
}

This causes the connection-time counter to be immediately visible on the screen, but the user still has to wait for connection establishment and can't hear anything. Here is the code that waits for the establishment of the connection before calling action.fulfill():

if self.awaitedConnection.uuid != uuid {
    action.fail()
} else if self.awaitedConnection.isConnected {
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
        self.configureAudioSession()
    }
    action.fulfill()
} else {
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1000)) {
        self.waitForConnection(uuid: uuid, action: action)
    }
}
}

public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
        action.fail()
        return
    }
    call.data.isAccepted = true
    self.answerCall = call
    self.callManager?.updateCall(call)
    self.awaitedConnection.uuid = action.callUUID
    self.awaitedConnection.isConnected = false
    sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
    waitForConnection(uuid: action.callUUID, action: action)
}

Unfortunately, though it works great on iOS 15.7, on 17.3 it causes a lack of audio: no sound and no recording. I also can't enable it later when the call is ongoing. For reference:

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.allowBluetooth)
    try session.setMode(self.getAudioSessionMode(data?.audioSessionMode ?? "voiceChat"))
    try session.setActive(data?.audioSessionActive ?? true)
    try session.setPreferredSampleRate(data?.audioSessionPreferredSampleRate ?? 44100.0)
    try session.setPreferredIOBufferDuration(data?.audioSessionPreferredIOBufferDuration ?? 0.005)
} catch {
    print(error)
}
}

I can see in the docs for action.fulfill() that "You should only call this method from the implementation of a CXProviderDelegate method". Is this the reason for the issue? But how can I do that if I need to wait for the connection asynchronously, and the provider method is synchronous?
1 reply · 0 boosts · 401 views · Mar ’24
Interference occurs between MTAudioProcessingTaps when using MTAudioProcessingTap on iOS 17.1 and later
There is a CustomPlayer class, and inside it MTAudioProcessingTap is used to modify the audio buffer. Say there are instances A and B of the CustomPlayer class. With both A and B running, when A finishes its operation and its instance is deallocated, B's MTAudioProcessingTap process callback stops and B's finalize callback fires, even though B still has work left to do. The same code in the same project does not behave this way on iOS 17.0 or lower: there, when A is terminated, B completes its task without any impact. What changes in iOS 17.1 are producing these results? I'd appreciate an answer on how to avoid these issues.

let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, &timeRange, numberFramesOut)
            if noErr == status {
                ....
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    guard noErr == status else { return }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
1 reply · 0 boosts · 445 views · Feb ’24
Using AVAudioEngine to monitor audio in visionOS
I am writing code to monitor the incoming audio levels in visionOS. It works properly in the simulator, but gets an error on the device. Curious if anyone has any tips. I took out some of the code so it's a bit shorter; it fails in setupAudioEngine when I try to start the engine, with this error:

Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)

Thanks in advance! Here is my code:

class AudioInputMonitor: ObservableObject {
    private var audioEngine: AVAudioEngine?
    @Published var inputLevel: Float = 0

    init() {
        requestMicrophonePermission()
    }

    private func requestMicrophonePermission() {
        AVAudioApplication.requestRecordPermission { granted in
            DispatchQueue.main.async {
                if granted {
                    self.setupAudioSessionAndEngine()
                } else {
                    print("Microphone permission not granted")
                    // Handle the case where permission is not granted
                }
            }
        }
    }

    private func setupAudioSessionAndEngine() {
        do {
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [])
            try audioSession.setActive(true)
            self.setupAudioEngine()
        } catch {
            print("Failed to set up the audio session: \(error)")
        }
    }

    private func setupAudioEngine() {
        audioEngine = AVAudioEngine()
        guard let inputNode = audioEngine?.inputNode else {
            print("Failed to get the audio input node")
            return
        }
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer, _) in
            self?.analyzeAudio(buffer: buffer)
        }
        do {
            try audioEngine?.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    private func analyzeAudio(buffer: AVAudioPCMBuffer) {
        // removed to be brief
    }

    func stopMonitoring() {
        // removed to be brief
    }
}
1 reply · 0 boosts · 515 views · Feb ’24
Playing audio in a Custom QL Preview
Hi all, I have created a Quick Look preview for my custom data type in my app. I use SwiftUI wrapped in UIKit for the preview. My issue is that when I try to play audio using AVAudioPlayer, I receive an OSStatus -50 error. Does anyone know if there are separate permissions I need to request before being able to do this? Here are the errors I get while trying to set my audio session as active and play on the AVAudioPlayer. Thanks for your help and advice!

The operation couldn’t be completed. (OSStatus error -50.)
nwi_state: registration failed (9)
connection <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
auto-cancelling <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
throwing swix::exception: !(is_valid())
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
rebuilding null connection
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
connection <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
throwing swix::exception: !(is_valid())
auto-cancelling <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
0 replies · 0 boosts · 393 views · Feb ’24
Background audio issues with AVPictureInPictureController
I know that if you want background audio from AVPlayer, you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly. I have all that squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController. If you want PiP to behave as expected when you put your app into the background (by switching to another app or going to the home screen), you can't perform the detachment operation, otherwise the PiP display fails.

On an iPad, if PiP is active and you lock your device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses. If PiP is inactive and you lock the device, the audio pauses and you have to manually tap play on the lock screen controls; this part behaves the same on iPad and iPhone.

My questions are:
Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
1 reply · 0 boosts · 1.1k views · Feb ’24
AirPods Pro 2 A2DP option issue
Hello, I am having an issue setting my AVAudioSession output to a Bluetooth A2DP device. I want to use the built-in mic for input and an A2DP device (AirPods Pro 2) for the output route. Whenever I set .allowBluetoothA2DP in my AVAudioSession options, the output changes to the speaker. The mode is .default and the category is .playAndRecord. If I do the same procedure with AirPods Pro 1, the output is set to the AirPods Pro 1. I only hit this trouble when I use AirPods Pro 2 with an iPhone on iOS 17; there seems to be no issue on iOS versions below 17. Has anyone run into this kind of issue? Thank you in advance.
0 replies · 0 boosts · 292 views · Feb ’24
Resetting categoryOptions when setting the mode in Swift AVAudioSession
When setting the mode during the configuration of an audio session in Swift, the previously configured categoryOptions get reset. For example, if you perform setMode as shown below, you will observe that all previously set categoryOptions are cleared.

Example:

try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.allowBluetooth, .defaultToSpeaker])
try AVAudioSession.sharedInstance().setMode(.voiceChat)

If you need to change the mode while maintaining the categoryOptions, you have to perform setCategory once again. Although the exact reason for this behavior has not been identified, the practical impact on the application's functionality is not yet clear. Why do you think this handling is in place?
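A sketch of the workaround the post describes: change the mode through the combined setter, so category, mode, and options are restated atomically instead of relying on setMode(_:) to preserve the options:

do {
    // Restates the options while switching the mode in a single call.
    try AVAudioSession.sharedInstance().setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.allowBluetooth, .defaultToSpeaker]
    )
} catch {
    print("Failed to change mode: \(error)")
}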
0 replies · 0 boosts · 388 views · Feb ’24
Handle AVAudioEngineConfigurationChange when recording audio with AVAudioEngine
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and got an issue that just crashes the app:

EXC_BREAKPOINT
Exception 6, Code 1, Subcode 4304279688
+0x009888 AudioRecordModule.setupAudioEngine
+0x009788 AudioRecordModule.setupAudioEngine
+0x00c5bc AudioRecordModule.handleConfigurationChange

Below is the relevant code in the Recorder class.

public class AudioRecordModule: Module {
    private var audioEngine: AVAudioEngine?

    private func startRecording(options recordingOptions: RecordingOptions) {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
        try AVAudioSession.sharedInstance().setActive(true)
        outputFormat = AVAudioFormat(
            commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
            sampleRate: Double(recordingOptions.sampleRate),
            channels: AVAudioChannelCount(recordingOptions.channels),
            interleaved: true
        )!
        let fileUri = URL(string: recordingOptions.fileUri)!
        let formatSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: recordingOptions.sampleRate,
            AVNumberOfChannelsKey: recordingOptions.channels,
            AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
        ]
        self.recordedFile = try AVAudioFile(
            forWriting: fileUri,
            settings: formatSettings,
            commonFormat: outputFormat.commonFormat,
            interleaved: outputFormat.isInterleaved
        )
        if !hadSetupNotification {
            setupNotifications()
        }
    }

    func handleConfigurationChange() {
        DispatchQueue.main.async {
            self.releaseAudioEngine()
            self.setupAudioEngine()
            if self.state == "recording" {
                // we could attempt to keep recording
                do {
                    try self.audioEngine?.start()
                } catch {
                    self.internalPauseRecording()
                    self.sendInterruptEvent()
                }
            }
        }
    }

    func setupNotifications() {
        nc.addObserver(
            forName: Notification.Name.AVAudioEngineConfigurationChange,
            object: nil,
            queue: nil
        ) { [weak self] _ in
            guard let weakself = self else { return }
            if weakself.state != "inactive" {
                weakself.handleConfigurationChange()
            }
        }
    }

    private func setupAudioEngine() {
        self.audioEngine = nil
        let audioEngine = AVAudioEngine()
        self.audioEngine = audioEngine
        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            do {
                let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer
                }
                let frameCapacity = AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate)
                let outputBuffer = AVAudioPCMBuffer(
                    pcmFormat: self.outputFormat,
                    frameCapacity: frameCapacity
                )!
                var error: NSError?
                converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                if let error = error {
                    throw error
                } else {
                    try self.recordedFile?.write(from: outputBuffer)
                }
            } catch {
                print(error)
            }
        }
    }

    private func releaseAudioEngine() {
        if let audioEngine = self.audioEngine {
            audioEngine.inputNode.removeTap(onBus: 0)
            audioEngine.stop()
        }
        audioEngine = nil
    }
}

Besides that, the record module works normally; it is just the configuration change that it does not handle well. I understand that when the configuration changes, I need to reinitialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and such). If I don't do that, the app also crashes, perhaps due to the mismatch. AVAudioRecorder is not an option for me. Thank you for your help.
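One hedged observation: setupAudioEngine force-unwraps AVAudioConverter(from:to:), and immediately after a configuration change the input node can transiently report a 0 Hz / 0-channel format, which makes that initializer return nil and trips exactly this kind of breakpoint. A defensive sketch of the start of setupAudioEngine() (the guard conditions are assumptions about that transient state):

let inputFormat = inputNode.inputFormat(forBus: 0)
guard inputFormat.sampleRate > 0, inputFormat.channelCount > 0,
      let converter = AVAudioConverter(from: inputFormat, to: outputFormat) else {
    // Format not settled yet; bail out and retry on the next
    // AVAudioEngineConfigurationChange notification instead of crashing.
    return
}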
0 replies · 0 boosts · 489 views · Jan ’24
I'd like to exclude the app audio from the screen recording
:( We are currently in the process of developing a video calling app using WebRTC. We initiate one-to-one video calls with the AVAudioSession configured as follows:

do {
    if audioSession.category != .playAndRecord {
        try audioSession.setCategory(
            AVAudioSession.Category.playAndRecord,
            options: [
                .defaultToSpeaker
            ]
        )
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    }
    if audioSession.mode != .videoChat {
        try audioSession.setMode(.videoChat)
    }
} catch {
    logger.error(msg: "AVAudioSession: \(error.localizedDescription)")
}

After initiating a video call, we recorded this app's video call using the iOS default screen recording feature. As a result, the recorded video includes system audio. However, iOS/iPadOS apps with similar features (Zoom, Skype, Slack) do not include audio in their recordings. Why does this difference occur? Is this behavior a security feature of iOS, and are there specific conditions required? Is there a need for some sort of configuration in AVAudioSession?

Additionally :( I also reached out to Apple Developer Technical Support, and they responded, "We were able to reproduce it, but since we don't understand the issue, we will investigate it." What's that about...
1 reply · 0 boosts · 388 views · Jan ’24
Audio Drops Out When Setting Category to .playAndRecord
I am creating an app where you can record a video and listen to music in the background. At the top of my viewDidLoad I set the AVAudioSession category to .playAndRecord:

let audioSession = AVAudioSession.sharedInstance()
AVCaptureSession().automaticallyConfiguresApplicationAudioSession = false
do {
    try audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}

However, when I do this the audio cuts out for a second or less at app open and app close. I would like the audio to continue playing and not cut out. Is there anything I can do to ensure the audio continues to play?
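A hedged observation: as written, automaticallyConfiguresApplicationAudioSession is being set on a brand-new, immediately discarded AVCaptureSession(), so the capture session the app actually runs presumably still reconfigures (and briefly interrupts) the shared audio session when it starts and stops. A sketch of setting it on the session you keep (captureSession is a stand-in for that session):

let captureSession = AVCaptureSession()
// Must be set on the capture session that actually runs, before startRunning().
captureSession.automaticallyConfiguresApplicationAudioSession = false
// ... add inputs/outputs, then:
captureSession.startRunning()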
0 replies · 0 boosts · 341 views · Jan ’24
Unable to upload recording of more than 15 mins to AWS server
Hi there, I am trying to record a meeting and upload it to an AWS server. The recording is in .m4a format and the upload request is a URLSession request. The following code works perfectly for recordings under 15 minutes, but for longer recordings it gets stuck. Could you please help me out with this?

func startRecording() {
    let audioURL = getAudioURL()
    let audioSettings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioURL, settings: audioSettings)
        audioRecorder.delegate = self
        audioRecorder.record()
    } catch {
        finishRecording(success: false)
    }
}

func uploadRecordedAudio() {
    let _ = videoURL.startAccessingSecurityScopedResource()
    let input = UploadVideoInput(signedUrl: signedUrlResponse, videoUrl: videoURL, fileExtension: "m4a")
    self.fileExtension = "m4a"
    uploadService.uploadFile(videoUrl: videoURL, input: input)
    videoURL.stopAccessingSecurityScopedResource()
}

func uploadFileWithMultipart(endPoint: UploadEndpoint) {
    var urlRequest: URLRequest
    urlRequest = endPoint.urlRequest
    uploadTask = URLSession.shared.uploadTask(withStreamedRequest: urlRequest)
    uploadTask?.delegate = self
    uploadTask?.resume()
}
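One thing to check: uploadTask(withStreamedRequest:) requires you to supply the body via the urlSession(_:task:needNewBodyStream:) delegate callback, and it cannot run in a background session. For a large file that already exists on disk, uploading directly from the file lets URLSession stream it and survive long transfers or backgrounding. A hedged sketch (the session identifier and fileURL are stand-ins for your own values):

let config = URLSessionConfiguration.background(withIdentifier: "com.example.upload")
let session = URLSession(configuration: config, delegate: self, delegateQueue: nil)
// URLSession streams the file from disk; no body stream callback needed.
let task = session.uploadTask(with: endPoint.urlRequest, fromFile: fileURL)
task.resume()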
3 replies · 0 boosts · 446 views · Jan ’24
AVAudioEngine: audio input does not work on iOS 17 simulator
Hello, I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work in the simulator. inputNode has a 0-channel, 0 kHz input format, and connecting the input node to any node or installing a tap on it fails systematically.

What we tested:
Everything works fine on iOS simulators <= 16.4, even with Xcode 15.
Nothing works on the iOS 17.0 simulator with Xcode 15.
Everything works fine on an iOS 17.0 device with Xcode 15.

More details on this here: https://github.com/Fesongs/InputNodeFormat

Any idea on this? Something I'm missing? Thanks for your help 🙏 Tom

PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer, so I'm also trying here 😉
5 replies · 5 boosts · 1.8k views · Jan ’24
Play Music While Camera Is Open
I am creating a camera app where I would like music from another app (Apple Music, Spotify, etc.) to continue playing once the app is opened. Currently I am using .mixWithOthers to do this, in my viewDidLoad:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSession.Category.playback, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}

However, I am running into an issue where the music only plays if you resume music playback after you start recording a video. Otherwise, when you open the app, the music stops as soon as you see the preview. The interesting thing is that if you start playing music while recording, then once you stop, the music continues to play in the preview view. If you close the app (not force close) and reopen, music playback continues as expected; once you force close the app, it returns to the original behavior. I've tried to do research on this and have not been able to find anything. Any help is appreciated. Let me know if more details are needed.
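Possibly relevant (an assumption based on the symptoms, since the post doesn't show the camera code): by default an AVCaptureSession reconfigures and activates the app's audio session when it starts running, which interrupts other apps' playback regardless of .mixWithOthers. A sketch of opting out, where captureSession stands in for the AVCaptureSession driving the preview:

// Stop the capture session from touching the shared AVAudioSession, so the
// category set in viewDidLoad stays in effect while the preview runs.
captureSession.automaticallyConfiguresApplicationAudioSession = false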
1 reply · 0 boosts · 529 views · Jan ’24