Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation


IO buffer sizes for an audio driver based on IOUserAudioDevice
Dear Sirs, I've written an audio driver based on IOUserAudioDevice. In my IOOperationHandler I can receive and send the audio samples as expected. Is there any way to configure the number of samples transferred in each call? Currently it seems to be around 512 samples per call, which corresponds to roughly 10.7 milliseconds at a 48 kHz sample rate. I'd like to achieve something like 48 or 96 samples per call. I did some experiments and tried calls to SetOutputLatency() etc., but so far I haven't found the right way to change the in_io_buffer_frame_size seen in the callback. I'd like to do this because smaller buffer sizes would allow lower latencies for the subsequent audio processing. Thanks and best regards, Johannes
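For context, a sketch of the other side of this negotiation: in Core Audio the IO buffer size is normally requested per client application rather than fixed by the driver, so a host app can ask the HAL for a smaller buffer on the device. The helper name below and the assumption that a client (not the driver) makes this request are illustrative, not something confirmed in this thread.

```swift
import CoreAudio

// Minimal sketch, assuming `deviceID` is the AudioObjectID of the driver's device as
// seen from a client app: ask the HAL for a smaller IO buffer (e.g. 96 frames).
// The function name is illustrative, not an Apple API.
func requestIOBufferFrameSize(_ frames: UInt32, on deviceID: AudioObjectID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyBufferFrameSize,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var frames = frames
    return AudioObjectSetPropertyData(deviceID, &address, 0, nil,
                                      UInt32(MemoryLayout<UInt32>.size), &frames)
}
```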
0 replies · 0 boosts · 48 views · 19h
PHASE on visionOS 2.0 beta in Unity
Hi, I'm looking to implement PHASEStreamNode in Unity, but the PHASE library currently provided for Unity doesn't contain this new type of node yet. https://developer.apple.com/documentation/phase/phasestreamnode When will you be looking at releasing a beta of the Unity plug-ins as well? This is very important for spatial audio in Unity to be consistent with Apple's standards. Best, Antonio
0 replies · 0 boosts · 53 views · 1d
Audio recording issue in my app after updating to iOS 17.2.1, and it still exists now
In my iOS app I have functionality to record audio and video using the AVFoundation framework. While audio recording works smoothly on some devices, such as iPads and certain others, I'm encountering issues with newer models like the iPhone 14, 14 Pro, and 15 series. Specifically, when attempting to initiate audio recording by tapping the microphone icon, the interface becomes unresponsive and remains static. This issue surfaced following the update to iOS 17.2.1. It seems to affect only a subset of devices, despite video recording functioning correctly across all devices.
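One thing worth ruling out (an assumption about the cause, not a confirmed diagnosis): permission prompts and audio-session activation performed synchronously on the main thread can make the UI appear frozen on some devices. A minimal sketch of doing that work asynchronously before creating the recorder (uses the iOS 17 AVAudioApplication API):

```swift
import AVFoundation

// Sketch only: request the record permission and activate the session off the main
// thread before starting AVFoundation recording, so the UI never blocks on it.
func prepareForRecording() async throws -> Bool {
    guard await AVAudioApplication.requestRecordPermission() else { return false }
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
    return true
}
```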
0 replies · 0 boosts · 77 views · 1d
Volume API returns 0
I am working on a VoIP-based PTT app that uses the 'voip' APNs notification type to learn about new incoming PTT calls. When my app receives a PTT call, the app plays audio, but the call audio is not heard. When checking the phone volume, the API [[AVAudioSession sharedInstance] outputVolume] returns 0, yet the phone volume is clearly not zero: checking it with the side volume buttons shows it above 50%. This behavior is observed in both the foreground and the background. Why does the API return a zero volume level? Is there any other reason why the app's audio is not heard?
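A minimal sketch of one thing to check (an assumption, not a confirmed cause): outputVolume generally reflects the hardware volume only once the audio session is active, and it can be key-value observed for later changes. The class name here is illustrative.

```swift
import AVFoundation

// Sketch, not the poster's code: activate the session before reading outputVolume,
// then observe it for changes via KVO.
final class VolumeWatcher {
    private var observation: NSKeyValueObservation?

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .voiceChat)
        try session.setActive(true)                    // activate before reading the volume
        print("Output volume:", session.outputVolume)  // now reflects the hardware volume

        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            print("Output volume changed:", change.newValue ?? 0)
        }
    }
}
```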
0 replies · 0 boosts · 85 views · 2d
Looking for the simplest possible remoteIO example
I have an app (iPhone / iPad) that currently uses miniaudio and I'd like to transition to a pure Core Audio solution, and I cannot for the life of me get it to work. I want to set up a Remote IO unit with microphone and speaker callbacks, so that I get a callback when the microphone (USB microphone at 384 kHz, mono) has samples for me, and I get a callback when the speakers (48 kHz, stereo) need more samples. It should be insanely simple. It's Objective-C, as I have never got round to Swift and can't see it happening this late in life, but that shouldn't change things. So if anyone can tell me what I'm doing wrong here it would be GREATLY appreciated. My playbackCallback is never being fired, only the recordCallback.

```objc
- (void)launchRemoteIOandSleep {
    NSError *error;
    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:(1024.0f / 48000.0f) error:&error];

    OSStatus status;
    AudioComponentInstance audioUnit;

    // set up callback structures - different sanity clauses to identify in breakpoints
    renderCallBackHandle sanityClause666;
    sanityClause666.remoteIO = audioUnit;
    sanityClause666.sanityCheck666 = 666;
    renderCallBackHandle sanityClause667;
    sanityClause667.remoteIO = audioUnit;
    sanityClause667.sanityCheck666 = 667;

    // set up audio formats
    AudioStreamBasicDescription audioFormatInput;
    FillOutASBDForLPCM(audioFormatInput, 384000.0, 1, 16, 16, 0, 0, 0);
    AudioStreamBasicDescription audioFormatOutput;
    FillOutASBDForLPCM(audioFormatOutput, 48000.0, 2, 16, 16, 0, 0, 0);

    // set up callback structs
    AURenderCallbackStruct callbackStructRender;
    callbackStructRender.inputProc = playbackCallback;
    callbackStructRender.inputProcRefCon = &sanityClause666;
    AURenderCallbackStruct callbackStructRecord;
    callbackStructRecord.inputProc = recordCallback;
    callbackStructRecord.inputProcRefCon = &sanityClause667;

    // grab remoteIO
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    AudioComponent component = AudioComponentFindNext(NULL, &desc);

    // Get audio unit
    status = AudioComponentInstanceNew(component, &audioUnit);
    checkStatus(status);

    // Enable IO for both recording and playback
    // this enables the OUTPUT side of the OUTPUT bus which is the speaker (I think ...)
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output, kOutputBus, &flag, sizeof(flag));
    checkStatus(status);

    // this enables the INPUT side of the INPUT bus which is the mic (I think ...)
    flag = 1;
    status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));
    checkStatus(status);

    // Apply format - INPUT bus of OUTPUT SCOPE which is my samples into remoteIO
    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output, kInputBus, &audioFormatOutput, sizeof(audioFormatOutput));
    checkStatus(status);

    // Apply format - OUTPUT bus of INPUT SCOPE which is where I pick up my samples from mic
    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input, kOutputBus, &audioFormatInput, sizeof(audioFormatInput));
    checkStatus(status);

    // set output callback
    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Output, kInputBus, &callbackStructRender, sizeof(callbackStructRender));
    checkStatus(status);

    // Set input callback
    status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Input, kOutputBus, &callbackStructRecord, sizeof(callbackStructRecord));
    checkStatus(status);

    // Disable buffer allocation for the recorder
    flag = 0;
    status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag));

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);
    status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);

    [self waitForAudioStabilisation];
    while (1) sleep(2);
}
```
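For reference, a sketch of the conventional Remote IO bus/scope mapping (the standard arrangement, not a verified fix for the code above, and it assumes bus 0 is the output/speaker element and bus 1 is the input/microphone element): the render callback that supplies speaker samples is installed on the input scope of bus 0, and the mic-ready callback on the global scope of bus 1. In Swift, that wiring might look like this:

```swift
import AudioToolbox

// Sketch only: install Remote IO callbacks using the conventional bus/scope mapping.
// `audioUnit` is assumed to be an already-created Remote IO instance.
func installCallbacks(on audioUnit: AudioUnit,
                      render: AURenderCallback?,
                      input: AURenderCallback?) -> OSStatus {
    let outputBus: AudioUnitElement = 0   // speaker element
    let inputBus: AudioUnitElement = 1    // microphone element

    // Render callback (feeds the speaker): input scope of bus 0.
    var renderStruct = AURenderCallbackStruct(inputProc: render, inputProcRefCon: nil)
    var status = AudioUnitSetProperty(audioUnit,
                                      kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Input,
                                      outputBus,
                                      &renderStruct,
                                      UInt32(MemoryLayout<AURenderCallbackStruct>.size))
    guard status == noErr else { return status }

    // Input-ready callback (mic samples available): global scope, bus 1.
    var inputStruct = AURenderCallbackStruct(inputProc: input, inputProcRefCon: nil)
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  inputBus,
                                  &inputStruct,
                                  UInt32(MemoryLayout<AURenderCallbackStruct>.size))
    return status
}
```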
1 reply · 0 boosts · 53 views · 2d
AVAssetExportSession failed to export audio
I am trying to use AVAssetExportSession to export audio from a video, but every time I try it, it fails and I don't know why. This is the code:

```swift
import AVFoundation

protocol AudioExtractionProtocol {
    func extractAudio(from fileUrl: URL, to outputUrl: URL)
}

final class AudioExtraction {
    private var avAsset: AVAsset?
    private var avAssetExportSession: AVAssetExportSession?

    init() {}
}

// MARK: - AudioExtraction conforms to AudioExtractionProtocol
extension AudioExtraction: AudioExtractionProtocol {
    func extractAudio(from fileUrl: URL, to outputUrl: URL) {
        createAVAsset(for: fileUrl)
        createAVAssetExportSession(for: outputUrl)
        exportAudio()
    }
}

// MARK: - Private Methods
extension AudioExtraction {
    private func createAVAsset(for fileUrl: URL) {
        avAsset = AVAsset(url: fileUrl)
    }

    private func createAVAssetExportSession(for outputUrl: URL) {
        guard let avAsset else { return }
        avAssetExportSession = AVAssetExportSession(asset: avAsset, presetName: AVAssetExportPresetAppleM4A)
        avAssetExportSession?.outputURL = outputUrl
    }

    private func exportAudio() {
        guard let avAssetExportSession else { return }
        print("I am here \n")
        avAssetExportSession.exportAsynchronously {
            if avAssetExportSession.status == .failed {
                print("\(avAssetExportSession.status)\n")
            }
        }
    }
}

func test_AudioExtraction_extractAudioAndWriteItToFile() {
    let videoUrl = URL(string: "https://storage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4")!
    let audioExtraction: AudioExtractionProtocol = AudioExtraction()
    audioExtraction.extractAudio(from: videoUrl, to: FileMangerTest.audioFile)
    FileMangerTest.tearDown()
}

class FileMangerTest {
    private static let fileManger = FileManager.default

    private static var directoryUrl: URL {
        fileManger.urls(for: .cachesDirectory, in: .userDomainMask).first!
    }

    static var audioFile: URL {
        directoryUrl.appendingPathComponent("audio", conformingTo: .mpeg4Audio)
    }

    static func tearDown() {
        try? fileManger.removeItem(at: audioFile)
    }

    static func contant(at url: URL) -> Data? {
        return fileManger.contents(atPath: url.absoluteString)
    }
}
```
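A small diagnostic sketch to use alongside code like the above (the function name is illustrative, not part of the original project): when the status is .failed, AVAssetExportSession's error property carries the underlying reason, which is usually the fastest way to find out why an export failed.

```swift
import AVFoundation

// Sketch only: run an export and log the session's error when it fails.
func runExport(_ session: AVAssetExportSession) {
    session.exportAsynchronously {
        switch session.status {
        case .completed:
            print("Export finished")
        case .failed, .cancelled:
            print("Export failed:", session.error?.localizedDescription ?? "unknown error")
        default:
            break
        }
    }
}
```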
0 replies · 0 boosts · 89 views · 3d
All SystemSoundID
With the AudioServicesPlaySystemSound function in AudioToolbox, you can pass a SystemSoundID to play sound effects that come with the system. However, I can't be sure which sound effect each number corresponds to, so I want to know all the sound effects in visionOS and their corresponding SystemSoundIDs.
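There is no documented list to point to here, but one common workaround (a sketch under that assumption, with an arbitrary starting range) is to audition a range of IDs and note which ones actually produce a sound:

```swift
import AudioToolbox
import Foundation

// Quick audition sketch: play each ID in a range with a pause in between and note
// which ones make a sound. The range is an arbitrary starting point, not an official list.
func auditionSystemSounds(in range: ClosedRange<SystemSoundID>) {
    for soundID in range {
        print("Playing SystemSoundID \(soundID)")
        AudioServicesPlaySystemSound(soundID)
        Thread.sleep(forTimeInterval: 1.0) // leave a gap so the sounds don't overlap
    }
}

// Example: auditionSystemSounds(in: 1000...1010)
```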
0 replies · 0 boosts · 106 views · 4d
Voice recorder app recording in dual mono instead of stereo
Hi y'all, After getting mono recording working, I want to differentiate my app from the standard voice memos to allow for stereo recording. I followed this tutorial (https://developer.apple.com/documentation/avfaudio/capturing_stereo_audio_from_built-in_microphones) to get my voice recorder to record stereo audio. However, when I look at the waveform in Audacity, both channels are the same. If I look at the file info after sharing it, it says the file is in stereo. I don't exactly know what's going on here. What I suspect is happening is that the recorder is only using one microphone. Here is the relevant part of my recorder:

```swift
// MARK: - Initialization
override init() {
    super.init()
    do {
        try configureAudioSession()
        try enableBuiltInMicrophone()
        try setupAudioRecorder()
    } catch {
        // If any errors occur during initialization,
        // terminate the app with a fatalError.
        fatalError("Error: \(error)")
    }
}

// MARK: - Audio Session and Recorder Configuration
private func enableBuiltInMicrophone() throws {
    let audioSession = AVAudioSession.sharedInstance()
    let availableInputs = audioSession.availableInputs

    guard let builtInMicInput = availableInputs?.first(where: { $0.portType == .builtInMic }) else {
        throw Errors.NoBuiltInMic
    }

    do {
        try audioSession.setPreferredInput(builtInMicInput)
    } catch {
        throw Errors.UnableToSetBuiltInMicrophone
    }
}

private func configureAudioSession() throws {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.record, mode: .default, options: [.allowBluetooth])
        try audioSession.setActive(true)
    } catch {
        throw Errors.FailedToInitSessionError
    }
}

private func setupAudioRecorder() throws {
    let date = Date()
    let dateFormatter = DateFormatter()
    dateFormatter.locale = Locale(identifier: "en_US_POSIX")
    dateFormatter.dateFormat = "yyyy-MM-dd, HH:mm:ss"
    let timestamp = dateFormatter.string(from: date)

    self.recording = Recording(name: timestamp)

    guard let fileURL = recording?.returnURL() else {
        fatalError("Failed to create file URL")
    }
    self.currentURL = fileURL
    print("Recording URL: \(fileURL)")

    do {
        let audioSettings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVLinearPCMIsNonInterleaved: false,
            AVSampleRateKey: 44_100.0,
            AVNumberOfChannelsKey: isStereoSupported ? 2 : 1,
            AVLinearPCMBitDepthKey: 16,
            AVEncoderAudioQualityKey: AVAudioQuality.max.rawValue
        ]
        audioRecorder = try AVAudioRecorder(url: fileURL, settings: audioSettings)
    } catch {
        throw Errors.UnableToCreateAudioRecorder
    }

    audioRecorder.delegate = self
    audioRecorder.prepareToRecord()
}

// MARK: - Update orientation
public func updateOrientation(withDataSourceOrientation orientation: AVAudioSession.Orientation = .front,
                              interfaceOrientation: UIInterfaceOrientation) async throws {
    let session = AVAudioSession.sharedInstance()
    guard let preferredInput = session.preferredInput,
          let dataSources = preferredInput.dataSources,
          let newDataSource = dataSources.first(where: { $0.orientation == orientation }),
          let supportedPolarPatterns = newDataSource.supportedPolarPatterns else {
        return
    }

    isStereoSupported = supportedPolarPatterns.contains(.stereo)
    if isStereoSupported {
        try newDataSource.setPreferredPolarPattern(.stereo)
    }

    try preferredInput.setPreferredDataSource(newDataSource)
    try session.setPreferredInputOrientation(interfaceOrientation.inputOrientation)
}
```

Here is the relevant part of my SwiftUI view:

```swift
RecordView()
    .onAppear {
        Task {
            if await AVAudioApplication.requestRecordPermission() {
                // The user grants access. Present recording interface.
                print("Permission granted")
            } else {
                // The user denies access. Present a message that indicates
                // that they can change their permission settings in the
                // Privacy & Security section of the Settings app.
                model.showAlert.toggle()
            }
            try await recorder.updateOrientation(interfaceOrientation: deviceOrientation)
        }
    }
    .onReceive(NotificationCenter.default.publisher(for: UIDevice.orientationDidChangeNotification)) { _ in
        if let windowScene = UIApplication.shared.connectedScenes.first as? UIWindowScene,
           let orientation = windowScene.windows.first?.windowScene?.interfaceOrientation {
            deviceOrientation = orientation
            Task {
                do {
                    try await recorder.updateOrientation(interfaceOrientation: deviceOrientation)
                } catch {
                    throw Errors.UnableToUpdateOrientation
                }
            }
        }
    }
```

Here is the full repo: https://github.com/aabagdi/MemoMan/tree/MemoManStereo Thanks for any leads!
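One diagnostic that may help narrow this down (an assumption about the cause, not a confirmed one): if the selected data source's polar pattern is not actually .stereo at record time, both channels will carry the same microphone signal. A small sketch, separate from the repo above, that logs what the session ended up selecting:

```swift
import AVFoundation

// Sketch only: log which input, data source, and polar pattern are actually selected.
func logSelectedInputConfiguration() {
    let session = AVAudioSession.sharedInstance()
    for input in session.currentRoute.inputs {
        let dataSource = input.selectedDataSource
        print("Input:", input.portName,
              "| data source:", dataSource?.dataSourceName ?? "none",
              "| polar pattern:", dataSource?.selectedPolarPattern?.rawValue ?? "none")
    }
}
```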
1 reply · 0 boosts · 171 views · 1w
Input location of AVAudioSession is different between iPhones
The position of the AVAudioSession input is different when I use the speaker.

```swift
try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
try session.overrideOutputAudioPort(.speaker)
try session.setActive(true)

let route = session.currentRoute
route.inputs.forEach { input in
    print(input.selectedDataSource?.location)
}
```

On an iPhone 11 (iOS 17.5.1) this prints AVAudioSessionLocation: Lower; on an iPhone 7 Plus (iOS 15.8.2) it prints AVAudioSessionLocation: Upper. What causes this difference in behavior?
0 replies · 0 boosts · 145 views · 1w
AVAudioSessionErrorCodeCannotInterruptOthers
We check AVAudioSessionInterruptionOptionShouldResume to decide whether to restore audio playback after an interruption. This has been live for a long time and playback has normally resumed correctly, but recently we've had a lot of user feedback asking why the audio won't resume playing. Based on this feedback we investigated and found that some apps were not playing audio but were still holding the audio session. For example, when a user was using the WeChat app, after sending a voice message we received the notification to resume audio playback even though WeChat was no longer playing audio; when we tried to resume, we got the error AVAudioSessionErrorCodeCannotInterruptOthers. We reported this to the WeChat team and that case was fixed, but some users still report the problem, and we don't know which app is holding onto audio, so we don't know where to start troubleshooting. We pay close attention to user feedback and hope you can help us solve this user-experience problem.
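For reference, a minimal sketch of the interruption-handling pattern described above (names and placement are illustrative; this is not the poster's code): resume only when the system sets the .shouldResume option, and treat a failure to reactivate as a sign that another app still holds the session.

```swift
import AVFoundation

// Sketch only: handle AVAudioSession.interruptionNotification and resume when allowed.
func handleInterruption(_ notification: Notification) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    guard type == .ended,
          let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt else { return }

    let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
    if options.contains(.shouldResume) {
        do {
            try AVAudioSession.sharedInstance().setActive(true)
            // restart playback here
        } catch {
            // e.g. AVAudioSessionErrorCodeCannotInterruptOthers: another app holds the session
            print("Could not reactivate session:", error)
        }
    }
}
```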
0 replies · 0 boosts · 112 views · 1w
Exporting Audio with Scaled Segments Causes copyNextSampleBuffer to Hang
I am trying to export an AVMutableComposition with a single audio track. This track has a scaled AVCompositionTrackSegment to simulate speed changes of up to 20x. I need to use the AVAssetWriter and AVAssetReader classes for this task. When I scale the source duration up to a maximum of 5x, everything works fine. However, when I scale it to higher speeds, such as 20x, the app hangs on the copyNextSampleBuffer method. I'm not sure why this is happening or how to prevent it. This also often happens if the exported audio track has segments with different speeds. (The duration of the audio file in the example is 47 seconds.) Example code:

```swift
class Export {
    func startExport() {
        let inputURL = Bundle.main.url(forResource: "Piano", withExtension: ".m4a")!
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let outputURL = documentsDirectory.appendingPathComponent("Piano20x.m4a")
        try? FileManager.default.removeItem(at: outputURL)
        print("Output URL: \(outputURL)")

        changeAudioSpeed(inputURL: inputURL, outputURL: outputURL, speed: 20)
    }

    func changeAudioSpeed(inputURL: URL, outputURL: URL, speed: Float) {
        let urlAsset = AVAsset(url: inputURL)
        guard let assetTrack = urlAsset.tracks(withMediaType: .audio).first else { return }

        let composition = AVMutableComposition()
        let compositionAudioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                                preferredTrackID: kCMPersistentTrackID_Invalid)
        do {
            try compositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: assetTrack.timeRange.duration),
                                                       of: assetTrack,
                                                       at: .zero)
        } catch {
            print("Failed to insert audio track: \(error)")
            return
        }

        let scaledDuration = CMTimeMultiplyByFloat64(assetTrack.timeRange.duration, multiplier: Double(1.0 / speed))
        compositionAudioTrack?.scaleTimeRange(CMTimeRangeMake(start: .zero, duration: assetTrack.timeRange.duration),
                                              toDuration: scaledDuration)
        print("Scaled audio from \(assetTrack.timeRange.duration.seconds) sec to \(scaledDuration.seconds) sec")
        compositionAudioTrack?.segments

        do {
            let compositionAudioTracks = composition.tracks(withMediaType: .audio)
            let assetReader = try AVAssetReader(asset: composition)
            let audioSettings: [String: Any] = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2,
                AVLinearPCMBitDepthKey: 16,
                AVLinearPCMIsBigEndianKey: false,
                AVLinearPCMIsFloatKey: false,
                AVLinearPCMIsNonInterleaved: false
            ]
            let readerOutput = AVAssetReaderAudioMixOutput(audioTracks: compositionAudioTracks,
                                                           audioSettings: audioSettings)
            assetReader.add(readerOutput)

            let assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: .m4a)
            let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
            assetWriter.add(writerInput)

            assetReader.startReading()
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: .zero)

            let conversionQueue = DispatchQueue(label: "ConversionQueue")
            writerInput.requestMediaDataWhenReady(on: conversionQueue) {
                while writerInput.isReadyForMoreMediaData {
                    if let sampleBuffer = readerOutput.copyNextSampleBuffer() { // APP hangs here!!!
                        writerInput.append(sampleBuffer)
                    } else {
                        writerInput.markAsFinished()
                        assetWriter.finishWriting {
                            print("Export completed successfully")
                        }
                        break
                    }
                }
            }
        } catch {
            print("Failed with error: \(error)")
        }
    }
}
```
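A diagnostic sketch, separate from the code above (the function name is illustrative): when copyNextSampleBuffer() stops returning buffers or appears to stall, the reader's status and error usually say why, so it is worth logging them before concluding the call itself hangs.

```swift
import AVFoundation

// Sketch only: report the reader's state after reading stops or stalls.
func checkReader(_ assetReader: AVAssetReader) {
    switch assetReader.status {
    case .failed:
        print("Reader failed:", assetReader.error?.localizedDescription ?? "unknown error")
    case .cancelled:
        print("Reader was cancelled")
    default:
        print("Reader status:", assetReader.status.rawValue)
    }
}
```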
0 replies · 0 boosts · 107 views · 1w
API to get AVSpeechSynthesisVoices that are available to download (via Settings.app) but not yet downloaded to a device
Here is the use case: I have a language-learning app that uses AVSpeechSynthesizer ❤️. When a user listens to a phrase with AVSpeechSynthesizer using an AVSpeechSynthesisVoice whose AVSpeechSynthesisVoiceQuality is default, it sounds much, much worse than voices with enhanced or premium quality, really affecting usability. There appears to be no API for the app to know whether there are enhanced or premium voices available to download (via Settings.app) but not yet downloaded to the device. The only API I could find is AVSpeechSynthesisVoice.speechVoices(), which returns all available voices on the device, but not a full list of voices available via download. So the app cannot know if it should inform the user: "hey, this voice you're listening to is a much lower quality than enhanced or premium, go to Settings and download the enhanced or premium version". Any ideas? Do I need to send in an enhancement request via Feedback Assistant? Thank you for helping my users' ears and helping them turn speech-synthesis voice quality up to 11 when it's available to them with just a couple of extra taps! (I suppose the best workaround is to display a warning every time the user is using a default-quality voice; I wonder what % of voices have enhanced or premium versions...)
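A sketch of what the installed-voices API can do today (this only covers voices already on the device; whether a better voice exists for download still isn't queryable, as the post says): filter AVSpeechSynthesisVoice.speechVoices() by quality for the user's language, and show the warning only when no enhanced or premium voice is installed.

```swift
import AVFoundation

// Sketch only: check whether an enhanced or premium voice is already installed
// for a given BCP 47 language code (e.g. "en-US").
func hasHighQualityVoice(forLanguage language: String) -> Bool {
    AVSpeechSynthesisVoice.speechVoices().contains { voice in
        voice.language == language && (voice.quality == .enhanced || voice.quality == .premium)
    }
}

// Example: hasHighQualityVoice(forLanguage: "en-US")
```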
1 reply · 0 boosts · 186 views · 1w
Getting the amount of audio loaded in an AVPlayer
I have an AVPlayer loading an MP3 from a URL. While it is playing, how can I tell how much of the MP3 has actually been downloaded so far? I have tried using item.loadedTimeRanges, but it does not seem to be accurate. I get a few notifications, but they usually stop around 80 seconds in and don't even keep up with the current position of the player. Any idea? Also, is there any way to get the total duration of the audio? All the methods I've tried return NaN.
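For reference, a small sketch of how these two values are usually read (an illustration of the APIs involved, not a guaranteed fix for the inaccuracy described): the buffered amount comes from the item's loadedTimeRanges, and the duration is often more reliable when loaded asynchronously from the asset than when read from the item while it is still indefinite.

```swift
import AVFoundation

// Sketch only: log the furthest buffered point and the asset's duration.
func logBufferingState(for item: AVPlayerItem) async {
    // End of the furthest buffered range, in seconds.
    if let range = item.loadedTimeRanges.last?.timeRangeValue {
        print("Buffered up to:", CMTimeGetSeconds(CMTimeRangeGetEnd(range)), "s")
    }
    // Total duration, loaded asynchronously from the asset.
    if let duration = try? await item.asset.load(.duration) {
        print("Duration:", CMTimeGetSeconds(duration), "s")
    }
}
```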
0 replies · 0 boosts · 137 views · 1w
Bluetooth audio becomes choppy on iOS with entitlement error but works just fine on Mac Catalyst
I am converting some old Objective-C code deployed on iOS 12 to Swift in a WKWebView app. I'm also developing the app for Mac via Mac Catalyst. The issue I'm experiencing relates to a programmable learning robot that is programmed via block coding; the app facilitates the reads and writes back and forth. The audio works over an A2DP connection the user sets manually in Settings, while the actual movement of the robot is controlled via a BLE connection. Currently the code works as intended on Mac Catalyst, while on iPhone the audio being sent back to the robot is very choppy and sometimes doesn't play at all. I apologize for the length of this, but there is a bit to unpack here.

First, I know there have been a few threads posted about this issue: this one that seems similar but went unsolved https://forums.developer.apple.com/forums/thread/740354 as well as this one where Apple says it is "log noise" https://forums.developer.apple.com/forums/thread/742739. However, I find it hard to believe that this is just log noise in this case. Mac Catalyst uses a legacy header file for WebKit, and I'm wondering if that could be part of the issue. I have enabled everything relating to Bluetooth in my Info.plist file as the developer documents say, and in my App Sandbox for Mac Catalyst I have the Bluetooth permissions set as well. Here are snippets of my read and write functions:

```swift
func readFunction(session: String) {
    // Wait if we are still waiting to hear from the robot
    if self.serialRxBuf == "" {
        self.emptyReadCount += 1
    }
    if (!self.serialRxWaiting) {
        return
    }
    // Make sure we are waiting for the correct session
    if (Int(session) != self.serialRxSession) {
        return
    }
    self.serialRxWaiting = false
    self.serialRxSession += 1
    let buf = self.serialRxBuf
    self.serialRxBuf = ""
    print("sending Read: \(buf)")
    self.MainWebView.evaluateJavaScript("""
        if (serialPort.onRead) {
            serialPort.onRead("\(buf)");
        }
        serialPort.onRead = null;
        """, completionHandler: nil)
}

// ----- Write function for javascript bluetooth interface -----
func writeFunction(buf: String) -> Bool {
    emptyReadCount = 0
    if ((self.blePeripheral == nil) || (self.bleCharacteristic == nil) || self.blePeripheral?.state != .connected) {
        print("write result: bad state, peripheral, or connection ")
        // in case we receive an error that will freeze react side, safely navigate and clear bluetooth information.
        if MainWebView.canGoBack {
            MainWebView.reload()
            showDisconnectedAlert()
            self.centralManager = nil // we will just start over next time
            self.blePeripheral = nil
            self.bleCharacteristic = nil
            self.connectACD2Failed()
            return false
        }
        return false
    }
    var data = Data()
    var byteStr = ""
    for i in stride(from: 0, to: buf.count, by: 2) {
        let startIndex = buf.index(buf.startIndex, offsetBy: i)
        let endIndex = buf.index(startIndex, offsetBy: 2)
        byteStr = String(buf[startIndex..<endIndex])
        let byte = UInt8(byteStr, radix: 16)!
        data.append(byte)
    }
    guard let connectedCharacteristic = self.bleCharacteristic else {
        print("write result: Failure to assign bleCharacteristic")
        return false
    }
    print("sending bleWrite: \(String(describing: data))")
    self.blePeripheral.writeValue(data, for: connectedCharacteristic, type: .withoutResponse)
    print("write result: True")
    return true
}
```

Here is what the log looks like when running on Mac Catalyst, which works just fine:

```
sending bleWrite: 20 bytes
write result: True
sending Read:
sending Read: 55AA55AA0B0040469EE6000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read:
sending Read:
sending Read: 55AA55AA0B0040469EE6000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read: 55AA55AA0B0040469EE6000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read: 55AA55AA0B0040EDCB09000000000000000000ED
sending bleWrite: 20 bytes
write result: True
sending Read:
sending Read: 55AA55AA0B00407A7B96000000000000000000ED
sending bleWrite: 20 bytes
write result: True
Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)}>
0x12c0380e0 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'WebKit Media Playback' for process with PID=36540, error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.runningboard.assertions.webkit AND originator doesn't have entitlement com.apple.multitasking.systemappassertions)}
```

and here is the log from when we are running the code on iPhone (trying to save space here). I apologize for the length of this post; however, submitting a test project to Apple Developer Support just isn't possible with the device that's in use. Any help at all is appreciated. I've looked at every permission, entitlement, and background-processing setting, and tried every solution I could find, to no avail.
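One thing that may be worth ruling out on the BLE side (an assumption, not a confirmed fix for the choppiness above): writes sent with .withoutResponse can be silently dropped when the peripheral's transmit buffer is full, and Core Bluetooth exposes that back-pressure via canSendWriteWithoutResponse and the peripheralIsReady(toSendWriteWithoutResponse:) delegate callback. A sketch of throttling writes against that signal, with illustrative names:

```swift
import CoreBluetooth

// Sketch only: queue outgoing packets and flush them only while the peripheral
// reports it can accept more write-without-response packets.
final class ThrottledWriter: NSObject, CBPeripheralDelegate {
    private var pending: [Data] = []

    func send(_ data: Data, to peripheral: CBPeripheral, characteristic: CBCharacteristic) {
        pending.append(data)
        flush(peripheral, characteristic: characteristic)
    }

    private func flush(_ peripheral: CBPeripheral, characteristic: CBCharacteristic) {
        while !pending.isEmpty, peripheral.canSendWriteWithoutResponse {
            peripheral.writeValue(pending.removeFirst(), for: characteristic, type: .withoutResponse)
        }
    }

    // Called when the peripheral can accept more write-without-response packets.
    func peripheralIsReady(toSendWriteWithoutResponse peripheral: CBPeripheral) {
        // Flush the queue again here (characteristic reference omitted for brevity).
    }
}
```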
0 replies · 0 boosts · 254 views · 1w