Push To Talk

Let people send and receive audio in your app with the push of a button.

Posts under Push To Talk tag

26 Posts

Post | Replies | Boosts | Views | Activity

CallKit and PushToTalk related changes in iOS 26
Starting in iOS 26, two notable changes have been made to CallKit, LiveCommunicationKit, and the PushToTalk framework:
As a diagnostic aid, we're introducing new dialogs to warn apps of VoIP push related issues, for example when they fail to report a call or when VoIP push delivery stops. The specific details of that behavior are still being determined and are likely to change over time; however, the critical point here is that these alerts are only intended to help developers debug and improve their app. Because of that, they're specifically tied to development and TestFlight signed builds, so the alert dialogs will not appear for customers running App Store builds. The existing terminations/crashes will still occur, but the new warning alerts will not appear.
As PushToTalk developers have previously been warned, the last unrestricted PushKit entitlement ("com.apple.developer.pushkit.unrestricted-voip.ptt") has been disabled in the iOS 26 SDK. ALL apps that link against the iOS 26 SDK, receive a VoIP push through PushKit, and fail to report a call to CallKit will now be terminated by the system, as the API contract has long specified.
Kevin Elliott, DTS Engineer, CoreOS/Hardware
0
0
383
Jun ’25
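A minimal sketch of the reporting requirement described in the post above, assuming a placeholder VoIPPushHandler class and a placeholder caller handle; the essential point is that every PushKit VoIP push must be reported to CallKit via CXProvider before the completion handler runs.

    import CallKit
    import PushKit

    final class VoIPPushHandler: NSObject, PKPushRegistryDelegate {
        // Placeholder provider; a real app configures and retains one CXProvider
        // (with a delegate) for the lifetime of the process.
        private let provider = CXProvider(configuration: CXProviderConfiguration())

        func pushRegistry(_ registry: PKPushRegistry,
                          didReceiveIncomingPushWith payload: PKPushPayload,
                          for type: PKPushType,
                          completion: @escaping () -> Void) {
            // Report the call to CallKit before completing; apps built with the
            // iOS 26 SDK that skip this step are terminated by the system.
            let update = CXCallUpdate()
            update.remoteHandle = CXHandle(type: .generic, value: "Caller") // placeholder handle
            provider.reportNewIncomingCall(with: UUID(), update: update) { _ in
                completion()
            }
        }

        func pushRegistry(_ registry: PKPushRegistry,
                          didUpdate pushCredentials: PKPushCredentials,
                          for type: PKPushType) {
            // Forward pushCredentials.token to the push server (omitted).
        }
    }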
Frequent Error 4099 with CallKit integration: NEAppPushDelegate notifications are not received and the incoming call screen is not shown
In our iOS app we use NEAppPushSession and display the CallKit incoming call screen when the NEAppPushDelegate notification arrives, but we are facing the following problem. 8/13: the error below appeared repeatedly in the logs. We ran roughly 120 notification delivery tests, and the error appeared the entire time. Error: 2025-08-14 11:27:06.793073 +0900 nesessionmanager NESMAppPushSession[SimplePushDefaultConfiguration:7B7218F3-94B5-4AE5-9B9E-94E176694D02] failed to report incoming call to CallKit, error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.callkit.networkextension.messagecontrollerhost was invalidated from this process." UserInfo={NSDebugDescription=The connection to service named com.apple.callkit.networkextension.messagecontrollerhost was invalidated from this process.} After this error log appeared repeatedly, the CallKit incoming call screen stopped being shown, although notification monitoring still appears to have started. 15 hours later, on 8/14, perhaps because time had passed, notifications arrived again. However, when we repeated the delivery test the same error log appeared; after roughly 120 more tests the error again flooded the logs and the CallKit screen stopped appearing, while notification monitoring still appeared to be running. On 8/15, no notifications were received even after another 15 hours, although monitoring again seemed to be active. Restarting the iPhone, cleaning the Xcode build, and uninstalling and reinstalling the app did not bring the notifications back. Strangely, devices other than the one where the problem first occurred can no longer receive these notifications either. Other notification types are received; only our own NEAppPushManager notifications are not. Questions: What should we do to make the notifications arrive again? Is this failure caused by emitting too many 4099 errors? What is causing the 4099 error?
0
0
399
Aug ’25
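For reference, a minimal sketch of the app-side Local Push flow the post above describes, assuming a placeholder PushManagerDelegate class and a hypothetical "caller" payload key: the extension reports the call with NEAppPushProvider.reportIncomingCall(userInfo:), the system delivers it to the app's NEAppPushDelegate, and the app then raises the CallKit incoming call UI.

    import CallKit
    import NetworkExtension

    final class PushManagerDelegate: NSObject, NEAppPushDelegate {
        private let provider = CXProvider(configuration: CXProviderConfiguration())

        func appPushManager(_ manager: NEAppPushManager,
                            didReceiveIncomingCallWithUserInfo userInfo: [AnyHashable: Any]) {
            let update = CXCallUpdate()
            update.remoteHandle = CXHandle(type: .generic,
                                           value: userInfo["caller"] as? String ?? "Unknown")
            provider.reportNewIncomingCall(with: UUID(), update: update) { error in
                // Surfacing this error is the first place to look when the
                // incoming call screen stops appearing.
                if let error { print("Failed to report incoming call: \(error)") }
            }
        }
    }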
Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup. I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate. I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking.
Issue: When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.
Details: The audio session is already active and configured with .playAndRecord. The input tap is already installed when the engine is started. When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio.
Assumptions / Current Setup: Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio. However, there seems to be a delay before valid input is delivered to the tap, occurring only when switching from a receive state to simultaneously talking.
Questions: Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine? Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio? Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer? Would using separate engines for input and output improve responsiveness? I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using the PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately, without the delay seen in my current implementation.
Relevant Code Snippets
Engine setup:

    func setup() {
        let input = audioEngine.inputNode
        do {
            try input.setVoiceProcessingEnabled(true)
        } catch {
            print("Could not enable voice processing \(error)")
            return
        }
        input.isVoiceProcessingAGCEnabled = false

        let output = audioEngine.outputNode
        let mainMixer = audioEngine.mainMixerNode
        audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat)
        audioEngine.connect(beepNode, to: mainMixer, format: outputFormat)
        audioEngine.connect(mainMixer, to: output, format: outputFormat)

        // Initialize converters
        converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
        f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)!

        audioEngine.prepare()
    }

Input tap installation:

    func installTap() {
        guard AudioHandler.shared.checkMicrophonePermission() else {
            print("Microphone not granted for recording")
            return
        }
        guard !isInputTapped else {
            print("[AudioEngine] Input is already tapped!")
            return
        }
        let input = audioEngine.inputNode
        let microphoneFormat = input.inputFormat(forBus: 0)
        let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)!
        let desiredFormat = outputFormat
        let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate)
        input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in
            guard let self = self else { return }
            // Output buffer: 1920 frames at 16kHz
            guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat,
                                                      frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return }
            outputBuffer.frameLength = outputBuffer.frameCapacity
            let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
                outStatus.pointee = .haveData
                return buffer
            }
            var error: NSError?
            let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
            if converterResult != .haveData {
                DebugLogger.shared.print("Downsample error \(converterResult)")
            } else {
                self.handleDownsampledBuffer(outputBuffer)
            }
        }
        isInputTapped = true
    }
4
0
294
Aug ’25
How to send PTT push for incomingServiceUpdatePush?
I was recently made aware of a relatively new method on PTChannelManagerDelegate that allows the backend infrastructure to update the PTT service on a device. incomingServiceUpdatePush(channelManager:channelUUID:pushPayload:isHighPriority:remainingHighPriorityBudget:completion:) I remember the person stated that there was a specific format that the payload must have, but I am unable to find any documentation about it. Can somebody point me to documentation on how to send this type of push notification for incomingServiceUpdatePush or give me an example payload? I believe it uses the same PTT push token, but if the headers are different, then please specify those differences too. Thanks!
3
1
101
Jun ’25
Push-to-Talk: Disable “Leave” Button in System Screen & APNs Only Triggered Once
Dear Apple Team, I have two questions regarding the Push-to-Talk framework on iOS 18:
1. Would it be possible to disable or hide the “Leave” button in the system screen? Currently, only the “Talk” button can be disabled when using the "Listen Only" mode, but the “Leave” button remains active. It would be very helpful to have the option to control or hide this button as well, depending on the use case. At the moment, I can only detect that the user pressed the "Leave" button by checking the PTChannelLeaveReason. However, this alone is not sufficient to prevent unwanted user actions or to guide the user experience more precisely in certain scenarios.
2. I only receive the APNs push notification “type: voip-ptt” once per active channel. My app is running in the background, and I frequently switch between full- and half-duplex to allow users to reply immediately. After some time, even though a new notification should be triggered, no banner is shown. When I try to talk via the framework, the active speaker is correctly updated, but no notification appears. Only after leaving and rejoining the channel do I receive a new notification, and again just once. Could you please let me know whether this is expected behavior or if it might be a bug?
Thank you very much in advance. Best regards, Eugen
1
0
87
Jun ’25
PushToTalk Framework Behavior After Force Quit and Challenges in Achieving Reliable PTT Functionality
Hello everyone, Our team is currently developing a PTT (Push-to-Talk) application using the officially recommended PushToTalk framework. During development, we've encountered a point of confusion regarding the application's behavior after being force-quit by the user. Based on our understanding of the PushToTalk framework documentation (https://developer.apple.com/documentation/pushtotalk/creating-a-push-to-talk-app/) and the PTChannelManager session restoration mechanism, when a user manually kills the app from the background (App Switcher), the current PTT session (the system session managed by PTChannelManager) should terminate. Subsequent pushtotalk type pushes sent via APNS, without an active session, appear to be silently discarded by the system and cannot wake the app for processing (similar to what Kevin Elliott DTS mentioned in https://developer.apple.com/forums/thread/760506 Point D). This seems to prevent reliable PTT message reception in our app after a user force quits. However, we've observed that some popular PTT applications on the market (e.g., TenTen) appear to successfully receive and play PTT voice messages from friends even after the user has performed a force-quit action. This behavior seems inconsistent with our test results and understanding based on the standard framework, posing a challenge for us in providing similar reliability using standard methods. This naturally leads us to wonder how this capability is achieved. We've reviewed developer forums and are aware of the historical existence of a PTT-specific com.apple.developer.pushkit.unrestricted-voip entitlement, which allowed PushKit usage for PTT without CallKit binding. While Apple DTS engineers have repeatedly stated this entitlement is being deprecated and urged migration to the PushToTalk framework (e.g., https://developer.apple.com/forums/thread/763289), we are curious if the observed "wake-after-force-quit" capability might be related to some apps potentially still utilizing this outgoing special entitlement. Alternatively, is there perhaps a mechanism within the standard PushToTalk framework that allows wake-up after force quit that we haven't fully grasped? Therefore, we'd like to ask fellow developers for clarification and discussion: When using the standard PushToTalk framework, have others confirmed that the app indeed cannot be woken up by pushtotalk pushes after being force-quit by the user? Is this the expected behavior? Has anyone successfully achieved a TenTen-like experience (reliable PTT reception after force quit) using only the standard PushToTalk framework? If so, could you share key implementation insights or areas to focus on? (e.g., Is it related to specific usage patterns of the restorationDelegate?) How do you view this potential discrepancy between standard framework capabilities and the behavior exhibited by some apps? What considerations does this bring to development planning and user experience design (especially when users might have expectations set by the "always-on" behavior of other apps)? Are there any best practices or specific techniques when using PTChannelManager session management and restoration that maximize PTT message reliability (especially after the app is terminated by the system in the background), while still adhering to the framework's design principles (like user awareness of the session via UI)? 
[For instance, another developer raised challenges related to PTT framework restrictions here: https://developer.apple.com/forums/thread/773981] We hope this discussion can help clarify our understanding of the framework and gather community best practices for building reliable PTT functionality while adhering to Apple's guidelines. Thanks for any insights or shared experiences!
4
0
180
Jun ’25
Push to talk channelManager(_:didActivate:) doesn't get called
I am implementing the new Push to Talk framework and I found an issue where channelManager(_:didActivate:) is not called even though I immediately return a non-nil activeRemoteParticipant from incomingPushResult. I have tested it and it can play the PTT audio in the foreground and background. The issue only occurs when I join the PTT channel while the app is in the foreground and then kill the app. The channel gets restored via channelDescriptor(restoredChannelUUID:). After the channel is restored, I send a PTT push. I can see that my device receives the incomingPushResult, returns the activeRemoteParticipant, and the notification panel shows that A is speaking, but channelManager(_:didActivate:) never gets called, resulting in no audio being played. Rejoining the channel fixes the issue, and reopening the app also seems to fix it.
1
0
100
Jun ’25
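For context, a minimal sketch of the flow the post above relies on, with other required PTChannelManagerDelegate methods omitted for brevity; ChannelHandler and player are placeholder names. Returning a non-nil active remote participant from incomingPushResult is what should lead the framework to activate the audio session and call channelManager(_:didActivate:).

    import AVFAudio
    import PushToTalk

    extension ChannelHandler {   // placeholder type conforming to PTChannelManagerDelegate
        func incomingPushResult(channelManager: PTChannelManager,
                                channelUUID: UUID,
                                pushPayload: [String: Any]) -> PTPushResult {
            // Report the remote speaker; the framework shows "X is speaking"
            // and should then activate the audio session.
            let speaker = PTParticipant(name: pushPayload["speaker"] as? String ?? "Remote",
                                        image: nil)
            return .activeRemoteParticipant(speaker)
        }

        func channelManager(_ channelManager: PTChannelManager,
                            didActivate audioSession: AVAudioSession) {
            // Playback may only start once the framework has activated the session.
            player.startEngineAndPlayback()
        }

        func channelManager(_ channelManager: PTChannelManager,
                            didDeactivate audioSession: AVAudioSession) {
            player.stopEngine()
        }
    }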
iOS doesn't handle an incoming Local Push call when the Local Push arrives after an APNs push
I am developing an application that uses NetworkExtension (the Local Push feature) and VoIP (APNs) push. I have found a problem: the app does not handle the incoming call from a Local Push when that Local Push arrives after an APNs push. My findings from the app and server logs are below. 11:00 AM: my server (PBX) requests a VoIP (APNs) push notification from APNs, but my app does not receive it. At this time the app is running on a LAN (Wi-Fi without an internet connection), so NetworkExtension is active; I think this is normal behaviour. 14:55:11 PM: there is an incoming call from my server (PBX) over the local network, and the NetworkExtension provider calls the iOS API reportIncomingCall. However, iOS does not call the didReceiveIncomingCallWithUserInfo delegate method for that reportIncomingCall. 14:55:11 PM: at almost the same time, iOS calls the VoIP push delegate didReceiveIncomingPushWithPayload (instead of calling didReceiveIncomingCallWithUserInfo for the reportIncomingCall?), and the content of that VoIP (APNs) push is the incoming call from 11:00 AM. In other words, the VoIP (APNs) push from 11:00 AM was stuck inside iOS, and at 14:55:11 PM it is delivered when NetworkExtension reports the local call. I believe there is a problem where iOS does not handle the incoming call from a Local Push when it arrives after a VoIP (APNs) push. Could you tell me Apple's opinion about this? If this is a known problem, please tell me about it.
6
0
691
May ’25
PushToTalk Microphone Permission Issues After Force Quit
Hello Apple Developer Community, We're implementing the PushToTalk framework as recommended. According to Apple engineers in previous forum responses, the framework allows your app to continue receiving push notifications even after your app is terminated or the device is rebooted. Implementation: we've properly implemented early initialization of PTChannelManager via channelManager(delegate:restorationDelegate:completionHandler:), channel joining with requestJoinChannel(channelUUID:descriptor:) when foregrounded, and all required delegate methods. Issue: after a user force quits our app, PushToTalk functionality works briefly but fails after some time (minutes to hours). The system logs show: AudioSessionServerImpCommon.mm:105 { "action":"cm_session_begin_interruption", "error":"translating CM session error", "session":{"ID":"0x72289","name":"getcha(2958)"}, "details":{ "calling_line":997, "error_code":-12988, "error_string":"Missing entitlement" } } We suspect that after force-quitting the app there is a permission cache that temporarily allows functionality, but once this cache is cleared, the features stop working. Without this entitlement, both audio playback and recording fail, completely breaking the PTT functionality. Questions: Which specific entitlement is missing according to this error? Is there a permission caching mechanism that expires after force quit? How can we ensure reliable PTT operation after force quit as stated in the documentation? This behavior contradicts Apple's guidance that PushToTalk should work reliably after termination. Any insights would be greatly appreciated. Thank you!
4
0
133
Apr ’25
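As an aside, a minimal sketch of the early-initialization step the post above mentions, assuming a placeholder ChannelHandler that conforms to PTChannelManagerDelegate and PTChannelRestorationDelegate; the manager has to be recreated on every launch so restoration has somewhere to deliver its callbacks.

    import PushToTalk
    import UIKit

    class AppDelegate: UIResponder, UIApplicationDelegate {
        static let channelHandler = ChannelHandler()   // placeholder delegate object
        var channelManager: PTChannelManager?

        func application(_ application: UIApplication,
                         didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
            Task {
                do {
                    // Create the manager as early as possible on every launch.
                    self.channelManager = try await PTChannelManager.channelManager(
                        delegate: Self.channelHandler,
                        restorationDelegate: Self.channelHandler)
                } catch {
                    print("PTChannelManager setup failed: \(error)")
                }
            }
            return true
        }
    }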
APNS notifications of apns-push-type pushtotalk sometimes stop arriving after switching networks
PLATFORM AND VERSION: iOS Development environment: Other: .NET MAUI with VS Code Run-time configuration: iOS 18.1.1 DESCRIPTION OF PROBLEM APNS notifications of apns-push-type pushtotalk sometimes stop arriving after switching networks. STEPS TO REPRODUCE We have created a simple app which can be used to demonstrate this issue. When you launch the app it displays the APNS token, which you can then use from the Apple Push Console to manually send it PTT push notifications. https://github.com/trampster/PttPushNotificationIssue On an iPhone SE (we haven't been able to reproduce on our iPhone 11): Start the app to register for the APNS push notifications. Turn off the Wi-Fi and wait for 5 seconds. Attempt a push to the app manually using the Push Notifications Console (this should fail, which is fine). Turn on Cellular and wait for it to connect. Attempt to push to the app manually using the Push Notifications Console -> this fails, and all attempts to send pushtotalk push notifications fail until we switch network again. Sending a push while offline, before connecting to the new network, seems to make it happen more often, but it's hard to tell for sure. The results of the failed push in the console look like this: Delivery Log Last updated: 30/01/25, 16:45:06 GMT+13 30 Jan 2025, 16:45:03.661 GMT+13 received by APNS Server 30 Jan 2025, 16:45:03.662 GMT+13 discarded as device was offline The device is actually very much online. Switching networks again often makes things come right, but it doesn't seem to come right by itself. We can't respond to network changes and do anything, as the whole point of using push-to-talk push notifications is to wake up the app when in the background to answer a call; this means we are not running and therefore cannot respond to network changes to try to work around this issue.
6
0
490
Apr ’25
Is PushToTalk Framework Half-Duplex Only, or Does It Include Built-in Full-Duplex Audio Capabilities?
Hello everyone, Our team is currently developing an iOS application requiring real-time audio communication and evaluating the most suitable frameworks. Options include CallKit, custom solutions using AVAudioEngine/Audio Units, and the PushToTalk framework. Regarding the PushToTalk framework, we have some questions about its core design and capabilities that we'd appreciate clarification on from the community or Apple engineers. Based on the PushToTalk framework documentation, its API design (e.g., methods like requestBeginTransmission, endTransmission which imply explicitly requesting transmission rights), and its system UI integration, it strongly appears oriented towards half-duplex communication scenarios, similar to traditional walkie-talkies where only one participant transmits audio at a time. Is this understanding accurate? Is the PushToTalk framework's design strictly limited to managing half-duplex audio interactions? Or, does the framework itself also provide built-in mechanisms or APIs to manage simultaneous, bi-directional (full-duplex) audio streaming between participants? To be clear, we are asking about the inherent capabilities of the PushToTalk framework itself. We understand it's possible to use PushToTalk for signaling and UI management, and separately implement the actual full-duplex audio stream using AVAudioEngine or other audio APIs. However, we want to confirm if the framework itself is designed to support or simplify full-duplex audio communication. Have other developers investigated the specific limitations or capabilities of the PushToTalk framework regarding audio transmission modes (half-duplex vs. full-duplex)? Are there any official documentation references or WWDC sessions that explicitly clarify the framework's support (or lack thereof) for full-duplex operation? If PushToTalk is indeed limited to half-duplex, what are the generally accepted best practices for apps requiring full-duplex calls – transitioning directly to CallKit (where applicable) or building custom audio processing pipelines? Clarifying this point is crucial for us to select the correct technology stack for our application. Any relevant insights, documentation pointers, or shared development experiences would be greatly appreciated. Thank you for your help!
1
0
99
Apr ’25
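For what it's worth, the framework's transmission mode can be switched per channel, while the bidirectional audio itself still has to be implemented with the app's own audio pipeline. A hedged sketch, assuming an existing channelManager and channelUUID and the setTransmissionMode(_:channelUUID:completionHandler:) API:

    import PushToTalk

    func enableFullDuplex(channelManager: PTChannelManager, channelUUID: UUID) {
        // Ask the framework to treat the channel as full duplex; the actual
        // simultaneous audio streams remain the app's responsibility
        // (AVAudioEngine, WebRTC, etc.).
        channelManager.setTransmissionMode(.fullDuplex, channelUUID: channelUUID) { error in
            if let error {
                print("Could not switch transmission mode: \(error)")
            }
        }
    }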
PTT Framework Restrictions
We are attempting to update our app to use the PTT framework, as it has been made clear that this will be required in a future iOS version as opposed to using the Unrestricted VoIP entitlement we are using for several features of our app. However, the behavior of this framework poses some problems with implementing our app's functionality: It is not possible to programmatically join a channel when the app is not in the foreground. This hinders our ability to implement the Automatically activate radio stream feature of our app, which allows users who have opted into this feature to immediately begin hearing live PTT audio from their agency following an incident alert. Having the app constantly "joined to a channel" and using the restoration delegate could potentially work, however this is not ideal as this would result in the PTT UI needing to be displayed at all times, even when no radio stream is activated. We have a "Text to Speech" option that, when enabled, reads out the content of an incident alert after the alert sound has played. This currently happens by triggering an AVSpeechSynthesizer in the PushKit incoming push callback. It may be possible to render TTS audio on the fly in a Notification Service Extension and assign it as the notification's sound, if that is possible this is less of a problem. We also use the PushKit callback to, again if the user has enabled it, activate a "Shake to Respond" feature, allowing a short period of time after receiving an incident alert in which the user can shake their device to indicate that they are responding to the incident. There does not appear to be any way to have the level of background execution required to implement this using an NSE, and this is of course beyond the scope of the PTT framework. What options do we have to be able to continue to provide this functionality, without risk of it being disabled in a future iOS version?
2
0
376
Apr ’25
PTT Framework has compatibility issue with .voiceChat AVAudioSession mode
As I've mentioned before, our app uses the PTT Framework to record and send audio messages. In one of the modes supported by the app we are using the WebRTC.org library for that purpose. Internally, the WebRTC.org library uses the Voice-Processing I/O Unit (kAudioUnitSubType_VoiceProcessingIO subtype) to retrieve audio from the mic. According to https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/voicechat, using the Voice-Processing I/O Unit implicitly enables the .voiceChat AVAudioSession mode (i.e. it looks like it's not possible to use the Voice-Processing I/O Unit without .voiceChat mode). The problem is the following: when the user starts an outgoing PTT, the PTT Framework plays an audio notification, but with .voiceChat mode enabled that sound plays distorted or does not play at all. Questions: Is this a known issue? Is there any way to work around it?
17
0
695
Mar ’25
AVAudioRecorder records silence
We have an application using the PTT Framework to record audio messages while the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem is that one specific user has a frequent issue: the recorded audio contains only silence. I've checked almost everything I can imagine but didn't find any possible cause. Conditions: AVAudioRecorder uses the following configuration: [ AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue, AVFormatIDKey: kAudioFormatMPEG4AAC, AVNumberOfChannelsKey: 1, AVSampleRateKey: 16000.0 ]. The app waits for both didBeginTransmitting and didActivate audioSession from PTChannelManager (the audio session has the playback category at that moment). The app then changes the AVAudioSession category to playAndRecord. The app gets a routeChangeNotification with reason categoryChange and category playAndRecord. There are no interruption notifications from AVAudioSession during recording and no error notifications from AVAudioRecorder. Any idea what exactly I'm doing wrong? Is there anything else I should check? Thanks in advance. P.S. It looks like recording audio with an AudioUnit has the same issue, but let's exclude that from the question for now for simplicity.
3
0
340
Mar ’25
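For reference, a minimal sketch of the sequence described in the post above, written as a method on the delegate object with explicit error checks at each step so a silent recording can be traced to a failing call; recordingURL and the recorder property are placeholders.

    import AVFAudio
    import PushToTalk

    // Placeholder property on the delegate object:
    // var recorder: AVAudioRecorder?
    func channelManager(_ channelManager: PTChannelManager,
                        didActivate audioSession: AVAudioSession) {
        do {
            // The PTT framework activates the session; the app only switches
            // the category so the microphone route is actually requested.
            try audioSession.setCategory(.playAndRecord, mode: .default, options: [])

            let settings: [String: Any] = [
                AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
                AVFormatIDKey: kAudioFormatMPEG4AAC,
                AVNumberOfChannelsKey: 1,
                AVSampleRateKey: 16_000.0
            ]
            let recorder = try AVAudioRecorder(url: recordingURL, settings: settings)
            if !recorder.prepareToRecord() || !recorder.record() {
                // These return false instead of throwing, so check them explicitly.
                print("AVAudioRecorder failed to start")
            }
            self.recorder = recorder
        } catch {
            print("Recording setup failed: \(error)")
        }
    }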
Not able to receive silent pushes in background
I’ve developed the Pro Talkie app—a walkie-talkie solution designed to keep you connected with family and friends App Store: https://apps.apple.com/in/app/pro-talkie/id6742051063 Play Store: https://play.google.com/store/apps/details?id=com.protalkie.app While the app works flawlessly on Android and in the foreground on iOS, I’m facing issues with establishing connections when the app is in the background or terminated on iOS. Specifically, I’ve attempted the following: Silent pushes and alert payloads: These are intended to wake the app in the background, but they often fail—notifications may not be received or can be delayed by 20–30 minutes, leading to a poor user experience. VoIP pushes: These reliably wake the app, but they trigger the incoming call UI, which isn’t suitable for a walkie-talkie app that should connect directly without displaying a call screen. I’ve enabled all the necessary background modes (audio, remote notifications, VoIP, background fetch, processing), but the challenge remains. How can I ensure a consistent background connection on iOS without triggering the call UI?
1
0
412
Mar ’25
PTTFramework w/ AVAudioSession
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduce the PTTFramework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call. I am looking to use the PTTFramework to allow a user to trigger this push-to-talk behavior from the lock screen and the various places within the system UI it provides. However, I am unsure what the correct behavior is regarding the handling of the audio session. Right now I am using .playback when there is no active voice transmission so that devices such as AirPods can be in A2DP mode where applicable, and then transitioning to the .playAndRecord category only when the mic input should become active. Following this change in my AVAudioEngine manager, I am then manually activating and deactivating the audio session when the engine is either playing/recording or idle. The documentation states that you should not attempt to activate or deactivate your audio session directly, but allow the framework to handle it. Does that mean that I need to either call the request-to-transmit delegate function or set an active participant on the channel manager first, and then wait for the didActivate delegate method to trigger before I actually attempt to play or record any audio? (I am using the fullDuplex mode currently.) I noticed that that delegate method will only trigger if the audio session wasn't active before doing one of the above (setting an active participant, requesting transmit). Lastly, the PTTFramework also mentions that we get support for PTT devices, and I notice the didBeginTransmittingFrom parameter has a handsfreeButton case. Is there any documentation or resources on what is actually supported out of the box for this? I am currently working on handling a lot of the push to talk through Bluetooth LE, and wanted to make sure there wasn't overlap with what the system provides. Thank you!
2
0
552
Feb ’25
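A small sketch of the two triggers that lead the framework to activate the audio session, which is the ordering question asked in the post above; the channel manager, channel UUID, and participant are assumed to already exist, and playback or recording should only begin once channelManager(_:didActivate:) fires.

    import PushToTalk

    func startOutgoingAudio(manager: PTChannelManager, channelUUID: UUID) {
        // Outgoing: request transmit rights; the framework activates the
        // session and then calls channelManager(_:didActivate:).
        manager.requestBeginTransmitting(channelUUID: channelUUID)
    }

    func startIncomingAudio(manager: PTChannelManager, channelUUID: UUID, participant: PTParticipant) {
        // Incoming (full duplex): mark the remote speaker as active; the
        // framework activates the session only if it was not already active.
        manager.setActiveRemoteParticipant(participant, channelUUID: channelUUID) { error in
            if let error { print("Failed to set active participant: \(error)") }
        }
    }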
AVAudioEngine Stop Method
Hi all! I have been experiencing some issues when using AVAudioEngine to play audio and record input while doing a voice chat (through the PTT interface). I noticed that if I connect any players to the audio graph or call start, the audio session becomes active (this is on iOS). I don't see anything in the docs or the header files in AVFoundation, but is it possible that calling the stop method on an engine deactivates the audio session too? In a normal app this behavior seems logical, but when using PTT all activation and deactivation of the audio session must go through the framework and its delegate methods. The issue I am debugging is that when the engine with the input node tapped gets stopped, and there is a gap between the input ending and the server replying with inbound audio to be played, something seems to get the hardware/audio session into a jammed state. Thanks for any feedback and/or confirmation of this behavior!
2
0
589
Feb ’25
Audio of an ongoing AVCaptureSession video recording stops when a Push To Talk call initialises the audio session, while the video recording continues
We have a Push To Talk application which allows users to record video and audio. When a user is recording video using AVCaptureSession and receives a Push To Talk call, from the moment the Push To Talk call is received the audio in the video being captured stops while the video capture is still in progress. After the PTT call is completed, we have tried restarting the audio session; no errors are printed, but we still don't see the audio resume in the video capture. We have also tried to add a new input to the AVCaptureSession, but we receive an error that results in the video capture stopping, shown below: [OS-PLT] [CameraManager] Movie file finished with error: Error Domain=AVFoundationErrorDomain Code=-11818 "Recording Stopped" UserInfo={AVErrorRecordingSuccessfullyFinishedKey=true, NSLocalizedDescription=Recording Stopped, NSLocalizedRecoverySuggestion=Stop any other actions using the recording device and try again., AVErrorRecordingFailureDomainKey=1, NSUnderlyingError=0x3026bff60 {Error Domain=NSOSStatusErrorDomain Code=-16414 "(null)"}}, success We have also raised a Feedback ticket on this: https://feedbackassistant.apple.com/feedback/16050598
1
0
550
Feb ’25
It's not possible to access the system PTT UI when the app is backgrounded and "Always" location access is enabled for the app
Hello. Our app uses the PTT framework and "Always" location tracking at the same time. For some reason, after backgrounding the app, the Dynamic Island shows only the location tracking icon instead of the PTT icon, and when the user taps it the application is foregrounded instead of the system PTT UI being shown. Only after the first incoming PTT can the user access the system PTT UI. Is this a bug or intended behaviour?
1
0
396
Jan ’25