We are attempting to update our app to use the PTT framework, as it has been made clear that a future iOS version will require it in place of the Unrestricted VoIP entitlement we currently rely on for several of our app's features.
However, the behavior of this framework poses some problems with implementing our app's functionality:
It is not possible to programmatically join a channel when the app is not in the foreground. This hinders our ability to implement our app's "Automatically activate radio stream" feature, which lets users who have opted in immediately begin hearing live PTT audio from their agency after an incident alert. Keeping the app permanently "joined to a channel" and relying on the restoration delegate could potentially work; however, this is not ideal, as the PTT UI would then need to be displayed at all times, even when no radio stream is active.
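To make that trade-off concrete, here is a minimal sketch of the "always joined" approach, assuming a single fixed channel per agency; the class name, channel name, and UUID handling are placeholders rather than our actual implementation:

import PushToTalk
import AVFoundation
import UIKit

// Sketch only: joining still has to happen while the app is in the foreground,
// and once joined the system PTT UI stays visible whether or not a stream is active.
final class RadioChannelController: NSObject, PTChannelManagerDelegate, PTChannelRestorationDelegate {
    private var manager: PTChannelManager?
    private let channelUUID = UUID() // placeholder: one fixed channel per agency
    private let descriptor = PTChannelDescriptor(name: "Agency Radio", image: nil)

    func setUp() async throws {
        // Create the manager early in the app life cycle so channel restoration can occur.
        manager = try await PTChannelManager.channelManager(delegate: self,
                                                            restorationDelegate: self)
    }

    func joinChannel() {
        // Only honored while the app is foregrounded.
        manager?.requestJoinChannel(channelUUID: channelUUID, descriptor: descriptor)
    }

    // MARK: PTChannelRestorationDelegate
    func channelDescriptor(restoredChannelUUID channelUUID: UUID) -> PTChannelDescriptor {
        descriptor
    }

    // MARK: PTChannelManagerDelegate (stubs trimmed to the minimum)
    func channelManager(_ channelManager: PTChannelManager, didJoinChannel channelUUID: UUID, reason: PTChannelJoinReason) {}
    func channelManager(_ channelManager: PTChannelManager, didLeaveChannel channelUUID: UUID, reason: PTChannelLeaveReason) {}
    func channelManager(_ channelManager: PTChannelManager, channelUUID: UUID, didBeginTransmittingFrom source: PTChannelTransmitRequestSource) {}
    func channelManager(_ channelManager: PTChannelManager, channelUUID: UUID, didEndTransmittingFrom source: PTChannelTransmitRequestSource) {}
    func channelManager(_ channelManager: PTChannelManager, didActivate audioSession: AVAudioSession) {}
    func channelManager(_ channelManager: PTChannelManager, didDeactivate audioSession: AVAudioSession) {}
    func channelManager(_ channelManager: PTChannelManager, receivedEphemeralPushToken pushToken: Data) {}
    func incomingPushResult(channelManager: PTChannelManager, channelUUID: UUID, pushPayload: [String: Any]) -> PTPushResult {
        // Placeholder: a real implementation would map the push to an active remote participant.
        .leaveChannel
    }
}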
We have a "Text to Speech" option that, when enabled, reads out the content of an incident alert after the alert sound has played. This currently happens by triggering an AVSpeechSynthesizer in the PushKit incoming push callback. It may be possible to render TTS audio on the fly in a Notification Service Extension and assign it as the notification's sound, if that is possible this is less of a problem.
We also use the PushKit callback, again if the user has enabled it, to activate a "Shake to Respond" feature, which allows a short period after receiving an incident alert during which the user can shake their device to indicate that they are responding to the incident. There does not appear to be any way to get the level of background execution this requires from an NSE, and it is of course beyond the scope of the PTT framework.
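For context, the current mechanism is roughly the following: the PushKit callback opens a short accelerometer window and treats a large acceleration spike as a shake. The threshold, window length, and respondToIncident() helper are illustrative placeholders rather than our exact values:

import Foundation
import PushKit
import CoreMotion

// Sketch of the existing PushKit-driven "Shake to Respond" window.
final class IncidentPushHandler: NSObject, PKPushRegistryDelegate {
    private let motionManager = CMMotionManager()

    func pushRegistry(_ registry: PKPushRegistry, didUpdate pushCredentials: PKPushCredentials, for type: PKPushType) {}

    func pushRegistry(_ registry: PKPushRegistry,
                      didReceiveIncomingPushWith payload: PKPushPayload,
                      for type: PKPushType,
                      completion: @escaping () -> Void) {
        startShakeWindow(seconds: 15) // placeholder window length
        completion()
    }

    private func startShakeWindow(seconds: TimeInterval) {
        guard motionManager.isAccelerometerAvailable else { return }
        motionManager.accelerometerUpdateInterval = 0.05
        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let a = data?.acceleration else { return }
            let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
            if magnitude > 2.5 { // crude shake threshold (placeholder)
                self?.motionManager.stopAccelerometerUpdates()
                self?.respondToIncident()
            }
        }
        DispatchQueue.main.asyncAfter(deadline: .now() + seconds) { [weak self] in
            self?.motionManager.stopAccelerometerUpdates()
        }
    }

    private func respondToIncident() {
        // Placeholder: report the "responding" status to the backend.
    }
}

This depends on the background runtime the VoIP push currently gives us, which is exactly what we stand to lose without the Unrestricted VoIP entitlement.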
What options do we have to continue providing this functionality without the risk of it being disabled in a future iOS version?
Push To Talk
Let people send and receive audio in your app with the push of a button.
Posts under Push To Talk tag
As I've mentioned before, our app uses the PTT Framework to record and send audio messages. In one of the modes the app supports we use the WebRTC.org library for that purpose. Internally, the WebRTC.org library uses the Voice-Processing I/O Unit (kAudioUnitSubType_VoiceProcessingIO subtype) to retrieve audio from the mic. According to https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/voicechat, using the Voice-Processing I/O Unit implicitly enables the .voiceChat AVAudioSession mode (i.e. it appears to be impossible to use the Voice-Processing I/O Unit without .voiceChat mode).
The problem is the following: when the user starts an outgoing PTT, the PTT Framework plays an audio notification, but with .voiceChat mode enabled that sound plays distorted or does not play at all.
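The implicit mode switch can be observed with a small check like the one below; it uses AVAudioEngine's voice processing as a stand-in for the VPIO unit that WebRTC creates internally, and assumes microphone permission has already been granted:

import AVFoundation

// Diagnostic sketch, not a fix: log the session mode before and after bringing up
// a voice-processing input path.
func logImplicitVoiceChatMode() {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord)
    try? session.setActive(true)
    print("Mode before voice processing: \(session.mode.rawValue)")

    let engine = AVAudioEngine()
    let input = engine.inputNode
    try? input.setVoiceProcessingEnabled(true)
    engine.connect(input, to: engine.mainMixerNode, format: input.outputFormat(forBus: 0))
    engine.mainMixerNode.outputVolume = 0 // keep the diagnostic silent
    try? engine.start()
    print("Mode after voice processing: \(session.mode.rawValue)") // typically .voiceChat once the unit is live
    engine.stop()
}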
Questions:
Is this a known issue?
Is there any way to work around it?
We have an application that uses the PTT Framework to record audio messages while the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem is that one specific user frequently hits an issue where the recorded audio contains only silence.
I've checked almost everything I can think of but haven't found a possible cause.
Conditions:
AVAudioRecorder uses the following configuration:
let settings: [String: Any] = [
    AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 16000.0
]
The app waits for both didBeginTransmitting and didActivate audioSession from PTChannelManager (the audio session has the playback category at that point).
The app then changes the AVAudioSession category to playAndRecord.
The app receives a routeChangeNotification with reason categoryChange and category = playAndRecord.
There are no interruption notifications from AVAudioSession during recording.
There are no error notifications from AVAudioRecorder. (A condensed sketch of this sequence follows.)
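The sketch below assumes it only runs after both didBeginTransmittingFrom and channelManager(_:didActivate:) have fired; the file location is a placeholder:

import AVFoundation

// Sketch only: the PTT framework has already activated the session, so we only change
// its category before starting the recorder with the settings listed above.
func startPTTRecording(on audioSession: AVAudioSession) throws -> AVAudioRecorder {
    try audioSession.setCategory(.playAndRecord)

    let settings: [String: Any] = [
        AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 16000.0
    ]

    let url = FileManager.default.temporaryDirectory.appendingPathComponent("ptt-message.m4a")
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    if !recorder.record() {
        // Recording failed to start; this is where the silent-output case would surface.
    }
    return recorder
}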
Any idea what exactly I'm doing wrong? Is there anything else I should check?
Thanks in advance.
P.S. It looks like recording audio with an AudioUnit has the same issue, but let's leave that out of the question for now for simplicity.
I've developed the Pro Talkie app, a walkie-talkie solution designed to keep you connected with family and friends.
App Store: https://apps.apple.com/in/app/pro-talkie/id6742051063
Play Store: https://play.google.com/store/apps/details?id=com.protalkie.app
While the app works flawlessly on Android and in the foreground on iOS, I’m facing issues with establishing connections when the app is in the background or terminated on iOS.
Specifically, I’ve attempted the following:
Silent pushes and alert payloads: These are intended to wake the app in the background, but they often fail—notifications may not be received or can be delayed by 20–30 minutes, leading to a poor user experience.
VoIP pushes: These reliably wake the app, but they trigger the incoming call UI, which isn’t suitable for a walkie-talkie app that should connect directly without displaying a call screen.
I’ve enabled all the necessary background modes (audio, remote notifications, VoIP, background fetch, processing), but the challenge remains.
How can I ensure a consistent background connection on iOS without triggering the call UI?
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduce the PTTFramework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call.
I am looking to use the PTTFramework to allow a user to trigger this push-to-talk behavior from the lock screen and the various places within the system UI it provides.
However, I am unsure what the correct behavior is regarding the handling of the audio session. Right now I use the .playback category when there is no active voice transmission, so that devices such as AirPods can be in A2DP mode where applicable, and transition to the .playAndRecord category only when the mic input should become active. Following this change, my AVAudioEngine manager manually activates and deactivates the audio session when the engine is playing/recording or idle.
In the documentation it states that you should not attempt to activate or deactivate your audio session directly, but allow the framework to handle it.
Does that mean that I need to either call the request-to-transmit function on the channel manager or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to trigger before I actually attempt to play or record any audio? (I am using the fullDuplex mode currently.) I noticed that that delegate method will only trigger if the audio session wasn't already active before doing one of the above (setting an active participant, requesting transmit).
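In case it helps frame the question, this is the gating pattern I am describing: the engine only starts once the framework activates the session and is torn down when it deactivates, without ever calling setActive directly. The class and method names are placeholders for my AVAudioEngine manager, and the two methods are assumed to be invoked from channelManager(_:didActivate:) and channelManager(_:didDeactivate:):

import AVFoundation

// Sketch only: engine lifecycle gated on the PTT framework's session callbacks.
final class DuplexEngineController {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    init() {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)
    }

    // Call from channelManager(_:didActivate:); only now start playback/recording.
    func audioSessionDidActivate() {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Hand microphone buffers to the streaming layer here.
        }
        try? engine.start()
        player.play()
    }

    // Call from channelManager(_:didDeactivate:); stop everything, but never call
    // AVAudioSession.setActive(false) ourselves.
    func audioSessionDidDeactivate() {
        player.stop()
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}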
Lastly, when using the PTTFramework it also mentions that we get support for PTT devices, and I notice the didBeginTransmittingFrom source has a handsfreeButton case. Is there any documentation or other resource on what is actually supported out of the box? I am currently handling a lot of the push-to-talk behavior over Bluetooth LE, and wanted to make sure there wasn't overlap with what the system provides.
Thank you!
Hi all!
I have been experiencing some issues when using AVAudioEngine to play audio and record input while doing a voice chat (through the PTT interface).
I noticed that if I connect any players to the audio graph OR call start, the audio session becomes active (this is on iOS).
I don't see anything in the docs or the header files in AVFoundation, but is it possible that calling the stop method on an engine deactivates the audio session too?
In a normal app this behavior seems logical, but when using PTT all activation and deactivation of the audio session must go through the framework and its delegate methods.
The issue I am debugging is that when the engine (with the input node tapped) gets stopped, and there is a gap between the input ending and the server replying with inbound audio to be played, something seems to get the hardware/audio session into a jammed state.
Thanks for any feedback and/or confirmation on this behavior!
We have a Push To Talk application which allows the user to record video and audio.
When the user is recording a video using AVCaptureSession and receives a Push To Talk call, from the moment the Push To Talk call is received the audio in the video being captured stops, while the video capture is still in progress.
After the PTT call is completed, we have tried restarting the audio session; no errors get printed, but we still don't see the audio resume in the video capture.
We have also tried adding a new input to the AVCaptureSession, but we receive an error that results in the video capture stopping; the error is mentioned below:
[OS-PLT] [CameraManager] Movie file finished with error: Error Domain=AVFoundationErrorDomain Code=-11818 "Recording Stopped" UserInfo={AVErrorRecordingSuccessfullyFinishedKey=true, NSLocalizedDescription=Recording Stopped, NSLocalizedRecoverySuggestion=Stop any other actions using the recording device and try again., AVErrorRecordingFailureDomainKey=1, NSUnderlyingError=0x3026bff60 {Error Domain=NSOSStatusErrorDomain Code=-16414 "(null)"}}, success
We have also raised a Feedback Ticket on same: https://feedbackassistant.apple.com/feedback/16050598
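For reference, here is a sketch of the kind of audio-input reconfiguration described above (not our exact code); captureSession is assumed to be the already-running AVCaptureSession:

import AVFoundation

// Sketch only: remove any stale audio input and re-add the microphone inside a single
// configuration block. The errors above suggest the recording device is still held
// elsewhere when this runs, so timing relative to the PTT session's deactivation
// likely matters.
func reattachAudioInput(to captureSession: AVCaptureSession) {
    captureSession.beginConfiguration()
    defer { captureSession.commitConfiguration() }

    for input in captureSession.inputs {
        if let deviceInput = input as? AVCaptureDeviceInput,
           deviceInput.device.hasMediaType(.audio) {
            captureSession.removeInput(deviceInput)
        }
    }

    if let mic = AVCaptureDevice.default(for: .audio),
       let audioInput = try? AVCaptureDeviceInput(device: mic),
       captureSession.canAddInput(audioInput) {
        captureSession.addInput(audioInput)
    }
}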
Hello.
Our app uses the PTT framework and "Always" location tracking at the same time. For some reason, after the app is backgrounded, the Dynamic Island shows only the location-tracking icon instead of the PTT icon. And when the user taps on it, the application is foregrounded instead of the system PTT UI being shown. Only after the first incoming PTT can the user access the system PTT UI.
Is it a bug or intended behaviour?
I tried the following at 2:00 PM on 21/01/2025 (JST).
Apple Push Notification service server certificate update
Following the above, a new server certificate ("SHA-2 Root: USERTrust RSA Certification Authority") was added to my push server, but a certificate error occurred and push notifications could not be sent.
So, referring to this article, instead of connecting to api.development.push.apple.com via DNS name resolution,
I pinned api.development.push.apple.com to "17.188.143.34" in /etc/hosts,
and then I could send push notifications with the new server certificate.
(I got this IP, 17.188.143.34, from that article.)
From this, I suspect that Apple had not yet updated the APNs certificate (CA) for the Sandbox environment as of 2:00 PM on January 21, 2025 (JST).
Was the update published as scheduled?