Hi everyone,
I’m building a React Native iOS app that integrates Wazo (native WebRTC) and Jitsi Meet (WebRTC running inside a WebView).
Use case:
Wazo is used to maintain a background call session (mainly signaling + audio keep-alive).
Jitsi is used in the foreground for video calls.
Problem:
When Jitsi starts, it takes control of the microphone and camera.
The Wazo call disconnects after ~5 minutes (likely due to a media / audio-session conflict).
Even when Wazo's audio and video are muted or its tracks are disabled, the session still drops.
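For context, this is roughly how I disable the Wazo-side tracks before Jitsi starts (a sketch using react-native-webrtc; `localStream` is a placeholder for wherever the Wazo SDK exposes its local MediaStream, since the exact accessor depends on the SDK version):

```ts
import { MediaStream } from 'react-native-webrtc';

// `localStream` is a placeholder for the MediaStream the Wazo session
// captured via react-native-webrtc.
function disableWazoMedia(localStream: MediaStream): void {
  localStream.getTracks().forEach(track => {
    // Disabling a track keeps it attached to the peer connection but
    // sends silence / black frames -- it does NOT release the mic or
    // camera back to iOS.
    track.enabled = false;
  });
}
```

My understanding is that `enabled = false` never releases the capture device back to iOS, which may be why the two stacks still conflict.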
My questions:
Is it officially supported or recommended to run two WebRTC stacks (Wazo + Jitsi) simultaneously on iOS?
Can Wazo stay connected without active audio/video tracks while Jitsi uses mic/camera?
Is there a way to release Wazo's media streams temporarily (but keep signaling alive) while Jitsi is loading or active? (A sketch of what I mean follows this list.)
Are there any AVAudioSession / background mode limitations on iOS that make this impossible by design?
If this is not supported, what is the recommended architecture (single WebRTC pipeline, switching media ownership, etc.)?
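To make the third question concrete, this is the kind of flow I'm hoping is possible (a sketch only, assuming a recent react-native-webrtc with Unified Plan / `replaceTrack` support; `getWazoPeerConnection()` is a hypothetical accessor, since I don't know whether the Wazo SDK exposes its underlying RTCPeerConnection):

```ts
import { mediaDevices, RTCPeerConnection } from 'react-native-webrtc';

// Hypothetical accessor -- the Wazo SDK would need to expose its
// underlying RTCPeerConnection for this to be possible.
declare function getWazoPeerConnection(): RTCPeerConnection;

// Senders whose tracks were removed, so they can be restored later.
const emptiedSenders: ReturnType<RTCPeerConnection['getSenders']> = [];

// Before launching Jitsi: detach and stop the local tracks so iOS frees
// the mic/camera while the peer connection (and signaling) stays up.
async function releaseWazoMedia(): Promise<void> {
  const pc = getWazoPeerConnection();
  for (const sender of pc.getSenders()) {
    const track = sender.track;
    if (!track) continue;
    await sender.replaceTrack(null); // sender stays negotiated, sends nothing
    track.stop();                    // actually releases the capture device
    emptiedSenders.push(sender);
  }
}

// After Jitsi ends: re-acquire audio and hand it back to the first
// emptied sender (audio-only here, since the Wazo leg is a keep-alive).
async function restoreWazoMedia(): Promise<void> {
  const stream = await mediaDevices.getUserMedia({ audio: true });
  const [audioTrack] = stream.getAudioTracks();
  const sender = emptiedSenders.shift();
  if (sender && audioTrack) {
    await sender.replaceTrack(audioTrack);
  }
}
```

The idea is that `replaceTrack(null)` keeps the transceiver and signaling alive while `track.stop()` frees the hardware; whether the Wazo server side tolerates a sender with no track for several minutes is exactly what I'm unsure about.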
Environment:
iOS (React Native)
Wazo SDK (native WebRTC)
Jitsi Meet (WebView)
CallKit + PushKit enabled
Any guidance, documentation, or real-world experience would be greatly appreciated.
Thanks in advance 🙏