Posts

Post not yet marked as solved
4 Replies
1.7k Views
Hello, Our use case is screen sharing in a live video call. We use a broadcast extension to capture the screen and send frames. The broadcast extension has a hard memory limit of 50 MB. Screen sharing works great on iPhones, but on iPad ReplayKit delivers larger frames, and as a result the extension's memory usage goes beyond 50 MB. While profiling we noticed that the memory used by our own code is under 25 MB, but on iPad ReplayKit has memory spikes that push total usage beyond the 50 MB limit. How should I achieve the screen sharing use case on iPads? Is there a guideline for this? Any suggestion/help is appreciated. Best, Piyush
Posted by ptank.
Post not yet marked as solved
2 Replies
1k Views
Hello, Our use case is a bi-directional Audio + Video over IP iOS application. We noticed that if a user is playing audio with AVPlayer and, while it is playing, initiates an Audio + Video call in the app, the volume of the AVPlayer gets reduced. When I am connected to the VoIP call, I can hear both the AVPlayer audio and the audio of the remote participant, but the AVPlayer volume is lower than it was before the call. I also noticed that the AVPlayer volume stays low even after the call is disconnected; it is never restored. Is this expected? If yes, is there a workaround for this problem? Thanks, Piyush
Posted by ptank.
Post not yet marked as solved
3 Replies
1.4k Views
Hello, Our use case is VoIP calling. There are multiple answers available on the internet on how a device token should be parsed. We are using the following technique to parse the device token; let us know if there is an Apple-recommended way of doing this:

- (NSString *)deviceTokenWithData:(NSData *)data {
    if (![data isKindOfClass:[NSData class]] || [data length] == 0) {
        TVOLogError(@"Invalid device token");
        return nil;
    }
    const char *tokenBytes = (const char *)[data bytes];
    NSMutableString *deviceTokenString = [NSMutableString string];
    for (NSUInteger i = 0; i < [data length]; ++i) {
        [deviceTokenString appendFormat:@"%02.2hhx", tokenBytes[i]];
    }
    return deviceTokenString;
}

Also, it would be nice if Apple provided a toString method to get a string out of the device token's raw bytes.
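For reference, the byte-to-hex conversion above does not depend on any Apple API; the same logic can be sketched in Python (the helper name here is hypothetical, chosen for illustration):

```python
def device_token_to_hex(token: bytes) -> str:
    """Render raw device-token bytes as a lowercase hex string,
    matching the two-hex-digits-per-byte output of the Objective-C loop."""
    if not token:
        raise ValueError("Invalid device token")
    # bytes.hex() emits exactly "%02x" per byte, lowercase, no separators.
    return token.hex()

print(device_token_to_hex(b"\xde\xad\xbe\xef\x01"))  # deadbeef01
```

This is only a sketch of the encoding itself; in an iOS app the conversion would still be written in Objective-C or Swift against the NSData/Data token delivered by the push-registration callback.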
Posted by ptank.
Post not yet marked as solved
2 Replies
1.7k Views
Hello, Our use case is in-app screen sharing. A few of our customers have reported a problem while using this sample app: ReplayKit throws a "Recording interrupted by multitasking and content resizing" error and the in-app recording does not start. I was able to reproduce the problem once on iOS 12.2, iPad Pro 12.9-inch 2nd gen, during multitasking. Here is the code which starts the in-app recording and runs into the problem: https://github.com/twilio/video-quickstart-swift/blob/master/ReplayKitExample/ReplayKitExample/ViewController.swift#L343 Once it gets into this state it never recovers. A reply at https://forums.developer.apple.com/thread/109696 suggested doing "Reset All Settings"; I tried that and the problem went away. I haven't seen the problem since, but "Reset All Settings" is not an ideal fix. Thank you, Piyush
Posted by ptank.
Post not yet marked as solved
1 Reply
2.2k Views
Hello, On iOS 12, if an application links the WebRTC framework, the following error is thrown at runtime:

Class RTCUIApplicationStatusObserver is implemented in both /System/Library/PrivateFrameworks/WebCore.framework/Frameworks/libwebrtc.dylib () and /private/var/containers/Bundle/Application/IDENTIFIER/AppRTCMobile.app/Frameworks/WebRTC.framework/WebRTC (). One of the two will be used. Which one is undefined.

This error is thrown for other RTC classes as well. This looks like something not intended by Apple; it would be nice if Apple provided a fix for WebRTC developers. The problem is reproducible when an app consumes the WebRTC framework, or by running the AppRTCMobile app on iOS 12. It reproduces on iOS 12 Beta 1, Beta 2, and Beta 3. I have filed a bug for this issue (41896685) and will post an update. Thanks, Piyush
Posted by ptank.
Post not yet marked as solved
2 Replies
2.7k Views
Hello, After setting the preferred sample rate by calling `setPreferredSampleRate` on `AVAudioSession`, querying `sampleRate` on the session returns the preferred rate most of the time. However, in a few cases (e.g. iPad Pro running iOS 11), `AVAudioSession.sampleRate` does not get updated. We observed that the sample rate does get updated if the preferred rate is set after the AVAudioSession is activated. The setPreferredSampleRate documentation says: "This method requests a change to the input and output audio sample rate. To see the effect of this change, use the sampleRate property. You can set a preferred sample rate before or after activating the audio session." Should I not expect `sampleRate` to be updated immediately after setting the preferred sample rate when the audio session is not yet active? Thank you, Piyush
Posted by ptank.
Post not yet marked as solved
0 Replies
511 Views
I am setting the `AVAudioSessionCategoryPlayback` category and then running the Voice Processing I/O audio unit. In my test, checking AVAudioSession's currentRoute.inputs.count does not return consistent results. My observations:

Returns 0 for iPhone 7 Plus running iOS 12 Beta
Returns 0 for iPhone 6s running iOS 11.0
Returns 1 for iPhone 6 running iOS 11.4.1
Returns 1 for iPad Pro running iOS 11

I was expecting that when the category is set to `AVAudioSessionCategoryPlayback`, currentRoute.inputs.count would always return 0. Is this an incorrect expectation?
Posted by ptank.
Post not yet marked as solved
2 Replies
3.9k Views
Hello, Our use case is a real-time audio (VoIP) application. I wanted to use AVAudioEngine to capture and render audio. This is how I am setting up the AVAudioEngine nodes in my app:

AVAudioUnit -> MainMixer -> Speaker
Microphone -> InputNode -> MicMixer -> Tap (write audio to a file)

When writing the tapped buffers to a file, I noticed that the audio played on the speaker gets captured by the microphone and written to the file. I do not want the audio played on the speaker to be captured. Is Acoustic Echo Cancellation (AEC) supported by AVAudioEngine? Do I need to make any specific configuration or design changes to achieve AEC? Is there a recommendation on how to achieve the mentioned use case? Any help is greatly appreciated. Thank you, Piyush
Posted by ptank.
Post not yet marked as solved
0 Replies
571 Views
Hello, Our use case is a video conferencing application. We are planning to use the H.264 hardware encoder and decoder in our application. Is there a limit on how many hardware encoders and decoders can be created and/or used at a time? How can an app detect that the maximum number of encoders and decoders has been reached? Thank you, Piyush
Posted by ptank.