Issue Summary:
In our Flutter application, we use Tencent's TRTC API for voice and video communication. While the broadcast functionality works correctly on Android, it does not respond on iOS devices: attempting to initiate a broadcast produces no action, and long-pressing the screen-recording button does not reveal the broadcast extension.
Steps to Reproduce:
1. Add Broadcast Upload Extension:
In Xcode, navigate to File > New > Target.
Select Broadcast Upload Extension and add it to the project.
2. Build the Project:
Attempt to build the project.
Encounter the error: "Cycle inside Runner; building could produce unreliable results."
3. Resolve Build Cycle Error:
Go to the project’s Build Phases.
Locate the Embed App Extensions phase.
Move Embed App Extensions just below Copy Bundle Resources.
Ensure the Copy only when installing option is selected.
Rebuild the project; the cycle error is resolved.
4. Test Broadcast Functionality:
Install the app on an iOS device.
Tap the broadcast button; observe no response.
Long-press the screen-recording button in Control Center (the pull-down menu at the top right); the broadcast extension is not listed.
5. Isolate the Issue:
Create a new Flutter project.
Repeat the above steps to add the broadcast upload extension.
The issue persists: broadcast functionality remains unresponsive on iOS.
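For context, a broadcast button on iOS typically works by presenting the system broadcast picker (RPSystemBroadcastPickerView), whose preferredExtension must exactly match the Broadcast Upload Extension's bundle identifier; otherwise the extension is not listed. Below is a minimal Objective-C sketch of triggering the picker from native code, assuming a hypothetical extension bundle ID; forwarding a tap to the picker's internal button is a common community workaround, not official API.
#import <UIKit/UIKit.h>
#import <ReplayKit/ReplayKit.h>
// Minimal sketch (iOS 12+): present the system broadcast picker programmatically.
- (void)showBroadcastPicker {
    if (@available(iOS 12.0, *)) {
        RPSystemBroadcastPickerView *picker =
            [[RPSystemBroadcastPickerView alloc] initWithFrame:CGRectMake(0, 0, 44, 44)];
        // Placeholder: must exactly match the Broadcast Upload Extension's bundle ID.
        picker.preferredExtension = @"com.example.app.BroadcastExtension";
        picker.showsMicrophoneButton = NO;
        // Community workaround: forward a tap to the picker's internal button
        // so the system sheet opens without the user tapping the picker view itself.
        for (UIView *subview in picker.subviews) {
            if ([subview isKindOfClass:[UIButton class]]) {
                [(UIButton *)subview sendActionsForControlEvents:UIControlEventTouchUpInside];
            }
        }
    }
}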
There are different kinds of screen-sharing applications, all using different APIs. The API used by AnyDesk or TeamViewer, for example, doesn't require any light signal (recording indicator). I wonder if this is more appropriate for a corporate context, i.e. MDM. Could a screen-sharing application be created and validated by Apple that displays no light signal, and that can access the user's screen whenever needed after an initial acceptance? In other words, the user accepts to share their screen once, but won't be asked to accept again the next time. Or is this impossible on iOS? I'd appreciate any answers.
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.
What I Have Tried So Far
I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring. Here are some of my findings:
Google Cast SDK:
Google Cast SDK is primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real-time.
ReplayKit for Screen Capture:
RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream (a minimal sketch follows this list). However, sending this stream to Google TV in real time is a challenge: I could potentially encode the video as HLS and stream it, but the delay is significant.
RTSP/UDP Streaming:
Some third-party libraries support RTSP/UDP streaming for real-time screen sharing. Google TV does not natively support RTSP, making this approach difficult.
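As a point of reference for the ReplayKit approach mentioned above, here is a minimal Objective-C sketch of in-app screen capture. Encoding (e.g., via VideoToolbox) and the transport to the TV are deliberately left out, since those are exactly the open questions.
#import <ReplayKit/ReplayKit.h>
- (void)startScreenCapture {
    RPScreenRecorder *recorder = [RPScreenRecorder sharedRecorder];
    recorder.microphoneEnabled = NO;
    [recorder startCaptureWithHandler:^(CMSampleBufferRef sampleBuffer,
                                        RPSampleBufferType bufferType,
                                        NSError *error) {
        if (error != nil || bufferType != RPSampleBufferTypeVideo) {
            return;
        }
        // Each video sample buffer would be handed to an encoder here
        // before being sent over the network to the receiver.
    } completionHandler:^(NSError *error) {
        if (error != nil) {
            NSLog(@"startCapture failed: %@", error);
        }
    }];
}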
My Questions:
Is it possible to achieve real-time screen mirroring on Google TV using Google Cast SDK?
Does Google TV support WebRTC or any low-latency streaming protocol that can be used from iOS?
Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency?
I would appreciate any guidance, code examples, or references to relevant documentation.
I'm trying to make an app that can quietly run in the background. It needs to detect other apps' or the system's incoming video and/or audio, using only on-device resources, to determine whether the caller might be a scammer.
It will tap into an escalating cascade of resources to do so. For video/image scam detection, it uses OpenCV to detect faces and then refers to a known database of reported scam imagery. For audio scam calls, we defer to known techniques of voice modulation in frequency and/or amplitude. Each video and/or audio result will be relayed via a notification banner as well as recorded in-app. Crucially, if a result is uncertain, users have the option to submit it to a global collaborative cloud database for investigative teams: 60-second audio snippets, or a series of images in which faces were detected (the equivalent of 60 seconds).
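For the face-detection stage, a hedged sketch using Apple's Vision framework (as a stand-in for the OpenCV step described above) might look like the following. It only covers on-device detection on frames the app itself can access; matching detections against a scam-imagery database would be a separate stage of the cascade.
#import <Vision/Vision.h>
// Sketch: detect faces in a single frame entirely on-device.
- (void)detectFacesInPixelBuffer:(CVPixelBufferRef)pixelBuffer {
    VNDetectFaceRectanglesRequest *request = [[VNDetectFaceRectanglesRequest alloc]
        initWithCompletionHandler:^(VNRequest *req, NSError *error) {
            NSArray *faces = req.results;
            if (error == nil && faces.count > 0) {
                // A hit here would feed the next stage of the cascade
                // (database lookup, notification banner, optional upload).
                NSLog(@"Detected %lu face(s)", (unsigned long)faces.count);
            }
        }];
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc]
        initWithCVPixelBuffer:pixelBuffer options:@{}];
    NSError *error = nil;
    [handler performRequests:@[request] error:&error];
}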
In the end, we expect to deploy this app across most parts of Asia and Africa, thereby protecting generations of iPhone and iPad users.
However, we have not been able to find a method that does this, and we have found no documentation or contact able to provide such technical guidance.
Please assist.
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping.
I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            // Y (luma) plane of the buffer.
            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            // UV (chroma) plane -- not used by this test, shown for completeness.
            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
            (void)oUVData; (void)oUVSize;
            if (oYSize <= 672) {
                // Unlock before the early return so the lock is not leaked.
                CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
                return;
            }
            // Read one byte, copy the whole plane, then compare: if the copy and
            // the source differ, the source changed while the read-only lock was held.
            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);
            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default: {
            break;
        }
    }
}
Output:
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
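For comparison, a common way to snapshot a frame before doing any asynchronous work on it is a plane-by-plane deep copy into a freshly allocated CVPixelBuffer, respecting each plane's bytes-per-row. Here is a sketch that assumes a planar (e.g., NV12) buffer; note it does not by itself explain why the source bytes above change while a read-only lock is held.
#import <CoreVideo/CoreVideo.h>
#include <string.h>
// Sketch: deep-copy a planar pixel buffer; caller releases the result
// with CVPixelBufferRelease.
static CVPixelBufferRef CopyPixelBufferDeep(CVPixelBufferRef src) {
    CVPixelBufferRef dst = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        NULL,
                        &dst);
    if (dst == NULL) {
        return NULL;
    }
    CVPixelBufferLockBaseAddress(src, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(dst, 0);
    for (size_t plane = 0; plane < CVPixelBufferGetPlaneCount(src); plane++) {
        uint8_t *srcBase = CVPixelBufferGetBaseAddressOfPlane(src, plane);
        uint8_t *dstBase = CVPixelBufferGetBaseAddressOfPlane(dst, plane);
        size_t srcBPR = CVPixelBufferGetBytesPerRowOfPlane(src, plane);
        size_t dstBPR = CVPixelBufferGetBytesPerRowOfPlane(dst, plane);
        size_t rows = CVPixelBufferGetHeightOfPlane(src, plane);
        size_t bytes = srcBPR < dstBPR ? srcBPR : dstBPR; // row padding may differ
        for (size_t row = 0; row < rows; row++) {
            memcpy(dstBase + row * dstBPR, srcBase + row * srcBPR, bytes);
        }
    }
    CVPixelBufferUnlockBaseAddress(dst, 0);
    CVPixelBufferUnlockBaseAddress(src, kCVPixelBufferLock_ReadOnly);
    return dst;
}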
I understand that there are no delegate methods for this, and that whether the user has granted consent to Record Screen has to be evaluated via the block parameter.
When the user denies the permission by mistake and then tries again, the alert does not show up. How do I reset the permission so that the alert below is thrown again?
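For what it's worth, here is a sketch of how denial typically surfaces, assuming RPScreenRecorder's capture API: since there is no delegate callback, the error in the completion block is the signal to check, and RPRecordingErrorUserDeclined is ReplayKit's error code for the user tapping Don't Allow. As far as I know there is no public API for an app to reset this permission itself; when the system prompts again is controlled by iOS.
#import <ReplayKit/ReplayKit.h>
- (void)startCaptureAndCheckConsent {
    [[RPScreenRecorder sharedRecorder] startCaptureWithHandler:^(CMSampleBufferRef sampleBuffer,
                                                                 RPSampleBufferType bufferType,
                                                                 NSError *error) {
        // Sample buffers arrive here only after the user accepted the prompt.
    } completionHandler:^(NSError *error) {
        if (error != nil &&
            [error.domain isEqualToString:RPRecordingErrorDomain] &&
            error.code == RPRecordingErrorUserDeclined) {
            // User tapped "Don't Allow"; the system decides when to prompt again.
            NSLog(@"Screen recording declined: %@", error);
        }
    }];
}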