ReplayKit


Record or stream video from the screen and audio from the app and microphone using ReplayKit.

Posts under ReplayKit tag

17 Posts

Unable to find broadcast extension
Issue Summary: In our Flutter application, we use Tencent's TRTC API for voice and video communication. The broadcast functionality operates correctly on Android but fails to respond on iOS devices: attempting to initiate a broadcast results in no action, and long-pressing the screen-recording button in Control Center does not reveal the broadcast extension.

Steps to Reproduce:
1. Add a Broadcast Upload Extension: in Xcode, navigate to File > New > Target, select Broadcast Upload Extension, and add it to the project.
2. Build the project. The build fails with the error: "Cycle inside Runner; building could produce unreliable results."
3. Resolve the build-cycle error: go to the project's Build Phases, locate the Embed App Extensions phase, move it just below Copy Bundle Resources, and select the Copy only when installing option. Rebuild the project; the cycle error is resolved.
4. Test broadcast functionality: install the app on an iOS device and tap the broadcast button; observe no response. Long-press the screen-recording button in the Control Center pull-down at the top right; the broadcast extension is not listed.
5. Isolate the issue: create a new Flutter project and repeat the steps above to add the broadcast upload extension. The issue persists: broadcast functionality remains unresponsive on iOS.
Replies: 1 · Boosts: 0 · Views: 190 · Activity: 5d
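An observation on the steps above, offered as an assumption rather than a confirmed diagnosis: checking Copy only when installing on the Embed App Extensions phase means the extension is embedded only in install (archive) builds, so a debug build run from Xcode can ship without the extension, which would explain it not appearing in Control Center. Separately, the system picker can be pinned to a specific extension from the host app; a minimal sketch, where "com.example.app.BroadcastExtension" is a hypothetical bundle identifier that must match the extension target:

```swift
import ReplayKit
import UIKit

final class BroadcastLauncher {
    /// Adds a system broadcast picker pinned to our upload extension.
    static func addPicker(to view: UIView) {
        let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
        // Hypothetical identifier; replace with the extension's real bundle ID.
        picker.preferredExtension = "com.example.app.BroadcastExtension"
        picker.showsMicrophoneButton = false
        view.addSubview(picker)
    }
}
```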
Screen sharing application - URGENT question
There are different kinds of screen-sharing applications, all using different APIs. The API used by AnyDesk, for example, or TeamViewer, doesn't require the usual on-screen recording indicators. I wonder whether this is reserved for corporate deployments, i.e. MDM. Could a screen-sharing application be created and validated by Apple that displays no recording indicator, and that can access the user's screen whenever needed after an initial acceptance? In other words, the user accepts to share their screen once, but won't be prompted to accept the next time. Or is this impossible on iOS? I'd be honored to have some answers.
Replies: 3 · Boosts: 0 · Views: 260 · Activity: 2w
How to Implement Screen Mirroring in iOS for Google TV?
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.

What I Have Tried So Far: I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring. Here are some of my findings:
- Google Cast SDK: primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real time.
- ReplayKit for screen capture: RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream. However, sending this stream to Google TV in real time is a challenge. I could potentially encode the video as HLS and stream it, but the delay is significant.
- RTSP/UDP streaming: some third-party libraries support RTSP/UDP streaming for real-time screen sharing, but Google TV does not natively support RTSP, making this approach difficult.

My Questions:
1. Is it possible to achieve real-time screen mirroring on Google TV using the Google Cast SDK?
2. Does Google TV support WebRTC or any low-latency streaming protocol that can be used from iOS?
3. Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency?

I would appreciate any guidance, code examples, or references to relevant documentation.
Replies: 0 · Boosts: 1 · Views: 208 · Activity: 2w
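For reference, a minimal sketch of the in-app capture path mentioned above, using RPScreenRecorder; sendToEncoder is a hypothetical hook standing in for whatever encoder or transport (HLS, WebRTC, raw sockets) is chosen:

```swift
import ReplayKit

final class ScreenCaptureSource {
    // Hypothetical hook; replace with a real VideoToolbox/WebRTC pipeline.
    var sendToEncoder: ((CMSampleBuffer) -> Void)?

    func start() {
        let recorder = RPScreenRecorder.shared()
        guard recorder.isAvailable else { return }
        recorder.isMicrophoneEnabled = false
        recorder.startCapture(handler: { [weak self] sampleBuffer, bufferType, error in
            guard error == nil else { return }
            // Forward only video frames; audio arrives as separate buffer types.
            if bufferType == .video {
                self?.sendToEncoder?(sampleBuffer)
            }
        }, completionHandler: { error in
            if let error = error { print("startCapture failed: \(error)") }
        })
    }

    func stop() {
        RPScreenRecorder.shared().stopCapture { error in
            if let error = error { print("stopCapture failed: \(error)") }
        }
    }
}
```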
Intercepting incoming video and/or audio for stopping scams
I'm trying to make an app that can quietly run in the background. It needs to detect other apps' or the system's incoming video and/or audio and, using only on-device resources, determine whether the caller might be a scammer, tapping into an escalating cascade of resources to do so. For video/image scam detection, it uses OpenCV to detect faces, then refers to a known database of reported scam imagery. For audio scam calls, we defer to known techniques for detecting voice modulation in frequency and/or amplitude. Each video and/or audio result will be relayed via a notification banner as well as recorded in-app. Crucially, if a result is uncertain, users have the option to submit it to a global collaborative cloud database for investigative teams: 60-second audio snippets, or a series of images in which faces were detected (the equivalent of 60 seconds). In the end, we expect to deploy this app across most parts of Asia and Africa, thereby protecting generations of iPhone and iPad users. However, we have not been able to find a method that does this, nor any known documentation or contact able to provide such technical guidance. Please assist.
Replies: 0 · Boosts: 0 · Views: 290 · Activity: Nov ’24
Overlapping Video Frames in RPBroadcastSampleHandler with ReplayKit
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping. I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:

```objc
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            // UV plane, not used in this demo.
            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

            if (oYSize <= 672) {
                // Unlock before bailing out so the buffer is not left locked.
                CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
                return;
            }

            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            // Compare the copied byte against the source after the copy.
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);

            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default:
            break;
    }
}
```

Output:

```
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
```
Replies: 0 · Boosts: 0 · Views: 371 · Activity: Oct ’24
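One plausible reading of the post above, offered as an assumption: ReplayKit reuses the IOSurface backing of its pixel buffers, and locking the base address in the extension does not stop the system from writing the next frame into the same memory, so a copy that races an update will mix two frames. A common mitigation is to deep-copy the buffer into memory the extension owns before returning from the callback; a minimal sketch:

```swift
import CoreVideo

/// Deep-copies a pixel buffer so the data survives ReplayKit reusing
/// the original backing store. Returns nil on allocation failure.
/// ReplayKit frames are bi-planar YCbCr, so the planar path suffices here.
func deepCopy(_ src: CVPixelBuffer) -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        nil,
                        &copyOut)
    guard let dst = copyOut else { return nil }

    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dst, [])
    defer {
        CVPixelBufferUnlockBaseAddress(dst, [])
        CVPixelBufferUnlockBaseAddress(src, .readOnly)
    }

    // Copy plane by plane, row by row, since bytes-per-row may differ
    // between the source and the newly created buffer.
    for plane in 0..<CVPixelBufferGetPlaneCount(src) {
        guard let srcBase = CVPixelBufferGetBaseAddressOfPlane(src, plane),
              let dstBase = CVPixelBufferGetBaseAddressOfPlane(dst, plane) else { continue }
        let srcStride = CVPixelBufferGetBytesPerRowOfPlane(src, plane)
        let dstStride = CVPixelBufferGetBytesPerRowOfPlane(dst, plane)
        let height = CVPixelBufferGetHeightOfPlane(src, plane)
        let rowBytes = min(srcStride, dstStride)
        for row in 0..<height {
            memcpy(dstBase + row * dstStride, srcBase + row * srcStride, rowBytes)
        }
    }
    return dst
}
```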
Capturing External Object Images via Vision Pro Passthrough Camera with Enterprise APIs
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures. Our specific use case involves:
1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.

We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
Replies: 0 · Boosts: 0 · Views: 443 · Activity: Oct ’24
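For the use case above, a minimal sketch based on the visionOS 2 Enterprise camera-access API (CameraFrameProvider) as shown at WWDC24; exact signatures and entitlement behavior should be verified against the current ARKit documentation:

```swift
import ARKit
import CoreVideo

// A sketch, assuming the Enterprise main-camera entitlement
// (com.apple.developer.arkit.main-camera-access.allow) is granted.
func captureMainCameraFrames() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Pick a supported format for the left main camera.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main,
                                                          cameraPositions: [.left])
    guard let format = formats.first else { return }

    try await session.run([provider])

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        guard let sample = frame.sample(for: .left) else { continue }
        let pixelBuffer: CVPixelBuffer = sample.pixelBuffer
        // Hand the pixel buffer to processing/saving code here.
        _ = pixelBuffer
    }
}
```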
RPScreenRecorder startCapture issues on generated file
Hello all, this is my first post on the developer forums. I am developing an app that records the screen of my app, using AVAssetWriter and RPScreenRecorder startCapture. Everything works as it should in most cases, but at some seemingly random times the generated file is only a few KB and is corrupted. There seems to be no pattern to the device or iOS version; it can happen on various phones and iOS versions. The steps I have followed to create the file are:

Configuring the AssetWriter:

```swift
videoAssetWriter = try? AVAssetWriter(outputURL: url!, fileType: AVFileType.mp4)

let size = UIScreen.main.bounds.size
let width = (Int(size.width / 4)) * 4
let height = (Int(size.height / 4)) * 4

let videoOutputSettings: Dictionary<String, Any> = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: width,
    AVVideoHeightKey: height
]
videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
videoInput?.expectsMediaDataInRealTime = true
guard let videoInput = videoInput else { return }
if videoAssetWriter?.canAdd(videoInput) ?? false {
    videoAssetWriter?.add(videoInput)
}

let audioInputsettings = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputsettings)
audioInput?.expectsMediaDataInRealTime = true
guard let audioInput = audioInput else { return }
if videoAssetWriter?.canAdd(audioInput) ?? false {
    videoAssetWriter?.add(audioInput)
}
```

The urlForVideo function returns the URL to the documentDirectory, after appending and creating the folders needed. This part seems to be working as it should, as the directories are created and the video file exists in them.

Start the recording:

```swift
if RPScreenRecorder.shared().isRecording { return }
RPScreenRecorder.shared().startCapture(handler: { [weak self] sample, bufferType, error in
    if let error = error {
        onError?(error.localizedDescription)
    } else {
        if !RPScreenRecorder.shared().isMicrophoneEnabled {
            RPScreenRecorder.shared().stopCapture { error in
                if let error = error { return }
            }
            onError?("Microphone was not enabled")
        } else {
            succesCompletion?()
            succesCompletion = nil
            self?.processSampleBuffer(sample, with: bufferType)
        }
    }
}) { error in
    if let error = error {
        onError?(error.localizedDescription)
    }
}
```

Process the sample buffers:

```swift
guard CMSampleBufferDataIsReady(sampleBuffer) else { return }
DispatchQueue.main.async { [weak self] in
    switch sampleBufferType {
    case .video:
        self?.handleVideoBaffer(sampleBuffer)
    case .audioMic:
        self?.add(sample: sampleBuffer, to: self?.audioInput)
    default:
        break
    }
}

// The add function from above
fileprivate func add(sample: CMSampleBuffer, to writerInput: AVAssetWriterInput?) {
    if writerInput?.isReadyForMoreMediaData ?? false {
        writerInput?.append(sample)
    }
}

// The handleVideoBaffer function from above
fileprivate func handleVideoBaffer(_ sampleBuffer: CMSampleBuffer) {
    if self.videoAssetWriter?.status == AVAssetWriter.Status.unknown {
        self.videoAssetWriter?.startWriting()
        self.videoAssetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    } else {
        if (self.videoInput?.isReadyForMoreMediaData) ?? false {
            if self.videoAssetWriter?.status == AVAssetWriter.Status.writing {
                self.videoInput?.append(sampleBuffer)
            }
        }
    }
}
```

Finally, stop the recording:

```swift
func stopRecording(completion: @escaping (URL?, URL?, Error?) -> Void) {
    RPScreenRecorder.shared().stopCapture { error in
        if let error = error {
            completion(nil, nil, error)
            return
        }
        self.finish { videoURL, _ in
            completion(videoURL, nil, nil)
        }
    }
}

// The finish function mentioned above
fileprivate func finish(completion: @escaping (URL?, URL?) -> Void) {
    let dispatchGroup = DispatchGroup()
    dispatchGroup.enter()
    finishRecordVideo {
        dispatchGroup.leave()
    }
    dispatchGroup.notify(queue: .main) {
        print("Finish with url:\(String(describing: self.urlForVideo()))")
        completion(self.urlForVideo(), nil)
    }
}

// The finishRecordVideo mentioned above
fileprivate func finishRecordVideo(completion: @escaping () -> Void) {
    videoInput?.markAsFinished()
    audioInput?.markAsFinished()
    videoAssetWriter?.finishWriting {
        if let writer = self.videoAssetWriter {
            if writer.status == .completed {
                completion()
            } else if writer.status == .failed {
                // Print the error to find out what went wrong
                if let error = writer.error {
                    print("Video asset writing failed with error: \(error.localizedDescription). Url: \(writer.outputURL.path)")
                } else {
                    print("Video asset writing failed, but no error description available.")
                }
                completion()
            } else {
                completion()
            }
        }
    }
}
```

What could be the reason for the corrupted files? This has never happened on my own devices, so there is no way to debug using Xcode, and no errors pop up in the logs. Can you spot any issues in the code that could create this kind of problem? Do you have any suggestions on the problem at hand? Thanks
Replies: 0 · Boosts: 0 · Views: 424 · Activity: Sep ’24
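One pattern worth checking in the code above, offered as an assumption rather than a confirmed diagnosis: AVAssetWriterInput.append(_:) returns a Bool, and a single failed append silently moves the writer to .failed, after which all later appends are dropped and finishWriting yields a tiny, unplayable file. A minimal sketch of a guarded append that surfaces the failure:

```swift
import AVFoundation

func appendChecked(_ sampleBuffer: CMSampleBuffer,
                   to input: AVAssetWriterInput,
                   of writer: AVAssetWriter) {
    guard input.isReadyForMoreMediaData else { return } // frame dropped, not fatal
    if !input.append(sampleBuffer) {
        // The writer is now in .failed; surface the error instead of
        // silently producing a corrupt file.
        print("append failed: \(writer.error?.localizedDescription ?? "unknown"), status: \(writer.status.rawValue)")
    }
}
```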
Record Entire Phone Screen Functionality iOS Xcode 15.4
Hi guys. I am currently working on an app where one of the functionalities is a screen-recording function. The function should record the ENTIRE phone screen and be able to upload the result to a Firestore database. I am currently having issues with the screen recording: using the ReplayKit package, I am able to record only the screen of the app, not the entire phone screen. Essentially, the screen-recording function should allow a user to record the screens of other apps the user goes to, for about 30 seconds. I am unable to figure out how to record the entire screen. Does anybody know a way (package, API, or process) for me to achieve such functionality? Thank you in advance, Hanav Modasiya
Replies: 1 · Boosts: 0 · Views: 564 · Activity: Aug ’24
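For context on the question above: the in-process RPScreenRecorder APIs capture only the hosting app, while system-wide recording goes through a Broadcast Upload Extension that the user starts from Control Center (or via RPSystemBroadcastPickerView). A minimal sketch of the extension's principal class, assuming the class name generated by the Xcode template:

```swift
import ReplayKit

// Lives in the Broadcast Upload Extension target, declared as the
// NSExtensionPrincipalClass in the extension's Info.plist.
class SampleHandler: RPBroadcastSampleHandler {

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        // Set up encoders / the upload session here.
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            break // Encode and upload frames of the whole screen here.
        case .audioApp, .audioMic:
            break // Handle audio if needed.
        @unknown default:
            break
        }
    }

    override func broadcastFinished() {
        // Flush and finalize the upload.
    }
}
```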
iOS BroadcastExtension
Hello, I'm new here. I was developing a screen-recording extension for an iOS application, using the LiveKit RPBroadcastSampleHandler sample as a basis. In tests a few months ago it worked, but after the long wait for publishing authorization the extension stopped working. I noticed it is not just mine: screen sharing from Google Meet, Zoom Meetings, and others doesn't work either. I tested on an iPhone 14 Pro and an iPhone 6s, and nothing worked. The option to select the extension appears, but when clicking "start sharing" nothing happens, and after a few seconds the sharing button returns to "start sharing". The behavior is the same in all tested apps. Does anyone know what is happening? Did Apple change the way recording works while no app has updated? Is it an internal iOS error? Nothing is logged in the terminal; it just doesn't work.
Replies: 1 · Boosts: 0 · Views: 839 · Activity: Jul ’24
RPBroadcastSampleHandler crashes while processing payload
We have an app with a broadcast extension with an RPBroadcastSampleHandler. The implementation is working fine; however, for quite a few users the extension suddenly crashes during the broadcast. The stack trace of the crashing thread always looks like the shortened sample below. (Full crash reports and stack traces are attached to the submitted Feedbacks.) Looking at the stack trace, none of our code is running, just ReplayKit code handling XPC messages at that moment:

```
Thread:
#0  0x00000001e2cf342c in __pthread_kill ()
#1  0x00000001f6a51c0c in pthread_kill ()
#2  0x00000001a1bfaba0 in abort ()
#3  0x00000001a9e38588 in malloc_vreport ()
#4  0x00000001a9e35430 in malloc_zone_error ()
[...]
#18 0x0000000218ac91bc in -[RPBroadcastSampleHandler processPayload:completion:] ()
#19 0x0000000198b81360 in __NSXPCCONNECTION_IS_CALLING_OUT_TO_EXPORTED_OBJECT_S2__ ()
```

Is anyone aware of these issues with ReplayKit? Are there known workarounds? Could anything we're doing affect crashes like this? Would greatly appreciate it if anyone from Apple DTS could look into this and flag the below Feedbacks to the relevant teams! Feedback IDs: FB13949098, FB13949188
Replies: 0 · Boosts: 0 · Views: 626 · Activity: Jun ’24
iOS to Android H264 encoding issue.
I'm trying to cast the screen from an iOS device to an Android device. I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression. While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android. Data transmission over the TCP socket seems to be functioning correctly.

My question is: is there a way to ensure a common container format for H.264 compression and decompression across the iOS and Android platforms?

Here's a breakdown of the iOS sender details:
- Device: iPhone 13 mini running iOS 17
- Development environment: Xcode 15 with a minimum deployment target of iOS 16
- Screen capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
- Video compression: VideoToolbox for H.264 compression
- Compression properties:
  - kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bit rate)
  - kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
  - kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
  - kVTCompressionPropertyKey_RealTime: true (real-time encoding)
  - kVTCompressionPropertyKey_Quality: 1 (quality; 1.0 is the maximum of the 0.0-1.0 range)
- NAL unit handling: a custom header is added to NAL units

Android receiver details:
- Device: Redmi 7A running Android 10
- Video decoding: MediaCodec API for receiving and decoding the H.264 stream
Replies: 0 · Boosts: 0 · Views: 832 · Activity: May ’24
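On the question above: VideoToolbox produces AVCC-style output (length-prefixed NAL units, with SPS/PPS kept in the CMFormatDescription), whereas MediaCodec on Android typically expects an Annex B elementary stream with start codes and in-band parameter sets, which may account for same-platform casting working while cross-platform fails. A minimal sketch of the conversion, assuming 4-byte length prefixes:

```swift
import Foundation
import CoreMedia

let startCode: [UInt8] = [0, 0, 0, 1]

/// Converts a VideoToolbox H.264 sample (AVCC length prefixes) into an
/// Annex B elementary stream, prepending the SPS/PPS parameter sets.
func annexBData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer),
          let format = CMSampleBufferGetFormatDescription(sampleBuffer) else { return nil }

    var out = Data()

    // Copy SPS and PPS out of the format description with start codes.
    var parameterSetCount = 0
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        format, parameterSetIndex: 0,
        parameterSetPointerOut: nil, parameterSetSizeOut: nil,
        parameterSetCountOut: &parameterSetCount, nalUnitHeaderLengthOut: nil)
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            format, parameterSetIndex: index,
            parameterSetPointerOut: &pointer, parameterSetSizeOut: &size,
            parameterSetCountOut: nil, nalUnitHeaderLengthOut: nil)
        if let pointer = pointer {
            out.append(contentsOf: startCode)
            out.append(pointer, count: size)
        }
    }

    // Walk the AVCC payload, replacing each 4-byte big-endian length
    // prefix with an Annex B start code (assumes 4-byte prefixes).
    var totalLength = 0
    var basePointer: UnsafeMutablePointer<CChar>?
    CMBlockBufferGetDataPointer(dataBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                totalLengthOut: &totalLength, dataPointerOut: &basePointer)
    guard let base = basePointer else { return nil }

    var offset = 0
    while offset + 4 <= totalLength {
        var rawLength: UInt32 = 0
        memcpy(&rawLength, base + offset, 4)
        let nalLength = Int(UInt32(bigEndian: rawLength))
        offset += 4
        guard offset + nalLength <= totalLength else { break }
        out.append(contentsOf: startCode)
        base.withMemoryRebound(to: UInt8.self, capacity: totalLength) { bytes in
            out.append(bytes + offset, count: nalLength)
        }
        offset += nalLength
    }
    return out
}
```

On the receiving side, the Annex B SPS/PPS can be fed to MediaCodec as csd-0/csd-1 (or simply left in-band ahead of the first keyframe).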