Post not yet marked as solved
Is there a way to record in-app generated audio using ReplayKit?
I successfully imported ReplayKit, and I can start and stop a recording where the audio source is the microphone.
However, the sound I want to record is the audio from the app itself.
Is there a way to have the audio recording come from the sound generated inside the application?
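For what it's worth, here is a sketch of one possible approach (the asset-writer plumbing is assumed and not shown): RPScreenRecorder's startCapture(handler:completionHandler:) delivers separate sample buffers for video, app audio, and microphone audio, so keeping only the .audioApp buffers captures the app-generated sound.

```swift
import ReplayKit

// Sketch: capture only the audio the app itself generates.
// The handler receives typed buffers; ignore .audioMic and leave
// isMicrophoneEnabled set to false so no mic permission is needed.
func startAppAudioCapture() {
    let recorder = RPScreenRecorder.shared()
    recorder.isMicrophoneEnabled = false
    recorder.startCapture(handler: { sampleBuffer, bufferType, error in
        guard error == nil else { return }
        if bufferType == .audioApp {
            // Append sampleBuffer to an AVAssetWriter audio input here.
        }
    }, completionHandler: { error in
        if let error = error { print("Capture failed: \(error)") }
    })
}
```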
Hi,
I want to use Apple's Broadcast API as a client, but before broadcasting my iOS screen recording I want to run object detection on the video data. I read the ReplayKit design docs; the only relevant instance property on RPBroadcastController is serviceInfo ("Information updated by the service during a broadcast"), but it does not sound like it will provide the required video data.
So do you have any idea whether it is possible to access the iOS screen-recording data at run time, before broadcasting?
Thanks a lot in advance.
Best Regards
Emre
Hi, I am trying to record the screen and microphone inside my application using ReplayKit. It records everything without problems, and at the end I present the preview with this code:

recorder.stopRecording { [weak self] (preview, error) in
    guard let preview = preview else {
        print("Preview not available")
        return
    }
    preview.previewControllerDelegate = self
    self?.present(preview, animated: true, completion: nil)
}

The problem is that the RPPreviewViewController obtained in the callback does not respect the safe area on iPhone X. The buttons are too close to the edges, and the title is hidden behind the notch. I also tried to embed the RPPreviewViewController in another UIViewController (a simple wrapper), but its height seems to be fixed, so if I lower the top part, the bottom goes off screen. Any clue about what I am doing wrong? Any help is appreciated. Thanks!
Hi Developer Community,
I am currently working with a developer to create an application that uses the camera to scan QR codes and take pictures to be uploaded to an AWS database. My developer says our application needs to request permission to use the microphone. The app works perfectly fine without the microphone, and my preference is not to request microphone access and not to use the microphone at all. Can somebody confirm whether the microphone usage description and permission request are mandatory in this case?
Thank you.
Privacy concerned developer
Hello,
I'm working on an app where the user draws on the screen while it is being recorded.
But I want to be able to hide some parts, for example when the user changes colors or when some view is presented.
My idea was simply to pause the recording when the user changes the color, and resume it when they are done.
Is that possible? It seems there is no pause/resume method on RPScreenRecorder.
Am I missing something?
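There is indeed no pause/resume API on RPScreenRecorder. One possible workaround (a sketch, not a confirmed solution; the `isPaused` flag and the writer plumbing are assumptions): use startCapture and simply drop the sample buffers that arrive while "paused".

```swift
import ReplayKit

// Sketch: emulate pause/resume by discarding buffers while paused.
final class PausableRecorder {
    var isPaused = false  // toggle this when the user changes color, etc.

    func start() {
        RPScreenRecorder.shared().startCapture(handler: { [weak self] buffer, type, error in
            guard error == nil, self?.isPaused == false else { return }
            // Forward the buffer to an AVAssetWriter. Note that you would
            // also need to shift presentation timestamps by the accumulated
            // paused duration, or the gap will appear as a frozen frame.
        }, completionHandler: nil)
    }
}
```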
I'm working on a project that will use iOS (and macOS) full screen broadcasting. For this I'm doing my research into ReplayKit and working on a proof of concept app.
I'm able to broadcast the screen to Mobcrush but the view of RPBroadcastActivityViewController looks like this:
The obvious problem is that the view looks wrong: all the icons are missing. Why?
On my device I have Twitch, Mobcrush, and YouTube installed. I don't know why Twitch doesn't appear here. If I start recording the screen from Control Center, I can stream to Twitch.
I'm using an iPhone 7 with iOS 15.1
Any help will be greatly appreciated.
When using ReplayKit's RPScreenRecorder startCapture(handler:completionHandler:), the iPhone app can't rotate.
I'm developing an application that will use ReplayKit's Broadcast API to stream the screen of an iOS device. For that I'll need to create the client app and the broadcaster app.
The WWDC videos about ReplayKit mention WWDC 2016 session 601, "Go Live with ReplayKit", but I can't find it.
Can someone point me to that video or to its transcript?
Also, any other resource about the Broadcast API would be appreciated. Most of the samples and documentation that I've found on the web cover only streaming the app's own screen.
Thank you.
I have a Broadcast Extension that's running a system-wide screen capture. I can start the broadcast and receive buffers; the buffers successfully reach our backend and are distributed to all the peers (WebRTC). The problem occurs when I navigate from the host app to apps that play video with audio and then rotate to landscape: sometimes during the rotation, the system sends a broadcastFinished() message to my RPBroadcastSampleHandler subclass. It doesn't happen every time, but when it does, the rotation appears to glitch somewhat. Hard to explain, but I've noticed the finished message is only sent when it glitches. I would love to get a screen grab of it occurring, but my capture is running.
A quick Google/forum search hasn't turned up much help. Has anyone heard of such a thing?
I have a simple macOS app with two windows. I would like the user to be able to record each window using a button on that window.
I can record the main window and save it to a file using ReplayKit - so far, so good.
But I can only record the main window: even with all the ReplayKit code in the second NSViewController, it's the main window that gets recorded.
Is it possible to tell ReplayKit which window to record?
When RPScreenRecorder's startCapture is performed, the user is prompted to allow screen capture. Every time the app is opened and capture is started, the prompt appears again. Is it possible to avoid the prompt from the second launch onward?
On iOS 15.1, if users set Settings > Notifications > Screen Sharing > Notifications to Off, our application cannot display a notification during screen recording.
I find this setting very useful, but it means there is no way to display important messages such as "no audio".
Is there any kind of notification that works around this?
My goal is to implement a screen sharing function like Skype, Discord etc. My biggest problem in that implementation is that when you have an active RPScreenRecorder.shared() screen recording and then leave the app, the screen recording stops.
For my goal it is necessary to be able to share the screen even if the app is in background. For example: The local user shares his current screen to another user, then exits the app to show the other user his Photo Library in realtime.
Problem 1:
I have come across ReplayKit and "RPBroadcastSampleHandler", but this extension is only called when I explicitly run the app with the "ScreenShare" scheme that I created. When I run the app with the normal/main scheme, the extension's functions aren't called.
Problem 2:
I can't call my pod's functionality from the "RPBroadcastSampleHandler" class. It just says "-PodName- not found" when I try to import the pod I need to use.
There is no complete documentation on this topic, and I would really appreciate anybody explaining what I am doing wrong.
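On problem 1, the extension is not supposed to be launched by the app's scheme at all. For reference, a minimal RPBroadcastSampleHandler skeleton (a sketch; the routing comments are assumptions about a typical WebRTC setup): the system launches the Broadcast Upload Extension in its own process when the user picks it from the broadcast picker, which is why running the main scheme never reaches this code.

```swift
import ReplayKit

// Minimal broadcast upload extension entry point. This class runs in the
// extension process, separate from the host app.
class SampleHandler: RPBroadcastSampleHandler {

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        // Called when the user starts the broadcast from the system picker.
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            break  // hand the frame to your streaming/WebRTC stack
        case .audioApp, .audioMic:
            break  // handle audio if needed
        @unknown default:
            break
        }
    }
}
```

On problem 2, note that any pod used here must be linked against the extension target in the Podfile (not only the app target), and it must not reference APIs that are unavailable in app extensions.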
I am simply trying to capture the screen, app audio, and mic audio. The app audio and mic audio each work fine on their own, but when combined some unknown error is thrown. Below are the method that starts the capture of the screen, and processSampleBuffer.
Method to set up writers for screen capture
func startCapture() {
    _filename = UUID().uuidString
    let videoPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent("\(_filename).mp4")
    let writer = try! AVAssetWriter(outputURL: videoPath, fileType: .mp4)
    let screenBounds = UIScreen.main.bounds.size
    let videoCompressionProperties = [
        AVVideoAverageBitRateKey: screenBounds.width * screenBounds.height * 10.1
    ]
    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: screenBounds.width,
        AVVideoHeightKey: screenBounds.height,
        AVVideoCompressionPropertiesKey: videoCompressionProperties
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    input.expectsMediaDataInRealTime = true
    if writer.canAdd(input) {
        writer.add(input)
    }
    // Add the app audio input
    var acl = AudioChannelLayout()
    memset(&acl, 0, MemoryLayout<AudioChannelLayout>.size)
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono
    let audioOutputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 1,
        AVEncoderBitRateKey: 128000,
        AVChannelLayoutKey: Data(bytes: &acl, count: MemoryLayout<AudioChannelLayout>.size)
    ]
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
    audioInput.expectsMediaDataInRealTime = true
    if writer.canAdd(audioInput) {
        writer.add(audioInput)
    }
    // Add the mic audio input
    let micAudioOutputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 24000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    let micAudioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: micAudioOutputSettings)
    micAudioInput.expectsMediaDataInRealTime = true
    if writer.canAdd(micAudioInput) {
        writer.add(micAudioInput)
    }
    writer.startWriting()
    _audioAssetWriterInput = audioInput
    _micAssetWriterInput = micAudioInput
    _assetWriterInput = input
    _assetWriter = writer
}
processSampleBuffer
override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    if startTime == nil {
        startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        // Start the session at the first buffer's timestamp, not at .zero:
        // ReplayKit buffers carry host-clock timestamps, and a session
        // started at .zero can cause the writer to reject them.
        _assetWriter!.startSession(atSourceTime: startTime!)
    }
    if sampleBufferType == .video {
        if let input = _assetWriterInput, input.isReadyForMoreMediaData {
            if !input.append(sampleBuffer) {
                print("cannot append video: status \(String(describing: _assetWriter?.status)), error \(String(describing: _assetWriter?.error))")
            }
        }
    }
    if sampleBufferType == .audioApp {
        if let input = _audioAssetWriterInput, input.isReadyForMoreMediaData {
            if !input.append(sampleBuffer) {
                print("cannot append app audio: status \(String(describing: _assetWriter?.status)), error \(String(describing: _assetWriter?.error))")
            }
        }
    }
    if sampleBufferType == .audioMic {
        if let input = _micAssetWriterInput, input.isReadyForMoreMediaData {
            if !input.append(sampleBuffer) {
                print("cannot append mic audio: status \(String(describing: _assetWriter?.status)), error \(String(describing: _assetWriter?.error))")
            }
        }
    }
    if shouldEnd {
        _finishWriters()
    }
}
Has anyone figured this out? It seems ReplayKit 2 always delivers frames at a fixed resolution of 886 x 1918. I've tried various solutions to rotate the screen, but so far none work. Using the code below crashes in the rotate step (vImageRotate90_ARGB8888):

let flags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(srcPixelBuffer, flags) else {
    return nil
}
defer { CVPixelBufferUnlockBaseAddress(srcPixelBuffer, flags) }
guard let srcData = CVPixelBufferGetBaseAddress(srcPixelBuffer) else {
    print("Error: could not get pixel buffer base address")
    return nil
}
let sourceWidth = CVPixelBufferGetWidth(srcPixelBuffer)
let sourceHeight = CVPixelBufferGetHeight(srcPixelBuffer)
var destWidth = sourceHeight
var destHeight = sourceWidth
var color = UInt8(0)
if factor % 2 == 0 {
    destWidth = sourceWidth
    destHeight = sourceHeight
}
let srcBytesPerRow = CVPixelBufferGetBytesPerRow(srcPixelBuffer)
var srcBuffer = vImage_Buffer(data: srcData,
                              height: vImagePixelCount(sourceHeight),
                              width: vImagePixelCount(sourceWidth),
                              rowBytes: srcBytesPerRow)
let destBytesPerRow = destWidth * 4
guard let destData = malloc(destHeight * destBytesPerRow) else {
    print("Error: out of memory")
    return nil
}
var destBuffer = vImage_Buffer(data: destData,
                               height: vImagePixelCount(destHeight),
                               width: vImagePixelCount(destWidth),
                               rowBytes: destBytesPerRow)
let error = vImageRotate90_ARGB8888(&srcBuffer, &destBuffer, factor, &color, vImage_Flags(0))
if error != kvImageNoError {
    print("Error:", error)
    free(destData)
}
I use ReplayKit to record the screen in my app, but I want to detect whether the user is using Screen Recording from Control Center or a third-party app.
I tried UIScreen.capturedDidChangeNotification, but it doesn't tell me whether I'm recording the screen in my app or the user is using Screen Recording from Control Center.
Do you have any ideas?
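One possible approach (a sketch; the helper name is mine): UIScreen's isCaptured is true for both cases, but RPScreenRecorder's isRecording is only true for a recording your app started, so combining the two can distinguish the sources.

```swift
import ReplayKit
import UIKit

// Hypothetical helper: classify the current screen-capture source.
// isRecording reflects only our own in-app ReplayKit session, while
// isCaptured is true for any capture (in-app, Control Center recording,
// AirPlay mirroring, QuickTime capture, etc.).
func screenCaptureSource() -> String {
    let inAppRecording = RPScreenRecorder.shared().isRecording
    let externallyCaptured = UIScreen.main.isCaptured
    if inAppRecording {
        return "in-app ReplayKit recording"
    } else if externallyCaptured {
        return "external capture (e.g. Control Center recording or mirroring)"
    } else {
        return "no capture"
    }
}
```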
While I'm sharing my screen, I get an error message: "Live broadcast to: has stopped due to: Attempted to start an invalid broadcast session".
Could I know the cause of this error?
What can I do to fix this error in the future?
I'm having an issue where I can only use ReplayKit successfully once, then I have to restart the app to get it to work again.
The first time I start recording and then stop, it works.
The second time, one of two things happens:
I get a black screen in my preview
I get the following error message in stopRecording().
Failed due to failure to process the first sample buffer
After that, if I try to call startRecording again I get the following error.
Recording failed to start
Then the above error repeats until I restart my app.
One other thing to note: the only time I get the alert asking for approval to record is the first time I use ReplayKit. The alert doesn't show again until the first use after the app is restarted.
Here are my functions I'm using.
func didStartRecording() {
    self.startRecording()
}

@objc func startRecording() {
    let recorder = RPScreenRecorder.shared()
    recorder.startRecording { [unowned self] (error) in
        if let unwrappedError = error {
            log.error("Error trying to record using ReplayKit: \(unwrappedError.localizedDescription)")
            return
        }
        recordingView.backgroundColor = UIColor.red
        self.view.addSubview(recordingView)
        recordingView.snp.makeConstraints { make in
            make.top.left.right.equalToSuperview()
            make.height.equalTo(50)
        }
        let recordingLabel = InstrLabel()
        recordingLabel.text = "Recording....Tap to stop"
        recordingLabel.textColor = .white
        recordingView.addSubview(recordingLabel)
        recordingLabel.snp.makeConstraints { make in
            make.width.height.equalToSuperview().multipliedBy(0.9)
            make.centerX.centerY.equalToSuperview()
        }
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(stopRecording))
        recordingLabel.isUserInteractionEnabled = true
        recordingLabel.addGestureRecognizer(tapGestureRecognizer)
    }
}

@objc func stopRecording() {
    let recorder = RPScreenRecorder.shared()
    recorder.stopRecording { [unowned self] (preview, error) in
        DispatchQueue.main.async {
            recordingView.removeFromSuperview()
        }
        if let error = error {
            log.error("Error with stopping the recording: \(error.localizedDescription)")
        }
        if let unwrappedPreview = preview {
            unwrappedPreview.previewControllerDelegate = self
            self.pushViewController(unwrappedPreview, animated: true)
        }
    }
}

func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
    self.popViewController(animated: true)
}
I'm using an iPad (5th generation) on 14.7.1.
Is there any way to reset ReplayKit so I can do another recording immediately, without restarting the app? I've seen a few other threads about the "black screen", but none of them had a solution that I could find.
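One thing worth trying (a hedged suggestion, not a confirmed fix): RPScreenRecorder keeps the previous movie until it is explicitly discarded, so discard it once the preview has been dismissed before starting a new recording.

```swift
import ReplayKit

// Sketch: after the user finishes with the preview, discard the previous
// recording so the recorder is in a clean state for the next session.
func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
    previewController.dismiss(animated: true) {
        RPScreenRecorder.shared().discardRecording {
            // It should now be safe to call startRecording again.
        }
    }
}
```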
Hi,
We have developed the EnxRTCiOS framework, which works perfectly when we import it in any app target. But when I import the same framework in my broadcast extension's "SampleHandler", it gives the errors "Could not build Objective-C module 'EnxRTCiOS'" and "'RTCCameraVideoCapturer' is unavailable: not available on iOS (App Extension)" - the camera is not available in app extensions.
I have cleaned Xcode many times, deleted derived data many times, and removed the Podfile and reinstalled, but it's still not working. Kindly help me with this.
Hello,
Our use case is screen sharing in a live video call. We use a broadcast extension to capture the screen and send frames. The broadcast extension has a hard memory limit of 50 MB. Screen sharing works great on iPhones, but on iPad ReplayKit delivers larger frames, and as a result the extension's memory usage goes beyond 50 MB. Using the profiler we noticed that the memory used by our code is under 25 MB, but on iPad ReplayKit has memory spikes that push usage past the 50 MB limit.
How should I achieve the screen-sharing use case on iPads? What is the guideline? Any suggestion/help is appreciated.
Best,
Piyush