Post not yet marked as solved
I have a simple tvOS app with a single audio track and a storyboard with a single scene containing a play/pause toggle button. I am trying to add Siri Remote handling so that the user can play/pause the track with the remote's "toggle play/pause" button. The problem, though, is that when the application starts it doesn't respond to remote control events while the AVPlayer is paused. When the user starts playback using the touch surface on the Siri Remote, I can pause the playback with the "toggle play/pause" button, but I CAN'T start playback again after the track has been paused.

I tried two approaches:

1. application.beginReceivingRemoteControlEvents()
2. MPRemoteCommandCenter.shared().pauseCommand.addTarget(...)

Using remoteControlReceived(with event: UIEvent?) I receive the event only while the AVPlayer is playing, and the event's subtype is the pause command. Using the remote command center I registered for pauseCommand, playCommand and togglePlayPauseCommand, but only pauseCommand is triggered, and only while playback is active. Am I missing something?
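One possible cause (a hedged suggestion, not a confirmed diagnosis): the system only routes play/toggle commands to an app that it considers the current "now playing" app, so publishing now-playing info and returning .success from the command handlers can keep the app eligible for remote events even while paused. A minimal sketch, assuming a `player` property and illustrative track metadata:

```swift
import AVFoundation
import MediaPlayer

final class RemoteCommandHandler {
    let player: AVPlayer  // assumed to exist elsewhere in the app

    init(player: AVPlayer) {
        self.player = player
        let center = MPRemoteCommandCenter.shared()

        center.playCommand.addTarget { [weak self] _ in
            self?.player.play()
            return .success
        }
        center.pauseCommand.addTarget { [weak self] _ in
            self?.player.pause()
            return .success
        }
        center.togglePlayPauseCommand.addTarget { [weak self] _ in
            guard let player = self?.player else { return .commandFailed }
            player.rate == 0 ? player.play() : player.pause()
            return .success
        }

        // Publishing now-playing info is what keeps the app registered as
        // the "now playing" app, so it still receives commands while paused.
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: "My Track",        // illustrative
            MPMediaItemPropertyPlaybackDuration: 180.0   // illustrative
        ]
    }
}
```

Whether this fixes the specific tvOS behaviour described above is untested here, but a missing now-playing registration would match the symptom of only pauseCommand firing during active playback.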
Hello everybody, I would like to display the image from the back camera twice on the screen (as with a Cardboard viewer). I created two views that each take 50% of the screen, and I can output the camera's live image in one of the views.

// views left and right
var viewRect:CGRect!
var leftView:UIView!
var rightView:UIView!
//...
//left
viewRect = CGRect(x: 0, y: 0, width: 0.5 * self.view.frame.width, height: self.view.frame.height)
leftView = UIView(frame: viewRect)
leftView.backgroundColor = .green
view.addSubview(leftView)
//right
viewRect = CGRect(x: 0.5 * self.view.frame.width, y: 0, width: 0.5 * self.view.frame.width, height: self.view.frame.height)
rightView = UIView(frame: viewRect)
rightView.backgroundColor = .red
view.addSubview(rightView)
//...
func beginSession() {
    captureSession.sessionPreset = AVCaptureSession.Preset.photo
    let devices = AVCaptureDevice.devices()
    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if device.hasMediaType(AVMediaType.video) {
            // Finally check the position and confirm we've got the back camera
            if device.position == AVCaptureDevice.Position.back {
                captureDevice = device
            }
        }
    }
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))
    } catch {
        print("Could not add capture input: \(error)")
    }
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.connection?.videoOrientation = AVCaptureVideoOrientation.landscapeLeft
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    self.leftPreviewLayer = previewLayer
    self.leftPreviewLayer.frame = self.leftView.bounds
    self.leftView.layer.addSublayer(previewLayer)
    captureSession.startRunning()
}

I already found something like `let replicatorLayer = CAReplicatorLayer()` online, but can't get it to work 😟. It would be great if someone could give me a hint 😉. Thx and cya, Tom
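A hedged suggestion: a single AVCaptureVideoPreviewLayer can only live in one layer tree, which is why replicating it tends to be fragile, but an AVCaptureSession can drive more than one preview layer. So instead of CAReplicatorLayer, a second preview layer attached to the same session may be the simplest route. A sketch, assuming `captureSession` and `rightView` as in the code above:

```swift
import AVFoundation
import UIKit

// Sketch: attach a second preview layer to the same running session
// so the right-hand view mirrors the left-hand one.
func addRightPreview(session: AVCaptureSession, rightView: UIView) {
    let rightPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
    rightPreviewLayer.videoGravity = .resizeAspectFill
    rightPreviewLayer.connection?.videoOrientation = .landscapeLeft
    rightPreviewLayer.frame = rightView.bounds
    rightView.layer.addSublayer(rightPreviewLayer)
}
```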
The AVReaderWriter sample was last updated in 2016 for Swift 2.3. I am a newbie and have been unable to upgrade or convert it using the Xcode tools. I tried going from Swift 2 to 3, in preparation for going from 3 to 4, but got a ton of errors. I downloaded every version of Xcode since 8.0 to try to upgrade it incrementally, and it quickly fell apart. Is there an update schedule for this documentation? I hear Swift 5 is coming out, and I am sure 6 and 7 are not far behind. I'm not sure how people are supposed to figure this stuff out if the documentation is several versions out of date; I guess it is easier if you have a lot of experience.
(For Apple folks: rdar://47577096.)

# Background

The Core Video function `CVImageBufferCreateColorSpaceFromAttachments` creates custom color profiles with simplified transfer functions instead of using the standard system color profiles. Let's take ITU-R 709 as an example. The macOS `Rec. ITU-R BT.709-5` system color profile specifies the transfer function as

f(x) = (0.91x + 0.09)^2.222   where x >= 0.081
f(x) = 0.222x                 where x < 0.081

The Apple-custom `HDTV` color profile created by the above Core Video function specifies the transfer function as

f(x) = x^1.961

My understanding is that `x^1.961` is the closest approximation of the more complex ITU-R 709 transfer function.

# Questions

1. Why use a custom color profile with a simplified transfer function rather than the official specification? Was it done for performance? Was it done for compatibility with non-QuickTime-based applications?
2. Speaking of compatibility, there is a problem when an encoding application uses the official transfer function and the decoding application uses the approximated one. I tested this using two images. One image uses the `Rec. ITU-R BT.709-5` color profile; the other is derived from the former by assigning the Apple-custom `HDTV` color profile. The latter image loses detail in the darker areas. Why go to the trouble of approximating the transfer function when the approximation isn't that great?
3. Are the Apple-custom color profiles also used for encoding, or only for decoding?
4. Another thing that concerns me is that the Apple-custom `HDR (PQ)` and `HDR (HLG)` color profiles use the same simplified transfer function of `f(x) = x^1.801`. Isn't the whole point of the PQ and HLG standards to define more sophisticated transfer functions? Doesn't simplifying those two transfer functions defeat their purpose?
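To make the mismatch in question 2 concrete, here is a small sketch comparing the piecewise BT.709 transfer function against the `x^1.961` approximation, using the values quoted above; the divergence is largest at low (shadow) input values, which matches the observed loss of dark detail:

```swift
import Foundation

// Piecewise BT.709 transfer function as quoted from the system profile.
func bt709(_ x: Double) -> Double {
    x >= 0.081 ? pow(0.91 * x + 0.09, 2.222) : 0.222 * x
}

// Simplified approximation from the custom `HDTV` profile.
func hdtvApprox(_ x: Double) -> Double {
    pow(x, 1.961)
}

// Compare the two across the input range; the relative error is
// concentrated in the darker values.
for x in stride(from: 0.02, through: 1.0, by: 0.14) {
    print(String(format: "x=%.2f  bt709=%.4f  approx=%.4f",
                 x, bt709(x), hdtvApprox(x)))
}
```

For example, at x = 0.05 the piecewise curve gives about 0.011 while the approximation gives about 0.003, so shadow values are crushed when an image encoded with one curve is decoded with the other.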
I have been trying to download multiple HLS audio files together. The files are encrypted with a key; I need to fetch the key and store it locally for offline use. When I download a small number of files (2 or 3) simultaneously it works fine, but if I start downloading 10-15 files simultaneously, most of them fail with this error:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-17377), NSLocalizedDescription=The operation could not be completed}

I am also getting errors from NSURLErrorDomain, but they are rarer. Below is my download function:

class AudioDownloader {
    var productKey: String
    var downloadUrl: URL
    var downloadSession: AVAssetDownloadURLSession?
    var fakeDownloadUrl: URL?
    var downloadTask: AVAssetDownloadTask?

    func downloadAudio() {
        if downloadSession == nil {
            let configuration = URLSessionConfiguration.background(withIdentifier: self.productKey)
            // Configure before creating the session; the configuration is
            // copied at session creation, so changes made afterwards are ignored.
            configuration.shouldUseExtendedBackgroundIdleMode = true
            configuration.httpShouldSetCookies = true
            configuration.httpShouldUsePipelining = false
            configuration.allowsCellularAccess = true
            configuration.isDiscretionary = true
            downloadSession = AVAssetDownloadURLSession(configuration: configuration,
                                                        assetDownloadDelegate: self,
                                                        delegateQueue: OperationQueue.main)
        }
        self.fakeDownloadUrl = self.convertToScheme(url: self.downloadUrl, scheme: "fakehttp")
        let asset = AVURLAsset(url: self.fakeDownloadUrl!)
        asset.resourceLoader.setDelegate(self, queue: DispatchQueue(label: "dispatch2"))
        self.downloadTask = downloadSession?.makeAssetDownloadTask(asset: asset,
                                                                   assetTitle: "assetTitle \(self.productKey)",
                                                                   assetArtworkData: nil,
                                                                   options: nil)
        self.downloadTask?.taskDescription = self.productKey
        self.downloadTask?.resume()
    }
}

The approach I am using to fetch the key for offline use is described here: https://stackoverflow.com/questions/45670774/playing-offline-hls-with-aes-128-encryption-ios

Any help would be appreciated.
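A hedged suggestion: the -11800/-17377 pair is undocumented, but mass simultaneous downloads commonly fail under resource pressure, so throttling how many downloads run at once (rather than starting 10-15 together) may reduce the failure rate. A minimal sketch of a scheduler that starts at most a fixed number of downloads at a time; `maxConcurrent` and the completion hook are illustrative, not part of the code above:

```swift
import Foundation

// Sketch: limit the number of simultaneously running downloads.
// The caller's `start` closure must invoke `finished` when its
// AVAssetDownloadTask completes (e.g. from the delegate callback).
final class DownloadScheduler {
    private let queue = DispatchQueue(label: "download.scheduler")
    private let semaphore: DispatchSemaphore

    init(maxConcurrent: Int = 3) {  // illustrative limit
        semaphore = DispatchSemaphore(value: maxConcurrent)
    }

    func schedule(start: @escaping (_ finished: @escaping () -> Void) -> Void) {
        queue.async {
            self.semaphore.wait()               // block until a slot frees up
            start { self.semaphore.signal() }   // caller signals on completion
        }
    }
}
```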
Hello, I have recently implemented an AVRoutePickerView. It is working well, but the icon shows as "AirPlay Audio" instead of "AirPlay Video": https://developer.apple.com/design/human-interface-guidelines/airplay/overview/media-playback/#entering-airplay. I would like it to use the "AirPlay Video" icon and I cannot figure out how to set it. The closest thing I've found is setting routePickerButtonStyle, but when I do, the compiler says "'routePickerButtonStyle' is unavailable". Any help would be appreciated. Thank you.
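A hedged note: `routePickerButtonStyle` is a tvOS-only property, which would explain the "unavailable" error when building for iOS. On iOS 13 and later, `prioritizesVideoDevices` may be what switches the button to the video AirPlay icon:

```swift
import AVKit

let picker = AVRoutePickerView()
if #available(iOS 13.0, *) {
    // Prioritizing video devices shows the video variant of the AirPlay icon.
    picker.prioritizesVideoDevices = true
}
```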
When the magnification is altered from the default of 1.0 (100%), I'm trying to maintain my original layout, set in IB like so:

func fit(_ childView: NSView, parentView: NSView) {
    childView.translatesAutoresizingMaskIntoConstraints = false
    childView.topAnchor.constraint(equalTo: parentView.topAnchor).isActive = true
    childView.leadingAnchor.constraint(equalTo: parentView.leadingAnchor).isActive = true
    childView.trailingAnchor.constraint(equalTo: parentView.trailingAnchor).isActive = true
    childView.bottomAnchor.constraint(equalTo: parentView.bottomAnchor).isActive = true
}

where childView is the webView and parentView is its container view; fit() is called from viewDidLoad(). The result is that the view is pegged to the top left, which is nice, but I'd like the window content to match the now smaller or larger webView, using scroll bars if need be. Any user attempt to resize the window begins to skew the layout until they return to the default magnification. Can someone suggest how to go about supporting magnification?
If I create, via Interface Builder, a window with a WKWebView, all is nice. I wire it up with constraints and add drag-and-drop support for asset viewing. But upon loading, say, a movie, the window appears to gain a scroll view. This is easily triggered by resizing the window smaller: the video is resized once the vertical size changes, but the horizontal scroll bar that appears cannot be removed unless the window is resized large enough for the video to fit. I'd like to hide the scroll view's scrollers when the mouse leaves the window, but I first need a way to access the scroll view. I have tried querying the WKWebView's enclosingScrollView, but it is always nil, and on macOS there is no scrollView property as on iOS. Any help appreciated, thanks.
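A hedged observation: WKWebView renders and scrolls inside the web content process, so there is no AppKit NSScrollView in the app's own view hierarchy to reach (which is consistent with enclosingScrollView being nil). One workaround is to suppress the page's own overflow via injected JavaScript rather than hunting for a native scroll view:

```swift
import WebKit

// Sketch: hide the horizontal scroll bar by disabling the page's
// horizontal overflow from inside the web content itself.
func hideHorizontalScrollbar(in webView: WKWebView) {
    webView.evaluateJavaScript("document.body.style.overflowX = 'hidden';",
                               completionHandler: nil)
}
```

This hides the scroll bar by clipping the page's horizontal overflow entirely; showing and hiding scrollers on mouse-enter/exit would instead need to be done in page CSS, since the scrollers belong to the web content, not to AppKit.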
Is there a way to get stereo audio on an iPhone XS using the RemoteIO unit, or any of the Core Audio APIs? I need to record audio in stereo format.
How do I copy to the file system, or play with AVAudioPlayer, an MPMediaItem that I get from an MPMediaPickerController initialized with type audio? When I try, I get an error that says: The file "item.mp3" couldn't be opened because URL type ipod-library isn't supported.
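A hedged sketch of one common approach: AVAudioPlayer cannot open ipod-library:// URLs directly, but AVAssetExportSession can read the item's assetURL and write a local file that AVAudioPlayer can then open. The output filename is illustrative, and assetURL is nil for DRM-protected items, so this only works for unprotected media:

```swift
import AVFoundation
import MediaPlayer

// Sketch: export an MPMediaItem to a playable local .m4a file.
func export(item: MPMediaItem, completion: @escaping (URL?) -> Void) {
    guard let assetURL = item.assetURL else {  // nil for protected items
        completion(nil)
        return
    }
    let asset = AVURLAsset(url: assetURL)
    guard let exporter = AVAssetExportSession(asset: asset,
                                              presetName: AVAssetExportPresetAppleM4A) else {
        completion(nil)
        return
    }
    let outURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("exported.m4a")   // illustrative filename
    try? FileManager.default.removeItem(at: outURL)
    exporter.outputFileType = .m4a
    exporter.outputURL = outURL
    exporter.exportAsynchronously {
        completion(exporter.status == .completed ? outURL : nil)
    }
}
```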
Hi everybody, I would like to access the raw data from the iPhone TrueDepth camera (IR sensor) in order to do my own image processing. I'm not talking about the depth info but the infrared image that the iPhone uses to compute that depth information. You can see what I'm talking about here: https://www.youtube.com/watch?v=g4m6StzUcOw. I know the API allows getting the processed depth info, but we couldn't find any reference to the raw IR data used to obtain it. Do you know if it is possible to access that information? Thanks in advance.
Hello guys, I constantly get error code -12780 while saving the file. The configuration is:

AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: [
AVVideoCodecKey: AVVideoCodecType.h264,
AVVideoWidthKey: floor(UIScreen.main.bounds.width / 16) * 16,
AVVideoHeightKey: floor(UIScreen.main.bounds.height / 16) * 16,
AVVideoCompressionPropertiesKey: [
AVVideoAverageBitRateKey: 2300000,
],
])

I don't understand what NSOSStatusErrorDomain -12780 means; the localized description is just "The operation could not be completed". I also discovered that on my iOS 13 beta device the error code is -17508, but the description is still "The operation could not be completed".
Some code of mine for combining an .mp4 and an .aac file into a .mov file has worked just fine since the very beginning. It creates an AVMutableComposition with one video track and one audio track from the input audio and video files. After exportAsynchronously(), the AVAssetExportSession reaches AVAssetExportSession.Status.completed. Here is the Swift source code of the key function:
func compileAudioAndVideoToMovie(audioInputURL: URL, videoInputURL: URL) {
    let docPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let videoOutputURL = URL(fileURLWithPath: docPath).appendingPathComponent("video_output.mov")
    try? FileManager.default.removeItem(at: videoOutputURL)
    let mixComposition = AVMutableComposition()
    let videoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    let videoInputAsset = AVURLAsset(url: videoInputURL)
    let audioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioInputAsset = AVURLAsset(url: audioInputURL)
    do {
        // Insert a 3-second video clip into the video track
        try videoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTimeMake(value: 0, timescale: 1000), duration: CMTimeMake(value: 3000, timescale: 1000)),
                                        of: videoInputAsset.tracks(withMediaType: .video)[0],
                                        at: CMTimeMake(value: 0, timescale: 1000))
        // Insert a 3-second audio clip into the audio track
        try audioTrack?.insertTimeRange(CMTimeRangeMake(start: CMTimeMake(value: 0, timescale: 1000), duration: CMTimeMake(value: 3000, timescale: 1000)),
                                        of: audioInputAsset.tracks(withMediaType: .audio)[0],
                                        at: CMTimeMake(value: 0, timescale: 1000))
        let assetExporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetPassthrough)
        assetExporter?.outputFileType = .mov
        assetExporter?.outputURL = videoOutputURL
        assetExporter?.shouldOptimizeForNetworkUse = false
        assetExporter?.exportAsynchronously {
            switch assetExporter?.status {
            case .cancelled:
                print("Exporting cancelled")
            case .completed:
                print("Exporting completed")
            case .exporting:
                print("Exporting ...")
            case .failed:
                print("Exporting failed")
            default:
                print("Exporting with other result")
            }
            if let error = assetExporter?.error {
                print("Error:\n\(error)")
            }
        }
    } catch {
        print("Exception when compiling movie")
    }
}
However, after I upgraded my iPhone to iOS 13 (beta), it always ends up with the .failed status, and AVAssetExportSession.error reads:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12735), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x2815e9bc0 {Error Domain=NSOSStatusErrorDomain Code=-12735 "(null)"}}

I've tested this on an iPhone 6 Plus and an iPhone 7, both giving the same result. You can clone my minimal demo project, with sample input audio and video files embedded in its bundle, from https://github.com/chenqiu1024/iOS13VideoRecordingError.git, run it, and check the console output. Is there any explanation or suggestion?
I was wondering if there's a way to shoot with two rear cameras simultaneously and receive two discrete images from them. I saw that with the iPhone 11 Pro and iOS 13, some apps like Filmic Pro will have access to two video streams from multiple cameras. Is that available to all developers or only Apple partners? If it is, is the API available now or will it be made available in the future? I'm looking to take two photos from the rear-facing cameras and apply post-processing to them separately before merging them in our app. Also, related but separate: is there any way to receive rectification/calibration data from the individual device, to account for alignment differences between individual units introduced during manufacturing?
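For what it's worth, multi-camera capture on iOS 13 is public API (AVCaptureMultiCamSession), not partner-only. A simplified sketch of session creation is below; note this is only a sketch under assumptions — a production multi-cam setup typically uses addInputWithNoConnections(_:) plus explicit AVCaptureConnection objects per output, and supported device combinations must be checked:

```swift
import AVFoundation

// Sketch: create a multi-cam session with two rear cameras (iOS 13+).
// Device types and the simple addInput path are illustrative.
func makeMultiCamSession() -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    for deviceType in [AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                       .builtInTelephotoCamera] {
        if let device = AVCaptureDevice.default(deviceType, for: .video, position: .back),
           let input = try? AVCaptureDeviceInput(device: device),
           session.canAddInput(input) {
            session.addInput(input)
        }
    }
    session.commitConfiguration()
    return session
}
```

On the calibration question, AVCapturePhoto exposes cameraCalibrationData (intrinsics/extrinsics per device) when calibration-data delivery is enabled on the photo settings, which may cover the per-unit alignment differences.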
Our team has an app that plays back m4a resources online using AVPlayer. Recently, some users have been complaining that playback keeps failing, and we have no idea of the reason behind it. We checked the user logs, and the AVPlayer error logs are as follows (from multiple failed instances):

- avPlayer.currentItem.error = Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-16155), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x280e6ef10 {Error Domain=NSOSStatusErrorDomain Code=-16155 "(null)"}}
- avPlayer.currentItem.error = Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (606068440), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x280e9f8d0 {Error Domain=NSOSStatusErrorDomain Code=606068440 "(null)"}}
- avPlayer.currentItem.error = Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (1705376704), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x281ec60d0 {Error Domain=NSOSStatusErrorDomain Code=1705376704 "(null)"}}

The normal flow to start playing (works as expected for the majority of users):
1. [[AVAudioSession sharedInstance] setActive:YES error:&activationError];
2. Call [avplayer play].
3. The audio starts to play successfully.

Failed scenario (for some users, this keeps happening):
1. The activation call returns Error Domain=NSOSStatusErrorDomain Code=2003329396 "(null)".
2. We logged that [AVAudioSession sharedInstance].category becomes empty.
3. The mediaServicesWereReset notification is received.
4. The AVPlayer fails to play, and one of the above player-item errors is observed.

Once a user hits the failure, he cannot play any audio resource in our app, and the scenario keeps repeating. We would like to know:
1. Why would this occur on certain user devices?
2. How can we prevent the problem from occurring?
3. Is there a way to recover from the lost media services, so that even if the error occurs once, the user can still play other resources in our app?

We cannot reproduce the failure ourselves; even when we tried "Reset Media Services" in the developer menu, the behaviour was not exactly the same. We look forward to any help from the community. Thanks.
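On question 3, Apple's documented guidance for a media-services reset is to discard all audio objects and recreate them, then reconfigure and reactivate the session. A hedged sketch in Swift (the post's flow is Objective-C, and `rebuildPlayer()` is a placeholder for app-specific logic):

```swift
import AVFoundation

// Sketch: observe the media-services reset and rebuild the audio stack.
final class MediaServicesResetHandler {
    var player: AVPlayer?

    init() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.mediaServicesWereResetNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.rebuildPlayer()
        }
    }

    func rebuildPlayer() {
        player = nil                          // discard the orphaned player
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playback)   // the category was observed to reset
        try? session.setActive(true)
        player = AVPlayer()                   // recreate from scratch
    }
}
```

This does not explain why certain devices reset in the first place, but recreating rather than reusing the old AVPlayer after the notification is what allows playback of other resources to resume.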
I've been wrestling with a problem related to AVAudioEngine. I've posted a couple of questions recently to other forums but haven't had much luck, so I'm guessing not many people are encountering this, or the questions are unclear, or perhaps I'm not asking in the most appropriate subforums. So I thought I'd try here in the 'Concurrency' forum, as using concurrency would be one way to solve the problem.

The specific problem is that AVAudioPlayerNode.play() takes a long time to execute. The execution time seems to be proportional to the value of AVAudioSession.ioBufferDuration, and can range from a few milliseconds at low buffer durations to over 20 milliseconds at the default buffer duration. These execution times can be an issue in real-time applications such as games.

An obvious solution would be to move such operations to a background thread using GCD, and I've seen various posts and articles that do this. Here's a code example showing what I mean:

import AVFoundation
import UIKit

class ViewController: UIViewController {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let queue = DispatchQueue(label: "", qos: .userInitiated)

    override func viewDidLoad() {
        super.viewDidLoad()
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)
        try! engine.start()
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        queue.async {
            if self.player.isPlaying {
                self.player.stop()
            } else {
                self.player.play()
            }
        }
    }
}

In this scenario all AVAudioEngine-related operations would be serial, never concurrent/parallel; the audio system would never be accessed simultaneously from multiple threads, only serially. My concern is that I don't know whether it's safe to use AVAudioEngine in this way. More generally, I'm not sure what should be assumed about any API for which nothing specific is said about thread safety. In such cases, can it be assumed that access from multiple threads is safe as long as only one thread is active at any given time? (The 'Thread Programming Guide' touches on this but doesn't appear to address the audio frameworks specifically.)

The narrowest version of my question is whether it's safe to use GCD with AVAudioEngine, provided all access is serial. The broader question is what assumptions should or should not be made about APIs and thread safety when the documentation doesn't specifically address it. Any input on either of these issues would be greatly appreciated.
I am playing videos that are in my app bundle, and they play correctly. However, when I dismiss the AVPlayerViewController, it is visibly removed from the view hierarchy, but if I turn the iOS device off and back on again, the lock screen shows a media control for that video with a play button. If you touch play, you get only the audio and no video. My problem is that I don't understand why the dismiss is not completely killing the player when I'm done with it.

NOTE: The following code works perfectly on iOS 13 devices; however, on any device running a previous iOS version, the error appears as stated. This is blocking me from submitting my app; can someone please advise on how to solve this? Here is the presentation code:
internal func play(FileName filename: String, FileType type: String) {
    if self.isAlreadyPlaying {
        killVideoPlayer()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.5) {
            self.play(FileName: filename, FileType: type)
        }
        return
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playback, mode: .moviePlayback,
                                     options: [.allowAirPlay, .allowBluetooth, .allowBluetoothA2DP])
        try audioSession.setActive(true)
    } catch {
        print("Audio session failed")
    }
    let path = Bundle.main.path(forResource: filename, ofType: type)
    let url = URL(fileURLWithPath: path!)
    let player = AVPlayer(url: url)
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(PBFMenuSystemFloatingButtonsViewController.didFinishPlaying(notification:)),
                                           name: NSNotification.Name.AVPlayerItemDidPlayToEndTime,
                                           object: player.currentItem)
    self.playerController = AVPlayerViewController()
    self.menuWindow.videoPlayer = self.playerController
    self.playerController?.player = player
    self.playerController?.allowsPictureInPicturePlayback = true
    self.playerController?.showsPlaybackControls = true
    self.playerController?.delegate = self
    self.playerController?.exitsFullScreenWhenPlaybackEnds = true
    self.isAlreadyPlaying = true
    self.present(self.playerController!, animated: true) {
        self.playerController?.player?.play()
    }
}
Here is the dismissal code:
private func killVideoPlayer() {
    self.isAlreadyPlaying = false
    self.playerController?.player?.pause()
    self.playerController?.player = nil
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
        try audioSession.setCategory(.soloAmbient)
    } catch {
        print("Audio session failed")
    }
    self.playerController?.dismiss(animated: true) {
        self.playerController = nil
    }
}
And here's what's remaining in the systemwide media player that is shown on the lock screen / control centre:
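One hedged suggestion for the leftover lock-screen control on pre-iOS 13 devices: explicitly clear the systemwide now-playing state when tearing down the player. These are standard MediaPlayer/UIKit calls, though whether they cure this specific iOS 12 behaviour is untested here:

```swift
import MediaPlayer
import UIKit

// Sketch: clear the systemwide now-playing state after killing the player,
// so the lock screen / Control Centre no longer shows the stale video.
func clearNowPlayingState() {
    MPNowPlayingInfoCenter.default().nowPlayingInfo = nil
    UIApplication.shared.endReceivingRemoteControlEvents()
}
```

This would be called from killVideoPlayer() after the player is released.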
I want to stream audio on iOS using AVAudioEngine, and I'm currently not sure what the best solution for my problem is. I receive RTP data from the network and want to play back this audio data with AVAudioEngine. I use the iOS Network framework to receive the network data; I first decode the voice data and now want to play it back. Here is my receive code:

connection.receiveMessage { (data, context, isComplete, error) in
    if isComplete {
        // decode the raw network data with the G.711 audio codec
        let decodedData = AudioDecoder.decode(enc: data, frames: 160)
        // create a PCM buffer for the audio data for playback
        let format = AVAudioFormat(settings: [AVFormatIDKey: NSNumber(value: kAudioFormatALaw),
                                              AVSampleRateKey: 8000,
                                              AVNumberOfChannelsKey: 1])
        // let format = AVAudioFormat(standardFormatWithSampleRate: 8000, channels: 1)
        let buffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: 160)!
        buffer.frameLength = buffer.frameCapacity
        // TODO: copy decodedData into buffer (AVAudioPCMBuffer)
        if error == nil {
            // call receive() again for the next message
            self.receive(on: connection)
        }
    }
}

How do I copy my decoded data into the AVAudioPCMBuffer? Currently my AVAudioPCMBuffer is created but does not contain any audio data.

Background information: My general approach would be to cache the PCM buffers (at the TODO line above) in a collection and play back that collection with AVAudioEngine on a background thread. My decoded linear data is cached in an array of type Int16, so decodedData is of type [Int16]; maybe there is a possibility to consume this data directly? The scheduleBuffer function allows only AVAudioPCMBuffer as input.
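A sketch of the copy step, under the assumption that the decoded samples are linear PCM Int16 (note that for scheduleBuffer the buffer's format should be linear PCM, e.g. pcmFormatInt16, rather than the compressed A-law format used above):

```swift
import AVFoundation

// Sketch: wrap decoded Int16 samples in an AVAudioPCMBuffer.
// Assumes mono, 8 kHz linear PCM after G.711 decoding.
func makeBuffer(from samples: [Int16]) -> AVAudioPCMBuffer? {
    guard let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                     sampleRate: 8000,
                                     channels: 1,
                                     interleaved: false),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(samples.count))
    else { return nil }
    buffer.frameLength = AVAudioFrameCount(samples.count)
    // int16ChannelData is non-nil for Int16 PCM formats; channel 0 is mono.
    samples.withUnsafeBufferPointer { src in
        buffer.int16ChannelData![0].assign(from: src.baseAddress!,
                                           count: samples.count)
    }
    return buffer
}
```

Since the engine's mixer works in Float32, scheduling this Int16 buffer may additionally require connecting the player node with a matching format or converting with AVAudioConverter first.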
Hi, asking since I couldn't find any definitive information on this. When using AVCaptureMultiCamSession, is it possible to use RAW capture? My photo output's availableRawPhotoFileTypes is always empty, even when I've only got a single camera wired up. Am I doing something wrong, or is this an undocumented limitation?
I could not find enough documentation on the behavior below, and would appreciate it if someone could point me to documentation of it: a running app terminates/restarts when a permission is altered in Settings.

Scenario: an app that uses the microphone is running, is on, say, screen #3, and microphone permission is granted. Now, if the user revokes the microphone permission via Settings -> Privacy -> Microphone -> App -> off and then reopens the already-running app, it starts from screen #1 instead of #3.

What is the best way to prevent this behavior? Microphone permission is not mandatory, just a convenient option for the application, and the change could wait until the next launch to take effect. I am thinking of storing state before going to the background and reapplying it when relaunched. But is there a possibility of not resetting the app when a permission changes in the first place, as with Bluetooth on/off?
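As far as I know, the system terminates the process when a privacy permission changes and this cannot be prevented, so the state-saving approach described above is the usual workaround. A minimal sketch; the "lastScreenIndex" key and the app-delegate hook are illustrative names, not existing API:

```swift
import UIKit

// Sketch: persist the current screen before backgrounding, restore on launch.
// The system kills the process on a privacy-permission change, so in-memory
// state is lost; UserDefaults (or a file) survives the restart.
enum ScreenState {
    private static let key = "lastScreenIndex"   // illustrative key

    static func save(screenIndex: Int) {
        UserDefaults.standard.set(screenIndex, forKey: key)
    }

    static func restore() -> Int {
        UserDefaults.standard.object(forKey: key) as? Int ?? 0
    }
}

// e.g. in the app delegate:
// func applicationDidEnterBackground(_ application: UIApplication) {
//     ScreenState.save(screenIndex: currentScreenIndex)
// }
```

On launch, the app would read ScreenState.restore() and navigate back to that screen before presenting the UI.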