Getting this error on notched iPhones in portrait mode.
Currently using AVQueuePlayer to play more than 30 mp3 files one by one.
All constraint properties are correct, but the error occurs only on iPhones with a notch, and only in portrait mode. The same code works on the same iPhone in landscape mode.
**But I get this error:**
LoudnessManager.mm:709 unable to open stream for LoudnessManager plist
Type: Error | Timestamp: 2025-02-07 | Process: | Library: AudioToolbox | Subsystem: com.apple.coreaudio | Category: aqme | TID: 0x42754
LoudnessManager.mm:709 unable to open stream for LoudnessManager plist
LoudnessManager.mm:709 unable to open stream for LoudnessManager plist
Timestamp: 2025-02-07 | Library: AudioToolbox | Subsystem: com.apple.coreaudio | Category: aqme
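For context, here is a minimal sketch of the kind of AVQueuePlayer setup described above; the file names are placeholders, not taken from the post. The LoudnessManager.mm message itself is logged by AudioToolbox rather than by app code.

```swift
import AVFoundation

// Hypothetical bundled files track1.mp3 ... track30.mp3.
let urls = (1...30).compactMap {
    Bundle.main.url(forResource: "track\($0)", withExtension: "mp3")
}

// AVQueuePlayer advances to the next item automatically when the current one finishes.
let queuePlayer = AVQueuePlayer(items: urls.map { AVPlayerItem(url: $0) })
queuePlayer.play()
```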
For some users in production, there's a high probability that after launching the app, playing any local audio resource with AVPlayer fails with the following error. Restarting the app doesn't help.
Issue:
error: Error Domain=AVFoundationErrorDomain Code=-11800 "这项操作无法完成" (This operation could not be completed) UserInfo={NSLocalizedFailureReason=发生未知错误(24) (An unknown error occurred (24)), NSLocalizedDescription=这项操作无法完成 (This operation could not be completed), NSUnderlyingError=0x30311f270 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}
I've checked the code, and there aren't multiple AVPlayers playing simultaneously. What could be causing this?
Hi folks,
When doing HLS v6 live streaming with fMP4 chunks, we noticed that when the encoder timestamps drift slightly and an #EXT-X-DISCONTINUITY tag is written into either the audio or the video playlist (in an ABR setup), the tag is not handled correctly by the player, leading to broken playback with a black screen or no audio (depending on which playlist the tag appears in).
We noticed that this is often the case when the number of tags differs between the playlists (e.g. the audio playlist containing 1 tag and the video playlist containing 2 results in a black screen with audio).
Playing the same "broken" source with the Shaka player instead doesn't break playback at all.
Is there any possible (or upcoming) fix for AVPlayer?
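For illustration, a hypothetical pair of media-playlist excerpts showing the kind of mismatch described above, with one discontinuity in the audio playlist and two in the video playlist (segment names and durations are made up):

```
# audio playlist (1 discontinuity)
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:6
#EXTINF:6.006,
audio_100.m4s
#EXT-X-DISCONTINUITY
#EXTINF:6.006,
audio_101.m4s

# video playlist (2 discontinuities)
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:6
#EXTINF:6.006,
video_100.m4s
#EXT-X-DISCONTINUITY
#EXTINF:6.006,
video_101.m4s
#EXT-X-DISCONTINUITY
#EXTINF:6.006,
video_102.m4s
```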
I’m experiencing a crash at runtime when trying to extract audio from a video. This issue occurs on both iOS 18 and earlier versions. The crash is caused by the following error:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once.'
*** First throw call stack:
(0x1875475ec 0x184ae1244 0x1994c49c0 0x217193358 0x217199899 0x192e208b9 0x217192fd9 0x30204c88d 0x3019e5155 0x301e5fb41 0x301af7add 0x301aff97d 0x301af888d 0x301aff27d 0x301ab5fa5 0x301ab6101 0x192e5ee39)
libc++abi: terminating due to uncaught exception of type NSException
My previous code worked fine, but it's crashing with Swift 6.
Does anyone know a solution for this?
## Previous Code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler = exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
// Periodically check the progress of the export session
let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
audioExportProgressPublisher.send(exportSession.progress)
}
// Export the audio track asynchronously
exportSession.exportAsynchronously {
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
timer.invalidate()
}
}
## New Code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) async {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
if #available(iOS 18.0, *) {
do {
try await exportSession.export(to: outputURL, as: .m4a)
let states = exportSession.states(updateInterval: 0.1)
for await state in states {
switch state {
case .pending, .waiting:
break
case .exporting(progress: let progress):
print("Exporting: \(progress.fractionCompleted)")
if progress.isFinished {
completion(.success(outputURL))
}else if progress.isCancelled {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
}else {
audioExportProgressPublisher.send(Float(progress.fractionCompleted))
}
}
}
}catch let error {
print(error.localizedDescription)
}
}else {
// Periodically check the progress of the export session
let publishTimer = Timer.publish(every: 0.1, on: .main, in: .common)
.autoconnect()
.sink { [weak exportSession] _ in
guard let exportSession else { return }
audioExportProgressPublisher.send(exportSession.progress)
}
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
await exportSession.export()
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
publishTimer.cancel()
}
}
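One thing to note about the iOS 18 branch above: export(to:as:) is awaited to completion before the states loop starts, so the progress updates (and the completion call inside the loop) can only run after the export has already finished. Here is a sketch of one way to consume the states stream concurrently with the export; it assumes the same exportSession, outputURL, audioExportProgressPublisher and completion from the code above:

```swift
if #available(iOS 18.0, *) {
    // Observe progress in a separate task while the export runs.
    let progressTask = Task {
        for await state in exportSession.states(updateInterval: 0.1) {
            if case .exporting(let progress) = state {
                audioExportProgressPublisher.send(Float(progress.fractionCompleted))
            }
        }
    }
    do {
        try await exportSession.export(to: outputURL, as: .m4a)
        completion(.success(outputURL))
    } catch {
        completion(.failure(error))
    }
    progressTask.cancel()
}
```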
I'm building a video feed that scrolls and behaves similarly to TikTok. I'm running into an issue where a video doesn't stop playing after you scroll to the next video; it only stops after the video after that. So it takes two scrolls for a video to stop playing. Every video starts playing normally when it comes into view, but because it takes two scrolls for the first video to stop, there are always two videos playing at the same time.
Is there a function I can add so that only one video plays at a time? I tried activeIndex with onAppear/onDisappear, but that didn't change anything, other than that none of the videos after the first one would play. (See the sketch after the code below.)
Here is some of the code I have; I need some dire help here.
Swift pros only, thank you in advance!
import AVKit
import MapKit
struct LiveEventCard: View {
let event: CustomEvent
var isActive: Bool // Determines if the video should play
let onCommentButtonTapped: () -> Void
@EnvironmentObject var watchlistManager: WatchlistManager
@EnvironmentObject var liveNowManager: LiveNowManager
@State private var player: AVPlayer?
var body: some View {
GeometryReader { geometry in
ZStack {
// Video Player
if let videoURL = event.fullVideoPath(),
FileManager.default.fileExists(atPath: videoURL.path) {
VideoPlayer(player: player)
.frame(width: geometry.size.width, height: geometry.size.height)
.onAppear {
initializePlayer(with: videoURL)
handlePlayback()
}
.onChange(of: isActive) { _ in
handlePlayback()
}
.onDisappear {
cleanupPlayer()
}
} else {
// Error Placeholder
Rectangle()
.fill(Color.black.opacity(0.8))
.frame(width: geometry.size.width, height: geometry.size.height)
.overlay(
Text("Unable to play video")
.foregroundColor(.white)
.font(.headline)
)
}
// Gradient Overlay at the Top
VStack {
LinearGradient(
gradient: Gradient(colors: [.black.opacity(0.7), .clear]),
startPoint: .top,
endPoint: .bottom
)
.frame(height: 150)
Spacer()
}
.edgesIgnoringSafeArea(.top)
// Event Title + Subtitle
VStack(alignment: .leading, spacing: 4) {
Text(event.name ?? "Unknown Event")
.font(.title2)
.fontWeight(.bold)
.foregroundColor(.white)
Text("\(event.genre ?? "Genre") • \(event.time ?? "Time")")
.font(.subheadline)
.foregroundColor(.white.opacity(0.8))
}
.padding([.top, .leading], 16)
.frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .topLeading)
// Buttons at Bottom-Right
VStack(spacing: 12) {
// Like Button
ActionButton(
systemName: liveNowManager.likedEvents.contains(event.id ?? "") ? "heart.fill" : "heart",
action: { toggleLike() },
accessibilityLabel: liveNowManager.likedEvents.contains(event.id ?? "") ? "Unlike" : "Like"
)
Text("\(liveNowManager.likesCount[event.id ?? ""] ?? 0)")
.font(.caption)
.foregroundColor(.white)
// Watchlist Button
ActionButton(
systemName: watchlistManager.isInWatchlist(event: event) ? "checkmark.circle" : "plus.circle",
action: { toggleWatchlist(for: event) },
accessibilityLabel: watchlistManager.isInWatchlist(event: event) ? "Remove from Watchlist" : "Add to Watchlist"
)
// Profile Button
ActionButton(
systemName: "person.crop.circle",
action: { /* Profile Action */ },
accessibilityLabel: "Profile"
)
// Comments Button
ActionButton(
systemName: "bubble.right",
action: { onCommentButtonTapped() },
accessibilityLabel: "Comments"
)
// Location Button
ActionButton(
systemName: "mappin.and.ellipse",
action: { /* Map Action */ },
accessibilityLabel: "Location"
)
}
.padding(.trailing, 16)
.padding(.bottom, 75)
.frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .bottomTrailing)
}
}
}
// MARK: - Player Management
private func initializePlayer(with videoURL: URL) {
if player == nil {
player = AVPlayer(url: videoURL)
}
}
private func handlePlayback() {
guard let player = player else { return }
if isActive {
player.play()
} else {
player.pause()
}
}
private func cleanupPlayer() {
player?.pause()
player = nil
}
// MARK: - Actions
private func toggleWatchlist(for event: CustomEvent) {
if watchlistManager.isInWatchlist(event: event) {
watchlistManager.removeFromWatchlist(event: event)
} else {
watchlistManager.addToWatchlist(event: event)
}
}
private func toggleLike() {
liveNowManager.toggleLike(for: event.id ?? "")
}
}
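For reference, here is a sketch of one way a parent feed view could drive isActive so that only the card currently snapped into view plays. It assumes iOS 17's scrollPosition(id:) and paging scroll-target APIs together with the LiveEventCard above; LiveFeedView and events are placeholder names, not from the original post:

```swift
import SwiftUI

struct LiveFeedView: View {
    let events: [CustomEvent]
    @State private var activeIndex: Int? = 0

    var body: some View {
        ScrollView(.vertical) {
            LazyVStack(spacing: 0) {
                ForEach(Array(events.enumerated()), id: \.offset) { index, event in
                    LiveEventCard(
                        event: event,
                        isActive: activeIndex == index,   // only one card is ever active
                        onCommentButtonTapped: {}
                    )
                    .containerRelativeFrame(.vertical)    // one full-screen card per page
                    .id(index)
                }
            }
            .scrollTargetLayout()
        }
        .scrollTargetBehavior(.paging)
        .scrollPosition(id: $activeIndex)                 // updates as the user snaps between cards
    }
}
```

Because isActive flips to false as soon as the scroll position moves to the next index, the onChange(of: isActive) handler in LiveEventCard pauses the previous player immediately instead of waiting for onDisappear.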
I have an AVPlayer with an AVPictureInPictureController. Playing video in the app works, and Picture in Picture works, except in one situation: if I pause the video in the application and then switch to the background, PiP does not activate. What am I doing wrong? (See the sketch after the code below.)
import UIKit
import AVKit
import AVFoundation
class ViewControllerSec: UIViewController,AVPictureInPictureControllerDelegate {
var pipPlayer: AVPlayer!
var avCanvas : UIView!
var pipCanvas: AVPlayerLayer?
var pipController: AVPictureInPictureController!
var mainViewControler : UIViewController!
var playerItem : AVPlayerItem!
var videoAvasset : AVAsset!
public func link(to parentViewController : UIViewController) {
mainViewControler = parentViewController
setup()
}
@objc func appWillResignActiveNotification(application: UIApplication) {
guard let pipController = pipController else {
print("PiP not supported")
return
}
print("PIP isSuspend: \(pipController.isPictureInPictureSuspended)")
print("PIP isPossible: \(pipController.isPictureInPicturePossible)"
if playerItem.status == .readyToPlay {
if pipPlayer.rate == 0 {
pipPlayer.play()
}
pipController.startPictureInPicture() // Error in log: Failed to start picture in picture.
} else {
print("Player not ready for PiP.")
}
}
private func setupAudio() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .moviePlayback)
try session.setActive(true)
} catch {
print("Audio session setup failed: \(error.localizedDescription)")
}
}
@objc func playerItemDidFailToPlayToEnd(_ notification: Notification) {
if let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error {
print("Failed to play to end: \(error.localizedDescription)")
}
}
func setup() {
setupAudio()
guard let videoURL = URL(string: "https://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.mp4/.m3u8") else { return }
videoAvasset = AVAsset(url: videoURL)
playerItem = AVPlayerItem(asset: videoAvasset)
addPlayerObservers()
pipPlayer = AVPlayer(playerItem: playerItem)
avCanvas = UIView(frame: view.bounds)
pipCanvas = AVPlayerLayer(player: pipPlayer)
guard let pipCanvas else { return }
pipCanvas.frame = avCanvas.bounds
//pipCanvas.videoGravity = .resizeAspectFill
mainViewControler.view.addSubview(avCanvas)
avCanvas.layer.addSublayer(pipCanvas)
if AVPictureInPictureController.isPictureInPictureSupported() {
pipController = AVPictureInPictureController(playerLayer: pipCanvas)
pipController?.delegate = self
pipController?.canStartPictureInPictureAutomaticallyFromInline = true
}
let playButton = UIButton(frame: CGRect(x: 20, y: 50, width: 100, height: 50))
playButton.setTitle("Play", for: .normal)
playButton.backgroundColor = .blue
playButton.addTarget(self, action: #selector(playTapped), for: .touchUpInside)
mainViewControler.view.addSubview(playButton)
let pauseButton = UIButton(frame: CGRect(x: 140, y: 50, width: 100, height: 50))
pauseButton.setTitle("Pause", for: .normal)
pauseButton.backgroundColor = .red
pauseButton.addTarget(self, action: #selector(pauseTapped), for: .touchUpInside)
mainViewControler.view.addSubview(pauseButton)
let pipButton = UIButton(frame: CGRect(x: 260, y: 50, width: 150, height: 50))
pipButton.setTitle("Start PiP", for: .normal)
pipButton.backgroundColor = .green
pipButton.addTarget(self, action: #selector(startPictureInPicture), for: .touchUpInside)
mainViewControler.view.addSubview(pipButton)
print("Error:\(String(describing: pipPlayer.error?.localizedDescription))")
NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: nil) { [weak self] _ in
guard let self = self else { return }
if self.pipPlayer.rate == 0 {
self.pipPlayer.play()
pipController?.startPictureInPicture()
}
}
} // end of setup()
func addPlayerObservers() {
playerItem?.addObserver(self, forKeyPath: "status", options: [.old, .new], context: nil)
NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(_:)), name: .AVPlayerItemDidPlayToEndTime, object: playerItem)
}
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if keyPath == "status" {
if let statusNumber = change?[.newKey] as? NSNumber {
let status = AVPlayer.Status(rawValue: statusNumber.intValue)!
switch status {
case .readyToPlay:
print("Player is ready to play")
case .failed:
print("Player failed: \(String(describing: playerItem?.error))")
case .unknown:
print("Player status is unknown")
@unknown default:
fatalError()
}
}
}
}
@objc func playerDidFinishPlaying(_ notification: Notification) {
print("Video finished playing.")
}
deinit {
playerItem?.removeObserver(self, forKeyPath: "status")
NotificationCenter.default.removeObserver(self)
}
@objc func playTapped() {
pipPlayer.play()
}
@objc func pauseTapped() {
pipPlayer.pause()
}
@objc func startPictureInPicture() {
if let pipController = pipController, !pipController.isPictureInPictureActive {
pipController.startPictureInPicture()
}
}
@objc func stopPictureInPicture() {
if let pipController = pipController, pipController.isPictureInPictureActive {
pipController.stopPictureInPicture()
}
}
func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: Error) {
print("Failed to start PiP: \(error.localizedDescription)")
if let underlyingError = (error as NSError).userInfo[NSUnderlyingErrorKey] {
print("Underlying error: \(underlyingError)")
}
}
}
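Not a confirmed fix, but a commonly suggested pattern for this situation is to rely on automatic PiP (canStartPictureInPictureAutomaticallyFromInline) with playback already running when the app resigns active, rather than calling startPictureInPicture() from didEnterBackgroundNotification, which typically arrives too late for PiP to start. A sketch under that assumption, reusing pipPlayer and pipController from the code above:

```swift
// In setup(): let the system start PiP automatically on backgrounding.
// Requires an active .playback audio session and a playing player.
pipController?.canStartPictureInPictureAutomaticallyFromInline = true

// If the user paused playback, resume it before the app resigns active;
// automatic PiP generally does not engage for a paused player.
NotificationCenter.default.addObserver(
    forName: UIApplication.willResignActiveNotification,
    object: nil,
    queue: .main
) { [weak self] _ in
    guard let self, self.pipPlayer.rate == 0 else { return }
    self.pipPlayer.play()
}
```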
I would like to integrate the Object Capture API with an ML model for analysis, so I need to get the current frame as a CGImage for further processing.
Thanks in advance!
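A minimal sketch of one common way to turn a captured frame, assumed to be available as a CVPixelBuffer, into a CGImage for further ML processing (how the frame is pulled out of the Object Capture session is not covered here):

```swift
import CoreImage
import CoreVideo

// Convert a camera/capture frame (CVPixelBuffer) into a CGImage.
func makeCGImage(from pixelBuffer: CVPixelBuffer,
                 context: CIContext = CIContext()) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return context.createCGImage(ciImage, from: ciImage.extent)
}
```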
Can anyone explain how AVAssetExportSession works in iOS 18 and earlier versions?
Here we are trying to change the cookie every 120 seconds while playing. In Apple's AVPlayer we can't modify a cookie after initialization, so we followed the approach of using a resource loader delegate to pass the cookie as a header value.
What I notice is that the playlist file (.m3u8) gets downloaded correctly, and some chunks of the media file (.m4a) also get downloaded. I know that the .ts file is downloaded because I can see the GET request completing on the web server with status 200. I also set a breakpoint at the following line:
loadingRequest.dataRequest?.respond(with: data)
and immediately got the following error from the AVPlayer status:
"The operation could not be completed. An unknown error occurred (-12881)" from Core Media.
I need confirmation on why I am unable to load HLS using the resource loader, and whether it is possible to update the cookie value continuously while playing on AVPlayer. (See the sketch after the code below.)
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
let urlString = "localhost://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.ism/.m3u8"
guard let url = URL(string: urlString) else {
print("Invalid URL")
return
}
//Create cookie to prepare for player asset
let cookie = HTTPCookie(properties: [
.name: "dazn-token",
.value: "cookie value",
.domain: url.host() ?? "",
.path: "/",
.discard: true
])
//Create cookie key to set AVURLAsset
let options = [AVURLAssetHTTPCookiesKey: [cookie]]
let asset = AVURLAsset(url: url,options: options)
proxy = ReverseProxyResourceLoader()
proxy?.cookie = "exampleCookie"
// Set resource loader delegate to moniter the chunks
asset.resourceLoader.setDelegate(proxy, queue: DispatchQueue.global())
// Load asset keys asynchronously (e.g., "playable")
let keys = ["playable"]
// Initialize the AVPlayer with the URL
let playerItem = AVPlayerItem(asset: asset)
self.player = AVPlayer(playerItem: playerItem)
playerItem.addObserver(self, forKeyPath: "status", options: [.new, .initial], context: nil)
// Observe 'error' property (if needed)
playerItem.addObserver(self, forKeyPath: "error", options: [.new], context: nil)
let contentKeySessionDelegate = ContentKeyDelegate()
// Initialize AVContentKeySession
let contentKeySession = AVContentKeySession(keySystem: .clearKey)
self.contentKeySession = contentKeySession
contentKeySession.setDelegate(contentKeySessionDelegate, queue: DispatchQueue.main)
// Associate the asset with the content key session
contentKeySession.addContentKeyRecipient(asset)
// Create a layer for the AVPlayer and add it to the view
playerLayer = AVPlayerLayer(player: player)
playerLayer?.frame = view.bounds
playerLayer?.videoGravity = .resizeAspect
if let playerLayer = playerLayer {
view.layer.addSublayer(playerLayer)
}
NotificationCenter.default.addObserver(
self,
selector: #selector(playerDidFinishPlaying),
name: .AVPlayerItemDidPlayToEndTime,
object: player?.currentItem
)
// Start playback
player?.play()
}
// Update cookie when ever needed
func updateCookie() {
proxy?.cookie = "update exampleCookie"
}
@objc private func playerDidFinishPlaying(notification: Notification) {
print("Playback finished!")
// Optionally, handle end-of-playback actions here
}
//
// ReverseProxyResourceLoader.swift
// HLSDemo
//
// Created by Gajje.Venkatarao on 12/12/24.
//
import Foundation
import AVKit
import AVFoundation
class ReverseProxyResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
var cookie = ""
func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
resourceLoader.preloadsEligibleContentKeys = true
guard let interceptedURL = loadingRequest.request.url else {
loadingRequest.finishLoading(with: NSError(domain: "ReverseProxy", code: -1, userInfo: [NSLocalizedDescriptionKey: "Invalid URL"]))
return false
}
if interceptedURL.scheme == "skd" {
print("Token updated Cookie:", interceptedURL )
return false
}
var components = URLComponents(url: interceptedURL, resolvingAgainstBaseURL: false)
components?.scheme = "https" // Replace with the original scheme
guard let originalURL = components?.url else {
loadingRequest.finishLoading(with: NSError(domain: "ReverseProxy", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to map URL"]))
loadingRequest.finishLoading()
return false
}
var request = URLRequest(url: originalURL)
request.httpMethod = "GET"
if let storeCoockie = HTTPCookie(properties: [
.name: "dazn-token",
.value: cookie,
.domain: originalURL.host ?? "",
.path: "/",
.discard: true
]){
HTTPCookieStorage.shared.setCookie(storeCoockie)
}
let headers = loadingRequest.request.allHTTPHeaderFields ?? [:]
for (key, value) in headers {
request.addValue(value, forHTTPHeaderField: key)
}
request.addValue(cookie, forHTTPHeaderField: "Cookie")
URLSession.shared.configuration.httpShouldSetCookies = true
request.httpShouldHandleCookies = true
let task = (URLSession.shared.dataTask(with: originalURL) { data, response, error in
if let error = error {
print("Error Received:", error)
loadingRequest.finishLoading(with: error)
return
}
print(originalURL)
guard let data = data , let url = response?.url else {
loadingRequest.finishLoading(with: NSError(domain: "ReverseProxy", code: -1, userInfo: [NSLocalizedDescriptionKey: "No data received"]))
return
}
loadingRequest.dataRequest?.respond(with: data)
loadingRequest.finishLoading()
} as URLSessionDataTask)
task.resume()
return true
}
}
Example project
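Not an authoritative diagnosis of -12881, but one thing commonly checked with AVAssetResourceLoaderDelegate-based playback is that contentInformationRequest is filled in before responding with data. A sketch of that step, assuming the same loadingRequest, response and data from the dataTask completion above:

```swift
import UniformTypeIdentifiers

// Fill in content information before handing data back to the loading request.
if let infoRequest = loadingRequest.contentInformationRequest,
   let httpResponse = response as? HTTPURLResponse {
    // contentType expects a UTI, so map the response MIME type if one is available.
    if let mime = httpResponse.mimeType, let uti = UTType(mimeType: mime) {
        infoRequest.contentType = uti.identifier
    }
    infoRequest.contentLength = httpResponse.expectedContentLength
    infoRequest.isByteRangeAccessSupported = true
}
loadingRequest.dataRequest?.respond(with: data)
loadingRequest.finishLoading()
```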
I've seen the Multiview feature on tvOS that displays a small grid icon when available. However, I only see this functionality in visionOS using the AVMultiviewManager. Does a different name refer to this feature on tvOS?
Relevant Links:
https://www.reddit.com/r/appletv/comments/12opy5f/handson_with_the_new_multiview_split_screen/
https://www.pocket-lint.com/how-to-use-multiview-apple-tv/#:~:text=You'll%20see%20a%20grid,running%20at%20the%20same%20time.
Title: Unable to Access Microphone in Control Center Widget – Is It Possible?
Hello everyone,
I'm attempting to create a widget in the Control Center that accesses the microphone, similar to how Shazam does it. However, I'm running into an issue where the widget always prints "Microphone permission denied." It's worth mentioning that microphone access works fine when I'm using the app itself.
Here's the code I'm using in the widget:
func startRecording() async {
logger.info("Starting recording...")
print("Starting recording...")
recognizedText = ""
isFinishingRecognition = false
// First, check speech recognition authorization
let speechAuthStatus = await withCheckedContinuation { continuation in
SFSpeechRecognizer.requestAuthorization { status in
continuation.resume(returning: status)
}
}
guard speechAuthStatus == .authorized else {
logger.error("Speech recognition not authorized")
return
}
// Then, request microphone permission using our manager
let micPermission = await AudioSessionManager.shared.requestMicrophonePermission()
guard micPermission else {
logger.error("Microphone permission denied")
print("Microphone permission denied")
return
}
// Continue with recording...
}
Issues:
The code consistently prints "Microphone permission denied" when run from the widget.
Microphone access works without issues when the same code is executed from within the app.
Questions:
Is it possible for a Control Center widget to access the microphone?
If yes, what might be causing the "Microphone permission denied" error in the widget?
Are there additional permissions or configurations required to enable microphone access in a widget?
Any insights or suggestions would be greatly appreciated!
Thank you.
I was able to obtain the depth map image using AVCapturePhotoOutput from the delegate method
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
I convert the depth map to kCVPixelFormatType_DepthFloat32 format and get the pixel values of the depth map using the below code
func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
let width = CVPixelBufferGetWidth(depthMap)
let height = CVPixelBufferGetHeight(depthMap)
var convertedDepthMap: [[Float32]] = Array(
repeating: Array(repeating: 0, count: width),
count: height
)
CVPixelBufferLockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
let floatBuffer = unsafeBitCast(
CVPixelBufferGetBaseAddress(depthMap),
to: UnsafeMutablePointer<Float32>.self
)
for row in 0 ..< height {
for col in 0 ..< width {
if floatBuffer[width * row + col].isFinite{
convertedDepthMap[row][col] = floatBuffer[width * row + col]
}
}
}
CVPixelBufferUnlockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
return convertedDepthMap
}
Is this the right way of accessing the depth float values from a depth map, and what is their unit? Sometimes the depth values are around 0.7 when I keep the device close to the subject, around 15 to 30 cm away.
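On the unit question: DepthFloat32 maps hold distances in meters, while DisparityFloat32 maps hold disparity (1/meters), so it's worth confirming which representation is actually being read. A short sketch, assuming the AVDepthData delivered to the photoOutput callback mentioned above:

```swift
// Convert (if necessary) to a depth representation before reading values.
if let depthData = photo.depthData {
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    // Values in this buffer are distances in meters.
    let meters = convertDepthData(depthMap: depth.depthDataMap)
}
```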
I'm trying to capture the depth map image using the TrueDepth camera on an iPhone 15 Plus. I was able to set up the AVCaptureSession with an AVCaptureDeviceInput for builtInTrueDepthCamera and an AVCapturePhotoOutput with isDepthDataDeliveryEnabled set to true. I also manually set the activeDepthDataFormat of the capture device to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32. Finally, I enabled isDepthDataDeliveryEnabled, embedsDepthDataInPhoto, embedsPortraitEffectsMatteInPhoto and embedsSemanticSegmentationMattesInPhoto in AVCapturePhotoSettings before capturing the photo using the capturePhoto(with: photoSettings, delegate: self) method.
I checked by manually printing the activeDepthDataFormat of the capture device. Before setting it, the default is
Optional('dpth'/'hdis' 640x 480, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
After forcing it to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32, the format is
Optional('dpth'/'hdep' 160x 120, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
But when I receive the captured photo in
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
the depth map is
Optional(hdis 640x480 (high/abs) calibration:{intrinsicMatrix: [2723.07 0.00 2016.00 | 0.00 2723.07 1512.00 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm, distortionCenter:{2016.00,1512.00}, ref:{4032x3024}})
Here it shows hdis instead of hdep. Why is it capturing a disparity map instead of a true depth map?
The depth quality is high and the depth data accuracy is absolute.
Here is my code
import UIKit
import AVKit
import AVFoundation
class ViewController: UIViewController, AVCapturePhotoCaptureDelegate {
@IBOutlet weak var previewView: UIView!
@IBOutlet weak var resultLbl: UILabel!
private var session = AVCaptureSession()
private var captureDevice: AVCaptureDevice?
private var inputDevice: AVCaptureDeviceInput?
private var photoOutput: AVCapturePhotoOutput?
private var photoSettings: AVCapturePhotoSettings?
private var cameraPreviewLayer: AVCaptureVideoPreviewLayer?
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
self.setupCaptureSession()
}
func setupCaptureSession(){
captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .unspecified)
guard let captureDevice else{
print("ERROR::UNABLE TO SET TRUE DEPTH CAMERA ")
return }
session.beginConfiguration()
do{
inputDevice = try AVCaptureDeviceInput(device: captureDevice)
guard let inputDevice else{
print("ERROR: UNABLE TO SET UP INPUT DEVICE")
return }
if session.canAddInput(inputDevice){
session.addInput(inputDevice)
}
}
catch{
print(error)
}
photoOutput = AVCapturePhotoOutput()
guard let photoOutput else{
print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
return }
if session.canAddOutput(photoOutput){
session.addOutput(photoOutput)
}
session.sessionPreset = .photo
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
print("IS DEPTH ENABLED:: \(photoOutput.isDepthDataDeliveryEnabled)")
session.commitConfiguration()
let availableFormats = captureDevice.activeFormat.supportedDepthDataFormats
let depthFormat = availableFormats.filter { format in
let pixelFormatType =
CMFormatDescriptionGetMediaSubType(format.formatDescription)
return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
pixelFormatType == kCVPixelFormatType_DepthFloat32)
}.first
session.beginConfiguration()
try! captureDevice.lockForConfiguration()
captureDevice.activeDepthDataFormat = depthFormat
captureDevice.unlockForConfiguration()
session.commitConfiguration()
self.setupPreviewLayer()
}
func setupPreviewLayer(){
cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
cameraPreviewLayer?.videoGravity = .resizeAspectFill
if let cameraPreviewLayer{
self.previewView.layer.addSublayer(cameraPreviewLayer)
cameraPreviewLayer.frame = self.previewView.bounds
}
DispatchQueue.global(qos: .userInteractive).async {
self.session.startRunning()
}
}
@IBAction func captureBtnPressed(_ sender: Any) {
photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey:AVVideoCodecType.jpeg])
guard let photoSettings else{
print("ERROR: UNABLE TO SETUP PHOTO SETTINGS")
return
}
guard let photoOutput else{
print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
return
}
photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
photoSettings.embedsDepthDataInPhoto = true
photoSettings.embedsPortraitEffectsMatteInPhoto = true
photoSettings.embedsSemanticSegmentationMattesInPhoto = true
photoOutput.capturePhoto(with: photoSettings, delegate: self)
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
print(photo.depthData)
switch photo.depthData?.depthDataQuality {
case .low:
print("Depth quality is low")
case .high:
print("Depth quality is high")
case nil:
print("Depth quality is nil")
}
switch photo.depthData?.depthDataAccuracy {
case .relative:
print("Depth accuarcy is relative")
case .absolute:
print("Depth accuarcy is absolute")
case nil:
print("Depth accuarcy is nil")
}
if let imageData = photo.fileDataRepresentation(){
if let image = UIImage(data: imageData){
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
}
}
}
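Separately from the hdis/hdep question, one detail in the format-selection code above: filter { }.first can land on a low-resolution depth format, which matches the 160x120 'hdep' format printed earlier. A sketch that prefers the largest matching format instead, assuming the same captureDevice:

```swift
// Pick the highest-resolution DepthFloat format rather than the first match.
let depthFormats = captureDevice.activeFormat.supportedDepthDataFormats.filter {
    let type = CMFormatDescriptionGetMediaSubType($0.formatDescription)
    return type == kCVPixelFormatType_DepthFloat16 || type == kCVPixelFormatType_DepthFloat32
}
let bestDepthFormat = depthFormats.max { a, b in
    CMVideoFormatDescriptionGetDimensions(a.formatDescription).width <
    CMVideoFormatDescriptionGetDimensions(b.formatDescription).width
}
```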
I'm trying to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I have checked the following questions but still can't find a proper working solution.
I trained a Core ML model using 22,000 depth human-face images and 22,000 non-face (objects, food, etc.) images. The accuracy of the model is very low.
When testing with flat 2D images shown on a smartphone screen, I found that I get a depth map even for flat 2D images. Even though the image is flat, how does it produce a depth map for the person shown in the flat 2D picture, such that the model thinks it is a real face instead of a spoofed one?
I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map:
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth
My next approach was to use the NCNN framework to implement anti-spoofing, using the model from the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using the Objective-C++ wrapper for C++, since the sample was only available as an Android app. When I tested it by feeding an 80x80 UIImage in an OpenCV matrix format, its accuracy is lower than the Android one.
How can I solve this problem?
I plan to create a simple motion graphics software for macOS that animates text, basic shapes, and handles audio. I'll use SwiftUI for the UI.
What are the commonly used technologies for rendering animated graphics? Core Animation is suitable for UI animations but not for exporting and controlling UI animations.
Basic requirements:
Timeline user interface
Animation of text and basic shapes
Viewer in SwiftUI GUI with transport control (play, pause, scrub, …)
Export to video file
Is Metal or Core Graphics typically used directly? I want to keep it as simple as possible.
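As one illustration of the export requirement, frames drawn with Core Graphics can be handed to AVAssetWriter; this is only a sketch of that approach (output URL, size, frame rate and the drawn content are placeholders), not a recommendation over Metal:

```swift
import AVFoundation
import CoreGraphics

// Render Core Graphics frames into an H.264 movie file.
func exportAnimation(to url: URL,
                     size: CGSize = CGSize(width: 1280, height: 720),
                     frameCount: Int = 120,
                     fps: Int32 = 30) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height)
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
            kCVPixelBufferWidthKey as String: Int(size.width),
            kCVPixelBufferHeightKey as String: Int(size.height)
        ])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for frame in 0..<frameCount {
        while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }
        guard let pool = adaptor.pixelBufferPool else { break }
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer)
        guard let buffer = pixelBuffer else { break }

        CVPixelBufferLockBaseAddress(buffer, [])
        if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                   width: Int(size.width), height: Int(size.height),
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                   space: CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) {
            // Placeholder drawing: black background with a rectangle that moves each frame.
            context.setFillColor(CGColor(gray: 0, alpha: 1))
            context.fill(CGRect(origin: .zero, size: size))
            context.setFillColor(CGColor(red: 1, green: 0.5, blue: 0, alpha: 1))
            context.fill(CGRect(x: CGFloat(frame) * 4, y: 300, width: 200, height: 120))
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])

        adaptor.append(buffer, withPresentationTime: CMTime(value: CMTimeValue(frame), timescale: fps))
    }

    input.markAsFinished()
    writer.finishWriting { }   // in real code, wait for this completion before using the file
}
```

A similar split (one drawing routine used both by an on-screen preview and by the exporter) is a common way to keep the viewer and the exported file in sync.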
Hey. I am trying to create a presentation view with a bunch of media (images/videos). Right now I am using a ZStack to render each media item and change opacity based on the index selected using a ScrollView. The issue is that sometimes videos don't load in the main slide: a slide is created because the video exists, and the player shows controls, but it doesn't play anything.
Present View Z-Stack
ZStack {
ForEach(presentation.slides.indices, id: \.self) { index in
if let media = mediaCacheManager.mediaCache[index] {
if let player = media as? AVPlayer {
PlayerView(player: player)
.aspectRatio(16/10, contentMode: .fit )
.frame(width: UIScreen.main.bounds.width * 0.8)
.background(Color.gray.opacity(0.2))
.clipShape(RoundedRectangle(cornerRadius: 40))
.overlay(
RoundedRectangle(cornerRadius: 40)
.stroke(Color.gray.opacity(0.5), lineWidth: 1)
)
.onDisappear {
player.pause()
}
.opacity(appModel.currentSlide == index ? 1 : 0)
} else if let image = media as? Image {
image
.resizable()
.scaledToFit()
.frame(width: UIScreen.main.bounds.width * 0.8)
.background(Color.gray.opacity(0.2))
.clipShape(RoundedRectangle(cornerRadius: 40))
.overlay(
RoundedRectangle(cornerRadius: 40)
.stroke(Color.gray.opacity(0.5), lineWidth: 1)
)
.padding(.vertical, 10)
.opacity(appModel.currentSlide == index ? 1 : 0)
}
}
}
}
The PlayerView
public class PlayerUIView: UIView {
let playerVC = AVPlayerViewController()
let gravity: AVLayerVideoGravity
let manageAudio: Bool
override init(frame: CGRect) {
self.gravity = .resizeAspectFill
self.manageAudio = true
super.init(frame: frame)
}
deinit {
if manageAudio {
try? AVAudioSession.sharedInstance().setActive(false)
}
}
init(player: AVPlayer?, gravity: AVLayerVideoGravity, manageAudio: Bool = true) {
self.gravity = gravity
self.manageAudio = manageAudio
super.init(frame: .zero)
guard let player = player else { return }
self.playerSetup(player: player)
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
public override func layoutSubviews() {
super.layoutSubviews()
playerVC.view.frame = bounds
playerVC.view.backgroundColor = .clear
playerVC.allowsVideoFrameAnalysis = false
}
private func playerSetup(player: AVPlayer) {
playerVC.updatesNowPlayingInfoCenter = true
playerVC.player = player
playerVC.showsPlaybackControls = true
playerVC.view.backgroundColor = .clear
playerVC.exitsFullScreenWhenPlaybackEnds = true
playerVC.videoGravity = gravity
self.addSubview(playerVC.view)
}
}
There are significant numbers of crash reports coming from iOS 18 users regarding the AVKit framework, starting from the line [AVPlayerController _observeValueForKeyPath:oldValue:newValue:], which seems to come from Apple's internal SDK. There are two kinds of crashes we found:
UI modification on a background thread
From the stack trace, it seems that when AVPictureInPictureController is being deallocated and its view is being removed from its superview, the code is somehow executed on a background thread, because the line _AssertAutoLayoutOnAllowedThreadsOnly is highlighted before the crash.
But I've checked our code around AVPictureInPictureController, and in the locations where we deallocate the object it is always called on the main thread, inside viewDidLoad and deinit of a UIViewController subclass. From the log, it seems the crash happens when the user tries to open another content while the PiP player is active, so the current PiP instance is replaced with a new one. My suspicion is that the observation logic inside AVPlayerController could be the hint to this issue; probably something is broken there, since this issue happens across our app versions and only for iOS 18 users.
Unfortunately, I have been unable to reproduce this issue yet, but one of my colleagues reproduced it once and hasn't been able to do it again since. The reports keep rising each day, up to 1.3k events in the last 30 days now.
Over-released object
This one has fewer reports than the first one, but I decided to include it since it might contain relevant information regarding the first crash, as the starting stack trace is similar. The crash timing also seems similar: we deallocate the existing AVPictureInPictureController and later replace it with a new one. It is likewise found only on iOS 18 and also refers to [AVPlayerController _observeValueForKeyPath:oldValue:newValue:]. I have been unable to reproduce this issue so far as well.
Both issues happen on both iPhone and iPad.
We'd appreciate any advice on what we can do to avoid this in the future, and any hint on why it could happen.
I have reported this issue with bug number: FB15620734
I also attached one sample crash report for each of the crashes here.
non ui thread access.crash
over release.crash
Observing 4K playback issues on tvOS 18. We are encountering HTTP 416 (Range Not Satisfiable) errors when the player requests byte ranges that fall outside the data available on the server. This leads to a fatal playback error:
CoreMediaErrorDomain error -12939 - HTTP 416: Requested Range Not Satisfiable
Notably, there are no customizations or modifications to the standard AVPlayerViewController in the tvOS player.
AVPlayer requests a resource of length 739 bytes with an invalid byte-range request (739-). Since the request is not satisfiable, the server returns 416. Note that this is limited to tvOS 18, and we are trying to understand why AVPlayer makes this invalid request on tvOS 18, resulting in the playback error.
In iOS, when I use AVPlayerViewController to play back a slow motion video, it has a "ramp-up" stage at the start and a "ramp-down" stage at the end, and the video plays at the normal speed (i.e. not slow motion) during these stages.
My question is: are these non-slow-motion stages defined in the video file itself (e.g. some kind of metadata)? Or is it just a playback approach used by AVPlayerViewController?
Thanks!
When the native info panel (which displays the title, subtitle, description, and custom buttons) opens, the focus immediately shifts to the first button. As a result, VoiceOver skips the description, which is crucial for users relying on accessibility features.
I haven’t found a way to detect when it opens. Knowing this would allow me to trigger custom VoiceOver announcements or adjust the focus order dynamically.
Is anyone else experiencing this issue, and how do we solve it?