Hello,
I am wondering if it is possible to send the audio from my AirPods to my speech-to-text service while, at the same time, the built-in mic's audio input is used for recording a video.
I ask because I want my users to be able to say "CAPTURE" so that I start recording a video (with audio from the built-in mic), and then stop the recording when the user says "STOP".
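For context, here is a minimal sketch of the voice-command half only (assuming SFSpeechRecognizer for the "CAPTURE"/"STOP" detection; whether its input can come from the AirPods while the video recording uses the built-in mic is exactly the open question):
import Speech
import AVFoundation

// Sketch only: listens to the current audio input and fires callbacks when "CAPTURE"
// or "STOP" is heard. Names are illustrative, not from an existing project.
// Requires speech-recognition and microphone permission.
final class VoiceCommandListener {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start(onCapture: @escaping () -> Void, onStop: @escaping () -> Void) throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true
        self.request = request

        let input = audioEngine.inputNode
        input.installTap(onBus: 0, bufferSize: 1024,
                         format: input.outputFormat(forBus: 0)) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer?.recognitionTask(with: request) { result, _ in
            guard let text = result?.bestTranscription.formattedString.uppercased() else { return }
            if text.contains("CAPTURE") { onCapture() }
            if text.contains("STOP") { onStop() }
        }
    }
}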
We're distributing a virtual camera with our app that does not benefit in the slightest from automatically applied system video effects, either on the video coming in (physical camera device) or on the video going out (virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and of determining active video effects via the AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX.
To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I'm seeing properties like
AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats I was hoping to find something in the extensions, but wasn't successful so far.
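For reference, the detection crutch mentioned above looks roughly like this (a sketch using the class-level AVCaptureDevice toggles; it only reports whether the user has an effect switched on, it cannot disable anything):
import AVFoundation

// Sketch of detection only: these class-level properties reflect the user-controlled
// system toggles, while Format.isPortraitEffectSupported etc. only report whether a
// format is capable of the effect at all.
func systemVideoEffectsCurrentlyEnabled() -> [String] {
    var active: [String] = []
    if AVCaptureDevice.isPortraitEffectEnabled { active.append("Portrait") }
    if AVCaptureDevice.isStudioLightEnabled { active.append("Studio Light") }
    if AVCaptureDevice.isCenterStageEnabled { active.append("Center Stage") }
    if AVCaptureDevice.reactionEffectsEnabled { active.append("Reactions") }
    return active
}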
Can this be done?
Environment
Device: iPhone 16e
iOS Version: 18.4.1 - 18.7.1
Framework: AVFoundation (AVAudioEngine)
Problem Summary
On iPhone 16e (iOS 18.4.1-18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.
Expected Behavior
After a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.
Actual Behavior
After resuming from phone call interruption:
Tap callback is no longer invoked
No audio data is captured
No errors are thrown
Engine appears to be running normally
Note: Normal pause/resume (without phone call interruption) works correctly.
Steps to Reproduce
Start audio recording on iPhone 16e
Receive or make a phone call (triggers AVAudioSession interruption)
End the phone call
Resume recording with audioEngine.start()
Result: Tap callback is not invoked
Tested devices:
iPhone 16e (iOS 18.4.1-18.7.1): Issue reproduces ✗
iPhone 14 (iOS 18.x): Works correctly ✓
iPhone SE 3 (iOS 18.x): Works correctly ✓
Code
Initial Setup (Works)
let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()
Interruption Handling
NotificationCenter.default.addObserver(
forName: AVAudioSession.interruptionNotification,
object: AVAudioSession.sharedInstance(),
queue: nil
) { notification in
guard let userInfo = notification.userInfo,
let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
return
}
if type == .began {
self.audioEngine.pause()
} else if type == .ended {
try? self.audioSession.setActive(true)
try? self.audioEngine.start()
// Tap callback doesn't work after this on iPhone 16e
}
}
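For reference, a variant of the .ended handling that also honors the system's shouldResume hint (a sketch assuming the same audioSession/audioEngine properties as above; I have not confirmed it changes anything on the 16e):
func handleInterruptionEnded(_ notification: Notification) {
    // Only restart if the system says resumption is appropriate
    if let optionsValue = notification.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt {
        let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
        guard options.contains(.shouldResume) else { return }
    }
    try? audioSession.setActive(true)
    try? audioEngine.start()
}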
Workaround
Full engine restart is required on iPhone 16e:
func resumeAfterInterruption() {
audioEngine.stop()
inputNode.removeTap(onBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try? audioSession.setActive(true)
try? audioEngine.start()
}
This works but adds latency and complexity compared to simple resume.
Questions
Is this expected behavior on iPhone 16e?
What is the recommended way to handle phone call interruptions?
Why does this only affect iPhone 16e and not iPhone 14 or SE 3?
Any guidance would be appreciated!
With older iOS versions, when the user tapped the Mute/Volume button on AVPlayerViewController to unmute, the system restored the device volume to the level it had before the user muted.
On iOS 26, when the user taps the unmute button on screen, the volume starts from 0 instead of being restored (it is still restored if the user unmutes by pressing the physical volume buttons).
As I understand it, the volume bar/button on AVPlayerViewController is an MPVolumeView, which I cannot control, so this is system behavior.
But I have received complaints that this is a bug, and I did not find documentation describing this change in the Mute button's behavior.
I need some basis to explain this situation. Thank you.
Hi,
I'm working on a project that uses the AVSpeechSynthesizer and AVSpeechUtterance.
I discovered by chance that AVSpeechSynthesizer automatically expands some words instead of just speaking the text it is given.
These are abbreviations for days of the week or months, but not all of them. I don't want any of them expanded automatically; I want only the specified text spoken. The expansion happens regardless of language.
I have written a short example program for demonstration purposes.
import SwiftUI
import AVFoundation
import Foundation
let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()
struct ContentView: View {
var body: some View {
VStack {
Button {
utter("mon")
} label: {
Text("mon")
}
.buttonStyle(.borderedProminent)
Button {
utter("tue")
} label: {
Text("tue")
}
.buttonStyle(.borderedProminent)
Button {
utter("thu")
} label: {
Text("thu")
}
.buttonStyle(.borderedProminent)
Button {
utter("feb")
} label: {
Text("feb")
}
.buttonStyle(.borderedProminent)
Button {
utter("feb", lang: "de-DE")
} label: {
Text("feb DE")
}
.buttonStyle(.borderedProminent)
Button {
utter("wed")
} label: {
Text("wed")
}
.buttonStyle(.borderedProminent)
}
.padding()
}
private func utter(_ text: String, lang: String = "en-US") {
let utterance = AVSpeechUtterance(string: text)
let voice = AVSpeechSynthesisVoice(language: lang)
utterance.voice = voice
synthesizer.speak(utterance)
}
}
#Preview {
ContentView()
}
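For what it's worth, the direction I'm experimenting with (untested, and the IPA string passed in is only an illustration) is pinning the pronunciation via the IPA notation attribute so the engine's text normalization cannot expand the abbreviation:
// Untested idea, not a verified fix: supply an explicit pronunciation with the IPA
// notation attribute. Uses the same global synthesizer as above; the IPA value passed
// in is purely illustrative.
private func utterLiteral(_ text: String, ipa: String, lang: String = "en-US") {
    let attributed = NSMutableAttributedString(string: text)
    attributed.addAttribute(
        NSAttributedString.Key(rawValue: AVSpeechSynthesisIPANotationAttribute),
        value: ipa,
        range: NSRange(location: 0, length: attributed.length)
    )
    let utterance = AVSpeechUtterance(attributedString: attributed)
    utterance.voice = AVSpeechSynthesisVoice(language: lang)
    synthesizer.speak(utterance)
}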
Thank you
Christian
I'm building a Swift video editor with AVFoundation and a custom compositor. Despite setting AVVideoComposition.frameDuration to 60 FPS, I'm seeing significant frame skipping during playback.
Console Output Shows Frame Skipping
Frame #0 at 0.0 ms (fps: 60.0)
Frame #2 at 33.333333333333336 ms (fps: 60.0)
Frame #6 at 100.0 ms (fps: 60.0)
Frame #10 at 166.66666666666666 ms (fps: 60.0)
Frame #32 at 533.3333333333334 ms (fps: 60.0)
Frame #62 at 1033.3333333333335 ms (fps: 60.0)
Frame #96 at 1600.0 ms (fps: 60.0)
Instead of frames every ~16.67ms (60 FPS), I'm getting irregular intervals, sometimes 33ms, 67ms, or hundreds of milliseconds apart.
Renderer.swift (Key Parts)
@MainActor
class Renderer: ObservableObject {
@Published var playerItem: AVPlayerItem?
private let assetManager: ProjectAssetManager?
private let compositorId: String
func buildComposition() async throws {
// ... load mouse moves/clicks data ...
let composition = AVMutableComposition()
guard let videoTrack = composition.addMutableTrack(
withMediaType: .video,
preferredTrackID: kCMPersistentTrackID_Invalid
) else { return }
var currentTime = CMTime.zero
var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = []
// Insert video segments
for videoURL in videoURLs {
let asset = AVAsset(url: videoURL)
guard let assetVideoTrack = try await asset.loadTracks(withMediaType: .video).first else { continue }
let duration = try await asset.load(.duration)
try videoTrack.insertTimeRange(
CMTimeRange(start: .zero, duration: duration),
of: assetVideoTrack,
at: currentTime
)
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
let transform = try await assetVideoTrack.load(.preferredTransform)
layerInstruction.setTransform(transform, at: currentTime)
layerInstructions.append(layerInstruction)
currentTime = CMTimeAdd(currentTime, duration)
}
let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTime(value: 1, timescale: 60) // 60 FPS
// Set render size from first video
if let firstURL = videoURLs.first,
let firstTrack = try await AVAsset(url: firstURL).loadTracks(withMediaType: .video).first {
let naturalSize = try await firstTrack.load(.naturalSize)
let transform = try await firstTrack.load(.preferredTransform)
videoComposition.renderSize = CGSize(
width: abs(naturalSize.applying(transform).width),
height: abs(naturalSize.applying(transform).height)
)
}
let instruction = CompositorInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: currentTime)
instruction.layerInstructions = layerInstructions
instruction.compositorId = compositorId
videoComposition.instructions = [instruction]
videoComposition.customVideoCompositorClass = CustomVideoCompositor.self
let playerItem = AVPlayerItem(asset: composition)
playerItem.videoComposition = videoComposition
self.playerItem = playerItem
}
}
class CompositorInstruction: NSObject, AVVideoCompositionInstructionProtocol {
var timeRange: CMTimeRange = .zero
var enablePostProcessing: Bool = false
var containsTweening: Bool = false
var requiredSourceTrackIDs: [NSValue]?
var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
var layerInstructions: [AVVideoCompositionLayerInstruction] = []
var compositorId: String = ""
}
class CustomVideoCompositor: NSObject, AVVideoCompositing {
var sourcePixelBufferAttributes: [String : Any]? = [
kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
]
var requiredPixelBufferAttributesForRenderContext: [String : Any] = [
kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
]
func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}
func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) {
guard let sourceTrackID = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value,
let sourcePixelBuffer = asyncVideoCompositionRequest.sourceFrame(byTrackID: sourceTrackID),
let outputBuffer = asyncVideoCompositionRequest.renderContext.newPixelBuffer() else {
asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoCompositor", code: -1))
return
}
let videoComposition = asyncVideoCompositionRequest.renderContext.videoComposition
let frameDuration = videoComposition.frameDuration
let fps = Double(frameDuration.timescale) / Double(frameDuration.value)
let compositionTime = asyncVideoCompositionRequest.compositionTime
let seconds = CMTimeGetSeconds(compositionTime)
let frameInMilliseconds = seconds * 1000
let frameNumber = Int(round(seconds * fps))
print("Frame #\(frameNumber) at \(frameInMilliseconds) ms (fps: \(fps))")
asyncVideoCompositionRequest.finish(withComposedVideoFrame: outputBuffer)
}
func cancelAllPendingVideoCompositionRequests() {}
}
VideoPlayerViewModel
@MainActor
class VideoPlayerViewModel: ObservableObject {
let player = AVPlayer()
private let renderer: Renderer
func loadVideo() async {
try? await renderer.buildComposition()
if let playerItem = renderer.playerItem {
player.replaceCurrentItem(with: playerItem)
}
}
}
What I've Tried
Frame skipping is consistent: the exact same timestamps appear on every playback
Issue persists even with minimal processing (just passing through buffers)
Occurs regardless of compositor complexity
Please note that I need every frame at exact millisecond intervals for my application. Frame loss or inconsistent frameInMilliseconds values are not acceptable.
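For comparison, here is a rough sketch of pulling composed frames offline with AVAssetReader (an assumption-laden sketch, not part of my player code: it drives the same video composition, but without a display clock, so every frameDuration step is delivered):
import AVFoundation

// Sketch: read every composed frame of `composition`/`videoComposition` offline.
// The custom compositor is invoked per frameDuration rather than at the display's pace.
func readAllComposedFrames(composition: AVComposition,
                           videoComposition: AVVideoComposition) throws {
    let reader = try AVAssetReader(asset: composition)
    let output = AVAssetReaderVideoCompositionOutput(
        videoTracks: composition.tracks(withMediaType: .video),
        videoSettings: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    )
    output.videoComposition = videoComposition
    reader.add(output)
    guard reader.startReading() else { return }

    while let sampleBuffer = output.copyNextSampleBuffer() {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        print("Composed frame at \(CMTimeGetSeconds(pts) * 1000) ms")
    }
}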
In what scenario will an app receive the limitExceeded PHPhotosError code? This case was added in iOS 26.1 and is not currently documented. What PhotoKit APIs can encounter this error and how should it be handled?
Hi,
I'm a fan of the gallery on Vision Pro, which has video as well as still photography, but I'm wondering if Apple has considered adding the projected media tags to HEIC so that we can take the next step from Spatial photos to Immersive photos. I have a device that can give me 12K x 6K fisheye images in HDR, but it can't do so at a frame rate or resolution that's good enough for video, so I want to cut my losses and show off immersive photos instead.
Is there something Apple is already working on for APMP stills, or should I create my own app that reads metadata inside a HEIC and infers it in a similar way to what the "ProjectedMediaConversion" demo does for video? It would be great to have 180VR photos, which could show as Spatial in a gallery view, but going immersive would half-surround you instead of floating in the blurred view. I think that would be a pretty amazing effect.
Topic: Media Technologies, SubTopic: Photos & Camera
I use
https://api.music.apple.com/v1/me/library/playlists/${playlistId}/tracks
to add tracks to a playlist I created.
How do I DELETE tracks from the playlist?
The documentation does not mention a method for this. I have tried calling DELETE methods in various combinations but nothing seems to work.
Is this possible?
The delegate method is not called: when AVPlayerViewController is presented full screen and I tap the back button to exit, func playerViewController(_ playerViewController: AVPlayerViewController,
willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) is not executed.
Topic: Media Technologies, SubTopic: Video
When I’m on FaceTime, my phone randomly ends my call. I have an iPhone 17 on iOS 26.1. Sometimes I won’t even be touching the screen and it hangs up. I’m not sure if this is a universal issue or just a me problem. It’s getting really annoying.
Topic: Media Technologies, SubTopic: General
Are there limits on the supported dimensions for VTLowLatencyFrameInterpolationConfiguration? Querying VTLowLatencyFrameInterpolationConfiguration.maximumDimensions and VTLowLatencyFrameInterpolationConfiguration.minimumDimensions returns nil. When I try the WWDC sample project EnhancingYourAppWithMachineLearningBasedVideoEffects with a 4K video, the statement try frameProcessor.startSession(configuration: configuration) executes, but try await frameProcessor.process(parameters: parameters) throws the error Error Domain=VTFrameProcessorErrorDomain Code=-19730 "Processor is not initialized" UserInfo={NSLocalizedDescription=Processor is not initialized}.
Also, why is VTLowLatencyFrameInterpolationConfiguration able to run while the app is backgrounded, but VTFrameRateConversionParameters can't (due to GPU usage)?
Hi,
when a CoreMIDI driver controls physical HW, it is probably quite common to have to throttle the amount of MIDI data received from the system.
What comes to mind is to just delay returning from the MIDIDriverInterface::Send() callback to the calling process. While the application trying to send MIDI does stall until the callback returns, that seems to be only a side effect of a generally stalled CoreMIDI server. Between callbacks the application can send as much MIDI data as it wants to CoreMIDI; its buffering seems to be endless... However, the HW might not be able to play out all of that data.
It seems there is no way to indicate an overflow/full-buffer situation back to the application/CoreMIDI. How is this supposed to work?
Thanks, any hints or pointers are highly appreciated!
Hagen.
The iPhone 17’s front camera with the new 18MP square sensor and iOS 26’s Center Stage feature can auto-rotate between portrait and landscape like the native iOS Camera app.
Is there a Swift or AVFoundation API that allows developers to manually control front camera orientation in the same way the native Camera app does? Or is this auto-rotation strictly handled by the system without public API access?
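For the manual-control half of the question, the public pieces I'm aware of since iOS 17 are the rotation-angle APIs; a sketch follows (whether the automatic Center Stage framing itself can be overridden is exactly what's being asked):
import AVFoundation

// Sketch: apply a horizon-level rotation to a capture connection using the public
// RotationCoordinator API. In real use the coordinator should be retained and its
// angle properties observed via KVO; names here are illustrative.
func applyHorizonLevelRotation(device: AVCaptureDevice,
                               previewLayer: AVCaptureVideoPreviewLayer,
                               photoOutput: AVCapturePhotoOutput) {
    let coordinator = AVCaptureDevice.RotationCoordinator(device: device,
                                                          previewLayer: previewLayer)
    if let connection = photoOutput.connection(with: .video) {
        let angle = coordinator.videoRotationAngleForHorizonLevelCapture
        if connection.isVideoRotationAngleSupported(angle) {
            connection.videoRotationAngle = angle
        }
    }
}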
Hi,
While I don't normally use FairPlay,
I got this email that is so strangely worded that I am wondering whether it makes sense to people who do know FairPlay, or whether it contains a typo or something.
You can only generate certificates for SDK 4, yet SDK 4 is no longer supported?
(Also, "will not expire" is imprecise phrasing; certificates presumably will expire. I think it meant to say they are still valid / are not invalidated.)
Fundamentally, my questions are: is there a known transform I can apply onto a given (pixel) position (passed into a Metal Fragment Function) to correctly sample a texture provided by the main cameras + processed by a Vision request. If so, what is it? If not, how can I accurately sample my masks?
My goal is to highlight people in a Vision Pro app using Compositor Services.
To start, I asynchronously receive camera frames for the main left and right cameras. This is the breakdown of the specific CameraVideoFormat I pass along to the CameraFrameProvider:
minFrameDuration: 0.03
maxFrameDuration: 0.033333335
frameSize: (1920.0, 1080.0)
pixelFormat: 875704422
cameraType: main
cameraPositions: [left, right]
cameraRectification: mono
From each camera frame sample, I extract the left and right buffers (CVReadOnlyPixelBuffer.withUnsafebuffer ==> CVPixelBuffer).
I asynchronously process the extracted buffers by performing a VNGeneratePersonSegmentationRequest on both of them:
// NOTE: This block of code and all following code blocks contain simplified representations of my code for clarity's sake.
var request = VNGeneratePersonSegmentationRequest()
request.qualityLevel = .balanced
request.outputPixelFormat = kCVPixelFormatType_OneComponent8
...
let lHandler = VNSequenceRequestHandler()
let rHandler = VNSequenceRequestHandler()
...
func processBuffers() async {
try lHandler.perform([request], on: lBuffer)
guard let lMask = request.results?.first?.pixelBuffer else {...}
try rHandler.perform([request], on: rBuffer)
guard let rMask = request.results?.first?.pixelBuffer else {...}
appModel.latestPersonMasks = (lMask, rMask)
}
I store the two resulting CVPixelBuffers in my appModel. For both of these buffers aka grayscale masks:
width (in pixels) = 512
height (in pixels) = 384
bytes per row = 512
plane count = 0
pixel format type = 1278226488
I am using Compositor Services to render my content in Immersive Space. My implementation of Compositor Services is based off of the same code from Interacting with virtual content blended with passthrough.
Within the Shaders.metal, the tint's Fragment Shader is now passed the grayscale masks (converted from CVPixelBuffer to MTLTexture via CVMetalTextureCacheCreateTextureFromImage() at the beginning of the main render pipeline).
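For reference, that conversion looks roughly like this (simplified sketch; the CVMetalTextureCache is created once up front with CVMetalTextureCacheCreate):
import CoreVideo
import Metal

// Sketch: wrap a kCVPixelFormatType_OneComponent8 mask in a single-channel MTLTexture.
// .r8Uint matches the texture2d<uint> declaration in the fragment shader below.
func makeMaskTexture(from pixelBuffer: CVPixelBuffer,
                     cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        .r8Uint,
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0, &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}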
fragment float4 tintFragmentShader(
TintInOut in [[stage_in]],
ushort amp_id [[amplification_id]],
texture2d<uint> leftMask [[texture(0)]],
texture2d<uint> rightMask [[texture(1)]]
)
{
if (in.color.a <= 0.0) {
discard_fragment();
}
float2 uv;
if (amp_id == 0) { // LEFT
uv = ??????????????????????;
} else { // RIGHT
uv = ??????????????????????;
}
constexpr sampler linearSampler (mip_filter::linear, mag_filter::linear, min_filter::linear);
// Sample the PersonSegmentation grayscale mask
float maskValue = 0.0;
if (amp_id == 0) { // LEFT
if (leftMask.get_width() > 0) {
maskValue = leftMask.sample(linearSampler, uv).r;
}
} else { // RIGHT
if (rightMask.get_width() > 0) {
maskValue = rightMask.sample(linearSampler, uv).r;
}
}
if (maskValue > 250) {
return float4(1.0, 1.0, 1.0, 0.5);
}
return in.color;
}
I need to correctly sample the masks for a given fragment.
The LayerRenderer.Layout is set to .layered. From Developer Documentation.
A layout that specifies each view’s content as a slice of a single texture.
Using the Metal debugger, I know that the final render target texture for each view / eye is 1888 x 1792 pixels, giving an aspect ratio of 59:56.
The initial CVPixelBuffer provided by the main left and right cameras is 1920x1080 (16:9).
The grayscale CVPixelBuffer returned by the VNPersonSegmentationRequest is 512x384 (4:3).
All of these aspect ratios are different.
My questions come down to: is there a known transform I can apply onto a given (pixel) position to correctly sample a texture provided by the main cameras + processed by a Vision request. If so, what is it? If not, how can I accurately sample my masks?
Within the tint's Vertex Shader, after applying the modelViewProjectionMatrix, I have tried every version I have been able to find that takes the pixel-space position (= vertices[vertexID].position.xy) and the viewport size (1888x1792) to compute the correct clip-space position (maybe = pixel-space position.xy / (viewport size * 0.5)???) for sampling the grayscale masks, but nothing has worked. The "highlight" of the person segmentation is off: scaled a little too big, and offset a little too far up and off to the side.
Hi,
I’m an iOS developer building an app with a use case that needs advanced playback on Apple Music subscription streams, specifically:
• Real-time tempo change (BPM) during playback — i.e., time-stretch with key-lock, not just crossfade.
• Beat-matched transitions between tracks.
From what I can tell, this capability seems to exist only for approved partners and isn’t available through public MusicKit.
Question: What’s the official request path to be evaluated for that restricted partner entitlement (application form, questionnaire, NDA, or internal team/BD contact)? If the entitlement identifier is internal, how can I get my account routed to the right Apple Music team?
For reference, publicly announced partners include Algoriddim djay, Serato DJ Pro, rekordbox (AlphaTheta), and Engine DJ—all of which appear to implement mixing features that imply advanced playback (tempo/beat-matching) on Apple Music content. I’d prefer not to share product details publicly for the moment and can provide specifics privately if needed.
Thanks in advance!
Topic: Media Technologies, SubTopic: Audio, Tags: Apple Music API, FairPlay Streaming, MusicKit, AVFoundation
Hello, we have a video streaming app with HLS VOD content. We supply 1080p and 4K content to users. Users were able to watch 1080p content before tvOS 26, but after updating to tvOS 26 they can no longer watch 1080p content. We have not changed anything on the HLS playlist side or in the application version. This problem only occurs on the Apple TV 4th Gen (A1625) running tvOS 26; there is no problem with newer Apple TV devices. Would you help us resolve this problem? Thanks in advance.
Topic: Media Technologies, SubTopic: Streaming, Tags: FairPlay Streaming, Apple TV, tvOS, HTTP Live Streaming
I'm writing a simple app for iOS and I'd like to be able to do some text-to-speech in it. I have a basic audio manager class with a "speak" function:
import Foundation
import AVFoundation
class AudioManager {
static let shared = AudioManager()
var audioPlayer: AVAudioPlayer?
var isPlaying: Bool {
return audioPlayer?.isPlaying ?? false
}
var playbackPosition: TimeInterval = 0
func playSound(named name: String) {
guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
print("Sound file not found")
return
}
do {
if audioPlayer == nil || !isPlaying {
audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.currentTime = playbackPosition
audioPlayer?.prepareToPlay()
audioPlayer?.play()
} else {
print("Sound is already playing")
}
} catch {
print("Error playing sound: \(error.localizedDescription)")
}
}
func stopSound() {
if let player = audioPlayer {
playbackPosition = player.currentTime
player.stop()
}
}
func speak(text: String) {
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: text)
utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
synthesizer.speak(utterance)
}
}
And my app shows text in a ScrollView:
ScrollView {
Text(self.description)
.padding()
.foregroundColor(.black)
.font(.headline)
.background(Color.gray.opacity(0))
}.onAppear {
AudioManager.shared.speak(text: self.description)
}
However, the text doesn't get read out (in the simulator). I see some output in the console:
Error fetching voices: Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "Invalid container metadata for _UnkeyedDecodingContainer, found keyedGraphEncodingNodeID", underlyingError: nil)). Using fallback voices.
I'm probably doing something wrong here, but not sure what.
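One thing I'm unsure about (an assumption, not a confirmed fix): the AVSpeechSynthesizer in speak(text:) is a local variable, so it may be deallocated before it produces any audio. Keeping a single instance alive on the class would look like this:
import AVFoundation

// Variant that retains the synthesizer for the object's lifetime instead of creating it
// inside speak(text:). (Assumption: the local instance being deallocated is what silences
// the speech; the console error about fetching voices may be unrelated.)
class RetainedSpeechManager {
    static let shared = RetainedSpeechManager()
    private let synthesizer = AVSpeechSynthesizer()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}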
Topic: Media Technologies, SubTopic: Audio
It seems that Video Toolbox on iOS 26 is having trouble decoding H.264 streams. I have filed a bug report at https://feedbackassistant.apple.com/feedback/20741484
Topic: Media Technologies, SubTopic: Video