Has anyone tried to make an ILPD based AIME file?
When I try, the resulting AIME switches to USDZ Mesh instead of saving the ILPD data.
Video
Dive into the world of video on Apple platforms, exploring ways to integrate video functionalities within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.
I made a CMIOExtension (a virtual camera) which generates its own output, for use in our in-house software testing. I wanted to make a video source with 29.97, 30, 59.94 and 60fps output.
To this end, I created a CMIOExtensionDeviceSource that creates a CMIOExtensionDevice with one CMIOExtensionStreamSource. Its stream formats ([CMIOExtensionStreamFormat]) include one with both maxFrameDuration and minFrameDuration set to CMTimeMake(value: 1000, timescale: 30000), and another with both set to CMTimeMake(value: 1001, timescale: 30000).
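For reference, a format declaration along those lines might look like the following. This is a minimal sketch modeled on the usual camera-extension pattern; the helper name, pixel format, and dimensions are illustrative, not the actual extension code.

import CoreMedia
import CoreMediaIO
import CoreVideo

// Illustrative helper: declares a stream format whose only valid frame duration
// is exactly 1001/30000 s (29.97 fps).
func make2997StreamFormat(width: Int32 = 1920, height: Int32 = 1080) -> CMIOExtensionStreamFormat? {
    var formatDescription: CMFormatDescription?
    let status = CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                codecType: kCVPixelFormatType_32BGRA,
                                                width: width,
                                                height: height,
                                                extensions: nil,
                                                formatDescriptionOut: &formatDescription)
    guard status == noErr, let formatDescription else { return nil }

    let frameDuration = CMTimeMake(value: 1001, timescale: 30000) // exactly 29.97 fps
    return CMIOExtensionStreamFormat(formatDescription: formatDescription,
                                     maxFrameDuration: frameDuration,
                                     minFrameDuration: frameDuration,
                                     validFrameDurations: nil)
}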
I've held off on creating the 59.94/60 fps source until this problem is resolved.
My virtual camera works and produces a signal, but when I examine its associated AVCaptureDevice in the debugger, I find:
(lldb) po self.captureDevice?.formats[0].videoSupportedFrameRateRanges[0].maxFrameDuration
▿ Optional<CMTime>
▿ some : CMTime
- value : 1000000
- timescale : 30000000
▿ flags : CMTimeFlags
- rawValue : 1
- epoch : 0
I get the same value, 1000000/30000000, or exactly 30fps, for all the formats of my AVCaptureDevice.
Is there something I'm doing wrong, or do CMIOExtensionDevices always round the frame rates?
I can't force CoreMediaIO to produce frames at exactly my desired frame interval, but I'd like to ensure that the average frame rate is my desired rate. How can I do that? Frame emission is governed by a repeating DispatchSourceTimer with a repeat time specified in nanoseconds with the TimerFlags set to 'strict'.
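One approach that might keep the long-run average at exactly 1001/30000 seconds per frame is to compute each deadline from the stream's start time rather than using a fixed repeating interval rounded to whole nanoseconds, so the sub-nanosecond remainder never accumulates. A sketch under that assumption (the queue label and frame-emission callback are placeholders):

import Dispatch

// One-shot rescheduling from an absolute start time: each deadline is rounded to
// whole nanoseconds, but the rounding error stays bounded instead of compounding.
let queue = DispatchQueue(label: "virtual-camera.frame-emission")
let timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
let start = DispatchTime.now()
var frameIndex: Int64 = 0

func scheduleNextFrame() {
    frameIndex += 1
    // 1001/30000 s per frame == 100_100_000/3 ns; divide once per deadline.
    let deadlineNanos = frameIndex * 100_100_000 / 3
    timer.schedule(deadline: start + .nanoseconds(Int(deadlineNanos)), repeating: .never)
}

timer.setEventHandler {
    // emitFrame() would deliver the next sample buffer to the stream here.
    scheduleNextFrame()
}
scheduleNextFrame()
timer.resume()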
I’m getting Auto Layout constraint conflict warnings related to AVPlayerView in my project.
I’ve reproduced the issue on macOS Tahoe 26.2.
The conflict appears to originate inside AVPlayerView itself, between its internal subviews, rather than in my own layout code.
This issue can be easily reproduced in an empty project by simply adding an AVPlayerView as a subview using the code below.
import AppKit
import AVKit

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let playerView = AVPlayerView()
        view.addSubview(playerView)
    }
}
After presenting that view controller, the following Auto Layout constraint conflict warnings appear in the console:
Conflicting constraints detected: <decode: bad range for [%@] got [offs:346 len:1057 within:0]>.
Will attempt to recover by breaking <decode: bad range for [%@] got [offs:1403 len:81 within:0]>.
Unable to simultaneously satisfy constraints:
(
"<NSLayoutConstraint:0xb33c29950 H:|-(0)-[AVDesktopPlayerViewContentView:0x10164dce0](LTR) (active, names: '|':AVPlayerView:0xb32ecc000 )>",
"<NSLayoutConstraint:0xb33c299a0 AVDesktopPlayerViewContentView:0x10164dce0.right == AVPlayerView:0xb32ecc000.right (active)>",
"<NSAutoresizingMaskLayoutConstraint:0xb33c62850 h=--& v=--& AVPlayerView:0xb32ecc000.width == 0 (active)>",
"<NSLayoutConstraint:0xb33d46df0 H:|-(0)-[AVEventPassthroughView:0xb33cfb480] (active, names: '|':AVDesktopPlayerViewContentView:0x10164dce0 )>",
"<NSLayoutConstraint:0xb33d46e40 AVEventPassthroughView:0xb33cfb480.trailing == AVDesktopPlayerViewContentView:0x10164dce0.trailing (active)>",
"<NSLayoutConstraint:0xb33ef8320 NSGlassView:0xb33ed8c00.trailing == AVEventPassthroughView:0xb33cfb480.trailing - 6 (active)>",
"<NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>",
"<NSLayoutConstraint:0xb33ef84b0 NSGlassView:0xb33ed8c00.leading >= AVEventPassthroughView:0xb33cfb480.leading + 6 (active)>"
)
Will attempt to recover by breaking constraint
<NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>
Set the NSUserDefault NSConstraintBasedLayoutVisualizeMutuallyExclusiveConstraints to YES to have -[NSWindow visualizeConstraints:] automatically called when this happens. And/or, set a symbolic breakpoint on LAYOUT_CONSTRAINTS_NOT_SATISFIABLE to catch this in the debugger.
Is this a system bug, or does anyone know how to fix it?
Thank you.
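For what it's worth, the log implicates the zero-width autoresizing-mask constraint on the AVPlayerView itself (h=--& v=--& width == 0), so pinning the view with explicit constraints instead of relying on the autoresizing mask may avoid the conflict. A minimal sketch of that variant, offered as an assumption rather than a confirmed fix:

import AppKit
import AVKit

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let playerView = AVPlayerView()
        // Opt out of the autoresizing-mask constraints and give the view an explicit,
        // nonzero size by pinning it to the view controller's view.
        playerView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(playerView)
        NSLayoutConstraint.activate([
            playerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerView.topAnchor.constraint(equalTo: view.topAnchor),
            playerView.bottomAnchor.constraint(equalTo: view.bottomAnchor)
        ])
    }
}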
The front-facing camera on the iPhone 16 (and every previous model) gives the following values for AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture:
90 degrees for portrait
180 degrees for landscape left
270 degrees for upside-down
0 degrees for landscape right
Using these values a transform is calculated:
var transform: CGAffineTransform {
    let degrees = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
    let radians = degrees * .pi / 180.0
    return CGAffineTransform(rotationAngle: radians)
}
This is then applied to the AVAssetWriterInput:
videoInput = AVAssetWriterInput(mediaType: .video,
                                outputSettings: videoSettings,
                                sourceFormatHint: videoFormatDescription)
videoInput.transform = transform
And this ensures the correct transform is added to the metadata so that the recorded video plays in the correct orientation.
However, with the iPhone 17 Pro and iPhone 17 Pro Max front-facing cameras, AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture returns different values:
0 degrees for portrait
90 degrees for landscape left
180 degrees for upside-down
270 degrees for landscape right
So this approach breaks down, and the video orientation is incorrect. How is this intended to be handled?
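One device-independent alternative, offered as an assumption rather than a confirmed answer, is to apply the coordinator's angle to the capture connection where supported, so the buffers arrive already rotated and no hard-coded angle-to-transform mapping is needed on the writer input:

import AVFoundation

// Sketch: `videoOutput` is assumed to be the AVCaptureVideoDataOutput feeding the
// writer, and `rotationCoordinator` the AVCaptureDevice.RotationCoordinator for the
// current device. Rotating at the connection removes the need for videoInput.transform.
func updateRotation(for videoOutput: AVCaptureVideoDataOutput,
                    using rotationCoordinator: AVCaptureDevice.RotationCoordinator) {
    guard let connection = videoOutput.connection(with: .video) else { return }
    let angle = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
    if connection.isVideoRotationAngleSupported(angle) {
        connection.videoRotationAngle = angle
    }
}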
I'm relatively new to Swift development (and native iOS development, for that matter).
I've got an iOS app that uses the iPhone/iPad built-in cameras, and I'm looking to make it more compatible with macOS.
Using the normal AVCaptureDevice.DiscoverySession I seem to get the iPhone Continuity Camera and the in-built MacBook Pro camera but I don't see other input devices that I see in QuickTime Player (for example) such as connected external cameras or Virtual Inputs provided by NDI Virtual Input and OBS.
Is there a way to access these without a separate Mac-specific build? The rest of the functionality works great, and I'd rather not diverge the codebase too much, as it's easier to update one app than two!
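In case it helps, on recent SDKs external cameras have to be requested explicitly via the .external device type (macOS 14 / iOS 17 and later); whether that also surfaces CMIO-based virtual cameras such as OBS or NDI in an iPad-app-on-Mac build is something I'm not certain of, so treat this as a sketch to try rather than a confirmed answer:

import AVFoundation

// Sketch: explicitly include .external so USB cameras (and, on the Mac, virtual
// cameras published through CoreMediaIO) can be returned alongside the built-in
// and Continuity cameras.
let discoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .external],
    mediaType: .video,
    position: .unspecified
)
let cameras = discoverySession.devices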
Hello everyone,
I’m looking for more detailed information regarding UVC (USB Video Class) over MFi within the Apple ecosystem and would appreciate some clarification.
I’m interested in developing (or interfacing with) an accessory that transmits video over USB using the UVC standard, and I’d like to better understand how this works within the MFi (Made for iPhone) program.
Here are my main questions:
1. Do iOS devices provide native support for UVC over USB-C or Lightning within the MFi framework?
2. Are there any specific firmware or authentication requirements when the accessory is MFi-certified?
3. Does UVC support depend solely on the hardware interface (USB-C vs Lightning), or are there additional software-level requirements?
4. Is there any official documentation outlining the recommended flow for implementing UVC-based video capture accessories on iOS?
From what I understand, USB-C iPads appear to offer more direct support for standard UVC devices, but it's not entirely clear how this integrates with the MFi ecosystem on iOS, especially for commercial product development.
If anyone has gone through this process or can point me to relevant technical documentation, I would greatly appreciate the guidance.
Thank you!