From what I understood from watching the LockedCameraCapture session, the extension doesn't have access to the App Group user defaults, so I'm wondering how I can synchronize preferences between the extension and the main app.
From what I can see, the built-in iOS Camera app is able to synchronize its preferences. For example, the "Live Photo" toggle state is preserved whether you're in the main app or on the Lock Screen.
Is there anything I'm not aware of?
I would like to let users of my iOS app place the object they want to scan on a turntable, for example, instead of walking around it, and then generate a 3D model of it. How can I achieve this with the Object Capture API?
Hi there, whenever I use any third-party editing software, the third clip and the clips after it have no audio. Here's how to reproduce it:
take any third-party editing software
add two clips, cut one in half and delete the other half, then cut a half of the second clip
Any fixes?
Hello,
I have converted UIImage to CVPixelBuffer. I am creating a video-writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more.
However, I need to append 30 CVPixelBuffers per second, because the video must be 30 frames per second to work on social media.
The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error like "Operation cannot be completed".
Could you give me an example of a loop that appends 30 CVPixelBuffers per second to a video while it is being written?
Example:
let frameDuration = CMTime(value: 1, timescale: 30) // 1/30 s per frame
var presentationTime = CMTime.zero

while let buffer = videoProvider.getNextFrame() {
    // Back off until the writer input can accept another buffer.
    while !videoInput.isReadyForMoreMediaData {
        Thread.sleep(forTimeInterval: 0.01)
    }
    // Each frame needs a strictly increasing presentation time.
    adaptor.append(buffer, withPresentationTime: presentationTime)
    presentationTime = presentationTime + frameDuration
}
I await your response.
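For reference, a sketch of the pull-model alternative using AVAssetWriterInput's requestMediaDataWhenReady(on:using:), with writer, videoInput, adaptor, and videoProvider assumed to match the names above:

var presentationTime = CMTime.zero
let frameDuration = CMTime(value: 1, timescale: 30) // 30 fps
let queue = DispatchQueue(label: "video.writer.queue")

videoInput.requestMediaDataWhenReady(on: queue) {
    while videoInput.isReadyForMoreMediaData {
        guard let buffer = videoProvider.getNextFrame() else {
            videoInput.markAsFinished()
            writer.finishWriting { /* check writer.status and writer.error */ }
            return
        }
        adaptor.append(buffer, withPresentationTime: presentationTime)
        presentationTime = presentationTime + frameDuration
    }
}

Also worth testing: if one frame should stay on screen for 2 seconds, you can append it once and give the next frame a presentation time 2 seconds later; a video frame displays until the next frame's timestamp, so 60 duplicate appends shouldn't be required.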
Hello,
my app works as an AUv3 plug-in.
I am interested in copying and pasting the Logic Pro chord track.
After I copy the chord track in Logic Pro and read UIPasteboard.general in the app, I can see:
["LogicPasteBoardMarker": <OS_dispatch_data: data[0x3024599c0] = { leaf, size = 1, buf = 0x10a758000 }>]
How can I access this data? Thank you.
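In case it helps, a small sketch that dumps every representation currently on the pasteboard; if Logic Pro stores the chord track in a private format next to the LogicPasteBoardMarker entry, the bytes will show up here, though decoding an undocumented format isn't officially supported.

import UIKit

let pasteboard = UIPasteboard.general
for type in pasteboard.types {
    // Fetch the raw bytes behind each declared representation.
    if let data = pasteboard.data(forPasteboardType: type) {
        print(type, "-", data.count, "bytes")
    }
}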
After watching the session video "Build a great Lock Screen camera capture experience", I'm still unclear about the UI.
Do developers need to provide a whole new UI in the extension? Can the main UI not be repurposed?
I just installed macOS Sequoia and observed that the mClientID and mProcessID fields of the AudioServerPlugInClientInfo structure are empty when the AddDeviceClient and RemoveDeviceClient functions of the AudioServerPlugInDriverInterface are called.
This data is essential to identify the connected client, and its absence breaks the basic functionality of the HAL plugins.
FB13858951 ticket filed.
When I call requestAVAssetForVideo to retrieve a video for upload, the system appends a string of unknown characters to the returned path, like this:
/var/mobile/Media/DCIM/101APPLE/IMG_1034.MOV#YnBsaXN0MDDRAQJfEBtSZxxxx1vZGUQAAgLKQAAAAAAAAEBAAAAAAAAAAMAAAAAAAAAAAAAAAAAAAAr
P.S. I encountered a similar issue before when retrieving spatial videos on systems earlier than iOS 18.
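A possible workaround, assuming the appended characters are a URL fragment (everything after the #): strip the fragment before using the path.

import Foundation

// `url` is assumed to be the AVURLAsset URL returned by requestAVAssetForVideo.
func strippingFragment(from url: URL) -> URL {
    var components = URLComponents(url: url, resolvingAgainstBaseURL: false)
    components?.fragment = nil
    return components?.url ?? url
}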
I'm developing an app which uses the "System Audio Recording Only" API to capture system audio.
Is there any API to check whether the app is authorized, so I can instruct the user to grant my app this permission?
Thanks.
Hey!
I'm working on a camera app and I've noticed that the .builtInTripleCamera doesn't behave anything like the native app. Tested on iPhone 15 Pro Max and iPhone 12 Pro Max.
The documentation states the following, but that seems quite different from what is happening in the app:
Automatic switching from one camera to another occurs when the zoom factor, light level, and focus position allow.
So, does it automatically switch like the native camera, or do I need to do something?
Comparison videos attached: Custom Camera vs. Native Camera.
The code was adapted from Apple's AVCamFilter sample project. Just download AVCamFilter and update videoDeviceDiscoverySession:
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTripleCamera],
    mediaType: .video,
    position: .unspecified
)
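One thing worth inspecting, sketched here assuming device is the .builtInTripleCamera AVCaptureDevice: a virtual device only switches constituents within its zoom range, and the hand-off points are exposed on the device.

// Where the virtual device hands off between constituent cameras.
if device.isVirtualDevice {
    print("Constituents:", device.constituentDevices)
    print("Switch-over zoom factors:", device.virtualDeviceSwitchOverVideoZoomFactors)
}

// On a virtual device, zoom factor 1.0 maps to the widest constituent
// (the ultra-wide), so the wide and tele cameras only engage at the
// switch-over factors printed above.
try device.lockForConfiguration()
device.videoZoomFactor = 2.0
device.unlockForConfiguration()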
Hello there,
I am faced with the following situation:
We are building a web app that manages playlists for different platforms, including Apple Music
We have the concept of teams, where a user can be part of multiple teams, and each team is managed by a team admin
A team admin could manage multiple teams
The problem is that a team admin can't sign in to the Apple Music account of multiple teams: if, on the same computer, we let the user sign in once and store the Music User Token, we can't do another login unless we unauthorize the previous one.
Is there anything we can do about this? Thanks
Hello,
When I use iOS 17 to save videos downloaded from the network to the photo album, it fails with Domain=PHPhotosErrorDomain Code=3302. Searching the official documentation shows that 3302 means "An error that indicates the asset resource validation failures," but the specific reason is unknown. Is there any article that explains it?
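For comparison, here's the minimal save path with PHAssetCreationRequest; since 3302 is a resource validation failure, it may be worth double-checking that the URL points to a fully downloaded movie file with a matching extension. fileURL here is a placeholder:

import Photos

PHPhotoLibrary.shared().performChanges({
    // Create the asset from the downloaded movie file on disk.
    let request = PHAssetCreationRequest.forAsset()
    request.addResource(with: .video, fileURL: fileURL, options: nil)
}) { success, error in
    print("Saved:", success, error ?? "none")
}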
We are using a VoiceProcessingIO audio unit in our VoIP application on Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We are using the following code to initialize the audio unit:
OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;

// Describe Apple's voice-processing I/O unit.
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Find the component and create an instance; this is the call that blocks.
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);
We are having the issue with current macOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns.
We also see the following errors in the log multiple times:
AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed
and these right after (though I don't know whether they matter to this issue):
KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed.
vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
I am trying to use the AVCamFilter Apple sample project discussed in this WWDC session to get depth data using the dual camera. The project has built-in features to get depth data from the dual camera.
When the sample project was written, builtInDualWideCamera didn't exist yet, and the project only tries to get builtInDualCamera and builtInWideAngleCamera. When I run the project on my iPad Pro, it doesn't show any of the depth-related UI because the device doesn't have a builtInDualCamera device. So I added builtInDualWideCamera to the videoDeviceDiscoverySession, and it seems to get that device properly, but isDepthDataDeliverySupported still returns false.
Is there some reason why isDepthDataDeliverySupported is false even though I seem to be using a dual camera device?
I know the device has a builtInLiDARDepthCamera but I wanted to try out the dual camera depth data to see how it performs for shorter distances. I wouldn't have expected the dual camera depth data delivery to be made unavailable on the device just because the LiDAR sensor is already available.
Using iPadOS 17.5.1, iPad Pro 11-inch 4th generation.
The depth feature of this sample app works fine on an iPhone 15 I tested. Also tried on an iPhone 15 Pro and it worked even though that device also has a LiDAR sensor, so the issue is presumably not related to the fact that the iPad Pro has a LiDAR sensor.
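A diagnostic sketch that may narrow this down, assuming the back-facing dual-wide device: depth delivery is only possible when a format advertises depth, so listing supportedDepthDataFormats shows whether the hardware exposes any on this device.

import AVFoundation

if let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                        for: .video,
                                        position: .back) {
    // Print only the formats that can actually deliver depth, if any.
    for format in device.formats where !format.supportedDepthDataFormats.isEmpty {
        print(format, "->", format.supportedDepthDataFormats)
    }
}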
Hi there,
I've been wondering about getting a Music User Token for manipulating users' playlists.
The situation I find myself in is that this can only be done in the front end, which means exposing the developer token I need to generate. The developer token is the key to our app; if someone takes it, they can do anything with it. Am I wrong?
Thanks for your time!
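For what it's worth, the usual pattern is to mint the developer token on a server and hand it to the front end on demand, so the signing key never ships in the client; the user's Music User Token is then obtained in the browser and sent to your backend. A rough sketch (Swift with CryptoKit assumed available server-side; teamID, keyID, and the .p8 key contents are placeholders):

import Foundation
import CryptoKit

// Hypothetical server-side minting of an Apple Music developer token (a JWT).
func makeDeveloperToken(teamID: String, keyID: String, privateKeyPEM: String) throws -> String {
    func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }

    let now = Int(Date().timeIntervalSince1970)
    let header = #"{"alg":"ES256","kid":"\#(keyID)"}"#
    let payload = #"{"iss":"\#(teamID)","iat":\#(now),"exp":\#(now + 86400)}"#

    // ES256-sign header.payload with the MusicKit private key.
    let signingInput = base64URL(Data(header.utf8)) + "." + base64URL(Data(payload.utf8))
    let key = try P256.Signing.PrivateKey(pemRepresentation: privateKeyPEM)
    let signature = try key.signature(for: Data(signingInput.utf8))
    return signingInput + "." + base64URL(signature.rawRepresentation)
}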
Hello everyone,
I am thrilled about the iPhone Mirroring demo on WWDC24 and I have a few thoughts to share.
Will it work over a local network only, or can the iPhone be accessed over the internet? Will there be an API to initiate iPhone Mirroring from an app? This would be a great feature for MDMs, allowing administrators to provide support for their users. Could you share more details from the development perspective?
Tested with library songs in an app targeted to Mac (Designed for iPad): the song duration is not reported in seconds.
The same app running on iOS queries the same library songs, and the duration is expressed correctly in seconds, as expected for the TimeInterval type.
Xcode 15.3
MacOS 14.5
FB13821671
I've had a photo app in the Store for 10+ years that just started behaving unexpectedly on the newly released iPad Pro M4 (May 2024). This is the first iPad with the camera on the landscape (longer) edge of the iPad.
The camera preview behaves as expected in my app, but the resulting photos are upside-down. How can I determine when I am dealing with the landscape camera? I'd like to avoid special-casing on a device-by-device basis.
I have been unable to find any mention of a new API call that would allow me to determine which front-facing camera I'm dealing with. Does something like this exist?
Thanks!
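Rather than detecting which edge the camera is on, one option (a sketch assuming iOS 17 and existing device, previewLayer, and photoOutput objects) is AVCaptureDevice.RotationCoordinator, which reports the rotation angle needed for upright capture on any hardware:

import AVFoundation

// Keep a strong reference to the coordinator in real code.
let coordinator = AVCaptureDevice.RotationCoordinator(device: device,
                                                      previewLayer: previewLayer)

// Apply the horizon-level capture angle to the photo connection before
// capturing, so stills come out upright regardless of camera placement.
if let connection = photoOutput.connection(with: .video) {
    let angle = coordinator.videoRotationAngleForHorizonLevelCapture
    if connection.isVideoRotationAngleSupported(angle) {
        connection.videoRotationAngle = angle
    }
}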
When an audio file's magic number is 49443302 (the "ID3" tag, version 2.2) instead of 49443303 (version 2.3), AVAudioPlayer's duration property returns a wrong value.
It is actually caused by the iTunSMPB gapless-playback comment (which appears as engiTunSMPB in the file, "eng" being the COMM frame's language code), but I want to know why it happens only with ID3 version 2.2 (49443302).
Example: the only difference between the two MP3 files is the magic number, yet checking the duration returns different results.
(Source code attached as ContentView.swift.)
Using the hardware volume buttons on the iPhone, you have 16 steps you can adjust your volume to. I want to implement a volume-control slider in my app. I am updating the value of the slider using AVAudioSession.sharedInstance().outputVolume. The problem is that this returns values rounded to the nearest 0.05 (the hundredths digit is always 0 or 5), which makes the slider jump around. .formatted() is not causing this problem.
You can recreate the problem using the code below.
import SwiftUI
import AVFoundation

@main
struct VolumeTestApp: App {
    init() {
        try? AVAudioSession.sharedInstance().setActive(true)
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @State private var volume = Double()
    @State private var difference = Double()

    var body: some View {
        VStack {
            Text("The volume changed by \(difference.formatted())")
            Slider(value: $volume, in: 0...1)
        }
        .onReceive(AVAudioSession.sharedInstance().publisher(for: \.outputVolume)) { value in
            volume = Double(value)
        }
        .onChange(of: volume) { oldValue, newValue in // Only used to make the problem more obvious
            if oldValue > newValue {
                difference = oldValue - newValue
            } else {
                difference = newValue - oldValue
            }
        }
    }
}
Here is a video of the problem in action:
https://share.icloud.com/photos/00fmp7Vq1AkRetxcIP5EXeAZA
What am I doing wrong or what can I do to avoid this?
Thank you
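If the coarse values can't be avoided, one fallback, sketched here, is hosting the system slider itself via MPVolumeView, which tracks the hardware volume at full resolution:

import SwiftUI
import MediaPlayer

// Hosts the system volume slider inside a SwiftUI hierarchy.
struct SystemVolumeSlider: UIViewRepresentable {
    func makeUIView(context: Context) -> MPVolumeView { MPVolumeView() }
    func updateUIView(_ uiView: MPVolumeView, context: Context) {}
}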