Hi guys,
How can I achieve the following on macOS when a USB device (camera/mic/speaker) is connected:
When a third-party video conferencing app is not in a meeting, ensure the app defaults to using the USB device (camera/mic/speaker).
When a third-party conferencing app is in a meeting, ensure the app automatically switches to the USB device (camera/mic/speaker).
I want to use an IOKit extension to hide or ignore the built-in camera in order to meet this requirement.
However, the extension cannot be loaded due to invalid permissions on macOS 15.4.1 (build 24E263). I also tried running it on macOS 14.4.1, where it can be loaded, but it does not load automatically after restarting the laptop because the KDK version does not match.
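For reference, this is roughly how the extension is activated from the host app, assuming a DriverKit/system extension rather than a legacy kext (a minimal sketch; the bundle identifier is illustrative):
import SystemExtensions

// Minimal sketch: submit an activation request for a camera-hiding system
// extension. "com.example.hidecamera" is an illustrative identifier.
final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate() {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: "com.example.hidecamera",
            queue: .main
        )
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        print("Activation finished: \(result)")
    }

    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {
        print("Activation failed: \(error)") // the "Invalid permissions" failure surfaces here
    }

    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {
        print("Waiting for user approval in System Settings")
    }

    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
        .replace
    }
}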
Could you please give me some suggestions? Is it possible to hide the built-in camera on M-series (Apple silicon) Macs? Is there any other way to achieve this feature? Thanks a lot.
Where can I find the documentation of the Genlock feature of the iPhone 17 Pro? How does it work and how can I use it in my app?
Hi,
I would like to use macro mode for a custom camera using AVCaptureDevice in my project. This feature would automatically adjust and switch between lenses to get a clear close-up image. It looks like this feature is not available, and there are no public APIs from Apple to achieve macro mode. Is there a way to get this functionality in a custom camera without losing image quality? Please let me know if this is possible.
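For context, the closest public mechanism I'm aware of is selecting a virtual device and letting the system switch constituent lenses automatically; a sketch under that assumption (this is not explicit macro control):
import AVFoundation

// Sketch: prefer a virtual device so AVFoundation can switch between
// constituent cameras (wide/ultra-wide/tele) automatically, e.g. to the
// ultra-wide for close-up subjects. No explicit "macro mode" API exists.
func makeBackCamera() -> AVCaptureDevice? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .back
    )
    guard let device = discovery.devices.first else { return nil }
    if device.isVirtualDevice {
        do {
            try device.lockForConfiguration()
            // Let the system choose the active constituent camera.
            device.setPrimaryConstituentDeviceSwitchingBehavior(.auto,
                restrictedSwitchingBehaviorConditions: [])
            device.unlockForConfiguration()
        } catch {
            print("Could not configure device: \(error)")
        }
    }
    return device
}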
Thank you,
Adil Thamarasseri
So experimenting with the new SpeechTranscriber, if I do:
let transcriber = SpeechTranscriber(
    locale: locale,
    transcriptionOptions: [],
    reportingOptions: [.volatileResults],
    attributeOptions: [.audioTimeRange]
)
only the final result has audio time ranges, not the volatile results.
Is this a performance consideration? If there is no performance problem, it would be nice to have the option to also get audio time ranges for volatile results.
I'm not presenting the volatile text in the UI at all; I was just trying to keep statistics about the speech and non-speech noise levels, so I can determine when the level falls below the noise floor for a while.
The goal was to finalize the recording automatically when the noise level indicates that the user has finished speaking.
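For now, the level itself can be measured straight from the input buffers instead (a minimal sketch, assuming an AVAudioEngine tap; the threshold and window are illustrative):
import AVFoundation

// Sketch: compute RMS power per input buffer and count consecutive quiet
// buffers; a long enough run of quiet buffers means the user stopped speaking.
let engine = AVAudioEngine()
let input = engine.inputNode
var quietBuffers = 0

input.installTap(onBus: 0, bufferSize: 1024, format: input.outputFormat(forBus: 0)) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frames = Int(buffer.frameLength)
    var sum: Float = 0
    for i in 0..<frames { sum += samples[i] * samples[i] }
    let rms = (sum / Float(max(frames, 1))).squareRoot()
    let db = 20 * log10f(max(rms, .leastNonzeroMagnitude))
    quietBuffers = db < -50 ? quietBuffers + 1 : 0  // -50 dB: illustrative noise floor
    if quietBuffers > 40 {                          // roughly 1 s at 1024 frames / 44.1 kHz
        // finalize the recording here
    }
}
do { try engine.start() } catch { print("Engine failed to start: \(error)") }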
In our Apple TV application, we use the native AVPlayer for live playback functionality. Until tvOS 17.6 and during the tvOS 18 beta, the Pause/Resume feature worked as expected, allowing us to pause live playback. However, after updating to tvOS 18.1, the pause functionality no longer works.
The same app still works fine on tvOS 17, but on tvOS 18, attempting to pause live playback has no effect. We reviewed the tvOS 18 release notes but couldn't find any relevant changes or deprecations related to AVPlayer or live playback behavior.
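For reference, we simply call AVPlayer.pause(); the sketch below also logs the item's seekable (DVR) window, since pausing live content depends on one (the window check is added here for illustration):
import AVFoundation

// Sketch: log the seekable window before pausing. Pausing a live stream
// only works when the playlist advertises enough of a DVR window.
func pauseLive(_ player: AVPlayer) {
    if let item = player.currentItem {
        let windows = item.seekableTimeRanges.map { CMTimeGetSeconds($0.timeRangeValue.duration) }
        print("Seekable window: \(windows.max() ?? 0) s")
    }
    player.pause()
    print("Rate after pause: \(player.rate)")  // remains 1.0 on tvOS 18.1 in our case
}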
Has there been any change in the handling of live playback or the Pause/Resume functionality in tvOS 18.1? Any guidance or suggestions to address this issue would be greatly appreciated.
Thank you!
Topic: Media Technologies · SubTopic: Streaming · Tags: FairPlay Streaming, Media Player, HTTP Live Streaming
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB.
For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example).
The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app).
The problem: Noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:
Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets
It is easy to reproduce: it happens even if I don't configure the AVAssetWriter manually and instead let it receive its audioSettings from a preset via an AVOutputSettingsAssistant. I'm running macOS 15.0 (24A335).
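For context, the pipeline is essentially the following (condensed; device selection, delegate wiring, and error handling are simplified, and the names are illustrative):
import AVFoundation

// Condensed sketch of the recording pipeline: capture session -> audio data
// output -> asset writer. fileURL and the settings are placeholders.
func makeRecordingPipeline(device: AVCaptureDevice, fileURL: URL) throws -> (AVCaptureSession, AVAssetWriter) {
    let session = AVCaptureSession()
    session.addInput(try AVCaptureDeviceInput(device: device))  // e.g. the USB mic

    let output = AVCaptureAudioDataOutput()
    session.addOutput(output)
    // output.setSampleBufferDelegate(...) appends incoming buffers to writerInput

    let writer = try AVAssetWriter(outputURL: fileURL, fileType: .m4a)
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
    writerInput.expectsMediaDataInRealTime = true
    writer.add(writerInput)
    return (session, writer)
}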
I’ve filed a feedback including a demo project → FB15333298 🎟️
I would greatly appreciate any help!
Have a great day,
Martin
Hi all, I have a problem when using iOS 18.4.1.
I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1.
I tried the following sample code.
However, after the app has run for around 30 seconds to 1 minute, it crashes.
When I use another iPad on iOS 17, the same problem does not occur.
https://developer.apple.com/documentation/createml/creating-an-action-classifier-model
https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview
I tried to combine a SpeechDetector and a SpeechTranscriber into a SpeechAnalyzer,
but the SpeechDetector is not accepted as a speech module. Please help me.
Hey everyone 😊, I am building an app that includes a live camera feed preview. That's all I need to do, alongside identifying the images with Create ML's image classification. I don't need to capture images at all. I've seen some very complicated tutorials; I just want to use a couple of lines of code.
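For what it's worth, a bare-bones preview can be this small (a sketch assuming camera permission has already been granted; the classification part is left out):
import AVFoundation
import UIKit

// Minimal live preview: one session, one input, one preview layer.
final class PreviewViewController: UIViewController {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        DispatchQueue.global().async { self.session.startRunning() }
    }
}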
Topic: Media Technologies · SubTopic: Photos & Camera
Here is the demo from Apple's site
This issue is specific to iOS 18.
When running this demo, if there is a gap in speaking, recognitionTask(with:resultHandler:) returns only the text spoken after the gap, not the concatenation of the earlier text and the newly spoken text.
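Condensed, the relevant handler looks like this (abbreviated from the sample; on iOS 18 the partial transcription restarts after a silence gap instead of accumulating):
import Speech

// Sketch: observe partial results; on iOS 18 bestTranscription resets
// after a silence gap rather than continuing from the previous text.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechAudioBufferRecognitionRequest()
request.shouldReportPartialResults = true

let task = recognizer.recognitionTask(with: request) { result, error in
    if let result {
        print("partial: \(result.bestTranscription.formattedString)")
        if result.isFinal { print("final: \(result.bestTranscription.formattedString)") }
    }
}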
I am seeing a crash on iOS 18.0 in my app due to PHImageManager.default().requestImage. Same crash is seen even while using requestImageDataAndOrientation.
Not sure why this is happening only on iOS 18.0.
Any help on this would be appreciated.
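The call itself is unremarkable (a simplified version; the target size and options here are illustrative):
import Photos
import UIKit

// Simplified version of the crashing call; target size and options illustrative.
func loadThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    PHImageManager.default().requestImage(
        for: asset,
        targetSize: CGSize(width: 300, height: 300),
        contentMode: .aspectFill,
        options: options
    ) { image, _ in
        completion(image)  // the crash occurs on a Photos internal queue (see trace below)
    }
}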
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000000000010
Exception Codes: 0x0000000000000001, 0x0000000000000010
VM Region Info: 0x10 is not in any region. Bytes before following region: 4310777840
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [695]
Triggered by Thread: 14
Thread 14 name: Dispatch queue: */file=33) .Generic
Thread 14 Crashed:
0 libobjc.A.dylib 0x1944d7020 objc_msgSend + 32
1 PhotoLibraryServices 0x1b080262c +[PLUniformTypeIdentifier utiWithCompactRepresentation:conformanceHint:] + 40
2 Photos 0x1b0081354 _resourceInfoFromResultDict + 1048
3 Photos 0x1b0080a7c fetchResourcesForChoosing + 584
4 Photos 0x1b0080714 ___fetchNonHintResources_block_invoke.227 + 152
5 PhotoLibraryServices 0x1b026b3c4 __53-[PLManagedObjectContext _directPerformBlockAndWait:]_block_invoke + 48
6 CoreData 0x19f171b00 developerSubmittedBlockToNSManagedObjectContextPerform + 228
7 libdispatch.dylib 0x19eeff584 _dispatch_client_callout + 16
8 libdispatch.dylib 0x19eef5728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
9 CoreData 0x19f1c2a0c -[NSManagedObjectContext performBlockAndWait:] + 308
10 PhotoLibraryServices 0x1b026cea4 -[PLManagedObjectContext _directPerformBlockAndWait:] + 144
11 PhotoLibraryServices 0x1b026cdf8 -[PLManagedObjectContext performBlockAndWait:] + 196
12 Photos 0x1b00804a4 _fetchNonHintResources + 292
13 Photos 0x1afeedb38 PHChooserListContinueEnumerating + 144
14 Photos 0x1afeed9f8 -[PHImageResourceChooser presentNextQualifyingResource] + 412
15 Photos 0x1afeed1d0 -[PHImageRequest startRequest] + 2424
16 libdispatch.dylib 0x19eee5aac _dispatch_call_block_and_release + 32
17 libdispatch.dylib 0x19eeff584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x19eeee2d0 _dispatch_lane_serial_drain + 740
19 libdispatch.dylib 0x19eeeede0 _dispatch_lane_invoke + 440
20 libdispatch.dylib 0x19eef91dc _dispatch_root_queue_drain_deferred_wlh + 292
21 libdispatch.dylib 0x19eef8a60 _dispatch_workloop_worker_thread + 540
22 libsystem_pthread.dylib 0x2214d5660 _pthread_wqthread + 292
23 libsystem_pthread.dylib 0x2214d29f8 start_wqthread + 8
Hello everyone,
I am looking for a way to programmatically import photos into the Photos library on macOS, e.g. using AppleScript, and also push them to the Shared Library, as can be done through the standard GUI of the Photos application.
Maybe it is not possible using AppleScript; perhaps it can be done with a short Swift script and PhotoKit, I do not know.
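The import half looks feasible with PhotoKit (a sketch covering only the import step; I have not found a public API for pushing to the Shared Library):
import Photos

// Sketch: import an image file into the Photos library. Moving it to the
// Shared Library is not covered; no public API for that step is known to me.
func importPhoto(at url: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        _ = PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: url)
    }
}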
Any help is appreciated!
Thomas
I am trying to use the new SpeechAnalyzer framework in my Mac app, and am running into an issue for some languages.
When I call AssetInstallationRequest.downloadAndInstall() for some languages, it throws an error:
Error Domain=SFSpeechErrorDomain Code=1 "transcription.ar asset not found after attempted download."
The ".ar" appears to be the language code, which in this case was Arabic.
When I call AssetInventory.status(forModules:) before attempting the download, it is giving me a status of "downloading" (perhaps from an earlier attempt?). If this language was completely unsupported, I would expect it to return a status of "unsupported", so I'm not sure what's going on here.
For other languages (Polish, for example) SpeechTranscriber.supportedLocale(equivalentTo:) is returning nil, so that seems like a clearly unsupported language. But I can't tell if the languages I'm trying, like Arabic, are supported and something is going wrong, or if this error represents something I can work around.
Here's the relevant section of code. The error is thrown from downloadAndInstall(), so I never even get as far as setting up the SpeechAnalyzer itself.
private func setUpAnalyzer() async throws {
    guard let sourceLanguage else {
        throw Error.languageNotSpecified
    }
    guard let locale = await SpeechTranscriber.supportedLocale(equivalentTo: Locale(identifier: sourceLanguage.rawValue)) else {
        throw Error.unsupportedLanguage
    }

    let transcriber = SpeechTranscriber(locale: locale, preset: .progressiveTranscription)
    self.transcriber = transcriber

    let reservedLocales = await AssetInventory.reservedLocales
    if !reservedLocales.contains(locale) && reservedLocales.count == AssetInventory.maximumReservedLocales {
        if let oldest = reservedLocales.last {
            await AssetInventory.release(reservedLocale: oldest)
        }
    }

    do {
        let status = await AssetInventory.status(forModules: [transcriber])
        print("status: \(status)")
        if let installationRequest = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
            try await installationRequest.downloadAndInstall()
        }
    }
    ...
I found that when my app is built with Xcode 16 or later, if I enable the floating window feature and then open the system camera, the content in the floating window is not displayed and the window shows a black screen. This does not happen when building with Xcode 15.4; it is the same code, and I do not know why.
Topic: Media Technologies · SubTopic: Photos & Camera · Tags: Swift Packages, App Clips, Developer Tools, iOS
I've been using CGWindowListCreateImage, which automatically creates an image with the size of the captured window.
But SCScreenshotManager.captureImage(contentFilter:configuration:) always creates images with the width and height specified in the provided SCStreamConfiguration. I could set the size explicitly by reading SCWindow.frame or SCContentFilter.contentRect and multiplying the width and height by SCContentFilter.pointPixelScale, but that won't work if I want to keep the window shadow with SCStreamConfiguration.ignoreShadowsSingleWindow = false.
Is there a way, and what's the best way, to take full-resolution screenshots at the correct size?
import Cocoa
import ScreenCaptureKit

class ViewController: NSViewController {
    @IBOutlet weak var imageView: NSImageView!

    override func viewDidAppear() {
        imageView.imageScaling = .scaleProportionallyUpOrDown
        view.wantsLayer = true
        view.layer!.backgroundColor = .init(red: 1, green: 0, blue: 0, alpha: 1)

        Task {
            let windows = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true).windows
            let window = windows[0]
            let filter = SCContentFilter(desktopIndependentWindow: window)
            let configuration = SCStreamConfiguration()
            configuration.ignoreShadowsSingleWindow = false
            configuration.showsCursor = false
            configuration.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
            configuration.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
            print(filter.contentRect)
            let windowImage = try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: configuration)
            imageView.image = NSImage(cgImage: windowImage, size: CGSize(width: windowImage.width, height: windowImage.height))
        }
    }
}
I am using https://developer.apple.com/documentation/applemusicapi/add-tracks-to-a-library-playlist
to add tracks to playlists. This endpoint works fine for all playlists except for collaborative playlists.
For a collaborative playlist, I get the following 500 error as a response:
"errors": [
{
"id": "<some id>",
"title": "Upstream Service Error",
"detail": "Unable to update tracks",
"status": "500",
"code": "50001"
}
]
}
Steps to reproduce:
1. Create a playlist in your library.
2. Use the API to add a song.
3. Confirm that it works.
4. Make that same playlist collaborative.
5. Update the playlist ID in your API request (making a playlist collaborative changes its ID).
6. Confirm that you get the 500 error.
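For reference, the request follows the documented shape (a URLSession sketch; tokens and IDs are placeholders):
import Foundation

// Sketch: POST a track to a library playlist. Developer/user tokens and
// the playlist/song IDs are placeholders.
func addTrack(playlistID: String, songID: String,
              developerToken: String, userToken: String) async throws {
    let url = URL(string: "https://api.music.apple.com/v1/me/library/playlists/\(playlistID)/tracks")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    request.setValue(userToken, forHTTPHeaderField: "Music-User-Token")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "data": [["id": songID, "type": "songs"]]
    ])
    let (_, response) = try await URLSession.shared.data(for: request)
    print(response)  // 204 for normal playlists, 500 for collaborative ones
}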
I've been having issues with authorization after switching from v1 to v3, and I have tried some of the suggestions provided in a reply to an earlier post of mine. I have also tried to reach out to Apple Support a few times, but I haven't received any support that has helped me move forward.
I have tested my token at https://jwt.io and I am getting "Signature Verified". I have tried multiple browsers in private/incognito mode. Now, when I try to sign in to my Apple Account to test my player, I receive an error that states "There is a Problem Connecting. There May be an Issue with Your Network." (which is not the case).
I have tried everything I can think of, I'm at a loss and would appreciate any help to get my project moving forward. This is what I am seeing in the browser developer console (using Firefox):
Authorization failed: AUTHORIZATION_ERROR: Unauthorized
MKError https://js-cdn.music.apple.com/musickit/v3/musickit.js:13
authorize https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
asyncGeneratorStep$w https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
_next https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
media.mydomain.com:398:19
https://media.mydomain.com/:398
I'm unable to play music using MusicKit.js in the Chrome or Firefox browsers.
Even using the Apple guide here: https://js-cdn.music.apple.com/musickit/v1/index.html, where I've added my Music Developer Token and a song/album URL, playback only works in Safari and not in Chrome or Firefox.
I'm unsure if this is a global issue, or if there is something I need to do to enable playback in other browsers, but as it stands it's not working for me.
Thanks for any help in advance!
I have an app that edits photos in your library. When I call
try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)
The following is logged to the console:
writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
What does that mean and how can I resolve it?
Xcode Version 16.0 (16A242d)
iOS 18.1 (22B82)
Our iOS/AppleTV video content playback app uses AVPlayer to play HLS video streams and supports both custom and system playback UIs. The Fairplay content key is retrieved using AVContentKeySession. AirPlay is supported too.
When the iPhone is connected to a TV through the Lightning Apple Digital AV Adapter (A1438), the app is mirrored as expected.
Problem: when using an iPhone or iPad on iOS 18.1.1, FairPlay-protected HLS streams are not played and a CoreMediaErrorDomain -12035 error is received by the AVPlayerItem. Also, once the issue has occurred, the mirroring freezes (the TV indefinitely displays the app playback screen) although the app works fine on the iOS device.
The content key retrieval works as expected (I can see that two content key requests are made by the system, by the way, probably one for local playback and one for the adapter, as when AirPlaying), and the error is thrown after providing the AVContentKeyResponse.
Unfortunately, as far as I know, there is no documentation on CoreMediaErrorDomain errors, so I don't know what -12035 means.
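For reference, our key delivery is the standard AVContentKeySession flow (a condensed sketch; the certificate and license-server calls are app-specific and shown as placeholders):
import AVFoundation

// Condensed FairPlay key delivery; certificate/license fetching is app-specific.
final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        let appCert = Data()  // placeholder: the FairPlay application certificate
        let assetID = "assetID".data(using: .utf8)!
        keyRequest.makeStreamingContentKeyRequestData(
            forApp: appCert, contentIdentifier: assetID, options: nil
        ) { spcData, error in
            guard let spcData else { return }
            let ckcData = Data()  // placeholder: CKC returned by the license server for spcData
            let response = AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckcData)
            keyRequest.processContentKeyResponse(response)
            // the -12035 error arrives on the AVPlayerItem after this point
        }
    }
}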
The issue does not occur:
- on an iPhone on iOS 17.7 (even with FairPlay-protected HLS streams)
- when playing DRM-free video content (whatever the iOS version)
- when using the USB-C AV Adapter (whatever the iOS version)
Also worth noting: the issue does not occur with other video playback apps such as Apple TV or Netflix, although I have no details on the kind of streams these apps play or how (or whether) they retrieve a FairPlay content key, so I don't know if this is relevant.