In the past, when using Lightning, many external devices had to go through MFi certification. However, since the iPhone 15 switched from Lightning to USB-C, is MFi certification still required?
Our company has developed several UVC devices, and we have confirmed that iPads can read frames from external cameras through the external device type in AVFoundation. However, this is not supported on iPhones.
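For reference, the discovery we rely on on iPad looks roughly like this (a minimal Swift sketch; the .external device type requires iPadOS 17, and the function name is illustrative):

import AVFoundation

// Minimal sketch: discover external (UVC) cameras via AVFoundation.
// The .external device type exists from iOS/iPadOS 17; on iPhone this
// discovery currently returns an empty list.
@available(iOS 17.0, *)
func externalCameras() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )
    return discovery.devices
}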
We are currently exploring feasible ways to enable UVC device support on iPhones. Is MFi certification the only option? If so, is the MFi certification process for USB-C the same as it was for Lightning? Does it still require purchasing an MFi chip and manufacturing specially designed USB-C cables?
Hello Apple Developer Community,
We are developing a music management platform for restaurants and cafes in Saudi Arabia. Our app enables businesses to schedule playlists and allows visitors to request songs via barcodes. Music playback is powered by Apple Music, and users must have their own Apple Music subscriptions to access the music. Our service charges a monthly subscription fee for these management features, not for music access itself.
Project Overview and MusicKit Role
Our app integrates MusicKit to leverage Apple Music’s catalog and playback capabilities. Users log in with their Apple Music accounts, ensuring they have an active subscription for music playback. Our platform’s value lies in its tools—playlist scheduling and song requests—which are built on top of MusicKit’s APIs. We offer these features exclusively in Saudi Arabia.
Legal Context in Saudi Arabia
In Saudi Arabia, to our understanding, no special licenses are required for playing music in commercial venues like restaurants and cafes. This means our clients can use Apple Music subscriptions for playback without additional performance rights licenses. While this aligns with local laws, we recognize that Apple’s global policies may impose stricter requirements, prompting our need for clarification.
Subscription Model and Monetization Concerns
We charge a monthly subscription fee for access to our app’s features (e.g., scheduling playlists and managing song requests). This fee is separate from the Apple Music subscription, which users must maintain for playback. However, Apple’s MusicKit terms state: "You agree not to require payment for or indirectly monetize access to the Apple Music service." We’re concerned whether our subscription model might be interpreted as indirectly monetizing Apple Music access, given its reliance on MusicKit for functionality.
Scheduling Feature and Synchronization Rights
Our app allows businesses to schedule playlists for general time slots (e.g., “play this playlist from 6 PM to 8 PM”). It does not support precise scheduling, such as playing a specific song at an exact moment (e.g., “play this song at 7:30 PM”). Apple’s guidelines mention that “deeper or more complex music integration” may require additional licenses, like synchronization rights. We’re unsure if our general scheduling feature crosses this threshold or remains within MusicKit’s standard usage.
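For concreteness, when a time slot begins we queue the slot's playlist roughly like this (a minimal sketch of our trigger; the playlist ID and function name are illustrative):

import MusicKit

// Minimal sketch: at the start of a scheduled slot, fetch the catalog
// playlist and queue it for playback (playlist ID is illustrative).
func startScheduledSlot(playlistID: MusicItemID) async throws {
    let request = MusicCatalogResourceRequest<Playlist>(matching: \.id, equalTo: playlistID)
    let response = try await request.response()
    guard let playlist = response.items.first else { return }

    let player = ApplicationMusicPlayer.shared
    player.queue = ApplicationMusicPlayer.Queue(for: [playlist])
    try await player.play()
}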
Questions for Clarification
We’d greatly appreciate expert input on the following:
Monetization: Does our subscription fee for management features (scheduling and song requests) violate Apple’s policy against indirectly monetizing Apple Music access?
Local Context: Given that Saudi Arabia requires no additional licenses for commercial music playback, does this impact our compliance with Apple’s global terms?
Scheduling: Does our playlist scheduling for general time slots (not exact moments) fall within MusicKit’s permitted scope, or does it require further licensing?
Thank you in advance for any insights or guidance to ensure our app aligns with Apple’s policies!
I found some documentation about kexts, but I understand these have since moved to DriverKit extensions (dexts).
So I was wondering whether a dext can completely replace the functionality of a previous kext.
https://developer.apple.com/documentation/kernel/implementing_drivers_system_extensions_and_kexts
That page includes the following note:
Important
In macOS 11 and later, the kernel doesn’t load a kext if an equivalent DriverKit solution exists. You may continue to use kexts in macOS 10.15 and earlier.
Hi there, I have an app which accesses the media library to save and load files. Since iOS 18.2, access to the media library has stopped working.
I've noticed that our app no longer appears in the list of apps with access to Files (Privacy & Security -> Files & Folders).
The weird behavior is that one iPhone on iOS 18.3.1 can access the files, but others on the same iOS version (18.3.1) cannot. Tests on the Simulators (Mac) work fine as well.
My Info.plist has contained the keys to access the media library for a long time (unchanged for at least the last 4 years), including the key "Privacy - Media Library Usage Description".
I've also noticed that the popup requesting access to the media library, shown when using the app for the first time, no longer appears. We also request network (Wi-Fi) access, and that message still shows up, but not the media library one.
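For anyone reproducing this, the explicit check looks like the following (shown in Swift; the Xamarin bindings mirror these MediaPlayer APIs):

import MediaPlayer

// Minimal sketch: check the current status and force the request
// that would normally trigger the first-launch popup.
func ensureMediaLibraryAccess() {
    switch MPMediaLibrary.authorizationStatus() {
    case .notDetermined:
        MPMediaLibrary.requestAuthorization { status in
            print("Media library authorization: \(status.rawValue)")
        }
    case .denied, .restricted:
        print("Denied or restricted; must be re-enabled in Settings.")
    case .authorized:
        print("Authorized.")
    @unknown default:
        break
    }
}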
I'm using Visual Studio with Xamarin on a Mac.
I really appreciate any help you can offer, because this is very odd behavior and it started with iOS 18.2.
Hello Apple,
I've been using iOS for many years and never had any issues with the Urdu keyboard, but since the new 18.4 beta update some words are no longer handled correctly. For example, my friend's name "راعنیہ" can no longer be typed as one word; the keyboard keeps separating it as "راعنی ہ". It's frustrating to use, and it's not just this one word; many other words are affected as well. The new font and the removal of spacing are also hurting my eyes while reading and typing. I hope Apple fixes this ASAP.
Thank you
My iOS app can access the iPhone media library, but when it runs on a Mac (M4) the same access fails with Error Domain=NSCocoaErrorDomain Code=257 (a file-read permission error).
When building an application for the Mac via macCatalyst from an iOS codebase, a link error like the one below occurs:
Undefined symbol: _OBJC_CLASS_$_AVPlayerViewController
The AVPlayerViewController documentation says macCatalyst is supported, but what is the reality?
[AVPlayerViewController](https://developer.apple.com/documentation/avkit/avplayerviewcontroller?language=objc)
Each version of the environment is as follows.
Xcode 16.2
macOS deployment target: macOS 10.15
iOS deployment target: iOS 13.0
Thank you for your support.
On an iOS 18 phone, I use AVCaptureSession to capture HDR in the x420 format. The output CMSampleBuffer is in the HLG colorspace, and the propagated attachments contain kCVImageBufferAmbientViewingEnvironmentKey and kCVImageBufferSceneIlluminationKey. Now I use CAMetalLayer to render the CVPixelBuffer to the screen, but the result is brighter than when rendered with AVSampleBufferDisplayLayer.
Here is my code.
- (void)_updateColorSpaceIfNeed:(CVPixelBufferRef)pixelBuffer {
    CAMetalLayer *layer = (CAMetalLayer *)_mtkView.layer;
    if (![layer isKindOfClass:CAMetalLayer.class]) return;
    layer.wantsExtendedDynamicRangeContent = YES;
    // Read the ambient-viewing-environment metadata propagated on the buffer.
    // __bridge_transfer hands ownership to ARC so the data stays alive until
    // the CAEDRMetadata is created (releasing it before use was a bug).
    CFDataRef ambientViewingEnvironment = (CFDataRef)CVBufferCopyAttachment(pixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, NULL);
    NSData *data = (__bridge_transfer NSData *)ambientViewingEnvironment;
    if (!data) return;
    CAEDRMetadata *metadata = [CAEDRMetadata HLGMetadataWithAmbientViewingEnvironment:data];
    // CAEDRMetadata *metadata = [CAEDRMetadata HLGMetadata];
    layer.EDRMetadata = metadata;
    layer.pixelFormat = MTLPixelFormatRGBA16Float;
    CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceITUR_2100_HLG);
    layer.colorspace = colorspace;
    if (colorspace) CGColorSpaceRelease(colorspace);
}
Why does the CAEDRMetadata class provide the "HLGMetadataWithAmbientViewingEnvironment:" and "HLGMetadata" methods, but no "HLGMetadataWithAmbientViewingEnvironment:sceneIllumination:" variant?
I want to know how kCVImageBufferAmbientViewingEnvironmentKey and kCVImageBufferSceneIlluminationKey affect tone mapping. Is there any documentation I can refer to?
Hello,
Basically, I am reading and writing an asset.
To simplify, I am just reading the asset and rewriting it into an output video without any modifications.
However, I want to add a fade-out effect to the last three seconds of the output video.
I don’t know how to do this.
So far, before adding the CMSampleBuffer to the output video, I tried reducing its volume using an extension on CMSampleBuffer.
In the extension, I passed 0.4 for testing, aiming to reduce the video's overall volume by 60%.
My question is:
How can I directly adjust the volume of a CMSampleBuffer?
Here is the extension:
extension CMSampleBuffer {
    /// Scales the raw samples in place, assuming interleaved 16-bit integer PCM.
    func adjustVolume(by factor: Float) -> CMSampleBuffer? {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(self) else { return nil }

        var length = 0
        var dataPointer: UnsafeMutablePointer<Int8>?
        guard CMBlockBufferGetDataPointer(blockBuffer,
                                          atOffset: 0,
                                          lengthAtOffsetOut: nil,
                                          totalLengthOut: &length,
                                          dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
              let dataPointer else { return nil }

        let sampleCount = length / MemoryLayout<Int16>.size
        dataPointer.withMemoryRebound(to: Int16.self, capacity: sampleCount) { pointer in
            for i in 0..<sampleCount {
                let scaled = Float(pointer[i]) * factor
                // Clamp to the Int16 range: a plain Int16(_) conversion
                // traps at runtime if the scaled value overflows.
                pointer[i] = Int16(max(Float(Int16.min), min(Float(Int16.max), scaled)))
            }
        }
        return self
    }
}
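For comparison, the more standard route might be a volume ramp through AVAudioMix rather than rewriting samples (a sketch assuming an export or playback pipeline that accepts an audio mix, which my reader/writer code currently doesn't use):

import AVFoundation

// Sketch: fade out the last three seconds with a volume ramp.
// Assumes the mix is later attached to an AVAssetExportSession
// or an AVPlayerItem via their audioMix property.
func makeFadeOutMix(for asset: AVAsset, audioTrack: AVAssetTrack) -> AVAudioMix {
    let duration = asset.duration
    let fadeStart = CMTimeSubtract(duration, CMTime(seconds: 3, preferredTimescale: 600))

    let parameters = AVMutableAudioMixInputParameters(track: audioTrack)
    parameters.setVolumeRamp(fromStartVolume: 1.0,
                             toEndVolume: 0.0,
                             timeRange: CMTimeRange(start: fadeStart, end: duration))

    let mix = AVMutableAudioMix()
    mix.inputParameters = [parameters]
    return mix
}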
We are planning to develop an application using the Apple Music API.
We would like to design our system based on the details of the rate limits mentioned below and have a few questions:
https://developer.apple.com/documentation/applemusicapi/generating-developer-tokens#Request-Rate-Limiting
Regarding the Catalog API (/v1/catalog/*), we understand that server-side caching is enabled, making it less likely to reach the rate limit. Is this understanding correct? (Excluding the search API)
For APIs like the Library API (/v1/me/library/*), where responses vary by user, we assume they are more likely to reach the rate limit. Is this correct?
We plan to implement optimizations to minimize unnecessary API calls. Given this, would the current Music API be able to handle a significant increase in users? (Assuming a DAU of around 100,000 to 1,000,000)
If the API cannot support this scale, would it be allowed under Apple’s policy to cache responses from the Catalog API (/v1/catalog/*) via our proxy server to avoid hitting the rate limit?
The third question is the one we most want to confirm.
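For what it's worth, our current plan on the client side is to honor 429 responses with a backoff like the following (a minimal sketch; whether the service sends a Retry-After header is an assumption on our part):

import Foundation

// Minimal sketch: retry an Apple Music API request on HTTP 429,
// waiting for the Retry-After interval if one is provided.
func fetchWithBackoff(_ request: URLRequest, maxAttempts: Int = 3) async throws -> Data {
    for _ in 0..<maxAttempts {
        let (data, response) = try await URLSession.shared.data(for: request)
        guard let http = response as? HTTPURLResponse, http.statusCode == 429 else {
            return data
        }
        let delay = Double(http.value(forHTTPHeaderField: "Retry-After") ?? "") ?? 1.0
        try await Task.sleep(nanoseconds: UInt64(delay * 1_000_000_000))
    }
    throw URLError(.cannotLoadFromNetwork)
}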
I have created an app where you can speak using SFSpeechRecognizer; it recognizes your speech as text, translates it, and then speaks it back using speech synthesis. All locales for SFSpeechRecognizer, and switching between them, work fine while the app is in the foreground. But after I turn off the screen (the app is still running; I only turned the screen off) and try to create a new recognitionTask, the task receives this error: "User denied access to speech recognition." The weird thing is that it only happens with some languages: the error occurs with the Croatian or Hungarian locale but not with the English or Spanish locale.
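For reference, the check I run before creating the task looks roughly like this (a Swift sketch; the idea that Croatian and Hungarian fall back to server-based recognition is my assumption, not something I've confirmed):

import Speech

// Minimal sketch: verify authorization and availability before starting
// a recognition task; prefer on-device recognition where supported.
func makeRequest(localeID: String) -> SFSpeechAudioBufferRecognitionRequest? {
    guard SFSpeechRecognizer.authorizationStatus() == .authorized,
          let recognizer = SFSpeechRecognizer(locale: Locale(identifier: localeID)),
          recognizer.isAvailable else { return nil }

    let request = SFSpeechAudioBufferRecognitionRequest()
    // supportsOnDeviceRecognition is false for some locales, which may
    // matter once the screen locks (assumption).
    request.requiresOnDeviceRecognition = recognizer.supportsOnDeviceRecognition
    return request
}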
I've been having issues with authorization after switching from v1 to v3. I have tried some of the suggestions provided in a reply to an earlier post of mine, and I have tried to reach out to Apple Support a few times as well, but I haven't received any support that has helped me move forward.
I have tested my token at https://jwt.io and I'm getting a "Signature Verified". I've tried multiple browsers in private/incognito mode, and now when I try to sign in to my Apple Account to test my player I receive an error that states "There is a Problem Connecting. There May be an Issue with Your Network." (which is not the case).
I have tried everything I can think of, I'm at a loss and would appreciate any help to get my project moving forward. This is what I am seeing in the browser developer console (using Firefox):
Authorization failed: AUTHORIZATION_ERROR: Unauthorized
MKError https://js-cdn.music.apple.com/musickit/v3/musickit.js:13
authorize https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
asyncGeneratorStep$w https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
_next https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
media.mydomain.com:398:19
https://media.mydomain.com/:398
For some users in production, there is a high probability that after launching the app, using AVPlayer to play any local audio resource results in the following error. Restarting the app doesn't help.
issue:
[error: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (24), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x30311f270 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}
I've checked the code, and there aren't actually multiple AVPlayers playing simultaneously. What could be causing this?
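Since POSIX error 24 means the process has exhausted its file-descriptor limit, a small diagnostic like this might help locate the leak (a sketch using standard Darwin calls; where and when to log the count is up to you):

import Darwin

// Minimal sketch: count the file descriptors currently open in this
// process by probing each slot up to the soft limit with fcntl.
func openFileDescriptorCount() -> Int {
    var limit = rlimit()
    getrlimit(RLIMIT_NOFILE, &limit)
    let maxFD = Int32(min(limit.rlim_cur, 10_240)) // clamp in case of a huge limit
    var count = 0
    for fd in 0..<maxFD where fcntl(fd, F_GETFD) != -1 {
        count += 1
    }
    return count
}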
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.
What I Have Tried So Far
I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring. Here are some of my findings:
Google Cast SDK:
Google Cast SDK is primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real-time.
ReplayKit for Screen Capture:
RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream. However, sending this stream to Google TV in real time is a challenge. I could potentially encode the video as HLS and stream it, but the delay is significant. (See the capture sketch after these notes.)
RTSP/UDP Streaming:
Some third-party libraries support RTSP/UDP streaming for real-time screen sharing. Google TV does not natively support RTSP, making this approach difficult.
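For reference, the capture entry point mentioned above (a minimal sketch; the encoding and transport stages are left open, since that is exactly the open question):

import ReplayKit

// Minimal sketch: receive the screen as CMSampleBuffers via ReplayKit.
// Encoding (e.g. VideoToolbox) and transport to the TV are not shown.
func startScreenCapture() {
    let recorder = RPScreenRecorder.shared()
    guard recorder.isAvailable else { return }

    recorder.startCapture { sampleBuffer, bufferType, error in
        guard error == nil, bufferType == .video else { return }
        // Hand sampleBuffer to an encoder here.
    } completionHandler: { error in
        if let error { print("Failed to start capture: \(error)") }
    }
}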
My Questions:
Is it possible to achieve real-time screen mirroring on Google TV using Google Cast SDK? Does Google TV support WebRTC or any low-latency streaming protocol that can be used from iOS? Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency? I would appreciate any guidance, code examples, or references to relevant documentation.
I noticed that Instagram and iMessage support receiving shared lyrics from Apple Music. Specifically, when users long-press lyrics, a sheet pops up showing iMessage and Instagram. Clicking on either app generates a beautifully formatted lyrics image.
I've looked through MusicKit documentation but couldn't find any related APIs.
How can I implement this functionality in my app?
Hello everyone,
I am currently working on an app project aimed at users who want to quickly and easily capture their ideas and notes while on the go. The basic concept is to develop an iOS app where users can store both typed notes and voice recordings – essentially a "brain dump" solution. The core functionality (storing, editing, synchronizing via CloudKit, etc.) will be handled within the iOS app.
In addition, I plan to integrate a CarPlay extension that allows the driver to start and stop a recording – ideally through a minimalist interface featuring a large record button and a "Done" button. Since the iPhone is often not within immediate reach in the car, the CarPlay integration should serve as a quick trigger to initiate the recording in the iOS app.
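To make the idea concrete, the interface I have in mind would be something like this (a sketch only, assuming Apple would grant a CarPlay entitlement for this use case; all names and images are illustrative):

import CarPlay
import UIKit

// Sketch: a minimal two-button CarPlay grid (Record / Done) that
// forwards actions to the recording logic in the iOS app.
func makeRecorderTemplate() -> CPGridTemplate {
    let record = CPGridButton(titleVariants: ["Record"],
                              image: UIImage(systemName: "record.circle")!) { _ in
        // Start a voice recording in the companion iOS app.
    }
    let done = CPGridButton(titleVariants: ["Done"],
                            image: UIImage(systemName: "checkmark.circle")!) { _ in
        // Stop and save the recording.
    }
    return CPGridTemplate(title: "Quick Note", gridButtons: [record, done])
}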
My questions are as follows:
Has anyone had experience implementing a CarPlay extension for an app that primarily handles notes and voice recordings, rather than falling into the traditional categories like navigation, audio, or communication?
Has such a concept ever been approved by Apple, or are there known hurdles and guidelines that must be observed?
Are there alternative approaches to implementing CarPlay integration in this context in a compliant and effective manner?
I would greatly appreciate any feedback, shared experiences, and tips on best practices.
Thank you in advance and best regards!
Hi,
On macOS Sonoma and earlier versions, I used my script along with a LaunchDaemon to continuously record my screen.
However, after upgrading to macOS Sequoia, the LaunchDaemon no longer works for screen recording. It only works when I use a LaunchAgent.
For my workflow, using a LaunchDaemon and running the ffmpeg process as root is much more efficient than running it as a regular user. Does anyone know how to run a script in the background using a LaunchDaemon on macOS Sequoia?
Here are my LaunchDaemon plist and script:
LaunchDaemon plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.ScreenRecord</string>
    <key>Disabled</key>
    <false/>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>Nice</key>
    <integer>-20</integer>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/usr/local/distrib/record.bash</string>
    </array>
</dict>
</plist>
Script
## !!! START CONFIGURATION !!!
#
FFMPEG_LOC="/usr/local/distrib/ffmpeg"
FFMPEG_INPUT="-hide_banner -f avfoundation -capture_cursor 1 -pixel_format uyvy422"
FFMPEG_VSET="-codec libx264 -r 22 -crf 26 -preset veryfast -b:v 5M -maxrate 7M -bufsize 14M"
TIME_REC="-t 600"
FOLDER_REC="/usr/local/distrib/REC/"
#
## !!! END CONFIGURATION !!!
#
# Find the monitor device numbers
MON_ID=$($FFMPEG_LOC -hide_banner -f avfoundation -list_devices true -i "" 2>&1 | awk -F'[]|[]' '/Capture\ screen/ {print $4}')
#
if [ -n "$MON_ID" ]
then
    # Timestamp for the file name
    DATE_REC=$(date +"%m-%d-%Y_%H-%M-%S")
    # Index of the output video file
    OUTPUT_NUM=0
    # Full command to record
    FFMPEG_FULL_COMMAND=""
    for MON_NUM in $MON_ID
    do
        FFMPEG_FULL_COMMAND=""$FFMPEG_FULL_COMMAND" "$FFMPEG_INPUT" -i "$MON_NUM" "$FFMPEG_VSET" "$TIME_REC" -map "$OUTPUT_NUM" "$FOLDER_REC""$DATE_REC"_Mon"$MON_NUM".mkv"
        let OUTPUT_NUM=$OUTPUT_NUM+1
    done
    $FFMPEG_LOC $FFMPEG_FULL_COMMAND
else
    echo "No monitors"
    sleep 5
fi
Will the restrictions on CallKit be lifted in China in 2025? If so, how can CallKit be used legally in China?
I've been working with MusicKit without enrolling in a developer account, and I hadn't run into any issues until I noticed that the SystemMusicPlayer item is nil for some songs, and I don't understand why. Are some songs blocked from the framework? Or is it somehow a limitation of not having registered MusicKit to the bundle ID? I'm planning to use MusicKit properly in production; this is just a test app for a package I'm working on.
These are the nil songs which I got from the Discovery Station: https://music.apple.com/ca/album/okay/950816298?i=950816304
https://music.apple.com/ca/album/youre-so-cool/1670485433?i=1670485446
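For anyone who wants to reproduce this, I'm checking the songs roughly like so (a minimal sketch; my guess that missing playParameters explains the nil items is unconfirmed):

import MusicKit

// Minimal sketch: fetch a catalog song and inspect its playParameters,
// which must be non-nil for playback to be possible.
func checkPlayability(id: MusicItemID) async throws {
    let request = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: id)
    let response = try await request.response()
    guard let song = response.items.first else {
        print("Song \(id) not found in catalog")
        return
    }
    print("\(song.title) playParameters: \(String(describing: song.playParameters))")
}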
I'm unable to play music using MusicKit JS in the Chrome or Firefox browsers.
Even using the Apple guide here: https://js-cdn.music.apple.com/musickit/v1/index.html - I've added my music developer token and a song/album URL, but it only works in Safari, not in Chrome or Firefox.
I'm unsure if this is a global issue, or if there is something I need to do to enable playback in other browsers, but as it stands it's not working for me.
Thanks for any help in advance!