We are planning to develop an application using the Apple Music API.
We would like to design our system based on the details of the rate limits mentioned below and have a few questions:
https://developer.apple.com/documentation/applemusicapi/generating-developer-tokens#Request-Rate-Limiting
Regarding the Catalog API (/v1/catalog/*), we understand that server-side caching is enabled, making it less likely to reach the rate limit. Is this understanding correct? (Excluding the search API)
For APIs like the Library API (/v1/me/library/*), where responses vary by user, we assume they are more likely to reach the rate limit. Is this correct?
We plan to implement optimizations to minimize unnecessary API calls. Given this, would the current Music API be able to handle a significant increase in users? (Assuming a DAU of around 100,000 to 1,000,000)
If the API cannot support this scale, would it be allowed under Apple’s policy to cache responses from the Catalog API (/v1/catalog/*) via our proxy server to avoid hitting the rate limit?
The third question is the one we most want to confirm.
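To make that last question concrete, here is a rough sketch of the proxy-side caching we have in mind (all names are ours and this is only an illustration, not an Apple API):
import Foundation
// Sketch: cache Catalog API responses on our proxy, keyed by request path, for a short
// TTL so that repeated identical requests do not reach api.music.apple.com.
actor CatalogResponseCache {
    private struct Entry { let data: Data; let expiresAt: Date }
    private var storage: [String: Entry] = [:]
    private let ttl: TimeInterval = 300 // example value; the acceptable TTL is part of our question

    func response(for path: String, fetch: @Sendable () async throws -> Data) async throws -> Data {
        if let entry = storage[path], entry.expiresAt > Date() {
            return entry.data // served from our cache, no upstream Apple Music API call
        }
        let data = try await fetch()
        storage[path] = Entry(data: data, expiresAt: Date().addingTimeInterval(ttl))
        return data
    }
}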
I have created an app where you can speak using SFSpeechRecognizer: it recognizes your speech as text, translates it, and then speaks it back using speech synthesis. All locales for SFSpeechRecognizer, and switching between them, work fine while the app is in the foreground. However, after I turn off the screen (the app is still running, I just turned off the screen) and try to create a new recognitionTask, the recognition task receives this error: "User denied access to speech recognition." The weird thing is that it only happens with some languages: the error occurs with the Croatian or Hungarian locale for speech recognition but not with the English or Spanish locale.
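For context, this is roughly how the new recognition task is created (a minimal sketch; the function name and the way the request is built are simplified here):
import Speech
// Sketch: create a recognizer for the selected locale and start a new recognition task.
// The "User denied access to speech recognition" error surfaces in the result handler.
func startRecognition(localeID: String, request: SFSpeechAudioBufferRecognitionRequest) -> SFSpeechRecognitionTask? {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: localeID)),
          recognizer.isAvailable else { return nil }
    return recognizer.recognitionTask(with: request) { result, error in
        if let error = error {
            print("Recognition error: \(error.localizedDescription)")
        } else if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }
}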
I've been having issues with authorization after switching from v1 to v3. I have tried some of the suggestions provided in a reply to an earlier post of mine, and I have reached out to Apple Support a few times as well, but I haven't received any help that moved me forward.
I have tested my token at https://jwt.io and I get "Signature Verified". I have tried multiple browsers in private/incognito mode; now, when I try to sign in to my Apple Account to test my player, I receive an error that states "There is a Problem Connecting. There May be an Issue with Your Network." (which is not the case).
I have tried everything I can think of, I'm at a loss and would appreciate any help to get my project moving forward. This is what I am seeing in the browser developer console (using Firefox):
Authorization failed: AUTHORIZATION_ERROR: Unauthorized
MKError https://js-cdn.music.apple.com/musickit/v3/musickit.js:13
authorize https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
asyncGeneratorStep$w https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
_next https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
media.mydomain.com:398:19
https://media.mydomain.com/:398
For some users in production, there is a high probability that, after launching the app, using AVPlayer to play any local audio resource results in the following error. Restarting the app doesn't help.
issue:
[error: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (24), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x30311f270 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}]
I've checked the code, and there aren't actually multiple AVPlayers playing simultaneously. What could be causing this?
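For reference, a rough way to check how many descriptors the process actually holds (a diagnostic sketch using Darwin's getdtablesize and fcntl, not a fix):
import Darwin
// Count the file descriptors currently open in this process, to confirm whether
// POSIX error 24 (EMFILE, "Too many open files") is really being approached.
func openFileDescriptorCount() -> Int {
    var count = 0
    for fd in 0..<getdtablesize() where fcntl(fd, F_GETFD) != -1 {
        count += 1
    }
    return count
}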
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.
What I Have Tried So Far
I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring. Here are some of my findings:
Google Cast SDK:
Google Cast SDK is primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real-time.
ReplayKit for Screen Capture:
RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream (see the sketch after this list). However, sending this stream to Google TV in real time is a challenge. I could potentially encode the video as HLS and stream it, but the delay is significant.
RTSP/UDP Streaming:
Some third-party libraries support RTSP/UDP streaming for real-time screen sharing. Google TV does not natively support RTSP, making this approach difficult.
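For reference, here is the ReplayKit capture step mentioned above (a minimal sketch; the encoding and transport to the TV, which are the actual problem, are not shown):
import ReplayKit
// Sketch: capture the screen as CMSampleBuffers. What to do with each buffer
// (encode, packetize, send to a receiver) is left open.
func startScreenCapture() {
    let recorder = RPScreenRecorder.shared()
    guard recorder.isAvailable else { return }
    recorder.isMicrophoneEnabled = false
    recorder.startCapture(handler: { sampleBuffer, bufferType, error in
        guard error == nil, bufferType == .video else { return }
        // Hand the CMSampleBuffer to an encoder / streaming layer here.
    }, completionHandler: { error in
        if let error = error {
            print("startCapture failed: \(error)")
        }
    })
}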
My Questions:
Is it possible to achieve real-time screen mirroring on Google TV using the Google Cast SDK?
Does Google TV support WebRTC or any other low-latency streaming protocol that can be used from iOS?
Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency?
I would appreciate any guidance, code examples, or references to relevant documentation.
I noticed that Instagram and iMessage support receiving shared lyrics from Apple Music. Specifically, when users long-press lyrics, a sheet pops up showing iMessage and Instagram. Clicking on either app generates a beautifully formatted lyrics image.
I've looked through MusicKit documentation but couldn't find any related APIs.
How can I implement this functionality in my app?
Hello everyone,
I am currently working on an app project aimed at users who want to quickly and easily capture their ideas and notes while on the go. The basic concept is to develop an iOS app where users can store both typed notes and voice recordings – essentially a "brain dump" solution. The core functionality (storing, editing, synchronizing via CloudKit, etc.) will be handled within the iOS app.
In addition, I plan to integrate a CarPlay extension that allows the driver to start and stop a recording – ideally through a minimalist interface featuring a large record button and a "Done" button. Since the iPhone is often not within immediate reach in the car, the CarPlay integration should serve as a quick trigger to initiate the recording in the iOS app.
My questions are as follows:
Has anyone had experience implementing a CarPlay extension for an app that primarily handles notes and voice recordings, rather than falling into the traditional categories like navigation, audio, or communication?
Has such a concept ever been approved by Apple, or are there known hurdles and guidelines that must be observed?
Are there alternative approaches to implementing CarPlay integration in this context in a compliant and effective manner?
I would greatly appreciate any feedback, shared experiences, and tips on best practices.
Thank you in advance and best regards!
Hi,
On macOS Sonoma and earlier versions, I used my script along with a LaunchDaemon to continuously record my screen.
However, after upgrading to macOS Sequoia, the LaunchDaemon no longer works for screen recording. It only works when I use a LaunchAgent.
For my workflow, using a LaunchDaemon and running the ffmpeg process as root is much more efficient than running it as a regular user. Does anyone know how to run a script in the background using a LaunchDaemon on macOS Sequoia?
Here are my LaunchDaemon plist and script:
LaunchDaemon plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key> <string>local.ScreenRecord</string>
<key>Disabled</key> <false/>
<key>RunAtLoad</key> <true/>
<key>KeepAlive</key> <true/>
<key>Nice</key>
<integer>-20</integer>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/usr/local/distrib/record.bash</string>
</array>
</dict>
</plist>
Script
## !!! START CONFIGURATION !!!
#
FFMPEG_LOC="/usr/local/distrib/ffmpeg"
FFMPEG_INPUT="-hide_banner -f avfoundation -capture_cursor 1 -pixel_format uyvy422"
FFMPEG_VSET="-codec libx264 -r 22 -crf 26 -preset veryfast -b:v 5M -maxrate 7M -bufsize 14M"
TIME_REC="-t 600"
FOLDER_REC="/usr/local/distrib/REC/"
#
## !!! END CONFIGURATION !!!
#
# Find number id monitor
MON_ID=$($FFMPEG_LOC -hide_banner -f avfoundation -list_devices true -i "" 2>&1 | awk -F'[]|[]' '/Capture\ screen/ {print $4}')
#
if [ -n "$MON_ID" ]
then
# Time for name
DATE_REC=$(date +"%m-%d-%Y_%H-%M-%S")
# Number of output video file
OUTPUT_NUM=0
# Full command to record
FFMPEG_FULL_COMMAND=""
for MON_NUM in $MON_ID
do
FFMPEG_FULL_COMMAND=""$FFMPEG_FULL_COMMAND" "$FFMPEG_INPUT" -i "$MON_NUM" "$FFMPEG_VSET" "$TIME_REC" -map "$OUTPUT_NUM" "$FOLDER_REC""$DATE_REC"_Mon"$MON_NUM".mkv"
let OUTPUT_NUM=$OUTPUT_NUM+1
done
$FFMPEG_LOC $FFMPEG_FULL_COMMAND
else
echo "No monitors"
sleep 5
fi
Will the restrictions on callkit be lifted in China in 2025? If so, how can you legally use callkit in China?
I've been working with MusicKit without enrolling in a developer account, and I hadn't run into any issues until I noticed that the SystemMusicPlayer item is nil for some songs, and I don't understand why. Are some songs blocked from the framework? Or is it somehow a limitation of not having registered MusicKit to the bundle ID? I am planning on using MusicKit properly in production; this is just a test app for a package I'm working on.
These are the nil songs which I got from the Discovery Station: https://music.apple.com/ca/album/okay/950816298?i=950816304
https://music.apple.com/ca/album/youre-so-cool/1670485433?i=1670485446
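For reference, this is roughly how I check whether one of those catalog IDs resolves to a playable item (a minimal sketch; it assumes MusicKit authorization has already been granted):
import MusicKit
// Sketch: fetch one of the affected catalog IDs and inspect whether play parameters come back.
func inspectSong(id: String) async throws {
    let request = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: MusicItemID(id))
    let response = try await request.response()
    if let song = response.items.first {
        print(song.title, song.playParameters as Any) // nil playParameters would explain the nil player item
    } else {
        print("No catalog song found for \(id)")
    }
}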
I'm unable to play music using Musickit.js in the Chrome or Firefox browsers.
Even using the Apple example here: https://js-cdn.music.apple.com/musickit/v1/index.html, I've added my Music Developer Token and a song/album URL, but it only works in Safari and not in Chrome or Firefox.
I'm unsure if this is a global issue, or if there is something I need to do to enable playback in other browsers, but as it stands it's not working for me.
Thanks for any help in advance!
Hello,
I'm sporadically getting the unknown CoreHaptics error described in the title, when trying to create a CHHapticPatternPlayer from an .ahap file — this does not occur in a repeatable manner, unfortunately...
This occurs in a SpriteKit game, running on both iOS 17 and 18.
Anyone know what the error means, since it's apparently undocumented?
Note that the haptics engine is started right before creating the pattern, and the start request doesn't appear to fail...
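For reference, this is roughly how the player is created (a simplified sketch; the file name is a placeholder for my actual .ahap resource):
import CoreHaptics
// Sketch: start the engine, load the AHAP JSON into a dictionary, and build a pattern player.
func makePlayer(engine: CHHapticEngine) throws -> CHHapticPatternPlayer? {
    try engine.start()
    guard let url = Bundle.main.url(forResource: "pattern", withExtension: "ahap") else { return nil }
    let data = try Data(contentsOf: url)
    guard let dict = try JSONSerialization.jsonObject(with: data) as? [CHHapticPattern.Key: Any] else { return nil }
    let pattern = try CHHapticPattern(dictionary: dict)
    return try engine.makePlayer(with: pattern) // the unknown error is thrown around this point
}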
Thanks,
D.
I'm building a camera app that does some post-processing after the photo has been taken. With 12MP images the processing is reasonably fast, but larger images (24MP) are very slow.
I created a very simple example to demonstrate the issue, which is loading an image and the rendering it to data.
let context = CIContext()
let imageUrl = Bundle.main.url(forResource: "12mp", withExtension: "jpg")!
let data = try! Data(contentsOf: imageUrl)
let ciImage = CIImage(data: data)!
let start = CFAbsoluteTimeGetCurrent()
let jpegData = context.jpegRepresentation(of: ciImage, colorSpace: context.workingColorSpace!)
print(jpegData?.count ?? 0)
print("Render completed: " + String(CFAbsoluteTimeGetCurrent() - start))
Running this code on an iPhone 16 Pro with different images produces these benchmarks:
12MP => 0.03s
24MP => 1.22s
48MP => 2.98s
I understand that processing time will increase with resolution but it doesn't seem linear. I have tried setting different CiContext options such as .useSoftwareRenderer: false but it has made no difference.
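For completeness, this is how that option was passed (a one-line sketch of what I tried):
// Same measurement as above, but with the software renderer explicitly disabled.
let gpuContext = CIContext(options: [.useSoftwareRenderer: false])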
From profiling the process, it looks like JPEG decoding is the bottleneck. This is for a 48MP image:
Is there any way this can be improved?
Hi,
I am trying to enable the default MIDINetworkSession in a Catalyst app on macOS like this:
MIDINetworkSession.default().isEnabled = true
MIDINetworkSession.default().connectionPolicy = .anyone
In the AppSandbox I have both incoming and outgoing network connections enabled. And I also added the NSLocalNetworkUsageDescription key to the info.plist. Bonjour services are also added to the info.plist:
NSBonjourServices
_apple-midi._udp.
Nevertheless the session stays disabled. Running the same code works just fine on iOS.
Is there any special setup I need to make on macOS to enable the MIDINetworkSession?
Thanks!
Hi All,
I am working on a DJ playout app (macOS). The app has a few AVAudioPlayerNodes combined with the ApplicationMusicPlayer from MusicKit. I can route the output of the AVAudioPlayerNodes to a hardware device so that the audio files are directed to their own dedicated output on my Mac. The ApplicationMusicPlayer, however, follows the default output, and this is pretty annoying.
Has anyone found a solution to chain the ApplicationMusicPlayer and route it to a specific output device?
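For context, this is roughly how the AVAudioEngine hosting those player nodes is routed to its own device (a sketch; the device ID comes from our own device enumeration). ApplicationMusicPlayer offers no equivalent hook that I have found:
import AVFoundation
import AudioToolbox
import CoreAudio
// Sketch: point the engine's output unit at a specific CoreAudio device.
func route(engine: AVAudioEngine, to deviceID: AudioDeviceID) {
    var device = deviceID
    guard let unit = engine.outputNode.audioUnit else { return }
    let status = AudioUnitSetProperty(unit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &device,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    if status != noErr {
        print("Failed to set output device: \(status)")
    }
}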
Thanks
Pancras
Hello,
I am having difficulty configuring MusicKit correctly for the web app I am building and am seeking assistance with the issues below. I would greatly appreciate any help!
After allowing access to the following,
"Access Request
media.mydomain.com would like to access Apple Music, media library, and listening activity for myemail@icloud.com.",
I get a popup error that states, "Authorization failed. Please try again.".
Following is the information that is given in developer console:
[Error] Failed to load resource: the server responded with a status of 403 () (webPlayerLogout, line 0)
[Error] Authorization failed:
AUTHORIZATION_ERROR: Unauthorized
(anonymous function) (media.mydomain.com:398)
We use BassDSDPlayer / SFBAudioEngine to play just about any file, but playing Apple Music is failing. All subscriptions are up to date. We stop the SFBAudioEngine and the BassDSDPlayer before playing Apple Music to no avail.
PRINTS:
Supported files in /Users/dorian/Music/Music/Media.localized/Music/4: 28364
Apple Music is authorized and can play catalog.
Resetting default output device...
Releasing BassDSDPlayer audio device...
BassDSDPlayer: Audio device released.
STOPPED sfbAudioDevice
Default output device is ID: 76
applicationQueuePlayer _establishConnectionIfNeeded timeout [ping did not pong]
applicationQueuePlayer _establishConnectionIfNeeded timeout [ping did not pong]
Player State - After resetting output:
Playback Status: stopped
Queue Count: 0
No track is playing.
Music player reset successfully.
BassDSDPlayer: Audio device released.
Default output device set successfully: 76
Default output device is ID: 76
Default output device set successfully: 76
Default output device ID: 76
Validated PlayParameters for track: squabble up
PlayParameters: PlayParameters(id: 1781270321, kind: "song", isLibrary: nil, catalogID: nil, libraryID: nil, deviceLocalID: nil, rawValues: [:])
Starting playback...
Player State - After playback:
Playback Status: stopped
Queue Count: 1
No track is playing.
Notification BASS DSD NSConcreteNotification 0x600007ce2b00 {name = kUpdateSongInfo; object = {
AlbumTitle = GNX;
ArtistName = "Kendrick Lamar";
SongArtwork = "<NSImage 0x6000041b7ca0 Size={300, 300} RepProvider=<NSImageArrayRepProvider: 0x600003518770, reps:(\n "NSBitmapImageRep 0x600009ed9dc0 Size={300, 300} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=300x300 Alpha=NO Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600007ce15c0"\n)>>";
SongLength = "157.992";
SongTitle = "squabble up";
Source = AppleMusic;
}}
Apple Music track loaded: squabble up by Kendrick Lamar
Player State - Before play:
Playback Status: stopped
Queue Count: 1
No track is playing.
prepareToPlay failed [no target descriptor]
NSError Code: 1, Domain: MPMusicPlayerControllerErrorDomain
Player State - After play:
Playback Status: stopped
Queue Count: 1
No track is playing.
func playAppleMusicTracks(tracks: [Track]) {
    AppleMusicManager.shared.isAuthorizedAndReadyForPlayback { isAuthorized in
        guard isAuthorized else {
            print("Apple Music authorization or capabilities insufficient for playback.")
            return
        }
        print("Resetting default output device...")
        self.stopSFBAudioDevice()
        self.resetMusicPlayer()
        self.resetAudioSystem()
        self.ensureOutputDeviceReady()
        Task {
            for track in tracks {
                guard self.validatePlayParameters(for: track) else { continue }
                do {
                    try await ApplicationMusicPlayer.shared.queue.insert(track, position: .afterCurrentEntry)
                    guard !ApplicationMusicPlayer.shared.queue.entries.isEmpty else {
                        print("Queue is empty after queuing. Playback cannot proceed.")
                        return
                    }
                    self.notifyAppleMusicTrackInfo(track)
                } catch {
                    print("Error starting playback: \(error)")
                    if let nsError = error as NSError? {
                        print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                    }
                }
            }
            MusicKitWrapper.shared.logPlayerState(message: "After playback")
        }
    }
}
@objc public class MusicKitWrapper: NSObject {
    @objc public static let shared = MusicKitWrapper()
    private let player = ApplicationMusicPlayer.shared

    // Play the current track
    @objc public func play() {
        guard !player.queue.entries.isEmpty else {
            print("Queue is empty. Cannot start playback.")
            return
        }
        logPlayerState(message: "Before play")
        Task {
            do {
                try await player.prepareToPlay()
                try await player.play()
                print("Playback started successfully.")
            } catch {
                if let nsError = error as NSError? {
                    print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                }
            }
            logPlayerState(message: "After play")
        }
    }
}
Any help would be appreciated.
Thanks!
What projection method does the immersive space use: equirectangular (ERP), fisheye, or cubemap?
We want to achieve the same effect as Apple Immersive Video.
Hi, I'm wondering about one of the properties in the MPNowPlayingInfoCenter: MPNowPlayingInfoPropertyElapsedPlaybackTime. The docs say that updating this property frequently is not required, because the system can automatically calculate elapsed playback time based on the infrequent values we provide.
Is performance harmed by updating this property every second? Should I add some filtering/throttling to update this property infrequently? Am I overthinking this, and it doesn't matter either way?
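For reference, a minimal sketch of the infrequent-update approach the docs describe (names are mine):
import MediaPlayer
// Sketch: set the elapsed time together with the playback rate once per state change
// (play, pause, seek), and let the system extrapolate the position in between.
func updateNowPlaying(title: String, duration: TimeInterval, elapsed: TimeInterval, rate: Double) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPMediaItemPropertyTitle] = title
    info[MPMediaItemPropertyPlaybackDuration] = duration
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsed
    info[MPNowPlayingInfoPropertyPlaybackRate] = rate // 0 when paused, 1 when playing
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}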
Kind regards.
We develop a video playback app on Apple TV which has the two following features:
Its content browsing screen installs a gesture recognizer for presses on the PlayPause Siri Remote button in order to directly launch a playback (see the sketch after this list). The gesture recognizer is attached to the content browsing UIViewController view.
It presents its own custom playback UI with an AVPlayerLayer for the video and supports MPNowPlayingSession in order to publish current playback information and respond to remote commands. It also supports switching between fullscreen and Picture in Picture playback.
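For reference, the first feature is set up roughly like this (a simplified sketch; class and method names are ours):
import UIKit
// Sketch: react to the Siri Remote Play/Pause button on the browsing screen.
final class BrowsingViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let playPause = UITapGestureRecognizer(target: self, action: #selector(playPausePressed))
        playPause.allowedPressTypes = [NSNumber(value: UIPress.PressType.playPause.rawValue)]
        view.addGestureRecognizer(playPause)
    }

    @objc private func playPausePressed() {
        // Directly launch playback of the focused content item.
    }
}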
Both features work fine, i.e. playback is launched when pressing the PlayPause Siri Remote button and, during playback, the playback info is properly advertised on other devices and remote commands are triggered as expected.
However, when pressing the PlayPause Siri Remote button while the video is playing in PiP, the "pause" remote command is sometimes triggered instead of the .playPause gesture recognizer. The issue may not occur on the first press, but it does for subsequent PlayPause presses. Navigating a bit in the app UI seems to help prevent the issue.
Finally, the issue only occurs if the video is playing. If the video is paused, the PlayPause Siri remote button gesture is always recognized instead of the remote command.
Please note that, before using MPNowPlayingSession (and the corresponding MPRemoteCommandCenter), the app was using the default MPRemoteCommandCenter to support remote commands and the issue did not occur.
We don't reproduce this issue with the Apple TV app so there's probably something we are not doing right. Has someone any clue?