I am working on updating a live blog in Apple News. As far as I know, there is an update endpoint for updating content in Apple News. Is there any feature in Apple News that triggers an event when the original content is updated, so that the updated content can be pulled?
General
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
When making a call to https://api.music.apple.com/v1/me/library/artists to get a user's library artists, it returns the following (as an example):
[
{
id: 'r.FCwruQb',
type: 'library-artists',
href: '/v1/me/library/artists/r.FCwruQb?l=en-US',
attributes: { name: 'A Great Big World' }
},
{
id: 'r.7VSWOgj',
type: 'library-artists',
href: '/v1/me/library/artists/r.7VSWOgj?l=en-US',
attributes: { name: 'Aaliyah' }
},
...
]
If I try to use an artist ID from that returned data to look up additional information about the artist by calling https://api.music.apple.com/v1/catalog/us/artists/{id}, it fails.
User Library Artists don't seem to equal Catalog Artists.
It'd be great if there was a way to use these interchangeably. Am I missing something?
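For what it's worth, library resources in the Apple Music API expose a catalog relationship, so you may be able to resolve a library artist to its catalog counterpart rather than reusing the library ID directly. A minimal sketch, assuming valid developer and user tokens (the fetchCatalogArtist helper and its parameters are illustrative, not from any official sample):

import Foundation

// Hypothetical helper: resolve a library artist to its catalog counterpart
// via the library resource's "catalog" relationship. Tokens are assumed valid.
func fetchCatalogArtist(libraryID: String,
                        developerToken: String,
                        musicUserToken: String) async throws -> Data {
    let url = URL(string: "https://api.music.apple.com/v1/me/library/artists/\(libraryID)/catalog")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    request.setValue(musicUserToken, forHTTPHeaderField: "Music-User-Token")
    let (data, _) = try await URLSession.shared.data(for: request)
    return data // JSON describing the corresponding catalog artist, if one exists
}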
How can media resources in my app be recommended to the system's media controls in Control Center, just like TikTok in the attached picture?
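For context, what normally surfaces an app in the system media controls is publishing metadata to MPNowPlayingInfoCenter and registering remote command handlers. A minimal sketch, assuming an active audio session and the title/duration values shown (not a confirmed answer for the TikTok-style recommendation specifically):

import MediaPlayer

// Minimal sketch: publish metadata so the system media controls can show it.
func publishNowPlayingInfo(title: String, duration: TimeInterval) {
    let info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info

    // Registering handlers makes the play/pause controls actionable for this app.
    let commands = MPRemoteCommandCenter.shared()
    _ = commands.playCommand.addTarget { _ in .success }
    _ = commands.pauseCommand.addTarget { _ in .success }
}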
I have implemented fetching Apple Music preview songs using a Swift framework integrated into a Unity app.
My requirement is to fetch full tracks from a user’s Apple Music library and play them inside Unity.
To do this, I understand that I need to handle authentication, generate a Developer Token, and then obtain a Music User Token to access the user’s Apple Music content.
Currently, I have an Individual Apple Developer account (not Organization).
Based on my research, it seems that:
With an Individual account, I can implement this functionality and even upload builds to TestFlight for internal testing.
However, when releasing the app publicly on the App Store, full-track playback may be restricted for Individual accounts and allowed only for Organization accounts.
👉 Can you confirm if this understanding is correct?
👉 Specifically, is it possible for an Individual account to fetch and play full-length tracks from a subscribed Apple Music user’s library (at least for internal/TestFlight testing)?
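For the authorization flow mentioned above, a minimal MusicKit sketch would look like the following; it assumes the app has the MusicKit App Service enabled, in which case the framework mints the developer token for you. Whether full-track playback is permitted on an Individual account is a program/policy question this code cannot settle.

import MusicKit

// Sketch of the user-authorization step described above. With the MusicKit
// capability enabled, the developer token is handled automatically.
func requestAppleMusicAccess() async -> Bool {
    let status = await MusicAuthorization.request()
    return status == .authorized
}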
Topic:
Media Technologies
SubTopic:
General
Hello,
I am trying to access the Apple Music Feed API, but I am receiving a 401 Unauthorized error whenever I try to access it.
I have tried using my own code to generate a JWT and directly call the API (which can call the standard Apple Music API successfully).
> GET /v1/feed/song/latest HTTP/2
> Host: api.media.apple.com
> user-agent: insomnia/2023.5.8
> authorization: Bearer [REDACTED]
> accept: */*
< HTTP/2 401
< content-type: application/json; charset=utf-8
< content-length: 0
< x-apple-jingle-correlation-key: AV5IOHBNM2UUJVOFQ4HZ2TGF6Q
< x-daiquiri-instance: daiquiri:10001:daiquiri-all-shared-ext-7bb7c9b9bb-r459v:7987:25RELEASE91:daiquiri-amp-kubernetes-shared-ext-ak8s-prod-pv4-amp-daiquiri-ingress-prod
and also the Apple-provided Python example code, which gives me authentication errors too:
$ python3 ./apple_music_feed_example.py --key-id NMBH[...] --team-id 3TNZ[...] --secret-key-file-path "/Users/foxt/Documents/am-feed/NMBH[...].p8" --out-dir .
running....
INFO:__main__:Sending requests to https://api.media.apple.com
INFO:__main__:Getting the latest export for feed artist
Exception: Authentication Failed. Did you provide the correct team id, key id, and p8 file?
Does this API need to be enabled on my account separately from the main Apple Music API? The documentation reads to me as if anyone with an Apple Developer Programme membership can use this API, and I did not see any information regarding any other requirements.
Hello, I want to know whether there are any restrictions on using MusicKit in a mobile app to manipulate the audio output with an EQ on tracks coming from Apple Music (without modifying the actual track structure/data, of course; just the audio output).
My Environment:
Device: Mac (Apple Silicon, arm64)
OS: macOS 15.6.1
Description:
I'm developing a music app and have encountered an issue where I cannot update the playbackState in MPNowPlayingInfoCenter after my app loses audio focus to another app. Even though my app correctly sets MPNowPlayingInfoCenter.default().playbackState = .paused, the system's Now Playing UI (Control Center, Lock Screen, AirPods controls) does not reflect the change. The UI remains stuck until the app that currently holds audio focus also changes its playback state.
I've observed this same behavior in other third-party music apps from the App Store, which suggests it might be a system-level issue.
Steps to Reproduce:
Use the two most popular music apps in the Chinese App Store, NetEase Cloud Music and QQ Music (let's call them App A and App B):
Start playback in App A.
Start playback in App B. (App B now has audio focus, and App A is still playing).
Attempt to pause App A via the system's Control Center or its own UI.
Observed Behavior: App A's audio stream stops, but in the system's Now Playing controls, App A still appears to be playing. The progress bar continues to advance, and the pause button becomes unresponsive.
If you then pause App B, the Now Playing UI for App A immediately corrects itself and displays the proper "paused" state.
My Questions:
Is there a specific procedure required to update MPNowPlayingInfoCenter when an app is not the current "Now Playing" application?
Is this a known issue or expected behavior in macOS?
Are there any official workarounds or solutions to ensure the UI updates correctly?
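For reference, a minimal sketch of the update path these questions refer to; the function name and the elapsed parameter are illustrative, and playbackState is the macOS-only API in question:

import MediaPlayer

// Mirror the real player state into the system Now Playing UI (macOS).
func syncPlaybackState(isPaused: Bool, elapsed: TimeInterval) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    // Freeze the progress bar at the real position when pausing.
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsed
    info[MPNowPlayingInfoPropertyPlaybackRate] = isPaused ? 0.0 : 1.0
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
    MPNowPlayingInfoCenter.default().playbackState = isPaused ? .paused : .playing
}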
I have a question about TTS.
My device's default language is zh-HK (Cantonese).
My device is an iPhone 16 Pro running iOS 18.6.
I created a function, speakMandarin, because I want the device to speak zh-CN (Putonghua/Mandarin).
However, the device only speaks zh-HK (Cantonese), even though I already set the AVSpeechSynthesisVoice language to zh-CN.
import AVFoundation

func speakMandarin(text: String) {
    print("speakMandarin, \(text)")
    lastError = nil // Reset error

    // Stop any ongoing speech before starting new speech
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
    }

    // Configure the speech utterance (the SSML initializer is failable,
    // so avoid force-unwrapping it)
    guard let utterance = AVSpeechUtterance(ssmlRepresentation: text) else {
        lastError = "Invalid SSML input"
        return
    }
    utterance.rate = 0.5 // Natural speaking speed
    utterance.pitchMultiplier = 1.0
    utterance.volume = 1.0

    // Prefer a Mandarin voice, trying zh-CN first and then zh-TW
    let preferredLanguages = ["zh-CN", "zh-TW"]
    var selectedVoice: AVSpeechSynthesisVoice?
    for lang in preferredLanguages {
        if let voice = AVSpeechSynthesisVoice(language: lang) {
            print(lang)
            selectedVoice = voice
            break
        }
    }

    // If no Mandarin voice was found, fall back to the system default
    if selectedVoice == nil {
        selectedVoice = AVSpeechSynthesisVoice(language: nil)
        // "No Mandarin voice pack detected; the system default voice will be used"
        lastError = "未偵測到普通話語音包,將使用系統預設語音"
        print(lastError ?? "")
    }

    utterance.voice = selectedVoice
    print(utterance)
    synthesizer.speak(utterance)
}
Here is my log:
speakMandarin, <speak>你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?</speak>
zh-CN
[AVSpeechUtterance 0x1194efb80] String: 你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?
Voice: [AVSpeechSynthesisVoice 0x104ceff90] Language: zh-CN, Name: Tingting, Quality: Default [com.apple.voice.compact.zh-CN.Tingting]
Rate: 0.50
Volume: 1.00
Pitch Multiplier: 1.00
Delays: Pre: 0.00(s) Post: 0.00(s)
Hello Apple Engineers,
I am developing a feature related to SpeechRecognizer (import Speech), and I have two questions:
1. After adding the NSSpeechRecognitionUsageDescription key to my Info.plist, when I initialize an SFSpeechRecognizer instance, the system shows an authorization alert. The alert contains a pre-defined message from Apple:
“Speech data from this app will be sent to Apple to process your requests. This will also help Apple improve its speech recognition technology.”
Is it possible to remove or customize this message?
2. My app runs in a network environment with a whitelist. I need to know which URL the SFSpeechRecognizer instance connects to, and which port it uses, so that I can add it to the whitelist.
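For reference on the first question, the consent alert appears around the standard authorization request shown below. As far as I know, its wording is fixed by the system, and only the NSSpeechRecognitionUsageDescription purpose string you supply is under your control (this sketch just shows where the prompt is triggered):

import Speech

// The system consent alert (Apple's fixed privacy text plus your
// NSSpeechRecognitionUsageDescription string) appears around this request.
func requestSpeechAuthorization() {
    SFSpeechRecognizer.requestAuthorization { status in
        print("Speech recognition authorization status: \(status.rawValue)")
    }
}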
Thank you very much for your support!
Best regards,
Yu Cheng
Add a currently-playing-track endpoint to the Apple Music API. It's kind of wild that Apple Music goes after Spotify without having such a useful endpoint.
Hi, I'm working on video editing software that lets you composite and export videos. I use a custom compositor to apply my effects, etc.
In my crash dashboard, I am seeing a report of an EXC_BAD_ACCESS crash from objc_msgSend. Below is the stacktrace.
libobjc.A.dylib objc_msgSend
libdispatch.dylib _dispatch_sync_invoke_and_complete_recurse
libdispatch.dylib _dispatch_sync_f_slow
[symbolication failed]
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_barrier_sync_invoke_and_complete
AVFCore -[AVCustomVideoCompositorSession(AVCustomVideoCompositorSession_FigCallbackHandling) _customCompositorShouldCancelPendingFrames]
AVFCore _customCompositorShouldCancelPendingFramesCallback
MediaToolbox remoteVideoCompositor_HandleVideoCompositorClientMessage
CoreMedia __figXPCConnection_CallClientMessageHandlers_block_invoke
libdispatch.dylib _dispatch_call_block_and_release
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_serial_drain
libdispatch.dylib _dispatch_lane_invoke
libdispatch.dylib _dispatch_root_queue_drain_deferred_wlh
libdispatch.dylib _dispatch_workloop_worker_thread
libsystem_pthread.dylib _pthread_wqthread
libsystem_pthread.dylib start_wqthread
What stood out to me is that this is only being reported from iOS 26.0+ devices. Part of the stack trace failed to symbolicate ([symbolication failed]). I'm 90% confident that this is Apple code, not my app's code.
I cannot reproduce this locally. Is this a known issue? What are the possible root-causes, and how can I verify/eliminate them?
Thanks,
Fetching the featured artists in a playlist no longer works in the iOS 26.1 beta:
let detailedPlaylist = try await playlist.with([.tracks, .featuredArtists], preferredSource: .library)
It throws an error when using .library, and using .catalog returns an empty array.
This works correctly on iOS 26.0 and iOS 18.
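A self-contained repro sketch of the call above (reading the loaded relationship via a featuredArtists property is my assumption of how the result is consumed):

import MusicKit

// Repro sketch for the regression described above.
func loadFeaturedArtists(for playlist: Playlist) async {
    do {
        let detailed = try await playlist.with([.tracks, .featuredArtists],
                                               preferredSource: .library)
        print(detailed.featuredArtists?.map(\.name) ?? [])
    } catch {
        // Thrown here on iOS 26.1 beta when preferredSource is .library.
        print("featuredArtists fetch failed: \(error)")
    }
}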
Hi there,
I recently launched a DJ app on the Mac App Store, and I was wondering how I could access raw audio data for songs from Apple Music, just like Serato, Rekordbox, djay, etc. do?
Thanks,
Gunek
Topic:
Media Technologies
SubTopic:
General
Tags:
Apple Music API
MusicKit
Performance Partners Program
Apple Music Feed
Hello, I'm trying to write a shortcut using Toolbox Pro that is triggered by an accessibility trigger and then favorites the currently playing song. It's working pretty well, but I noticed that for some artists, especially Asian ones, it simply doesn't work. While debugging, I noticed that the tool uses the same song ID, artist ID, and everything else as it should to search for the song and favorite it. However, I noticed that Apple Music treats artists with romanized names as two separate artists!
https://music.apple.com/br/artist/王菲/41760704
https://music.apple.com/br/artist/faye-wong/41760704?l=en-GB
You can see that the ID is the same (41760704). It seems that when I search for the artist, the first artist (王菲) is returned, so when I open the artist's web pages I can see a star next to the song name, meaning it was liked. However, the same song under the romanized artist (faye-wong) doesn't have a like.
This is very weird, right?
On older iOS versions, when the user taps the Mute/Volume button on AVPlayerViewController to unmute, the system restores the device's volume to the level it had before the user muted.
On iOS 26, when the user taps the unmute button on screen, the volume starts from 0 instead of being restored (though it is still restored if the user unmutes by pressing the physical volume buttons).
As I understand it, the volume bar/button on AVPlayerViewController is an MPVolumeView that I cannot control, so this is system behavior.
But I have received complaints that this is a bug, and I did not find any documentation describing this change in Mute button behavior.
I need some basis to explain this situation. Thank you.