I am getting high error rates from the Apple Music API. This has been happening for months now, and it is quite frustrating. It is a mix of 404, 504, and random 500 errors. I hit these endpoints all of the time, so it is not like I am hitting a resource that doesn't exist. Why is this happening? Is this a known issue that is getting worked on?
                    
                  
                    
I have an app (currently in the development stage) which needs to use ffmpeg, so I searched for how to embed ffmpeg in Apple apps and found this article: https://doc.qt.io/qt-6/qtmultimedia-building-ffmpeg-ios.html
It works correctly for iOS but not for macOS (I have made macOS-specific changes using ChatGPT and traditional web searching).
Drive link for the files and the instructions I'm following: https://drive.google.com/drive/folders/11wqlvb8SU2thMSfII4_Xm3Kc2fPSCZed?usp=share_link
Can someone from Apple, or anyone else, help me figure out what I'm doing wrong?
                    
                  
                
                    
Use case: When SharePlay-ing a fully immersive 3D scene (e.g. a virtual stage), I would like to shine lights on specific Personas so they show up brighter when someone in the scene is recording the feed (think of a camera person in the scene wearing Vision Pro).
Note: This spotlight effect only needs to render in the camera person's headset and does NOT need to be journaled or shared.
Before I dive into this, my technical question: Can environmental and/or scene lighting affect Persona brightness in SharePlay? If not, is there a way to programmatically make Personas "brighter" when recording?
My screen recordings always seem to turn out darker than what's rendered in the environment, and manually adjusting the contrast tends to blow out the details in a Persona's face (especially in visionOS 26).
                    
                  
                
                    
Hi Apple Music API / MusicKit / MediaPlayer Team,
Similar to how currentPlaybackRate keeps the same pitch, it would be great to have a currentPlaybackPitch parameter as well. Alternatively, adding a preservesPitch parameter would also work.
I see that iOS 26 AutoMix on Apple Music currently does pitch shifting during music transitions, so maybe this is something that could be exposed in later betas of iOS 26?
The main feature request we get is to support simple pitch changes for the Apple Music tracks we play through our app. Is this being considered?
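For context, the kind of control we are asking for already exists at the AVAudioEngine level for local, non-protected audio; a minimal sketch (illustrative only, since Apple Music tracks cannot be routed through this graph, which is exactly why a MusicKit-level parameter would help):
import AVFoundation

// Minimal sketch of the requested control, using AVAudioUnitTimePitch on a local,
// non-protected file. Apple Music tracks played through MusicKit / MediaPlayer cannot
// be routed through this graph.
final class PitchedPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let timePitch = AVAudioUnitTimePitch()

    func play(fileURL: URL, semitones: Float, rate: Float) throws {
        let file = try AVAudioFile(forReading: fileURL)

        engine.attach(player)
        engine.attach(timePitch)
        engine.connect(player, to: timePitch, format: file.processingFormat)
        engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

        timePitch.pitch = semitones * 100 // measured in cents
        timePitch.rate = rate             // tempo change; this unit preserves pitch

        player.scheduleFile(file, at: nil, completionHandler: nil)
        try engine.start()
        player.play()
    }
}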
                    
                  
                
              
                
              
              
                
                    
                      Hello everyone,
I'm working on implementing a screen sharing feature using RPSystemBroadcastPickerView and a Broadcast Upload Extension to share the entire app screen in an iOS application.
The Broadcast Upload Extension is set up following Apple's ReplayKit guidelines. However, I’m encountering an issue during the broadcast startup sequence:
❗ Problem Description
The Screen Broadcast UI appears as expected
I tap “Start Broadcast”
The countdown (3 → 2 → 1) completes
Then it immediately reverts to the "Start Broadcast" screen, and screen sharing does not begin
No error messages are displayed
None of the extension lifecycle methods (broadcastStarted(withSetupInfo:), processSampleBuffer, etc.) are called
There are no logs or crash reports, neither in the main app nor in the extension
✅ What Has Been Verified
Info.plist of the Broadcast Upload Extension includes:
NSExtensionPointIdentifier = com.apple.broadcast-services-upload
NSExtensionPrincipalClass set correctly
RPBroadcastProcessMode = RPBroadcastProcessModeSampleBuffer
preferredExtension is set properly to the extension’s bundle identifier
Extension is listed in the main app's build settings under "Frameworks, Libraries, and Embedded Content"
⚠️ Additional Concern
We noticed that in Xcode (latest version), the Broadcast Upload Extension is listed under "Embedded Frameworks" with the setting "Embed Without Signing", and there is no option to change it to "Embed & Sign". We're wondering if this could be the reason the extension fails to launch correctly at runtime, despite being detected by the broadcast picker.
❓ Questions
Has anyone faced similar issues where the broadcast never starts despite correct setup?
Could the "Embed Without Signing" be causing the system to silently cancel or ignore the extension at runtime?
Are there any provisioning profile or entitlement requirements specific to Broadcast Upload Extensions that might trigger this behavior silently?
Any insights, suggestions, or workarounds would be greatly appreciated.
Thank you in advance!
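For completeness, here is a minimal sketch of the setup described above (the bundle identifier and class names are placeholders, not our real ones); this is the configuration that never receives the lifecycle callbacks:
import ReplayKit
import CoreMedia
import UIKit

// Host app: present the system broadcast picker pointed at the upload extension.
final class BroadcastViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
        picker.preferredExtension = "com.example.myapp.BroadcastExtension" // placeholder bundle ID
        picker.showsMicrophoneButton = false
        view.addSubview(picker)
    }
}

// Broadcast Upload Extension target: the principal class named by NSExtensionPrincipalClass.
final class SampleHandler: RPBroadcastSampleHandler {
    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        // In the failing scenario above, this is never called.
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        // Video/audio sample buffers arrive here once the broadcast is actually running.
    }
}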
                    
                  
                
                    
I am working on updating a live blog in Apple News. As far as I know, there is an update endpoint for updating content in Apple News. Is there any feature in Apple News that can trigger an event when the original content is updated, so I can pull the updated content?
                    
                  
                
                    
                      When making a call to https://api.music.apple.com/v1/me/library/artists to get a user's library artists, it returns the following (as an example):
[
  {
    id: 'r.FCwruQb',
    type: 'library-artists',
    href: '/v1/me/library/artists/r.FCwruQb?l=en-US',
    attributes: { name: 'A Great Big World' }
  },
  {
    id: 'r.7VSWOgj',
    type: 'library-artists',
    href: '/v1/me/library/artists/r.7VSWOgj?l=en-US',
    attributes: { name: 'Aaliyah' }
  },
  ...
]
If I try to use an artist id from that returned data to look up additional information about the artist by calling https://api.music.apple.com/v1/catalog/us/artists/{id}, it fails.
User Library Artists don't seem to be the same as Catalog Artists.
It'd be great if there were a way to use these interchangeably. Am I missing something?
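One workaround I may try (an assumption on my part based on the library resource relationships in the Apple Music API docs, not something I have confirmed): instead of passing the library ID to the catalog endpoint, ask the library artist for its catalog counterpart via the catalog relationship. A rough MusicKit sketch:
import Foundation
import MusicKit

// Hedged sketch: resolve a library artist ID (e.g. "r.FCwruQb") to its catalog artist by
// following the library resource's catalog relationship. The exact relationship path is
// an assumption based on the Apple Music API documentation for library resources.
func catalogArtist(forLibraryArtistID id: String) async throws -> Artist? {
    let path = "https://api.music.apple.com/v1/me/library/artists/\(id)/catalog"
    guard let url = URL(string: path) else { return nil }

    let request = MusicDataRequest(urlRequest: URLRequest(url: url))
    let response = try await request.response()

    // The payload should mirror the shape of a catalog artists response.
    struct ArtistsPayload: Decodable { let data: [Artist] }
    return try JSONDecoder().decode(ArtistsPayload.self, from: response.data).data.first
}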
                    
                  
                
                    
I donate an INPlayMediaIntent to the system (the donation succeeds), but it does not show up in Control Center.
My code is as follows:
let mediaItems = mediaItems.map { $0.inMediaItem }
let intent = if #available(iOS 13.0, *) {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true,
                      playbackQueueLocation: .now,
                      playbackSpeed: nil,
                      mediaSearch: nil)
} else {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true)
}
intent.suggestedInvocationPhrase = "播放音乐" // "Play music"
let interaction = INInteraction(intent: intent, response: nil)
interaction.donate { error in
    if let error = error {
        print("Intent donation failed: \(error.localizedDescription)")
    } else {
        print("Intent donation succeeded ✅")
    }
}
                    
                  
                
                    
How can media resources in my app be recommended to the system's media Control Center, just like TikTok in the attached picture?
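For context, my current understanding is that an app surfaces its media to the system controls by publishing now-playing metadata and remote commands; a minimal sketch of that part (the recommendation behavior shown for TikTok in the picture may involve more than this):
import MediaPlayer

// Minimal sketch: publish now-playing metadata and remote commands so the system media
// controls (Control Center / Lock Screen) can display and drive our playback. On iOS the
// app also needs an active AVAudioSession with the .playback category and real playback.
func registerNowPlaying(title: String, duration: TimeInterval, isPlaying: Bool) {
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: 0.0,
        MPNowPlayingInfoPropertyPlaybackRate: isPlaying ? 1.0 : 0.0
    ]

    let commands = MPRemoteCommandCenter.shared()
    commands.playCommand.addTarget { _ in
        // Resume playback here.
        return .success
    }
    commands.pauseCommand.addTarget { _ in
        // Pause playback here.
        return .success
    }
}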
                    
                  
                
                    
                      I have implemented fetching Apple Music preview songs using a Swift framework integrated into a Unity app.
My requirement is to fetch full tracks from a user’s Apple Music library and play them inside Unity.
To do this, I understand that I need to handle authentication, generate a Developer Token, and then obtain a Music User Token to access the user’s Apple Music content.
Currently, I have an Individual Apple Developer account (not Organization).
Based on my research, it seems that:
With an Individual account, I can implement this functionality and even upload builds to TestFlight for internal testing.
However, when releasing the app publicly on the App Store, full-track playback may be restricted for Individual accounts and allowed only for Organization accounts.
👉 Can you confirm if this understanding is correct?
👉 Specifically, is it possible for an Individual account to fetch and play full-length tracks from a subscribed Apple Music user’s library (at least for internal/TestFlight testing)?
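For reference, the MusicKit flow I am planning is roughly the following sketch; it assumes the app ID has the MusicKit service enabled and the signed-in user has an Apple Music subscription, independent of the Individual vs. Organization question:
import MusicKit

// Rough sketch: request user authorization, fetch a song from the user's library, and
// play the full track. MusicKit manages the developer token and the Music User Token
// automatically once the app ID has the MusicKit service enabled; full-track playback
// still requires the signed-in user to have an Apple Music subscription.
func playFirstLibrarySong() async throws {
    guard await MusicAuthorization.request() == .authorized else { return }

    var request = MusicLibraryRequest<Song>()
    request.limit = 1
    let response = try await request.response()
    guard let song = response.items.first else { return }

    let player = ApplicationMusicPlayer.shared
    player.queue = [song]
    try await player.play()
}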
                    
                  
                
              
                
              
              
                
          
                    
                      Hello,
I am trying to access the Apple Music Feed API, but I am receiving a 401 Unauthorized error whenever I call it.
I have tried using my own code to generate a JWT and call the API directly (the same code can call the standard Apple Music API successfully).
> GET /v1/feed/song/latest HTTP/2
> Host: api.media.apple.com
> user-agent: insomnia/2023.5.8
> authorization: Bearer [REDACTED]
> accept: */*
< HTTP/2 401 
< content-type: application/json; charset=utf-8
< content-length: 0
< x-apple-jingle-correlation-key: AV5IOHBNM2UUJVOFQ4HZ2TGF6Q
< x-daiquiri-instance: daiquiri:10001:daiquiri-all-shared-ext-7bb7c9b9bb-r459v:7987:25RELEASE91:daiquiri-amp-kubernetes-shared-ext-ak8s-prod-pv4-amp-daiquiri-ingress-prod
I have also tried the Apple-provided Python example code, which gives me authentication errors too.
$ python3 ./apple_music_feed_example.py --key-id NMBH[...] --team-id 3TNZ[...] --secret-key-file-path "/Users/foxt/Documents/am-feed/NMBH[...].p8" --out-dir .
running....
INFO:__main__:Sending requests to https://api.media.apple.com
INFO:__main__:Getting the latest export for feed artist
Exception: Authentication Failed. Did you provide the correct team id, key id, and p8 file?
Does this API need to be enabled on my account separately from the main Apple Music API? The documentation reads to me as if anyone with an Apple Developer Programme membership can use this API, and I did not see any information about any other requirements.
                    
                  
                
                    
Hello, I want to know if there are any restrictions with MusicKit, when used in a mobile app, on manipulating the audio of tracks coming from Apple Music with an EQ. I would not modify the actual track structure/data, of course, just the audio output.
                    
                  
                
                    
                      My Environment:
Device: Mac (Apple Silicon, arm64)
OS: macOS 15.6.1
Description:
I'm developing a music app and have encountered an issue where I cannot update the playbackState in MPNowPlayingInfoCenter after my app loses audio focus to another app. Even though my app correctly calls [MPNowPlayingInfoCenter defaultCenter].playbackState = .paused, the system's Now Playing UI (Control Center, Lock Screen, AirPods controls) does not reflect this change. The UI remains stuck until the app that currently holds audio focus also changes its playback state.
I've observed this same behavior in other third-party music apps from the App Store, which suggests it might be a system-level issue.
Steps to Reproduce:
Use the two most popular music apps in the Chinese App Store, NetEase Cloud Music and QQ Music (let's call them App A and App B):
Start playback in App A.
Start playback in App B. (App B now has audio focus, and App A is still playing).
Attempt to pause App A via the system's Control Center or its own UI.
Observed Behavior: App A's audio stream stops, but in the system's Now Playing controls, App A still appears to be playing. The progress bar continues to advance, and the pause button becomes unresponsive.
If you then pause App B, the Now Playing UI for App A immediately corrects itself and displays the proper "paused" state.
My Questions:
Is there a specific procedure required to update MPNowPlayingInfoCenter when an app is not the current "Now Playing" application?
Is this a known issue or expected behavior in macOS?
Are there any official workarounds or solutions to ensure the UI updates correctly?
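For reference, the update my app performs on pause is essentially the following (a stripped-down sketch; the nowPlayingInfo fields are illustrative, and the playbackState assignment is the call whose effect never reaches the system UI):
import MediaPlayer

// Stripped-down sketch of the pause handling: stop audio, then mirror the state into
// MPNowPlayingInfoCenter so Control Center should show "paused".
func updateSystemNowPlaying(paused: Bool, elapsed: TimeInterval) {
    let center = MPNowPlayingInfoCenter.default()

    var info = center.nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsed
    info[MPNowPlayingInfoPropertyPlaybackRate] = paused ? 0.0 : 1.0
    center.nowPlayingInfo = info

    // macOS-only API: this is the assignment whose effect does not appear while
    // another app holds the Now Playing slot.
    center.playbackState = paused ? .paused : .playing
}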
                    
                  
                
                    
I wanted to know when iOS 26 will be officially released.
                    
                  
                
              
                
              
              
                
          
                    
I have a question about TTS.
My device's default language is zh-HK (Cantonese).
My device is an iPhone 16 Pro running iOS 18.6.
I created a function speakMandarin.
I want the device to speak zh-CN (Putonghua); however, the device can only speak zh-HK (Cantonese).
I have already set the AVSpeechSynthesisVoice language to zh-CN.
func speakMandarin(text: String) {
    print("speakMandarin, \(text)")
    lastError = nil // Reset error

    // Stop any ongoing speech before starting new speech
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
    }

    // Configure the speech utterance
    let utterance = AVSpeechUtterance(ssmlRepresentation: text)!
    utterance.rate = 0.5 // Natural speaking speed
    utterance.pitchMultiplier = 1.0
    utterance.volume = 1.0
    utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")

    let preferredLanguages = ["zh-CN", "zh-TW"]
    var selectedVoice: AVSpeechSynthesisVoice?
    for lang in preferredLanguages {
        if let voice = AVSpeechSynthesisVoice(language: lang) {
            print(lang)
            selectedVoice = voice
            utterance.voice = voice
            break
        }
    }

    // If no Mandarin voice is found, use the system default
    if selectedVoice == nil {
        selectedVoice = AVSpeechSynthesisVoice(language: nil)
        lastError = "未偵測到普通話語音包,將使用系統預設語音" // "No Mandarin voice pack detected; falling back to the system default voice"
        print(lastError ?? "")
    }

    utterance.voice = selectedVoice
    print(utterance)
    synthesizer.speak(utterance)
}
Here is my log:
speakMandarin, <speak>你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?</speak>
zh-CN
[AVSpeechUtterance 0x1194efb80] String: 你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?
Voice: [AVSpeechSynthesisVoice 0x104ceff90] Language: zh-CN, Name: Tingting, Quality: Default [com.apple.voice.compact.zh-CN.Tingting]
Rate: 0.50
Volume: 1.00
Pitch Multiplier: 1.00
Delays: Pre: 0.00(s) Post: 0.00(s)
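One thing I plan to try (an assumption, not a confirmed fix): enumerate the installed voices and build the utterance from plain text instead of the SSML initializer, in case the SSML path is what keeps the Cantonese voice:
import AVFoundation

// Hedged alternative: pick an explicit zh-CN voice from the installed voices and use a
// plain-text utterance. Whether the SSML initializer is what keeps the Cantonese voice
// is an assumption, not a confirmed cause.
func speakMandarinPlain(_ text: String, with synthesizer: AVSpeechSynthesizer) {
    let mandarinVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.language == "zh-CN" } // e.g. Tingting

    let utterance = AVSpeechUtterance(string: text) // plain text, no SSML wrapper
    utterance.voice = mandarinVoice ?? AVSpeechSynthesisVoice(language: "zh-CN")
    utterance.rate = 0.5
    synthesizer.speak(utterance)
}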
                    
                  
                
                    
                      Hello Apple Engineers,
I am developing a feature related to SpeechRecognizer (import Speech), and I have two questions:
1. After adding the NSSpeechRecognitionUsageDescription key to my Info.plist, when I initialize an SFSpeechRecognizer instance, the system shows an authorization alert. The alert contains a pre-defined message from Apple: “Speech data from this app will be sent to Apple to process your requests. This will also help Apple improve its speech recognition technology.” Is it possible to remove or customize this message?
2. My app runs in a network environment with a whitelist. I need to know which URL the SFSpeechRecognizer instance connects to, and which port it uses, so that I can add it to the whitelist.
Thank you very much for your support!
Best regards,
Yu Cheng
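Regarding question 2, one option I am evaluating is on-device recognition, which would keep speech audio off the network entirely; a sketch, under the assumption that the recognizer supports it for my locale:
import Speech

// Hedged sketch: request authorization and force on-device recognition where the
// recognizer supports it, so no speech audio leaves the device (which would also
// sidestep the whitelist question). Locale support varies by device and OS version.
func startOnDeviceRecognition(localeIdentifier: String) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: localeIdentifier)),
              recognizer.supportsOnDeviceRecognition else { return }

        let request = SFSpeechAudioBufferRecognitionRequest()
        request.requiresOnDeviceRecognition = true // never send audio to Apple's servers

        // In real code, keep a reference to the returned task so it can be cancelled,
        // and append audio buffers from an AVAudioEngine tap to `request`.
        _ = recognizer.recognitionTask(with: request) { result, _ in
            if let result {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}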
                    
                  
                
                    
                      Hello,
I'm working on a Flutter app targeting both Android and iOS, where I implemented ShazamKit.
In order to achieve that, I first tried with the flutter_shazam_kit package, but since it's not maintained anymore, I forked it here, and tried to update it to meet the Google Play Store requirements, as you can see here:
https://github.com/mregnauld/flutter_shazam_kit/tree/fix-16k
Unfortunately, after trying everything, my app still doesn't meet the (not so) new 16 KB native library alignment requirement. I'm 100% sure it comes from that package, because the error message disappears if I remove it from my app.
After investigating, it seems that the problem comes from ShazamKit for Android (which you can find here: https://developer.apple.com/download/all/?q=Android%20ShazamKit), and especially the .so files in the .aar file.
Is there anything I can do to fix that, or should I wait until the ShazamKit team fixes it?
I'm totally stuck with that so any help is highly appreciated.
Thanks.
                    
                  
                
                    
Please add a currently-playing-track endpoint to the Apple Music API. It's kind of wild that Apple Music goes after Spotify without having such a useful endpoint.
                    
                  
                
                    
Hi, I'm working on video editing software that lets you composite and export videos. I use a custom compositor to apply my effects, etc.
In my crash dashboard, I am seeing a report of an EXC_BAD_ACCESS crash from objc_msgSend. Below is the stacktrace.
libobjc.A.dylib  objc_msgSend
libdispatch.dylib  _dispatch_sync_invoke_and_complete_recurse
libdispatch.dylib  _dispatch_sync_f_slow
[symbolication failed]
libdispatch.dylib  _dispatch_client_callout
libdispatch.dylib  _dispatch_lane_barrier_sync_invoke_and_complete
AVFCore  -[AVCustomVideoCompositorSession(AVCustomVideoCompositorSession_FigCallbackHandling) _customCompositorShouldCancelPendingFrames]
AVFCore  _customCompositorShouldCancelPendingFramesCallback
MediaToolbox  remoteVideoCompositor_HandleVideoCompositorClientMessage
CoreMedia  __figXPCConnection_CallClientMessageHandlers_block_invoke
libdispatch.dylib  _dispatch_call_block_and_release
libdispatch.dylib  _dispatch_client_callout
libdispatch.dylib  _dispatch_lane_serial_drain
libdispatch.dylib  _dispatch_lane_invoke
libdispatch.dylib  _dispatch_root_queue_drain_deferred_wlh
libdispatch.dylib  _dispatch_workloop_worker_thread
libsystem_pthread.dylib  _pthread_wqthread
libsystem_pthread.dylib  start_wqthread
What stood out to me is that this is only being reported from iOS 26.0+ devices. Part of the stack trace failed to be symbolicated ([symbolication failed]). I'm 90% confident that this is Apple code, not my app's code.
I cannot reproduce this locally. Is this a known issue? What are the possible root-causes, and how can I verify/eliminate them?
Thanks,
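For anyone triaging something similar, the custom compositor surface involved is roughly the following (an illustrative sketch, not my actual code). Since the crash is in the cancellation callback path, one thing I am checking is that all pending-request bookkeeping stays on a single queue:
import AVFoundation
import CoreVideo

// Illustrative sketch of the AVVideoCompositing surface involved, not my actual code.
// The system's _customCompositorShouldCancelPendingFrames callback ends up in
// cancelAllPendingVideoCompositionRequests(); keeping all pending-request bookkeeping on
// one serial queue is one way to rule out races while triaging this kind of crash.
final class SketchCompositor: NSObject, AVVideoCompositing {
    private let queue = DispatchQueue(label: "compositor.requests")
    private var pendingRequests: [AVAsynchronousVideoCompositionRequest] = []

    var sourcePixelBufferAttributes: [String: Any]? {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }

    var requiredPixelBufferAttributesForRenderContext: [String: Any] {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        queue.async {
            self.pendingRequests.append(request)
            defer { self.pendingRequests.removeAll { $0 === request } }

            guard let trackID = request.sourceTrackIDs.first?.int32Value,
                  let frame = request.sourceFrame(byTrackID: trackID) else {
                request.finish(with: NSError(domain: "SketchCompositor", code: -1))
                return
            }
            // Real compositing work would go here; this sketch passes the source frame through.
            request.finish(withComposedVideoFrame: frame)
        }
    }

    func cancelAllPendingVideoCompositionRequests() {
        queue.sync {
            self.pendingRequests.forEach { $0.finishCancelledRequest() }
            self.pendingRequests.removeAll()
        }
    }
}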
                    
                  
                
                    
                      If I fetch a library playlist like the generated "Favorites" playlist via MusicKit like this
guard let initialTracks = try await playlist.with([.tracks]).tracks else {
    return nil
}
I get a list of tracks like this:
...
TrackID:  i.e5gmPS6rZ856
TrackID:  i.4ZQMxU0OxNg0
TrackID:  i.J198KH4P85K4
TrackID:  i.J1AaRC4P85K4
TrackID:  i.4BPqWt0OxNg0
TrackID:  4473570282773028026
TrackID:  4473570282773028025
TrackID:  4015088256684964387
TrackID:  4473570282773028024
TrackID:  7541557725362154249
TrackID:  4473570282773028027
I save the IDs for later use, but when I want to fetch them later, only the ones with IDs that start with "i." work.
static func getLibrarySong(from id: String) async -> Song? {
        var request = MusicLibraryRequest<Song>()
        request.filter(matching: \.id, equalTo: MusicItemID(id))
        
        do {
            let response = try await request.response()
            
            return response.items.first
        } catch {
            ...
        }
    }
Or the Apple Music API endpoint :
static func getLibrarySongFromAPI(with id: String) async -> Song? {
        guard let url = AppleMusicURL.getURL(for: .getSongById, id: id) else {
            return nil
        }
        do {
            let dataRequest = MusicDataRequest(urlRequest: URLRequest(url: url))
            let dataResponse = try await dataRequest.response()
            
            let response = try JSONDecoder().decode(SongsResponse.self, from: dataResponse.data)
            
            return response.data.first
        } catch {
            ...
        }
    }
Both functions above won't work for the purely numeric IDs like 4473570282773028024, so it seems the ID is wrong, but how do I make it work?
Otherwise I can fetch all the songs fine, in the catalog or in the library, but these few songs can't be fetched individually, only via the try await playlist.with([.tracks]) fetch that returns the whole playlist. But obviously this isn't always possible.
Thanks in advance!