I have a question about TTS.
My device's default language is zh-HK (Cantonese).
My device is an iPhone 16 Pro running iOS 18.6.
I created a function, speakMandarin, because I want the device to speak zh-CN (Putonghua). However, the device only speaks zh-HK (Cantonese), even though I already set the AVSpeechSynthesisVoice language to zh-CN.
func speakMandarin(text: String) {
    print("speakMandarin, \(text)")
    lastError = nil // Reset error

    // Stop any ongoing speech before starting new speech
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
    }

    // Configure the speech utterance. The text is SSML, so use the failable
    // SSML initializer rather than force-unwrapping it.
    guard let utterance = AVSpeechUtterance(ssmlRepresentation: text) else {
        lastError = "Invalid SSML"
        return
    }
    utterance.rate = 0.5 // Natural speaking speed
    utterance.pitchMultiplier = 1.0
    utterance.volume = 1.0

    // Prefer a Mandarin voice if one is available
    let preferredLanguages = ["zh-CN", "zh-TW"]
    var selectedVoice: AVSpeechSynthesisVoice?
    for lang in preferredLanguages {
        if let voice = AVSpeechSynthesisVoice(language: lang) {
            print(lang)
            selectedVoice = voice
            break
        }
    }

    // If no Mandarin voice was found, fall back to the system default
    if selectedVoice == nil {
        selectedVoice = AVSpeechSynthesisVoice(language: nil)
        // "No Mandarin voice pack detected; the system default voice will be used"
        lastError = "未偵測到普通話語音包,將使用系統預設語音"
        print(lastError ?? "")
    }

    utterance.voice = selectedVoice
    print(utterance)
    synthesizer.speak(utterance)
}
Here is my log:
speakMandarin, <speak>你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?</speak>
zh-CN
[AVSpeechUtterance 0x1194efb80] String: 你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?
Voice: [AVSpeechSynthesisVoice 0x104ceff90] Language: zh-CN, Name: Tingting, Quality: Default [com.apple.voice.compact.zh-CN.Tingting]
Rate: 0.50
Volume: 1.00
Pitch Multiplier: 1.00
Delays: Pre: 0.00(s) Post: 0.00(s)
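For debugging, here is a minimal diagnostic sketch (separate from the function above) that lists every installed Chinese voice with its identifier and quality, to confirm whether a zh-CN voice is actually present on the device:
import AVFoundation

// List all installed voices whose language is a Chinese variant.
let chineseVoices = AVSpeechSynthesisVoice.speechVoices()
    .filter { $0.language.hasPrefix("zh") }
for voice in chineseVoices {
    print(voice.language, voice.name, voice.identifier, voice.quality.rawValue)
}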
Hello Apple Engineers,
I am developing a feature related to SpeechRecognizer (import Speech), and I have two questions:
1. After adding the NSSpeechRecognitionUsageDescription key to my Info.plist, when I initialize an SFSpeechRecognizer instance, the system shows an authorization alert. The alert contains a predefined message from Apple:
“Speech data from this app will be sent to Apple to process your requests. This will also help Apple improve its speech recognition technology.”
Is it possible to remove or customize this message?
2. My app runs in a network environment with a whitelist. I need to know which URL the SFSpeechRecognizer instance connects to, and which port it uses, so that I can add it to the whitelist.
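For context, here is a minimal sketch of a workaround I'm considering, assuming on-device recognition is supported for my locale; with requiresOnDeviceRecognition set, no audio should be sent over the network at all:
import Speech

// Force on-device recognition (iOS 13+) so speech audio never leaves the
// device. The locale here is just an example; support varies by locale.
func makeOnDeviceRequest() -> SFSpeechAudioBufferRecognitionRequest? {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        return nil // This device/locale combination cannot recognize offline.
    }
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.requiresOnDeviceRecognition = true // Never contact Apple's servers.
    return request
}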
Thank you very much for your support!
Best regards,
Yu Cheng
I'm wondering if anyone has run into issues with currentPlaybackRate in the released version of iOS 26.0. There is no issue on iOS 18.5.
When I change currentPlaybackRate on iOS 26.0, it seems to be forcibly reset to 0 or 1.0, for both DRM and non-DRM content. Playback is also abnormal and unstable, with noise, whenever currentPlaybackRate is not 1.0, and the player frequently flips between the stopped and playing states.
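For reference, a minimal sketch of what I'm doing (assuming MPMusicPlayerController here, since currentPlaybackRate comes from the MPMediaPlayback protocol):
import MediaPlayer

// Repro sketch: set a non-1.0 playback rate while playing.
let player = MPMusicPlayerController.applicationMusicPlayer
player.setQueue(with: MPMediaQuery.songs()) // any playable content
player.play()
player.currentPlaybackRate = 1.5
print(player.currentPlaybackRate) // expected 1.5; observed 0 or 1.0 on iOS 26.0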
Hello,
I'm working on a Flutter app targeting both Android and iOS, where I implemented ShazamKit.
In order to achieve that, I first tried with the flutter_shazam_kit package, but since it's not maintained anymore, I forked it here, and tried to update it to meet the Google Play Store requirements, as you can see here:
https://github.com/mregnauld/flutter_shazam_kit/tree/fix-16k
Unfortunately, after trying everything, my app still doesn't meet the (not so) new 16 KB native library alignment requirement. I'm 100% sure the package is the cause, because the error message disappears if I remove it from my app.
After investigating, it seems the problem comes from ShazamKit for Android (which you can find here: https://developer.apple.com/download/all/?q=Android%20ShazamKit), and especially from the .so files inside the .aar file.
Is there anything I can do to fix this, or should I wait until the ShazamKit team fixes it?
I'm totally stuck with that so any help is highly appreciated.
Thanks.
Please add a currently-playing-track endpoint to the Apple Music API. It's kind of wild that Apple Music goes after Spotify without having such a useful endpoint.
My app has been using the iTunes Search API (itunes.apple.com/search) for a few years now, but at some point over the last week or so (late Sept. 2025) it stopped returning track results with explicit content, regardless of whether I provide "explicit=Yes" (which is the default anyway, according to the API documentation: https://performance-partners.apple.com/search-api). Has anyone else experienced this with this API, and have you figured out a workaround?
FYI, I also use the more robust Apple Music API in another part of my app, and it isn't affected by this issue, so I know it's technically an alternative. I just need to stick with the iTunes Search API in this particular case. Thanks.
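For reference, a minimal sketch of the kind of request that changed behavior (the search term is a placeholder; the parameters are from the public Search API documentation):
import Foundation

// Build and send the Search API request in question. explicit=Yes is the
// documented default, passed explicitly here anyway.
func searchTracks() async throws -> Data {
    var components = URLComponents(string: "https://itunes.apple.com/search")!
    components.queryItems = [
        URLQueryItem(name: "term", value: "some explicit song"), // placeholder
        URLQueryItem(name: "media", value: "music"),
        URLQueryItem(name: "entity", value: "song"),
        URLQueryItem(name: "explicit", value: "Yes"),
    ]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return data // As of late Sept. 2025, explicit tracks are missing here.
}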
Hi, I'm working on video editing software that lets you composite and export videos. I use a custom compositor to apply my effects, etc.
In my crash dashboard, I am seeing reports of an EXC_BAD_ACCESS crash in objc_msgSend. Below is the stack trace.
libobjc.A.dylib objc_msgSend
libdispatch.dylib _dispatch_sync_invoke_and_complete_recurse
libdispatch.dylib _dispatch_sync_f_slow
[symbolication failed]
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_barrier_sync_invoke_and_complete
AVFCore -[AVCustomVideoCompositorSession(AVCustomVideoCompositorSession_FigCallbackHandling) _customCompositorShouldCancelPendingFrames]
AVFCore _customCompositorShouldCancelPendingFramesCallback
MediaToolbox remoteVideoCompositor_HandleVideoCompositorClientMessage
CoreMedia __figXPCConnection_CallClientMessageHandlers_block_invoke
libdispatch.dylib _dispatch_call_block_and_release
libdispatch.dylib _dispatch_client_callout
libdispatch.dylib _dispatch_lane_serial_drain
libdispatch.dylib _dispatch_lane_invoke
libdispatch.dylib _dispatch_root_queue_drain_deferred_wlh
libdispatch.dylib _dispatch_workloop_worker_thread
libsystem_pthread.dylib _pthread_wqthread
libsystem_pthread.dylib start_wqthread
What stood out to me is that this is only reported from iOS 26.0+ devices. Part of the stack trace failed to symbolicate ([symbolication failed]); I'm 90% confident that frame is Apple code, not my app's code.
I cannot reproduce this locally. Is this a known issue? What are the possible root causes, and how can I verify or eliminate them?
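For what it's worth, the _customCompositorShouldCancelPendingFrames frame suggests the system is invoking cancelAllPendingVideoCompositionRequests() on my custom compositor from one of its own queues. Here is a hedged sketch of the kind of synchronization I'm auditing in my AVVideoCompositing implementation (the names are mine, not my actual code):
import AVFoundation

// Sketch: keep cancelAllPendingVideoCompositionRequests() thread-safe with
// respect to startRequest(_:), since AVFoundation can call either from its
// own dispatch queues.
final class SketchCompositor: NSObject, AVVideoCompositing {
    private let queue = DispatchQueue(label: "compositor.requests")
    private var pending: [AVAsynchronousVideoCompositionRequest] = []

    var sourcePixelBufferAttributes: [String: Any]? {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }
    var requiredPixelBufferAttributesForRenderContext: [String: Any] {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        queue.async {
            self.pending.append(request)
            // Real rendering would happen here and call finish(withComposedVideoFrame:).
        }
    }

    func cancelAllPendingVideoCompositionRequests() {
        // Serialize with startRequest(_:) so no request is appended or
        // finished concurrently while we cancel.
        queue.sync {
            self.pending.forEach { $0.finishCancelledRequest() }
            self.pending.removeAll()
        }
    }
}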
Thanks,
If I fetch a library playlist, such as the generated "Favorites" playlist, via MusicKit like this:
guard let initialTracks = try await playlist.with([.tracks]).tracks else {
    return nil
}
I get a list of tracks like this:
...
TrackID: i.e5gmPS6rZ856
TrackID: i.4ZQMxU0OxNg0
TrackID: i.J198KH4P85K4
TrackID: i.J1AaRC4P85K4
TrackID: i.4BPqWt0OxNg0
TrackID: 4473570282773028026
TrackID: 4473570282773028025
TrackID: 4015088256684964387
TrackID: 4473570282773028024
TrackID: 7541557725362154249
TrackID: 4473570282773028027
I save the IDs for later use, but when I want to fetch them again, only the ones with IDs that start with "i." work.
static func getLibrarySong(from id: String) async -> Song? {
    var request = MusicLibraryRequest<Song>()
    request.filter(matching: \.id, equalTo: MusicItemID(id))
    do {
        let response = try await request.response()
        return response.items.first
    } catch {
        ...
    }
}
The same goes for the Apple Music API endpoint:
static func getLibrarySongFromAPI(with id: String) async -> Song? {
    guard let url = AppleMusicURL.getURL(for: .getSongById, id: id) else {
        return nil
    }
    do {
        let dataRequest = MusicDataRequest(urlRequest: URLRequest(url: url))
        let dataResponse = try await dataRequest.response()
        let response = try JSONDecoder().decode(SongsResponse.self, from: dataResponse.data)
        return response.data.first
    } catch {
        ...
    }
}
Both functions above fail for the purely numeric IDs like 4473570282773028024, so it seems the ID is wrong, but how do I make it work?
Otherwise I can fetch all the songs fine, in the catalog or in the library; it's just that these few songs can't be fetched individually, only via the try await playlist.with([.tracks]) fetch that returns the whole playlist. But obviously that isn't always possible.
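In case it helps: the purely numeric IDs look like catalog IDs rather than library IDs (the "i."-prefixed ones), so one workaround I'm experimenting with is falling back to a catalog lookup when the library request returns nothing. A sketch, assuming that interpretation is correct:
import MusicKit

// Try the library first, then fall back to a catalog lookup for IDs that
// are not "i."-prefixed library IDs.
func getSong(from id: String) async -> Song? {
    let itemID = MusicItemID(id)
    var libraryRequest = MusicLibraryRequest<Song>()
    libraryRequest.filter(matching: \.id, equalTo: itemID)
    if let song = try? await libraryRequest.response().items.first {
        return song
    }
    let catalogRequest = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: itemID)
    return try? await catalogRequest.response().items.first
}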
Thanks in advance!
Fetching the featured artists in a playlist no longer works in the iOS 26.1 beta:
let detailedPlaylist = try await playlist.with([.tracks, .featuredArtists], preferredSource: .library)
This throws an error when using .library, and .catalog returns an empty array.
It works correctly on iOS 26.0 and iOS 18.
Hi there,
I recently launched a DJ app on the Mac App Store, and I was wondering how I can access raw audio data for songs from Apple Music, just like Serato, rekordbox, djay, etc. do.
Thanks,
Gunek
Hello, I'm trying to write a shortcut using Toolbox Pro that gets triggered by an accessibility trigger and then favorites the currently playing song. It works pretty well, but I noticed that for some artists, especially Asian ones, it simply doesn't work. While debugging, I confirmed that the tool uses the correct song ID, artist ID, and everything else it needs to search for the song and favorite it. However, I noticed that Apple Music treats artists with romanized names as two separate artists!
https://music.apple.com/br/artist/王菲/41760704
https://music.apple.com/br/artist/faye-wong/41760704?l=en-GB
You can see that the ID is the same (41760704). It seems that when I search for the artist, the first artist (王菲) is returned, so when I open that artist's URL on the web I can see a star next to the song name, meaning it got a like. However, the same song under the romanized artist (faye-wong) doesn't have a like.
This is very weird, right?
When I'm on FaceTime, my phone will randomly end my call. I have an iPhone 17 on iOS 26.1. Sometimes I won't even be touching the screen and it'll hang up. I'm not sure if this is a universal issue or just a me problem. It's getting really annoying.
I use
https://api.music.apple.com/v1/me/library/playlists/${playlistId}/tracks
to add tracks to a playlist I created.
How do I DELETE tracks from the playlist?
The documentation does not mention a method for this. I have tried calling DELETE in various combinations, but nothing seems to work.
Is this possible?
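For reference, here is a sketch of the add call that does work, expressed via MusicKit's MusicDataRequest (the function name, playlistId, and trackId are mine, just placeholders); I'm looking for the equivalent removal call, if one exists:
import Foundation
import MusicKit

// Sketch of the documented "Add Tracks to a Library Playlist" request.
// I have found no removal variant in the documentation.
func addTrack(_ trackId: String, to playlistId: String) async throws {
    let url = URL(string: "https://api.music.apple.com/v1/me/library/playlists/\(playlistId)/tracks")!
    var urlRequest = URLRequest(url: url)
    urlRequest.httpMethod = "POST"
    urlRequest.httpBody = try JSONEncoder().encode(
        ["data": [["id": trackId, "type": "songs"]]]
    )
    _ = try await MusicDataRequest(urlRequest: urlRequest).response()
}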
On older iOS versions, when the user taps the Mute/Volume button on AVPlayerViewController to unmute, the system restores the device volume to the level it was at before muting.
On iOS 26, when the user taps the unmute button on screen, the volume starts from 0 (it is not restored), but it is still restored if the user unmutes by pressing the physical volume buttons.
As I understand it, the volume bar/button on AVPlayerViewController is an MPVolumeView, which I cannot control, so this is system behavior.
But I have received complaints that this is a bug, and I could not find documentation describing this change in the Mute button's behavior.
I need some basis for explaining this situation. Thank you.