Hi, I've recently been working with the Apple Music API and have had success loading all the playlists on my account, loading songs from each playlist, and adding songs to the ApplicationMusicPlayer.shared.queue. The problem I'm running into is that not all songs from the playlist are being added to the queue, despite confirming all the songs are present, based on the PlayBackView.swift I'm about to share with you. I would also like to answer some other underlying questions, if possible, and I'm open to any other suggestions. In this scenario we're also assuming isShuffled is true every time.
How can I determine when a song has ended?
How can I get the album title information?
How can I get the current song title, album information, and artist information without pressing play? The screen only seems to update when I tap Play, i.e. when ApplicationMusicPlayer.shared.play() is called.
How do I get the endTime of the song? I believe it should be ApplicationMusicPlayer.shared.queue.currentEntry.endTime but this doesn't seem to work.
//
// PlayBackView.swift
//
// Created by Justin on 8/16/24.
//
import SwiftUI
import MusicKit
import Foundation

enum PlayState {
    case play
    case pause
}

struct PlayBackView: View {
    @State var song: Track
    @Binding var songs: [Track]?
    @State var isShuffled = false
    @State private var playState: PlayState = .pause
    @State private var isFirstPlay = true

    private let player = ApplicationMusicPlayer.shared

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Song Title Not Found")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)

                    // How do I get AlbumTitle from here??

                    // Artist Name
                    Text(player.queue.currentEntry?.subtitle ?? "Artist Name Not Found")
                        .font(.caption)
                }
            }
            .padding()

            Spacer()

            // Progress View
            // endTime doesn't work and not sure why.
            ProgressView(value: player.playbackTime, total: player.queue.currentEntry?.endTime ?? 1.00)
                .progressViewStyle(.linear)
                .tint(.red.opacity(0.5))

            // Duration View
            HStack {
                Text(durationStr(from: player.playbackTime))
                    .font(.caption)

                Spacer()

                if let duration = player.queue.currentEntry?.endTime {
                    Text(durationStr(from: duration))
                        .font(.caption)
                }
            }

            Spacer()

            Button {
                Task {
                    do {
                        try await player.skipToNextEntry()
                    } catch {
                        print(error.localizedDescription)
                    }
                }
            } label: {
                Label("", systemImage: "forward.fill")
                    .tint(.white)
            }

            Spacer()

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
                isFirstPlay = false
            }, label: {
                Text(playState == .play ? "Pause" : isFirstPlay ? "Play" : "Resume")
                    .frame(maxWidth: .infinity)
            })
            .buttonStyle(.borderedProminent)
            .padding()
            .font(.largeTitle)
            .tint(.red)
        }
        .padding()
        .onAppear {
            if isShuffled {
                songs = songs?.shuffled()

                if let songs, let firstSong = songs.first {
                    player.queue = .init(for: songs, startingAt: firstSong)
                    player.state.shuffleMode = .songs
                }
            }
        }
        .onDisappear {
            player.stop()
            player.queue = []
            player.playbackTime = .zero
        }
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .pause
            } else {
                playState = .play
                await playTrack()
            }
        }
    }

    @MainActor
    private func playTrack() async {
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
    }

    private func durationStr(from duration: TimeInterval) -> String {
        let seconds = Int(duration)
        let minutes = seconds / 60
        let remainder = seconds % 60

        // Format the string to ensure two digits for the remainder (seconds)
        return String(format: "%d:%02d", minutes, remainder)
    }
}

//#Preview {
//    PlayBackView()
//}
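For the album-title and duration questions above, one approach is to unwrap the queue entry's underlying item. This is a minimal sketch (not taken from the view above), assuming the current entry wraps a MusicKit Song:

import Foundation
import MusicKit

// Minimal sketch: read album title and duration from the current queue entry.
// Assumes the entry wraps a Song; other item types are ignored here.
func currentSongDetails() -> (albumTitle: String?, duration: TimeInterval?)? {
    guard let item = ApplicationMusicPlayer.shared.queue.currentEntry?.item else {
        return nil
    }
    switch item {
    case .song(let song):
        return (song.albumTitle, song.duration)
    default:
        return nil
    }
}

Since player.state and player.queue are both observable objects, watching playbackStatus and currentEntry (for example via @ObservedObject in the view) is one way to refresh the UI, and to notice when a song ends, without pressing Play first.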
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Hello,
I am a deaf-blind wheelchair user, and I program in Swift using a braille display.
I’m reaching out for your help on an issue I’ve been struggling to solve.
Basically, when I extract a CMSampleBuffer from an AVAsset of a video, it comes with the Audio Format ID as Linear PCM. However, when I try to pass this CMSampleBuffer to write another video using AVAssetWriter, the video ends up muted.
The audio settings of the output video are configured to MPEG-4 AAC, but the input CMSampleBuffer has the Audio Format ID as Linear PCM.
I would like to request an extension for CMSampleBuffer that converts Linear PCM audio to MPEG-4 AAC.
I’ve searched extensively and couldn’t find anything.
Looking forward to your help.
Thank you.
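One possible direction, as a minimal sketch rather than a drop-in extension: instead of converting each CMSampleBuffer yourself, configure the audio AVAssetWriterInput with AAC output settings and append the Linear PCM buffers directly; AVAssetWriter then performs the encoding.

import AVFoundation

// Minimal sketch: an AAC-configured writer input that accepts Linear PCM buffers.
// The sample rate, channel count, and bit rate here are illustrative values.
let aacSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000
]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: aacSettings)
audioInput.expectsMediaDataInRealTime = false
// assetWriter.add(audioInput)                    // your existing AVAssetWriter
// For each Linear PCM CMSampleBuffer read from the source asset:
// if audioInput.isReadyForMoreMediaData { audioInput.append(pcmSampleBuffer) }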
Hi everyone,
I’m experiencing an issue where audio interruptions (e.g., phone calls) are not being intercepted while running sound classification in an app that uses the AVAudioSession. Classification works fine, but interruptions aren’t handled, even though I’ve followed Apple’s guidelines on handling audio interruptions [1_Document].
The classification was initially based on [2_Classifier], where it worked perfectly. However, when I adopted classification in a more camera-focused app using [3_Cam], the interruption behavior stopped working. The classification setup is functioning with [3_Cam], but audio interruptions are not triggered.
The listener is invoked before starting sound analysis as suggested in [2_Classifier].
startListeningForAudioSessionInterruptions()
try startAnalyzing([(request, observer)])
FYI, one change I have made for classification is the following; it works fine in all cases.
// try audioSession.setCategory(.record, mode: .default)
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
I suspect the issue might be related to the AVAudioSession configuration or how the app handles recording and playback together. Is there anything else I should check related to AVAudioSession? Are there additional APIs I could use to pre-check or better handle audio interruptions?
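For comparison, the standard interruption-observer registration from [1_Document] looks roughly like this; a minimal sketch where the class and handler contents are illustrative, not our production code:

import AVFoundation

final class InterruptionObserver {
    private var token: NSObjectProtocol?

    func startListeningForAudioSessionInterruptions() {
        token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { notification in
            guard let info = notification.userInfo,
                  let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
            switch type {
            case .began:
                print("Interruption began")   // pause capture / analysis here
            case .ended:
                print("Interruption ended")   // optionally resume (check the .shouldResume option)
            @unknown default:
                break
            }
        }
    }

    func stopListening() {
        if let token { NotificationCenter.default.removeObserver(token) }
    }
}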
Any suggestions or guidance would be greatly appreciated!
Platform: Swift 5, Xcode 16, iOS 18.
References:
Document
Classifier
Cam
Best Regards
Here is the demo from Apple's site
This issue is specific to iOS 18.
When running this demo, if there is a gap in speaking, recognitionTask(with:resultHandler:) provides only the text spoken after the gap, not the concatenation of the old text and the newly spoken text.
I have a USB audio interface that is causing kernel traps and making the audio output "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below).
The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload."
I've added a short snippet from Console logs around the time of the audio skip/drop out. The more complete logs are at this gist:
https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105
I've also opened a Feedback Assistant ticket (FB9037528):
https://feedbackassistant.apple.com/feedback/9037528
Does anyone know what could be causing this issue?
Thanks for any help.
Cheers,
Flux aka Andy.
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)
System Software Overview:
System Version: macOS 11.2.3 (20D91)
Kernel Version: Darwin 20.3.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: mycomputername
User Name: myusername
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled
USB interface: Denon DJ DS1
Snippet of Console logs
error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
...
default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
HostApplicationDisplayID = "com.apple.Music";
cause = Unknown;
deadline = 2615019;
"input_device_source_list" = Unknown;
"input_device_transport_list" = USB;
"input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
"io_buffer_size" = 512;
"io_cycle" = 1;
"is_prewarming" = 0;
"is_recovering" = 0;
"issue_type" = overload;
lateness = "-535";
"output_device_source_list" = Unknown;
"output_device_transport_list" = USB;
"output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
}: (null)
Hi!
I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents.
If I toggle the isMuted property of a track, it mutes instantly when changed to true. However, after setting isMuted back to false, the events only trigger on the next round of the loop and not instantly. Is this intended behaviour, and is there some way to get the events to trigger immediately after toggling isMuted back to false?
After updating to the iOS 18.1 beta, I have a lot of issues with Apple CarPlay, as follows.
Audio quality is really bad (it plays over the voice channel instead of the media channel).
Sometimes it plays through the phone speakers even though CarPlay is connected via cable.
It doesn't work well with the car's steering controls, such as volume up/down and the skip button.
After receiving a phone call it goes back to the original audio quality, but when my phone screen locks it goes bad again.
I expect these problems to be fixed ASAP, as I love to play music while I drive around.
SoundRecognition causes Input/Output callbacks to have varying Buffer sizes and introduces Glitching
Hello,
We have noticed an issue with SoundRecognition that causes glitching with our AudioUnit setup in Smule.
Input and output frame sizes are inconsistent.
Input frame size does not match [AVAudioSession sharedInstance].IOBufferDuration
My best guess is that SoundRecognition influences the input frame size and not the output frame size.
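For reference, the mismatch shows up when comparing the session's nominal buffer size with the frame counts the I/O callbacks actually deliver; a minimal sketch of that comparison (not the sample app's exact code):

import AVFoundation

let session = AVAudioSession.sharedInstance()
// Expected frames per I/O cycle = sample rate x IO buffer duration.
let expectedFrames = session.sampleRate * session.ioBufferDuration
print("hardware sample rate: \(session.sampleRate)")
print("expected frames per buffer: \(expectedFrames)") // ~1104 in the log below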
To reproduce use the example app here:
https://github.com/MarkoGill/SoundRecognitionBug
Hardware/OS
iPhone 14 Pro on iOS 18 -> Experiences the problem
iPhone 11 on iOS 18 -> Experiences the problem
iPhone 15 on iOS 18 -> Not experiencing the problem
Reproduction Steps
Enable Sound Recognition (Settings > Accessibility > Sound Recognition > On)
Enable a Sound for detection (Sounds > Dog > On)
Open the example app with headset (it routes input to output)
Notice glitching occurs
Check the logs. Record and Playback buffer sizes vary
Example Log:
AU input sample rate: 48000.000000
AU output sample rate: 48000.000000
hardware sample rate: 48000.000000
hardware buffer size: 1104.000000
updated record frame counts: 1024
updated playback frame counts: 1104
Notes:
You can disable Sound Recognition, restart the app, and playback behaves correctly.
I have a 14 Pro Max, and after updating to iOS 18 there is a sound problem with my AirPods: I can't hear the sound properly. I'm a PUBG gamer, and after the update I can't hear right/left/front/back directional sound. Can anyone help me, please?
I have an app that plays sound files stored locally. I'm using a single SwiftUI view with an MPVolumeView so the user can control the system volume from the player in my app. When I'm playing the sound file on the iPhone, my volume slider operates as expected. When I AirPlay to my Apple TV, the volume slider still works to control the volume, but when I hit play in my app the volume snaps to a different value while the actual sound volume doesn't change. Control still works. Flipping to Control Center, I see a volume mismatch between the system volume and the MPVolumeView.
Here's the code that I use to put the slider in my app.
struct VolumeSlider: UIViewRepresentable {
    func makeUIView(context: Context) -> MPVolumeView {
        let vv = MPVolumeView(frame: .zero)
        vv.showsVolumeSlider = true
        vv.setVolumeThumbImage(UIImage(), for: UIControl.State.normal)
        return vv
    }

    func updateUIView(_ uiView: MPVolumeView, context: Context) {
        // No need to update the view in this case
    }
}
I'm using AVFoundation and AVAudioPlayer to playback the sound file. I'm using MediaPlayer to tell MPNowPlayingInfoCenter the track info and AlbumArt. Audio control via control center works perfectly. Does the same if I target iOS 16 or 17.
Is this a bug with the MPVolumeView or the way I added it to the app?
I'm a musician/DJ. I jumped from a 14 Pro Max on iOS 17 to a 16 Pro Max on iOS 18.1 beta 4. For each audio source (Music app, YT Music, etc.) I compared the same track/EQ/volume side by side. With the new device, the sound is overall a bit muffled, and the music is damped when highs and lows are mixed. It's most noticeable when listening to high vocals and acoustic instruments; drums and bass sound much like on an old Nokia. On the 14 Pro it's nothing like that. Thank you.
Hello,
As explained in this link, the AVAssetReaderTrackOutput.copyNextSampleBuffer() returns a CMSampleBuffer in linear PCM audio format.
I want to place this audio buffer into an AVAssetWriterInput of type kAudioFormatMPEG4AAC, but I can't manage the conversion.
Could you help me by providing an extension that returns a CMSampleBuffer converted from linear PCM audio format to kAudioFormatMPEG4AAC?
Example:
extension CMSampleBuffer {
    func fromPCMToAAC() -> CMSampleBuffer? {
        // Here, get a new AudioStreamBasicDescription, create a CMSampleBuffer and a CMBlockBuffer
    }
}
I've tried multiple times but without success.
Software: iOS 18.1
Xcode: 16.0
Thank you!
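A hedged sketch of one building block: converting decoded PCM (as an AVAudioPCMBuffer) to AAC packets with AVAudioConverter. It does not produce a CMSampleBuffer directly, re-wrapping the compressed packets is additional work, and the function name is illustrative:

import AVFoundation

// Minimal sketch: PCM -> AAC with AVAudioConverter (illustrative name, not an existing API).
func convertPCMToAAC(_ pcmBuffer: AVAudioPCMBuffer) -> AVAudioCompressedBuffer? {
    let aacSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: pcmBuffer.format.sampleRate,
        AVNumberOfChannelsKey: pcmBuffer.format.channelCount
    ]
    guard let aacFormat = AVAudioFormat(settings: aacSettings),
          let converter = AVAudioConverter(from: pcmBuffer.format, to: aacFormat) else {
        return nil
    }
    let outBuffer = AVAudioCompressedBuffer(format: aacFormat,
                                            packetCapacity: 8,
                                            maximumPacketSize: converter.maximumOutputPacketSize)
    var consumed = false
    var conversionError: NSError?
    let status = converter.convert(to: outBuffer, error: &conversionError) { _, outStatus in
        if consumed {
            outStatus.pointee = .endOfStream
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return pcmBuffer
    }
    return status == .error ? nil : outBuffer
}

If the end goal is an AVAssetWriterInput, a simpler route may be to configure that input with kAudioFormatMPEG4AAC output settings and append the Linear PCM sample buffers directly, letting the writer do the encoding, as sketched for the similar question above.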
Hi,
I'm trying to insert CMSampleBuffers, with pauses between them, into an AVAssetWriterInput that has been configured with expectsMediaDataInRealTime = false. That is, I insert fixed-length audio at specific (increasing and non-overlapping) time points with large gaps in between, e.g., 5 seconds of audio at t=3.0, 5 seconds of audio at t=12.0, etc.
The first audio sample plays at t=3 in the final output video as expected. But then all the other samples are bunched up immediately after it instead of being scheduled at the correct time. Below is my code.
I'm just loading the asset and then readjusting its timestamps to be correct in the absolute timeline. Why do they not get scheduled correctly when the timestamps and durations are definitely correct and non-overlapping?
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == outputSize else { return }
    let frameTime = CMTimeMake(value: frameCount, timescale: frameRate)
    if videoInput?.isReadyForMoreMediaData == true {
        pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        frameCount += 1
        currentTime = frameTime
    }
}

func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    defer {
        try? FileManager.default.removeItem(at: tempURL)
    }
    try audioData.write(to: tempURL)

    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!

    let audioReader = try AVAssetReader(asset: asset)
    let outputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
    audioReader.add(audioReaderOutput)

    guard audioReader.startReading() else {
        throw NSError(domain: "AudioReaderError", code: 0, userInfo: [NSLocalizedDescriptionKey: "Failed to start reading audio"])
    }

    let baseInsertionTime = currentTime.convertScale(duration.timescale, method: .default) // Capture the current video time when the method is called
    print("Adding audio clip at \(baseInsertionTime.seconds) seconds, duration: \(duration.seconds) seconds")

    var audioTime = CMTime.zero
    var totalDuration: Double = 0

    while let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
        let bufferDuration = CMSampleBufferGetDuration(sampleBuffer)
        let adjustedBuffer = adjustTimeStamp(of: sampleBuffer, by: baseInsertionTime)

        while !audioInput!.isReadyForMoreMediaData {
            try await Task.sleep(nanoseconds: 100_000_000) // 0.1 second
        }
        audioInput!.append(adjustedBuffer)

        print("  Adjusted time: \(adjustedBuffer.presentationTimeStamp.seconds)")
        audioTime = CMTimeAdd(audioTime, bufferDuration)
        totalDuration += bufferDuration.seconds
    }

    print("Finished adding audio clip. Last sample at: \(CMTimeAdd(baseInsertionTime, audioTime).seconds) seconds")
    print("  totalDuration=\(totalDuration)")
}

private func adjustTimeStamp(of sampleBuffer: CMSampleBuffer, by timeOffset: CMTime) -> CMSampleBuffer {
    var count: CMItemCount = 0
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0, arrayToFill: nil, entriesNeededOut: &count)

    var timingInfo = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: &timingInfo, entriesNeededOut: nil)

    for i in 0..<count {
        timingInfo[i].presentationTimeStamp = CMTimeAdd(timingInfo[i].presentationTimeStamp, timeOffset)
        if timingInfo[i].decodeTimeStamp != .invalid {
            timingInfo[i].decodeTimeStamp = CMTimeAdd(timingInfo[i].decodeTimeStamp, timeOffset)
        } else {
            timingInfo[i].decodeTimeStamp = timingInfo[i].presentationTimeStamp
        }
    }

    var adjustedBuffer: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(allocator: nil, sampleBuffer: sampleBuffer, sampleTimingEntryCount: count, sampleTimingArray: &timingInfo, sampleBufferOut: &adjustedBuffer)

    return adjustedBuffer!
}
Our app, Universalis (Apple ID 284942719) plays audio successfully on all versions of iOS up to and including iOS 17. It uses the old MediaPlayer interface because it is targeted at versions all the way down to iOS 12.
On iOS 18, it plays audio but CarPlay fails to show the Now Playing screen. Instead, a message box pops up in CarPlay saying "There was a problem loading this content", with an OK button. Nevertheless the audio plays correctly.
This has been reported in the wild by a user of iOS 18 with CarPlay. I am also able to reproduce it locally, running the app in Xcode with the CarPlay Simulator, with an iPhone using iOS 18.0 or iOS 18.1. Earlier versions work correctly.
Looking at the console log in CarPlay, the following error message appears about 10 seconds before the error message pops up:
MSVEntitlementUtilities - Process Universalis PID[1173] - Group: (null) - Entitlement: com.apple.mediaremote.external-artwork-validation - Entitled: NO - Error: (null)
The message has an orange background which appears to mean that it does not come directly from NSLog in the app. The message appears immediately after the request handler of MPMediaItemArtwork has been called requesting a 128 x 128 image and has successfully returned a 128x128 UIImage object.
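For context, the artwork is supplied through the standard MPMediaItemArtwork request-handler pattern; a minimal sketch (illustrative, not the app's exact code, with albumImage standing in for the real artwork):

import MediaPlayer
import UIKit

let albumImage = UIImage(systemName: "music.note")!
let artwork = MPMediaItemArtwork(boundsSize: albumImage.size) { _ in
    // CarPlay requests a 128 x 128 image through this handler.
    albumImage
}
var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
info[MPMediaItemPropertyArtwork] = artwork
MPNowPlayingInfoCenter.default().nowPlayingInfo = info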
This has been reported through Feedback Assistant: Bug report ID is FB15343941
How can we work round this error?
I upgraded both of my phones to iOS 18 and neither allows me to rate my music with the stars I want to give. I disable the option and try it, then I disable it and re-enable it, and nothing... Something may be wrong with iOS 18.
I had this situation recently where, suddenly, with the new beta update, the sound from my AirPods Pro 2 was all glitchy and spatial audio didn't work at all.
I've already checked my AirPods with other devices and they're completely fine.
Hope this helps!
We have a VoIP calling application that releases resources at the end of a call. When the AudioOutputUnitStop API is invoked, it sometimes takes up to 700 ms to return.
If we comment out that API call as a test, then the AudioUnitUninitialize API takes up to 700 ms.
Once the cleanup is done, as part of the call flow the application sends a BYE SIP message. Hence, in cases where the API takes more than 200 ms, the BYE message is sent with that much delay and gets blocked by the server's DDoS settings. (The DDoS timer starts as soon as the UDP socket is disconnected from the client and times out within 200 ms; a BYE request arriving after that is blocked.)
We need to understand why a delay of more than 200 ms is sometimes observed, while in other cases it takes less than 50 ms.
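For reference, a minimal sketch of how the two teardown calls can be timed to capture the variance; ioUnit is a placeholder for the app's existing remote I/O AudioUnit:

import AudioToolbox
import Foundation

func stopAndTearDown(_ ioUnit: AudioUnit) {
    let stopStart = CFAbsoluteTimeGetCurrent()
    let stopStatus = AudioOutputUnitStop(ioUnit)
    print("AudioOutputUnitStop status \(stopStatus), took \((CFAbsoluteTimeGetCurrent() - stopStart) * 1000) ms")

    let uninitStart = CFAbsoluteTimeGetCurrent()
    let uninitStatus = AudioUnitUninitialize(ioUnit)
    print("AudioUnitUninitialize status \(uninitStatus), took \((CFAbsoluteTimeGetCurrent() - uninitStart) * 1000) ms")
}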
I have an app that plays audio, and its behaviour has changed in watchOS 11: I can no longer figure out how to play the audio through headphones.
To play audio I do:
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback,
                        mode: .default,
                        policy: .longFormAudio,
                        options: [])
let activated = try await session.activate()
if activated {
    // play audio
}
In previous versions, try await session.activate() would bring up a route picker where the user could select their headphones. Now on watchOS 11 it just plays the audio out of the speaker.
Maybe that's what some people want but if they do want it to play out of the headphones I can't see how I can give that option now? There's no AVRoutePickerView available on watchOS for selecting it.
I've tried setting the category to .multiRoute instead of .playback and that does bring up the picker but selecting the speaker results in an error code and selecting the headphones results in it saying it cannot find my headphones (which shouldn't be the case since Apple Music on watchOS finds them).
I tried overriding the output with try session.overrideOutputAudioPort(.speaker), but the compiler complains that .speaker isn't available on watchOS, which is strange because, if I understand correctly, it's now possible to play through the speaker on at least some Apple Watches.
So is this a bug or is there some way I've not found of playing audio through the headphones?
I am using AVAudioEngine to play back samples in an iOS game. I would like to change the playback rate of a sample in real time.
When using AVAudioUnitVarispeed for changing the playback rate, it creates stutters in the game, as it isn't processed in real time (as stated here: AVAudioUnitTimeEffect).
The other option I found to change the rate is using an AVAudioEnvironmentNode and changing the rate of the AVAudioPlayerNode. That works without creating stutters but limits the valid values for the rate to 0.5 through 2.0 (I need rates higher than 2.0). See here: AVAudio3DMixing.
Are there any other ways to play back a sample with a rate control in real time?
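One further option that may be worth testing, hedged because AVAudioUnitTimePitch is also an AVAudioUnitTimeEffect and may carry the same real-time caveat as Varispeed: its rate property accepts a much wider range (roughly 1/32 to 32.0). A minimal sketch:

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 3.0   // documented range is roughly 1/32 ... 32.0

engine.attach(playerNode)
engine.attach(timePitch)
engine.connect(playerNode, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

// try engine.start()
// playerNode.scheduleFile(audioFile, at: nil)   // audioFile: an AVAudioFile you have loaded
// playerNode.play()
// timePitch.rate can then be adjusted while the sample plays.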