After upgrading to iOS 18, CarPlay with a 2023 Lexus and an iPhone 15 Pro Max shows multiple issues:
• speakers reduced to mono sound (returning to normal after a few minutes, then dropping back to mono)
• no speaker sound at all
• touching or moving the phone while driving results in the sound cutting in and out
Neither rebooting nor shutting down helps.
No cable connection works.
@Apple: do you test your software professionally, or is this outsourced to the community? It doesn't look at all like a professional approach.
Please fix this dangerous (it's a traffic hazard!) and annoying issue ASAP!
Thanks - Torsten
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Hello everyone,
I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device using Audio Units. On my MacBook, the default audio input is mono, but when I write code to capture audio using AUHAL, I find that I need to provide an AudioBufferList with two channels, not one. Likewise, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, not 20. To investigate the issue, I wrote a small diagnostic program which opens the default audio device and probes it for the number of channels. Depending on how I probe, I get different results: when I probe the stream format, it reports 1 channel, but when I probe the input audio unit, it reports 2 input channels.
Here's my program to demonstrate the issue:
// InputDeviceChannels.m
// Compile with:
// clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
// Device Name: MacBook Pro Microphone
// Number of Channels (Stream Format): 1
// Number of Elements (Element Count): 2
#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>
void printDeviceInfo(AudioUnit audioUnit) {
  UInt32 size;
  OSStatus err;

  // Stream format on the input scope of element 1 (the device side of AUHAL's input element).
  AudioStreamBasicDescription streamFormat;
  size = sizeof(streamFormat);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1,
                             &streamFormat, &size);
  if (err != noErr) {
    printf("Error getting stream format\n");
    exit(1);
  }
  int numChannels = streamFormat.mChannelsPerFrame;

  // Number of elements in the input scope (this is not a channel count).
  UInt32 elementCount;
  size = sizeof(elementCount);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0,
                             &elementCount, &size);
  if (err != noErr) {
    printf("Error getting element count\n");
    exit(1);
  }

  printf("Number of Channels (Stream Format): %d\n", numChannels);
  printf("Number of Elements (Element Count): %d\n", elementCount);
}
void printDeviceName(AudioDeviceID deviceID) {
  UInt32 size;
  OSStatus err;
  CFStringRef deviceName = NULL;
  size = sizeof(deviceName);
  err = AudioObjectGetPropertyData(
      deviceID,
      &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                    kAudioObjectPropertyScopeGlobal,
                                    kAudioObjectPropertyElementMain},
      0, NULL, &size, &deviceName);
  if (err != noErr) {
    printf("Error getting device name\n");
    exit(1);
  }
  char deviceNameStr[256];
  if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr),
                          kCFStringEncodingUTF8)) {
    printf("Error converting device name to C string\n");
    exit(1);
  }
  CFRelease(deviceName);
  printf("Device Name: %s\n", deviceNameStr);
}
int main(int argc, const char *argv[]) {
  @autoreleasepool {
    OSStatus err;

    // Get the default input device ID
    AudioDeviceID input_device_id = kAudioObjectUnknown;
    {
      UInt32 property_size = sizeof(input_device_id);
      AudioObjectPropertyAddress input_device_property = {
          kAudioHardwarePropertyDefaultInputDevice,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMain,
      };
      err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL,
                                       &property_size, &input_device_id);
      if (err != noErr || input_device_id == kAudioObjectUnknown) {
        printf("Error getting default input device ID\n");
        exit(1);
      }
    }

    // Print the device name using the input device ID
    printDeviceName(input_device_id);

    // Open audio unit for the input device
    AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
                                      kAudioUnitManufacturer_Apple, 0, 0};
    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioUnit audioUnit;
    err = AudioComponentInstanceNew(component, &audioUnit);
    if (err != noErr) {
      printf("Error creating AudioUnit\n");
      exit(1);
    }

    // Enable IO for input on the AudioUnit and disable output
    UInt32 enableInput = 1;
    UInt32 disableOutput = 0;
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
                               1, &enableInput, sizeof(enableInput));
    if (err != noErr) {
      printf("Error enabling input on AudioUnit\n");
      exit(1);
    }
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
                               0, &disableOutput, sizeof(disableOutput));
    if (err != noErr) {
      printf("Error disabling output on AudioUnit\n");
      exit(1);
    }

    // Set the current device to the input device
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                               kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
    if (err != noErr) {
      printf("Error setting device for AudioUnit\n");
      exit(1);
    }

    // Initialize AudioUnit
    err = AudioUnitInitialize(audioUnit);
    if (err != noErr) {
      printf("Error initializing AudioUnit\n");
      exit(1);
    }

    // Print device info
    printDeviceInfo(audioUnit);

    // Clean up
    AudioUnitUninitialize(audioUnit);
    AudioComponentInstanceDispose(audioUnit);
  }
  return 0;
}
It prints:
Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2
I tried to set the number of channels to 1 on the input unit, but it didn’t change anything. After calling setNumberOfChannels(1, audioUnit), I’m still getting the same output.
Note 1: I know that I can ignore one channel, etc, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.
Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can’t make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus.
How should I understand this? Is it that the audio unit is an abstraction layer that automatically converts the mono input into stereo? Can I ask AUHAL to provide me with the same number of input channels that the audio device has?
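For what it's worth, my understanding (based on TN2091, so treat this as an assumption rather than a confirmed answer) is that AUHAL keeps two formats per element: the device-side format on the input scope of element 1, and the client-side format on the output scope of element 1, which is the format your code actually receives and which defaults to stereo. A minimal Swift sketch of forcing the client-side channel count to match the device, assuming audioUnit is the same not-yet-initialized AUHAL instance as above (matchClientChannelsToDevice is a helper name made up for illustration):
import AudioToolbox

// Sketch: read the device-side format (input scope, element 1) and impose the same
// channel count on the client-side format (output scope, element 1). Do this before
// AudioUnitInitialize. For a non-interleaved Float32 client format only
// mChannelsPerFrame needs to change; for interleaved formats the byte sizes must be
// kept consistent as well.
func matchClientChannelsToDevice(_ audioUnit: AudioUnit) -> OSStatus {
    var deviceFormat = AudioStreamBasicDescription()
    var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    var err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                   kAudioUnitScope_Input, 1, &deviceFormat, &size)
    if err != noErr { return err }

    var clientFormat = AudioStreamBasicDescription()
    size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Output, 1, &clientFormat, &size)
    if err != noErr { return err }

    clientFormat.mChannelsPerFrame = deviceFormat.mChannelsPerFrame
    return AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 1, &clientFormat,
                                UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
}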
Hello,
I'm getting an unknown, never-before-seen error at application launch, when running my iOS SpriteKit game on the iOS 18 arm64 simulator from Xcode 16.0 (16A242d) —
AudioConverterOOP.cpp:847 Failed to prepare AudioConverterService: -302
This occurs on all iOS 18 simulator devices, between application(_:didFinishLaunchingWithOptions:) and the first applicationDidBecomeActive(_:) — the SKScene object may have already been initialized by SpriteKit, but the scene's didMove(to:) method hasn't been called yet.
Also, note that the error message is being emitted from a secondary (non-main) thread, obviously not created by the app.
After the error occurs, no SKScene is able to play audio — this had never occurred on iOS versions prior to 18, neither on physical devices nor on the simulator.
Has anyone seen anything like this on a physical device running 18?
Unfortunately, at the moment I cannot test myself on an 18 device, only on the simulator...
Thank you,
D.
Hello,
I am trying to create an MP4 by obtaining the content from another source MP4.
The source MP4 would be read with AVAssetReader and the output written with AVAssetWriter.
I wanted to do partial tests: first, I placed only the video in the output MP4.
Now, I am trying to place only the audio in the output MP4.
I even managed to get the output MP4 to have the same length (in seconds) as the source MP4.
But the problem is simple: the output MP4 is simply silent.
Naturally, I want it to have audio.
Below are two excerpts from the source code.
Reading and writing.
Note: The variable videoURL is from the class where the function writeVideo() is located. Its assignment happens in another scope, already debugged.
Snippet:
let semaphore = DispatchSemaphore(value: 0)

func writeVideo() {
    var audioReaderBuffers = [CMSampleBuffer]()

    // File directory
    url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("*****/output.mp4")
    guard let url = url else { return }
    try FileManager.default.createDirectory(at: url.deletingLastPathComponent(), withIntermediateDirectories: true)
    if FileManager.default.fileExists(atPath: url.path()) {
        try FileManager.default.removeItem(at: url)
    }

    if let videoURL = videoURL {
        let videoAsset = AVAsset(url: videoURL)
        Task {
            let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
            let reader = try AVAssetReader(asset: videoAsset)
            let audioSettings = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2
            ] as [String : Any]
            let audioOutputTest = try await audioTrack.getAudioSettings()
            let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)
            reader.add(readerAudioOutput)
            reader.startReading()

            while let sampleBuffer = readerAudioOutput.copyNextSampleBuffer() {
                audioReaderBuffers.append(sampleBuffer)
            }
            semaphore.signal()
        }
    }
    semaphore.wait()

    let audioInput = createAudioInput(sampleBuffer: audioReaderBuffers[0])
    let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
    assetWriter.add(audioInput)
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: .zero)

    for (index, buffer) in audioReaderBuffers.enumerated() {
        while !audioInput.isReadyForMoreMediaData {
            usleep(1000)
        }
        audioInput.append(buffer)
    }

    assetWriter.finishWriting {
        switch assetWriter.status {
        case .completed:
            print("Operation completed successfully: \(url.absoluteString)")
        case .failed:
            if let error = assetWriter.error {
                print("Error description: \(error.localizedDescription)")
            } else {
                print("Error not found.")
            }
        default:
            print("Error not found.")
        }
    }
}
Below is the createAudioInput method:
func createAudioInput(sampleBuffer: CMSampleBuffer) -> AVAssetWriterInput {
    let audioSettings = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 48000,
        AVEncoderBitRatePerChannelKey: 64000,
        AVNumberOfChannelsKey: 1
    ] as [String : Any]
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: sampleBuffer.formatDescription)
    audioInput.expectsMediaDataInRealTime = false
    return audioInput
}
I await your help, please.
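Not an official answer, but here is a minimal sketch of how I would expect an audio-only remux to work, under two assumptions I'm making: the source audio is already in an MP4-compatible codec (e.g. AAC), so both the reader and writer can run in passthrough mode (nil settings), and the writer session is started at the first buffer's presentation timestamp rather than .zero:
import AVFoundation

func copyAudioOnly(from sourceURL: URL, to outputURL: URL) async throws {
    let asset = AVAsset(url: sourceURL)
    guard let audioTrack = try await asset.loadTracks(withMediaType: .audio).first else { return }

    // Passthrough: nil settings mean the samples are neither decoded nor re-encoded.
    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
    writerInput.expectsMediaDataInRealTime = false
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()

    var sessionStarted = false
    while let buffer = readerOutput.copyNextSampleBuffer() {
        if !sessionStarted {
            // Start the session at the first sample's timestamp, not at .zero.
            writer.startSession(atSourceTime: buffer.presentationTimeStamp)
            sessionStarted = true
        }
        while !writerInput.isReadyForMoreMediaData {
            try await Task.sleep(nanoseconds: 1_000_000)
        }
        if !writerInput.append(buffer) { break }
    }

    writerInput.markAsFinished()
    await writer.finishWriting()
    print("Writer finished with status \(writer.status.rawValue), error: \(String(describing: writer.error))")
}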
tvOS 18 doesn't provide passthrough of multichannel audio for streaming apps offering content where it is promoted as available. This is true for devices on which the functionality existed before the tvOS 18.0 update. What's more, the 18.1 Public Beta did not provide a resolution for the issue. All streaming apps appear to be affected. Notably, Home Sharing does not appear to be affected, and continues to provide multichannel audio as it did before the 18.0 update.
I've got a bunch of Audio Units I've been developing over the last few months - in Xcode 14 based on the Audio Unit template that ships in Xcode.
The DSP heavy-lifting is all done in Rust libraries, which I build for Intel and Apple Silicon, merge using lipo and build XCFrameworks from. There are several of these - one with the DSP code, and several others used from the UI - a mix of SwiftUI and Cocoa.
Getting the integration of Rust, Objective-C, C++ and Swift working and automated took a few weeks (my first foray into C++ since the 1990s), but it has been solid and reliable - everything loads and works in Logic Pro and GarageBand.
But now I'm attempting the (woefully underdocumented) process of making 13 audio unit projects able to be loaded in-process - which means moving all of the actual code into a Framework. And therein lies the problem:
None of the xcframeworks are modular, which leads to the dreaded "Include of non-modular header inside framework module". Imported directly into the app extension project with "Allow Non-modular Includes in Framework Modules" enabled, they work fine - but inside a framework this seems to be verboten.
So, the obvious solution would be to add a module map to each xcframework, and then, poof, they're modular.
BUT... due to a peculiar limitation of Xcode's build system I've spent days searching for a workaround for, it is only possible to have ONE xcframework containing a module.modulemap file in any project. More than that and xcodebuild will try to clobber them with each other and the build will fail. And there appears to be NO WAY to name the file anything other than module.modulemap inside an xcframework and have it be detected. So I cannot modularize my frameworks, because there is more than one of them.
How the heck does one work around this? A custom module map file somewhere (that the build should find and understand applies to four xcframeworks - how?)? Something else?
I've seen one dreadful workaround - https://medium.com/@florentmorin/integrating-swift-framework-with-non-modular-libraries-d18098049e18 - given that I'm generating a lot of the C and Objective C code for the audio in Rust, I suppose I could write a tool that parses the header files and generates Objective C code that imports each framework and declares one method for every single Rust call. But it seems to me there has to be a correct way to do this without jumping through such hoops.
Thoughts?
How does a third party developer go about supporting the new Enhanced Dialogue option for video apps in tvOS 18?
If an app is using the standard AVPlayerViewController, I had assumed it would be a simple-ish matter of building against the tvOS 18 SDK, but apparently not: the options don't appear, not even greyed out.
Calling MIDIFlushOutput on a network endpoint does not cancel events scheduled with future timestamps -- they are still sent.
e.g.
func send(eventLists: [MIDIEventList]) {
    let outputPortRef = ...
    let networkDestination = ...
    for var eventList in eventLists {
        MIDISendEventList(outputPortRef, networkDestination.objectRef, &eventList)
    }
}
...
MIDIFlushOutput(networkDestination.objectRef)
I'm seeing that MIDIFlushOutput does successfully cancel scheduled events on a non-network endpoint. How can I clear all scheduled outgoing events over a MIDI Network connection?
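I don't know of a way to make MIDIFlushOutput reach into the network session's schedule, so here is a hedged workaround sketch: keep future events in the app and hand them to CoreMIDI only shortly before they are due, so cancelling is just emptying the app-side queue. The names port and destination stand in for your existing outputPortRef and networkDestination.objectRef; there is no thread safety here, it's only a sketch:
import CoreMIDI
import Foundation

final class AppSideMIDIScheduler {
    private var pending: [(when: MIDITimeStamp, events: MIDIEventList)] = []

    func schedule(_ events: MIDIEventList, at hostTime: MIDITimeStamp) {
        pending.append((hostTime, events))
    }

    // Call periodically (e.g. from a timer) so events are handed off just before they're due.
    func pump(port: MIDIPortRef, destination: MIDIEndpointRef, lookAhead: MIDITimeStamp) {
        let now = mach_absolute_time()
        let due = pending.filter { $0.when <= now + lookAhead }
        pending.removeAll { $0.when <= now + lookAhead }
        for item in due {
            var events = item.events
            _ = MIDISendEventList(port, destination, &events)
        }
    }

    // The app-side equivalent of MIDIFlushOutput: anything not yet handed off is dropped.
    func cancelAllPending() {
        pending.removeAll()
    }
}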
Hello! The new lower latency support for AirPods in Game Mode is impressive, but I'm not sure of the best way to handle the transition into/out of Game Mode while audio is playing. In order to lower the latency, the system appears to drop some number of samples, with the result being a good deal less latency. My use case is macOS where it's easier to switch in/out of the fullscreen game (a simple swipe left), thus causing more issues for Game Mode since the audio is playing the entire time. It would be nice if offscreen games could remain in game mode, but I understand not wanting to give developers that control.
Are there any best practices for avoiding or masking the audio glitch caused by this skip-ahead? Is there a system event I can receive to know when Game Mode is about to be enabled or disabled, where I could perhaps fade out the audio? My callback checks the inTimestamp->mSampleTime value to detect gaps, but it only rarely detects a Game Mode gap, even though the audio skip-ahead always happens.
BTW, I am currently only developing on macOS (15.0) and I'm working at a low level with AudioUnit callbacks and a SpatialMixer. I am not currently using any higher-level audio APIs.
And here's a few questions I don't necessarily expect answers to, but it doesn't hurt to ask: Is there any additional technical details about how this latency reduction works, or exactly how much of a reduction is achieved (or said another way, how many samples are dropped)? How much does this affect AirPods battery life? And finally, is there a way to query the actual latency value? I check the value for kAudioDevicePropertyLatency but it seems to always report 160ms for AirPods. Thanks!
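In case it helps anyone comparing notes, here is a small sketch of the kind of mSampleTime bookkeeping described above for a render callback; it is only an assumption that the Game Mode skip-ahead surfaces as a timestamp discontinuity, which the post notes it often does not:
import AudioToolbox

// Tracks the expected next mSampleTime and reports a gap when the host timeline jumps.
struct SampleTimeGapDetector {
    private var expectedNextSampleTime: Float64?
    var tolerance: Float64 = 1.0

    mutating func checkForGap(timestamp: AudioTimeStamp, frameCount: UInt32) -> Bool {
        defer { expectedNextSampleTime = timestamp.mSampleTime + Float64(frameCount) }
        guard let expected = expectedNextSampleTime else { return false }
        // A caller could start a short fade-in when this returns true.
        return abs(timestamp.mSampleTime - expected) > tolerance
    }
}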
Hi, I recently updated to iOS 18. Music does keep playing in the background with the Camera app open, but I'm unable to play music with the Notes app open.
I am fetching playlist songs from the user's library and also need the releaseDate (or year) of each song for my use case. However, the releaseDate is always nil since I upgraded to Sequoia. I am pretty sure this was working before the upgrade, but I couldn't find any documentation on changes related to this.
Furthermore, I noticed the IDs now seem to be the catalog IDs instead of the global ones like i.PkdZbQXsPJ4DX04.
Here is, in a nutshell, what I am doing:
func fetchSongs(playlist: Playlist) async throws {
    let detailedPlaylist = try await playlist.with([.tracks])
    var currentTracks: MusicItemCollection<Track>? = detailedPlaylist.tracks
    repeat {
        for track in currentTracks! {
            guard case .song(let song) = track else {
                print("This is not a song")
                continue
            }
            print(song.releaseDate)
        }
        currentTracks = try await currentTracks?.nextBatch()
    } while currentTracks != nil
}
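I can't say whether this is intended behaviour, but one workaround worth trying (an assumption on my part, not a documented fix) is to fall back to the containing album's releaseDate by loading the song's albums relationship:
import MusicKit

func releaseDate(for song: Song) async throws -> Date? {
    if let date = song.releaseDate { return date }
    // Fall back to the first album's release date.
    let detailedSong = try await song.with([.albums])
    return detailedSong.albums?.first?.releaseDate
}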
iPhone 13 mini updated to iOS 18.
CarPlay is wired in my 2021 RAM Laramie.
After the update, premium audio is lost; I can only hear low-quality audio.
When I manually switch the car to Bluetooth instead of USB, the audio comes out of the phone's speaker and not the truck.
I can successfully retrieve strings, arrays, and other data through a custom AudioObjectPropertySelector, but only as fixed return values. Whenever I modify the code to return dynamic data, it results in an error. Below is my code.
case kPlugIn_CustomPropertyID:
{
    *((CFStringRef*)outData) = CFSTR("qin@@@123");
    *outDataSize = sizeof(CFStringRef);
}
break;

case kPlugIn_ContainDic:
{
    CFMutableDictionaryRef mutableDic1 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                                                   0,
                                                                   &kCFTypeDictionaryKeyCallBacks,
                                                                   &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(mutableDic1, CFSTR("xingming"), CFSTR("qinmu"));
    *((CFDictionaryRef*)outData) = mutableDic1;
    *outDataSize = sizeof(CFPropertyListRef);
    // *((CFPropertyListRef*)outData) = mutableDic;
}
break;

case kPlugIn_ContainArray:
{
    CFMutableArrayRef mutableArray = CFArrayCreateMutable(kCFAllocatorDefault, 0, &kCFTypeArrayCallBacks);
    CFArrayAppendValue(mutableArray, CFSTR("Hello"));
    CFArrayAppendValue(mutableArray, CFSTR("World"));
    *((CFArrayRef*)outData) = mutableArray;
    *outDataSize = sizeof(CFArrayRef);
}
break;
These are fixed returns, and there are no issues when I retrieve the data.
When I change the return data in kPlugIn_ContainDic to the following, the first time I restart the CoreAudio service and retrieve the data, it works fine. However, when I attempt to retrieve it again, it results in an error:
case kPlugIn_ContainDic:
{
    *outDataSize = sizeof(CFPropertyListRef);
    *((CFPropertyListRef*)outData) = mutableDic;
}
break;
error code:
HALC_ShellDevice::CreateIOContextDescription: failed to get a description from the server
HAL_HardwarePlugIn_ObjectGetPropertyData: no object
HALPlugIn::ObjectGetPropertyData: got an error from the plug-in routine, Error: 560947818 (!obj)
The declaration and usage of mutableDic are as follows:
static CFMutableDictionaryRef mutableDic;

static OSStatus BlackHole_Initialize(AudioServerPlugInDriverRef inDriver, AudioServerPlugInHostRef inHost)
{
    OSStatus theAnswer = 0;
    gPlugIn_Host = inHost;
    if (mutableDic == NULL) {
        mutableDic = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                               100,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    }
}

static OSStatus BlackHole_AddDeviceClient(AudioServerPlugInDriverRef inDriver, AudioObjectID inDeviceObjectID, const AudioServerPlugInClientInfo* inClientInfo)
{
    CFStringRef string = CFStringCreateWithFormat(kCFAllocatorDefault, NULL, CFSTR("%u"), inClientInfo->mClientID);
    CFMutableDictionaryRef dic = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                                           0,
                                                           &kCFTypeDictionaryKeyCallBacks,
                                                           &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(dic, CFSTR("clientID"), string);
    CFDictionarySetValue(dic, CFSTR("bundleID"), inClientInfo->mBundleID);
    CFDictionarySetValue(mutableDic, string, dic);
}
Can someone tell me why this happens?
Hi,
I'm facing an issue with my Unity-based app when deploying it to the AVP. Often, after building and running the app on the device, the audio gets muted. I couldn't find any setting that lets me unmute it. The only solution I've found is to reset the device settings, which makes the audio work again.
Here are a few things I’ve noticed:
The sound works fine when I reset my device’s settings.
I haven't changed any sound or audio settings on the device before or after deploying the app.
The issue doesn’t always occur immediately, but when it does, resetting settings seems to be the only fix.
Could there be something in the AVP audio configuration that causes this problem? I’d appreciate any advice or suggestions.
Thanks!
I have a user who is reporting an error and has been kind enough to share screen recordings to help diagnose. I am not experiencing this error, nor am I able to replicate on other devices I've tried, so I'm stuck trying to fix. His & other devices tested were all running iOS 17.5.1. Any details on the cause of this error or potential workarounds I could use to resolve would be greatly appreciated.
try await ApplicationMusicPlayer.shared.play()
throws:
The operation couldn't be completed (MPMusicPlayerControllerErrorDomain error 6.)
MusicAuthorization.currentStatus is .authorized
ApplicationMusicPlayer.shared.isPreparedToPlay is false
ApplicationMusicPlayer.shared.queue.currentEntry is nil (I've noticed this to be the case even when I am able to successfully play as well)
Queue was loaded using ApplicationMusicPlayer.shared.queue = [album] but I also tried ApplicationMusicPlayer.shared.queue = ApplicationMusicPlayer.Queue(album:startingAt:) and it made no difference. album.playParameters are correct. He experiences the error when attempting to play any album.
Any and all help is truly appreciated. Feedback Assistant filed has gone unanswered.
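One experiment that might narrow this down (a guess, not a documented fix): since isPreparedToPlay is reported as false, prepare the player explicitly after setting the queue and log whatever prepareToPlay() itself throws. A sketch:
import MusicKit

func playAlbum(_ album: Album) async {
    let player = ApplicationMusicPlayer.shared
    player.queue = ApplicationMusicPlayer.Queue(for: [album])
    do {
        try await player.prepareToPlay()
        try await player.play()
    } catch {
        // Surfaces the underlying MPMusicPlayerController error, e.g. error 6.
        print("Playback failed: \(error)")
    }
}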
How can I search for a single song by using its song ID? I tried the following code but it's not working:
MusicCatalogResourceRequest(matching: Song.self, equalTo: song.id.rawValue)
I get the following errors in Xcode:
Cannot convert value of type 'Song.Type' to expected argument type 'KeyPath<MusicItemType.FilterType, String>'
Generic parameter 'MusicItemType' could not be inferred
I have the song ID saved as a string inside song so I just want to grab the full MusicKit MusicItemType from the API. I looked at the documentation but it doesn't make any sense to me.
Please help!
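For a single catalog song by ID, the filter is the id key path and the value is a MusicItemID built from your saved string. A small sketch (songIDString is a placeholder for the string you already have):
import MusicKit

func fetchSong(withID songIDString: String) async throws -> Song? {
    let request = MusicCatalogResourceRequest<Song>(matching: \.id,
                                                    equalTo: MusicItemID(songIDString))
    let response = try await request.response()
    return response.items.first
}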
1. I saw the NullAudio custom property static const AudioObjectPropertySelector kPlugIn_CustomPropertyID = 'PCst';, but I don't know how to use it in a project.
2. What is the difference between the PlugIn's and the Device's custom properties?
3. When I try to customize the PropertySelector for the device by adding kAudioObjectPropertyCustomPropertyInfoList and a NullAudio_HasDeviceProperty method, then compiling again and restarting the Core Audio service, I find that the virtual device no longer shows up.
Haptics are often represented as audio for presentation purposes by Apple in videos and learning resources.
I am curious if:
...Apple has released, or is willing to release, any tools that may have been used to synthesize audio to represent haptic patterns (such as in their WWDC19 Audio-Haptic presentation)?
...there are any current tools available that take haptic instruction as input (like AHAP) and outputs an audio file?
...there is some low-level access to the signal that drives the Taptic Engine, so that it can be repurposed as an audio stream?
...you have any other suggestions!
I can imagine some crude solutions that hack together preexisting synthesizers and fudging together a process to convert AHAP to MIDI instructions, dialing in some synth settings to mimic the behaviour of an actuator, but I'm not too interested in that rabbit hole just yet.
Any thoughts? Very curious what the process was for the WWDC videos and audio examples in their documentation...
Thank you!
Here are the crash logs:
NSInvalidArgumentException - *** -[AVContentKeyRequest processContentKeyResponse:] AVContentKeySession's keySystem is not same as that of keyResponse
We observed that a few devices on 16.x.x are not able to download audio media content, and if they try multiple times, the app crashes with the above error.
Please let us know if there are any known issues.
I've encountered a critical issue while developing a music player app using SwiftUI and MusicKit. The problem persists across multiple devices and iOS versions, specifically with the endSeeking() method of ApplicationMusicPlayer, which fails to stop the fast-forward operation as expected.
Development Environment:
Xcode 16 Beta 6
macOS Sonoma 15.0 Beta 7 (24A5327a)
Affected Devices:
iPhone 11 Pro Max (iOS 17.6)
iPhone SE 3 (iOS 18.0 Beta 7)
Here's the relevant code snippet:
Image(systemName: "forward.end.circle")
    .foregroundStyle(.accent)
    .gesture(
        TapGesture()
            .onEnded { _ in
                vm.nextTrack()
            }
    )
    .simultaneousGesture(
        LongPressGesture(minimumDuration: 0.5)
            .onChanged { isPressing in
                if isPressing {
                    vm.player.beginSeekingForward()
                }
            }
            .onEnded { _ in
                vm.player.endSeeking()
            }
    )
The issue manifests when the long press ends: despite invoking the endSeeking() method, the fast-forward operation persists.
To troubleshoot, I've taken the following steps:
Confirmed that vm.player is set to ApplicationMusicPlayer.shared.
Attempted to combine endSeeking() with beginSeekingForward(), as per the documentation guidelines.
Despite these efforts, the problem persists across all tested devices and OS versions. This leads me to two critical questions:
Has anyone else encountered a similar issue?
Could this potentially be an undocumented bug in the latest MusicKit implementation?
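I can't confirm whether it's a MusicKit bug, but as a workaround sketch (assuming stepping playbackTime is acceptable for your scrubbing UX; SeekController and its timer interval are made up for illustration), you could drive the fast-forward yourself so that releasing the press always stops it:
import MusicKit
import Foundation

final class SeekController {
    private var seekTimer: Timer?
    private let player = ApplicationMusicPlayer.shared

    func startScrubbingForward() {
        // Nudge the playback position forward a little on each tick.
        seekTimer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { [weak self] _ in
            guard let self else { return }
            self.player.playbackTime += 2.0
        }
    }

    func stopScrubbing() {
        seekTimer?.invalidate()
        seekTimer = nil
    }
}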