Hi,
I need to get the total number of parquet files present in the Apple Music Feed API for songs and artists. The API offers limit and offset parameters, but the limit is capped at 200 records and the offset is uncertain.
How can I get the total number of parquet files without querying the Apple Music Feed API multiple times?
Need help regarding this. Thanks!
                    
                  
                    
                      According to the doc, I did a simple demo to verify.
My env:
ProductName:		macOS
ProductVersion:		15.5
BuildVersion:		24F74
2.4 GHz quad-core Intel Core i5
Info.plist:
 <?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>IOKitPersonalities</key>
	<dict>
		<key>UVCamera</key>
		<dict>
			<key>CFBundleIdentifierKernel</key>
			<string>com.apple.kpi.iokit</string>
			<key>IOClass</key>
			<string>IOUserService</string>
			<key>IOMatchCategory</key>
			<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
			<key>IOProviderClass</key>
			<string>IOUserResources</string>
			<key>IOResourceMatch</key>
			<string>IOKit</string>
			<key>IOUserClass</key>
			<string>UVCamera</string>
			<key>IOUserServerName</key>
			<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProbeScore</key>
            <integer>100000</integer>
            <key>idVendor</key>
            <integer>1452</integer>
            <key>idProduct</key>
            <integer>34068</integer>
		</dict>
	</dict>
	<key>OSBundleUsageDescription</key>
	<string></string>
</dict>
</plist>
UVCamera.cpp
//
//  UVCamera.cpp
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//
#include <os/log.h>
#include <DriverKit/IOUserServer.h>
#include <DriverKit/IOLib.h>
#include "UVCamera.h"
kern_return_t
IMPL(UVCamera, Start)
{
    kern_return_t ret;
    ret = Start(provider, SUPERDISPATCH);
    os_log(OS_LOG_DEFAULT, "Hello World");
    return ret;
}
UVCamera.iig
//
//  UVCamera.iig
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//
#ifndef UVCamera_h
#define UVCamera_h
#include <Availability.h>
#include <DriverKit/IOService.iig>
class UVCamera: public IOService
{
public:
    virtual kern_return_t
    Start(IOService * provider) override;
};
#endif /* UVCamera_h */
Then I built it with Xcode and moved it to /Library/DriverExtensions:
sudo mv com.lqs.MyVirtualCam.UVCamera.dext /Library/DriverExtensions
sudo kmutil install -R / -r /Library/DriverExtensions
kmutil rebuild done
However, the dext can't be loaded:
 kmutil showloaded --list-only | grep UVCamera
No variant specified, falling back to release
What's the problem? Can anyone help me?
                    
                  
                
                    
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks.
Is this feature/API available to developers?
                    
                  
                
                    
                      Hello there!
Is there any list of voices that are always available on iOS/iPadOS devices?
It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices.
I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true?
I also noticed that on the same iPad where I was using those two voices (Nicky and Aaron), after updating to the iPadOS 26 beta those voices were no longer available.
Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
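In case it helps the discussion, here is a minimal sketch of a runtime check, assuming availability is best verified by enumerating the installed voices rather than hard-coding identifiers:
import AVFoundation
// Minimal sketch: list the en-US voices actually installed on this device,
// then fall back to a generic en-US voice if a specific identifier is missing.
for voice in AVSpeechSynthesisVoice.speechVoices() where voice.language == "en-US" {
    print(voice.identifier, voice.name, voice.quality.rawValue)
}
let preferred = AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact")
let voice = preferred ?? AVSpeechSynthesisVoice(language: "en-US")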
                    
                  
                
                    
                      I found that the aggregated device correctly obtains input channels in the standard microphone mode. However, in voice isolation mode, it only retrieves channels from the first sub-device in the aggregated device's list. If I want to properly obtain channel information in voice isolation mode, how should I do it?
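For context, a minimal sketch of one way to count a device's input channels via Core Audio (kAudioDevicePropertyStreamConfiguration); the aggregate device's AudioObjectID is assumed to be already known:
import CoreAudio
// Minimal sketch: count the input channels a device reports in its stream configuration.
func inputChannelCount(of deviceID: AudioObjectID) -> Int {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioObjectPropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &dataSize) == noErr else { return 0 }
    let rawBufferList = UnsafeMutableRawPointer.allocate(
        byteCount: Int(dataSize),
        alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { rawBufferList.deallocate() }
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, rawBufferList) == noErr else { return 0 }
    let bufferList = UnsafeMutableAudioBufferListPointer(
        rawBufferList.assumingMemoryBound(to: AudioBufferList.self))
    return bufferList.reduce(0) { $0 + Int($1.mNumberChannels) }
}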
                    
                  
                
                    
When using the Apple Devices app to sync Apple Music to an iPhone, where is the Apple Devices backup being written to?
Apple Devices -> Music -> Sync.
I'm not trying to back up the iPhone via the Apple Devices app.
                    
                  
                
                    
                      Title: Unable to Access Microphone in Control Center Widget – Is It Possible?
Hello everyone,
I'm attempting to create a widget in the Control Center that accesses the microphone, similar to how Shazam does it. However, I'm running into an issue where the widget always prints "Microphone permission denied." It's worth mentioning that microphone access works fine when I'm using the app itself.
Here's the code I'm using in the widget:
func startRecording() async {
    logger.info("Starting recording...")
    print("Starting recording...")
    recognizedText = ""
    isFinishingRecognition = false

    // First, check speech recognition authorization
    let speechAuthStatus = await withCheckedContinuation { continuation in
        SFSpeechRecognizer.requestAuthorization { status in
            continuation.resume(returning: status)
        }
    }
    guard speechAuthStatus == .authorized else {
        logger.error("Speech recognition not authorized")
        return
    }

    // Then, request microphone permission using our manager
    let micPermission = await AudioSessionManager.shared.requestMicrophonePermission()
    guard micPermission else {
        logger.error("Microphone permission denied")
        print("Microphone permission denied")
        return
    }

    // Continue with recording...
}
Issues:
The code consistently prints "Microphone permission denied" when run from the widget.
Microphone access works without issues when the same code is executed from within the app.
Questions:
Is it possible for a Control Center widget to access the microphone?
If yes, what might be causing the "Microphone permission denied" error in the widget?
Are there additional permissions or configurations required to enable microphone access in a widget?
Any insights or suggestions would be greatly appreciated!
Thank you.
                    
                  
                
                    
Among the millions of users of our online product, our data metrics show that the rate of silent captured audio on iPadOS 18.4.1 and 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered something similar? The parameters we use are as follows:
AudioSession:
category:AVAudioSessionCategoryPlayAndRecord
mode:AVAudioSessionModeDefault
option:77
preferredSampleRate:48000.000000
preferredIOBufferDuration:0.010000
AudioUnit
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
                    
                  
                
                    
                      Hi,
May I ask whether there is any API or similar way, inside an iOS app, to switch the transparency and ANC modes of AirPods Pro 2? One option is to create a Shortcut and trigger it from the app, but that requires setting up the shortcut manually, which is not convenient.
Thanks for any advice!
                    
                  
                
                    
I have an iPad app written in Objective-C and distributed through the Enterprise developer program, as it is not for public use but specific to some large companies.
The app has a local database and works offline.
For some functions of the app I need to display images (not edit or cut them, just display them).
Right now MWPhotoBrowser is integrated as the viewer, but it has not been maintained for almost 10 years, so in addition to compilation warnings I have to fight some historical bugs, especially with high-resolution images. https://github.com/mwaterfall/MWPhotoBrowser
Do you know of a modern and maintained OFFLINE photo viewer? I'm evaluating both free and paid options (maybe an SDK). My needs are very basic.
I have found https://github.com/TimOliver/TOCropViewController, but I would need to disable its photo-editing features and, more importantly, I would lose the useful ability to display multiple images (MWPhotoBrowser showed multiple images as a gallery).
                    
                  
                
                    
In our app we have implemented an AVAssetResourceLoaderDelegate to handle encrypted downloaded files. It works on all iOS versions, but we are seeing issues on iOS 15 (15.8.3) with large files (> 1 GB). So far we have seen two cases: either loading the AVURLAsset fails early with an unknown error code, or the asset starts requesting more data than the device has available RAM. CPU usage is almost always over 100%, even after pausing playback. The memory issue can happen even though the player has successfully started playback.
When running this on devices running iOS 16 and above we set the isEntireLengthAvailableOnDemand to true on the AVAssetResourceLoadingContentInformationRequest. This seems to be key to solving the issue those devices that support it. If we set the property to false we see the same memory issue as on iOS 15.
So we have a solution for iOS 16 and upwards but are at a loss for how to handle iOS 15. Is there something we have overlooked or is it in fact an issue with that iOS version?
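For reference, a minimal sketch of the iOS 16+ path described above; the contentType and fileLength values here are placeholders for our own, and the data-request handling is omitted:
import AVFoundation
// Sketch of the AVAssetResourceLoaderDelegate callback, assuming the decrypted
// length is known up front; contentType and fileLength are placeholder values.
func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                    shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
    if let infoRequest = loadingRequest.contentInformationRequest {
        infoRequest.contentType = AVFileType.mp4.rawValue
        infoRequest.contentLength = fileLength
        infoRequest.isByteRangeAccessSupported = true
        if #available(iOS 16.0, *) {
            // This is the property that avoids the memory growth on iOS 16+ for us.
            infoRequest.isEntireLengthAvailableOnDemand = true
        }
    }
    // ... serve the requested byte ranges from the decrypted file ...
    return true
}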
                    
                  
                
                    
I am attempting to do batch transcription of audio files exported from Voice Memos, and I am running into an interesting issue. If I only transcribe a single file it works every time, but if I try to batch them, only the last one works and the others fail with "No speech detected". I assumed it must be something about concurrency, so I implemented what I think should remove any chance of transcriptions running in parallel. With a mocked-up unit of work, everything looked good. So I added the transcription back in, and:
1: It still fails on all but the last file. This happens if I am processing 10 files or just 2.
2: It no longer processes in order, any file can be the last one that succeeds. And it seems to not be related to file size. I have had paragraph sized notes finish last, but also a single short sentence that finishes last.
I left the mocked processFile() in for reference.
Any insights would be greatly appreciated.
import Speech
import SwiftUI
struct ContentView: View {
    @State private var processing: Bool = false
    @State private var fileNumber: String?
    @State private var fileName: String?
    @State private var files: [URL] = []
    
    let locale = Locale(identifier: "en-US")
    let recognizer: SFSpeechRecognizer?
    
    init() {
        self.recognizer = SFSpeechRecognizer(locale: self.locale)
    }
    
    var body: some View {
        VStack {
            if files.count > 0 {
                ZStack {
                    ProgressView()
                    Text(fileNumber ?? "-")
                        .bold()
                }
                Text(fileName ?? "-")
            } else {
                Image(systemName: "folder.badge.minus")
                Text("No audio files found")
            }
        }
        .onAppear {
            files = getFiles()
            Task {
                await processFiles()
            }
        }
    }
    private func getFiles() -> [URL] {
        do {
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
            let path = documentsURL.appendingPathComponent("Voice Memos").absoluteURL
            
            let contents = try FileManager.default.contentsOfDirectory(at: path, includingPropertiesForKeys: nil, options: [])
            
            let files = (contents.filter {$0.pathExtension == "m4a"}).sorted { url1, url2 in
                url1.path < url2.path
            }
            
            return files
        }
        catch {
            print(error.localizedDescription)
            return []
        }
    }
    
    private func processFiles() async {
        var fileCount = files.count
        for file in files {
            fileNumber = String(fileCount)
            fileName = file.lastPathComponent
            await processFile(file)
            fileCount -= 1
        }
    }
    
//    private func processFile(_ url: URL) async {
//        let seconds = Double.random(in: 2.0...10.0)
//        await withCheckedContinuation { continuation in
//            DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
//                continuation.resume()
//                print("\(url.lastPathComponent) \(seconds)")
//            }
//        }
//    }
    private func processFile(_ url: URL) async {
        let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
        recognitionRequest.requiresOnDeviceRecognition = false
        recognitionRequest.shouldReportPartialResults = false
        
        await withCheckedContinuation { continuation in
            recognizer?.recognitionTask(with: recognitionRequest) { (transcriptionResult, error) in
                guard transcriptionResult != nil else {
                    print("\(url.lastPathComponent.uppercased())")
                    print(error?.localizedDescription ?? "")
                    return
                }
                if ((transcriptionResult?.isFinal) == true) {
                    if let finalText: String = transcriptionResult?.bestTranscription.formattedString {
                        print("\(url.lastPathComponent.uppercased())")
                        print(finalText)
                    }
                }
            }
            continuation.resume()
        }
    }
}
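For comparison, here is a minimal sketch of an alternative processFile (untested), based on the assumption that the continuation should resume only once the recognizer reports a final result or an error, so each file truly completes before the next one starts:
private func processFileSerially(_ url: URL) async {
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = false
    request.shouldReportPartialResults = false

    await withCheckedContinuation { continuation in
        guard let recognizer else {
            continuation.resume()
            return
        }
        recognizer.recognitionTask(with: request) { result, error in
            if let error {
                print("\(url.lastPathComponent.uppercased())")
                print(error.localizedDescription)
                continuation.resume()   // finish this file on error
                return
            }
            if let result, result.isFinal {
                print("\(url.lastPathComponent.uppercased())")
                print(result.bestTranscription.formattedString)
                continuation.resume()   // finish this file on the final result
            }
        }
    }
}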
                    
                  
                
                    
My app is not a VoIP application. I am using devices that support Character Centering, such as the iPad 10 or iPad13,18, running iOS 18.0, 18.1, or 18.1.1. When entering live classes, the "Character Centering" button does not appear in Control Center, as shown in the following picture. However, if Voice over IP is selected under Background Modes in the project and the app is run again, the issue no longer reproduces, even if the app is uninstalled and reinstalled. Could you please help me investigate the reason? Thank you!
                    
                  
                
                    
I have a music app I'm developing, and I'm having a weird issue where I can see now-playing info on every platform other than tvOS. As far as I can tell, I have correctly configured the MPNowPlayingInfoCenter:
MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing
Are there any extra requirements to get my app's now-playing info showing in control center on tvOS? Another strange issue that might be related is I can use the apple TV remote to pause audio but not resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
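In case it's relevant, a minimal sketch of the remote-command setup I'd expect to pair with this, assuming an AVPlayer-based player; I'm unsure whether tvOS needs these handlers for the now-playing UI and for resume to work:
import AVFoundation
import MediaPlayer
// Sketch (assumption): register play/pause handlers so the system remote can drive playback.
func configureRemoteCommands(for player: AVPlayer) {
    let commandCenter = MPRemoteCommandCenter.shared()
    commandCenter.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    commandCenter.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}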
                    
                  
                
                    
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation.
My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e. the process hosting my plugin instance that manages the MESampleBuffer protocol calls, for example).
Is there a way to configure Xcode to attach to this background process for interactive debugging?
Right now I have to use Console + print, which is not fun or productive.
Does Apple have a working example of a MediaExtension anywhere?
This is an exciting API that is very under-documented.
I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused.
Any assistance is highly appreciated!
                    
                  
                
                    
                      I noticed that while playing back the same tracks via MusicKit on different OSes I get different results regarding the audio files being streamed.
Playing back a lossless file at 24-bit/48 kHz and watching the Console for RemotePlayerService, I get:
on iPadOS: Lossless; groupID: audio-alac-stereo-48000-24; bitDepth: 24-bit; sampleRate: 48khz; codec: alac; channels: 2; layout: Stereo;
on macOS: Creating AudioQueue with format:'paac', framesPerPacket:1024, sampleRate:44100
While the iPad looks perfect, the Mac does not. Is there a way to fix this issue on macOS?
BTW: I switched the Audio MIDI Setup settings before, after, and while the macOS app was launched. I also switched to different output devices. I wasn't able to change the bad audio output on the Mac. I tested this under Sequoia 15.5 and Tahoe beta 1, with Xcode 16.4 and 26 beta 1.
The AudioVariants of the Album/Tracks are .dolbyAtmos, .lossless, .lossyStereo
Apple Music displays Lossless 24-bit/48 kHz ALAC when clicking the player control icon on macOS.
I hope there are only some missing or misconfigured properties to get macOS up to par.
Thanks :-)
                    
                  
                
                    
I am working on an app that needs to play an audio alert in the background.
During debug testing the function works fine, but once I test a standalone build without the debugger, it does not work; the sound only plays when I return to the app.
Is there any way to trace this issue?
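For reference, a minimal sketch of the session setup I'd expect to need, assuming the "Audio, AirPlay, and Picture in Picture" background mode is enabled for the target:
import AVFoundation
// Sketch (assumption): configure and activate a playback session before
// playing the alert sound, so playback can continue in the background.
func configurePlaybackSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback)
        try session.setActive(true)
    } catch {
        print("Audio session configuration failed: \(error)")
    }
}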
                    
                  
                
                    
ApplicationMusicPlayer is not available on watchOS, though it is on all other platforms. Is there a technical reason for that, such as battery life? The same applies to SystemMusicPlayer and MPMusicPlayerController. I have already filed feedback for this.
                    
                  
                
                    
                      Hi 👋! We have a SpriteKit-based app where we play AVAudio sounds in three different ways:
Effects (incl. UI sounds) with AVAudioPlayer.
Long looping tracks with AVAudioPlayer.
Short animation effects on the timeline of SpriteKit's SKScene files (effectively SKAudioNode nodes).
We've found that when you exit the app or otherwise interrupt audio playback, future playback often fails. For example, there's a WebKit-based video trailer inside the app, and if you play it, our looping background music track (2.) stops and won't resume when you close the trailer (return from WebKit). This is probably because we don't manually restart the track (so it may well be easily fixed). Periodically played AVAudioPlayer audio (1.) is not affected.
However, the more concerning thing is that the audio tracks on SKScene file timelines (3.) will no longer play. My hypothesis is that AVAudioEngine gets interrupted, and needs to be restarted for those AVAudioNode elements to regain functionality. Thing is, we don't deal with AVAudioEngine at all currently in the app, meaning it is never initiated to begin with.
Obviously things return to normal when you remove the app from short-term memory and restart it. However, it seems many of our users aren't doing this, and often report audio failing presumably due to some interruption in the past without the app ever being cleared from memory.
Any idea why timeline-run SKAudioNodes would fail like this? Should the app react to app backgrounding/foregrounding regarding audio?
Any help would be very much appreciated ✌️!
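For what it's worth, a minimal sketch of interruption handling that might apply here, assuming the failures follow an AVAudioSession interruption (backgroundMusicPlayer is a hypothetical name for the looping track's player):
import AVFoundation
// Sketch (assumption): reactivate the session and restart our own players when an
// interruption ends; the SKAudioNode timeline sounds may need retriggering too.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard
        let typeValue = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
        let type = AVAudioSession.InterruptionType(rawValue: typeValue),
        type == .ended
    else { return }

    try? AVAudioSession.sharedInstance().setActive(true)
    // backgroundMusicPlayer.play()   // hypothetical: resume the looping track here
}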
                    
                  
                
                    
Environment:
・Device: iPad (10th generation)
・OS: iOS 18.3.2
We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently — roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:
var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}
The issue is that under high-frequency tapping (especially around 0.1–0.15s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly — sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping.
Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g.,0.15s,0.18s) still result in occasional crashes.
My questions are:
Is this expected behavior from AVAudioPlayer or AVAudioSession?
Could this be a known issue or a limitation in AVFoundation?
Is there any documentation or guidance on handling frequent sound playback safely?
Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
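For comparison, a minimal sketch of an alternative approach that configures the session once and reuses a single prepared player instead of recreating one on every tap (assuming the same short sound each time):
import AVFoundation
// Sketch (assumption): prepare one player up front and reuse it for every tap.
var reusablePlayer: AVAudioPlayer?

func prepareSound(resource: String, type: String) {
    guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return }
    try? AVAudioSession.sharedInstance().setCategory(.playback)
    try? AVAudioSession.sharedInstance().setActive(true)
    reusablePlayer = try? AVAudioPlayer(contentsOf: url)
    reusablePlayer?.prepareToPlay()
}

func soundPlay() {
    reusablePlayer?.currentTime = 0   // restart the clip from the beginning on each tap
    reusablePlayer?.play()
}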