Hi!
I fetch a MusicItemCollection of personal recommendations using this code:
func getRecommendations() async throws -> MusicItemCollection<MusicPersonalRecommendation> {
    let request = MusicPersonalRecommendationsRequest()
    let response = try await request.response()
    let recommendations = response.recommendations
    return recommendations
}
However, each recommendation contains no more than 12 MusicItems, while the Music app shows far more for some of them; for example, for the "You recently listened" recommendation the Music app displays 40 items. Each recommendation has an items property containing a MusicItemCollection<MusicPersonalRecommendation.Item>, but the hasNextBatch property for these collections is always false. I expected that at least some collections would allow loading additional items. Please tell me whether I'm doing something wrong, or whether this is a MusicKit bug?
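For clarity, this is the kind of paging I expected to be able to do on a recommendation's items (a sketch; in practice the loop never runs because hasNextBatch is always false):

import MusicKit

// Expected pattern: page through a recommendation's items the same way other
// MusicItemCollections page, via hasNextBatch / nextBatch().
func allItems(for recommendation: MusicPersonalRecommendation) async throws -> [MusicPersonalRecommendation.Item] {
    var items = Array(recommendation.items)
    var current = recommendation.items
    while current.hasNextBatch {
        guard let next = try await current.nextBatch() else { break }
        items.append(contentsOf: next)
        current = next
    }
    return items
}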
Thank you!
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Hi, I'm new to the forum.
I'm planning an app just for Apple Watch, and I would like to play Bluetooth audio in the background; how can I do that?
The audio messages I send via Bluetooth stop as soon as the watch display turns off.
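For reference, this is the session setup I've been experimenting with so far (a sketch; it assumes the audio background mode is enabled for the watch app and that long-form playback is what I need):

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // Long-form audio is what watchOS routes to Bluetooth headphones for background playback.
    try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
    session.activate(options: []) { success, error in
        if let error {
            print("Activation failed: \(error)")
        }
        // Start playback here once the user has picked the Bluetooth route.
    }
} catch {
    print("Session configuration failed: \(error)")
}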
Thank you!
Nax
Songs can be unavailable (greyed out) in Apple Music. How can I check if a song is unavailable via the MusicKit framework? Obviously the playback will fail with MPMusicPlayerControllerErrorDomain Code=6 "Failed to prepare to play" but how can I know that in advance? I need to check the availability of hundreds of albums and therefore initiating a playback for each of them is not an option.
Things I have tried:
Checking if the release date property is set to a future date. This filters out all future releases but doesn't solve the problem for already released songs.
Checking if the duration is 0. This does not work since the duration of unavailable songs does not have to be 0.
Initiating a playback and checking for the "Failed to prepare to play" error. This is not suitable for a huge amount of Albums.
I couldn't find a solution yet, but somehow other third-party apps manage to ignore or hide these albums. I believe the Apple Music app only displays albums where at least one song is available.
I am using this function to fetch all albums of an artist.
private func fetchAlbumsFor(_ artist: Artist) async throws -> [Album] {
    let artistWithAlbums = try await artist.with(.albums)
    var allAlbums = [Album]()
    guard var currentBatch = artistWithAlbums.albums else {
        return []
    }
    allAlbums.append(contentsOf: currentBatch)
    while currentBatch.hasNextBatch {
        if let nextBatch = try await currentBatch.nextBatch() {
            currentBatch = nextBatch
            allAlbums.append(contentsOf: nextBatch)
        } else {
            break
        }
    }
    return allAlbums
}
Here is an example album where I am unable to detect its unavailability (at least in Germany):
https://music.apple.com/de/album/die-haferhorde-immer-den-n%C3%BCstern-nach-h%C3%B6rspiel-zu-band-3/1755774804
Furthermore I was unable to navigate to this album via the Apple Music app directly.
Thanks for any help
Edit: Apparently this album is not included in an Apple Music subscription but can be bought separately. The question remains: how can I check that?
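For what it's worth, the only heuristic I've come up with so far (an assumption on my part, not documented behavior) is that tracks which can't be streamed with the subscription seem to come back without playParameters:

import MusicKit

// Heuristic sketch: keep only albums where at least one track has playParameters,
// on the assumption that subscription-unplayable tracks return nil here.
func subscriptionPlayableAlbums(from albums: [Album]) async throws -> [Album] {
    var playable: [Album] = []
    for album in albums {
        let detailed = try await album.with(.tracks)
        let hasPlayableTrack = detailed.tracks?.contains(where: { $0.playParameters != nil }) ?? false
        if hasPlayableTrack {
            playable.append(detailed)
        }
    }
    return playable
}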
So I'm using AVAudioEngine. When playing audio I become the 'now playing' app using MPNowPlayingInfoCenter/MPRemoteCommandCenter APIs.
When configuring MPRemoteCommandCenter I add a play/pause command target via -addTargetWithHandler on the togglePlayPauseCommand property.
Now I also have a play/pause button in my app's UI. When I pause playback from my app's UI (which means I'm the active app, I'm in the foreground), what I do is this:
I pause the AVAudioPlayerNode I'm using with AVAudioEngine. I do not stop, reset, etc. the AVAudioEngine; I only pause the player node. My thought process is that the user just pressed pause, and since my app is in the foreground, it is very likely they will hit play again to resume playback in the near future.
Now if my app moves to the background and if I receive a memory warning I presume it'd make sense to tear down the engine or pause it. Perhaps I'm wrong about this?
So when I initially hit the play button from my app's UI I also activate my AVAudioSession. I do this in a high-priority NSOperation, since the documentation recommends "that applications not activate their session from a thread where a long blocking operation will be problematic."
So now I'm playing and I hit pause from my app's UI. Then I quickly bring up the Now Playing center and I see I'm the "Now Playing" app, but the play/pause button is showing the pause icon instead of the play icon, even though I'm in the paused state. I do set MPNowPlayingInfoCenter's playbackState to MPNowPlayingPlaybackStatePaused when I pause; not surprisingly, this doesn't work, since the documentation states it is for macOS only.
So the only way to get MPRemoteCommandCenter to show the play image for the play/pause button is to deactivate my AVAudioSession when I pause playback? Since I change the active state of my audio session in an NSOperation (per the same documentation recommendation about not blocking a thread), the play/pause toggle in the remote command center won't update immediately, because the work happens on another thread.
IMO it feels kind of inappropriate for a play/pause button to wait on an NSOperation activating the audio session before updating its UI when I already know my play/paused state; it should update right away, like the button in my app does. Wouldn't it be nicer to just honor MPNowPlayingInfoCenter's playbackState property on iOS too? If I'm no longer the now-playing app or active audio session it doesn't matter, since I'm not in the Now Playing UI anyway; just ignore it.
Also is it recommended that I deactivate my audio session explicitly every time the user pauses audio in my app (when I'm in the foreground)?
Also, when I do deactivate the audio session I get an error, AVAudioSessionErrorCodeIsBusy (but the button in the Now Playing center updates to the proper image). I do this:
- (void)pause
{
    [self.playerNode pause];
    [self runOperationToDeactivateAudioSession];

    // This does nothing on iOS:
    MPNowPlayingInfoCenter *nowPlayingCenter = [MPNowPlayingInfoCenter defaultCenter];
    nowPlayingCenter.playbackState = MPNowPlayingPlaybackStatePaused;
}
So in -runOperationToDeactivateAudioSession I get the AVAudioSessionErrorCodeIsBusy. According to the documentation
Starting in iOS 8, if the session has running I/Os at the time that deactivation is requested, the session will be deactivated, but the method will return NO and populate the NSError with the code property set to AVAudioSessionErrorCodeIsBusy to indicate the misuse of the API.
So pausing the player node when pausing isn't enough to meet the deactivation criteria; I guess I have to pause or stop the audio engine. I could probably wait until I receive a scene-went-to-background notification or something before deactivating my audio session (which is async, so the button may not update to the correct image in time). This seems like a lot of code to have to write to get a play/pause toggle to update, especially in an iPad multi-window scene environment.
What's the recommended approach?
Should I always pause the AVAudioEngine instead of just the player node?
Should I always explicitly deactivate my audio session when the user pauses playback from my app's UI even if I'm in the foreground?
I personally like the idea of just being able to set
[MPNowPlayingInfoCenter defaultCenter].playbackState = MPNowPlayingPlaybackStatePaused;
But maybe that's because that would just make things easier on me. This does feel overcomplicated, though. If anyone can share some tips on how I should handle this, I'd appreciate it.
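For reference, the direction I'm experimenting with looks roughly like this in Swift (a sketch of the same idea, not a definitive implementation; it assumes a single engine/player pair and that pausing the engine is acceptable):

import AVFoundation
import MediaPlayer

// Sketch only: pause the player *and* the engine so the session has no running I/O,
// then deactivate off the main thread and reflect the paused state in now-playing info.
func pausePlayback(playerNode: AVAudioPlayerNode, engine: AVAudioEngine) {
    playerNode.pause()
    engine.pause() // stops the running I/O so deactivation shouldn't return IsBusy

    // Reflect the paused state immediately by zeroing the playback rate.
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyPlaybackRate] = 0.0
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info

    // Deactivate off the main thread, per the documentation's recommendation.
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
        } catch {
            print("Deactivation failed: \(error)")
        }
    }
}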
Hi,
I have configured the stream as interleaved, but I am unsure whether the function produces interleaved samples. So here is my question:
Does AudioDeviceCreateIOProcID produce interleaved samples with microphone input?
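In case it clarifies the question, this is how I'm currently checking what the device claims to deliver (a sketch using the HAL property API; the device ID comes from elsewhere):

import CoreAudio

// Sketch: inspect a device's input stream format to see whether the samples an
// IOProc receives are interleaved.
func isInputInterleaved(deviceID: AudioObjectID) -> Bool? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamFormat,
        mScope: kAudioDevicePropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var asbd = AudioStreamBasicDescription()
    var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &asbd)
    guard status == noErr else { return nil }
    // If the non-interleaved flag is clear, the buffers are interleaved.
    return asbd.mFormatFlags & kAudioFormatFlagIsNonInterleaved == 0
}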
We have an application that uses the PTT framework to record audio messages while the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem is that one specific user frequently hits an issue where the recorded audio contains only silence.
I've checked almost everything I can imagine but couldn't find any possible cause of the issue.
Conditions:
AVAudioRecorder uses the following configuration:
[
    AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 16000.0
]
The app waits for both didBeginTransmitting and didActivate(audioSession:) from PTChannelManager (the audio session has the playback category at that moment).
The app then changes the AVAudioSession category to playAndRecord.
The app receives routeChangeNotification with reason categoryChange and category playAndRecord.
There are no interruption notifications from AVAudioSession during recording.
There are no error notifications from AVAudioRecorder.
Any idea what exactly I'm doing wrong? Is there anything else I should check?
Thanks in advance.
P.S. It looks like recording audio with an AudioUnit has the same issue, but let's exclude that from the question for now for simplicity.
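One extra diagnostic I'm adding for this user (a sketch, not the PTT setup itself): enable metering on the recorder and sample the input level shortly after recording starts, so the app can at least log when it is producing a silent file.

import AVFoundation

func startMeteredRecording(recorder: AVAudioRecorder) {
    recorder.isMeteringEnabled = true
    recorder.record()
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        recorder.updateMeters()
        let power = recorder.averagePower(forChannel: 0) // dBFS; around -160 means silence
        if power < -50 {
            print("Warning: input level is very low (\(power) dBFS), possible silent recording")
        }
    }
}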
Hi,
I am hitting a trap. Please check the stack trace; how can I fix this?
regards, Joël
stack-trace with ExtAudioFileWrite
I have a CarPlay implementation and I want to show previous/next track buttons on the player UI:
MPRemoteCommandCenter.shared().seekForwardCommand.isEnabled = false
MPRemoteCommandCenter.shared().seekBackwardCommand.isEnabled = false
MPRemoteCommandCenter.shared().previousTrackCommand.isEnabled = true
MPRemoteCommandCenter.shared().nextTrackCommand.isEnabled = true
It works correctly in the CarPlay simulator, but in some cars only the seek buttons are shown.
I have to assume this is a problem on the car side, but I would like your opinion; maybe there is some piece I'm missing.
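For completeness, this is the kind of handler registration I could add alongside the isEnabled flags (a sketch; my real handlers live elsewhere), in case some head units hide commands that have no registered target:

import MediaPlayer

let center = MPRemoteCommandCenter.shared()

// Register handlers so the system knows the commands are actually supported.
_ = center.previousTrackCommand.addTarget { _ in
    // move to the previous track here
    return .success
}
_ = center.nextTrackCommand.addTarget { _ in
    // move to the next track here
    return .success
}

// Also drop any targets on the seek commands we don't want surfaced.
center.seekForwardCommand.removeTarget(nil)
center.seekBackwardCommand.removeTarget(nil)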
Bug Report: ScreenCaptureKit System Audio Capture Crashes with EXC_BAD_ACCESS
Summary
When using ScreenCaptureKit to capture system audio for extended periods, the application crashes with EXC_BAD_ACCESS in Swift's error handling runtime. The crash occurs in swift_getErrorValue when trying to process an error from the SCStream delegate method didStopWithError. This appears to be a framework-level issue in ScreenCaptureKit or its underlying ReplayKit implementation.
Environment
macOS Sonoma 14.6.1
Swift 5.8
ScreenCaptureKit framework
Detailed Description
Our application captures system audio using ScreenCaptureKit's audio capture capabilities. After successfully capturing for several minutes (typically after 3-4 segments of 60-second recordings), the application crashes with an EXC_BAD_ACCESS error. The crash happens when the Swift runtime attempts to process an error in the SCStreamDelegate.stream(_:didStopWithError:) method.
The crash consistently occurs in swift_getErrorValue when attempting to access the class of what appears to be a null object. This suggests that the error being passed from the system framework to our delegate method is malformed or contains invalid memory.
Steps to Reproduce
Create an SCStream with audio capture enabled
Add audio output to the stream
Start capture and write audio data to disk
Allow the capture to run for several minutes (3-5 minutes typically triggers the issue)
The app will crash with EXC_BAD_ACCESS in swift_getErrorValue
Code Sample
func stream(_ stream: SCStream, didStopWithError error: Error) {
    print("Stream stopped with error: \(error)") // Crash occurs before this line executes
}

func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    guard type == .audio, sampleBuffer.isValid else { return }
    // Process audio data...
}
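For context, a stripped-down version of the capture setup from the steps above looks roughly like this (display selection, sample rate, and queue label are illustrative, not our exact production code):

import ScreenCaptureKit

enum CaptureError: Error { case noDisplay }

func startSystemAudioCapture(handler: SCStreamDelegate & SCStreamOutput) async throws -> SCStream {
    // A content filter is required even for audio-only capture, so pick any display.
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else { throw CaptureError.noDisplay }
    let filter = SCContentFilter(display: display, excludingWindows: [])

    let config = SCStreamConfiguration()
    config.capturesAudio = true
    config.sampleRate = 48_000
    config.channelCount = 2

    let stream = SCStream(filter: filter, configuration: config, delegate: handler)
    try stream.addStreamOutput(handler, type: .audio, sampleHandlerQueue: DispatchQueue(label: "audio.capture"))
    try await stream.startCapture()
    return stream
}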
Expected Behavior
The error should be properly propagated to the delegate method, allowing for graceful error handling and recovery.
Actual Behavior
The application crashes with EXC_BAD_ACCESS when the Swift runtime attempts to process the error in swift_getErrorValue.
Crash Log Details
Thread #35, queue = 'com.apple.NSXPCConnection.m-user.com.apple.replayd', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
frame #0: 0x0000000194c3088c libswiftCore.dylib`swift::_swift_getClass(void const*) + 8
frame #1: 0x0000000194c30104 libswiftCore.dylib`swift_getErrorValue + 40
frame #2: 0x00000001057fba30 shadow`NewScreenCaptureService.stream(stream=0x0000600002de6700, error=Swift.Error @ 0x000000016b7b5e30) at NEW+ScreenCaptureService.swift:365:15
frame #3: 0x00000001057fc050 shadow`@objc NewScreenCaptureService.stream(_:didStopWithError:) at <compiler-generated>:0
frame #4: 0x0000000219ec5ca0 ScreenCaptureKit`-[SCStreamManager stream:didStopWithError:] + 456
frame #5: 0x00000001ca68a5cc ReplayKit`-[RPScreenRecorder stream:didStopWithError:] + 84
frame #6: 0x00000001ca696ff8 ReplayKit`-[RPDaemonProxy stream:didStopWithError:] + 224
Printing description of stream._streamQueue:
error: ObjectiveC.id:4294967281:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
error: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:1:65: 'id' is unavailable in Swift: 'id' is not available in Swift; use 'Any'
Swift._DebuggerSupport.stringForPrintObject(Swift.UnsafePointer<id>(bitPattern: 0x104ae08c0)!.pointee)
^~
ObjectiveC.id:2:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
warning: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:5:7: initialization of variable '$__lldb_error_result' was never used; consider replacing with assignment to '_' or removing it
var $__lldb_error_result = __lldb_tmp_error
~~~~^~~~~~~~~~~~~~~~~~~~
_
Before the crash, we observed this error message in the console:
[ERROR] *****SCStream*****RemoteAudioQueueOperationHandlerWithError:1015 Error received from the remote queue -16665
Additional Context
The issue occurs consistently after approximately 3-4 successful audio segment recordings of 60 seconds each
Commenting out custom segment rotation logic does not prevent the crash
The crash involves XPC communication with Apple's ReplayKit daemon
The error appears to be corrupted or malformed when crossing the XPC boundary
Workarounds Attempted
Added proper thread safety for all published properties using DispatchQueue.main.async
Implemented more robust error handling in the delegate methods
None of these approaches prevented the crash since it occurs at the Swift runtime level before our code executes.
Impact
This issue prevents reliable long-duration audio capture using ScreenCaptureKit.
This bug significantly limits the usefulness of ScreenCaptureKit for any application requiring continuous system audio capture for more than a few minutes.
This issue might also be related to a macOS bug where the system dialog indicates that the screen is being shared even though nothing is actually being shared; moreover, when attempting to stop sharing, nothing happens.
Does anyone know how to have an iPhone or iPad play a specific instrument sound when a button on the screen is tapped?
I'm currently building a music-learning app, and I'd like to assign single notes or chords to button-like frames on an on-screen keyboard or fretboard so that they sound when tapped.
Can this be done with SwiftUI code alone?
I recall that there used to be a General MIDI Level 1 instrument playback capability, but current OS versions don't seem to provide an equivalent feature.
I'd appreciate any advice.
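Below is a rough sketch of one direction I'm considering: AVAudioEngine plus AVAudioUnitSampler driven from a SwiftUI button. The SoundFont file name is hypothetical and assumes the app bundles its own instrument bank.

import SwiftUI
import AVFoundation
import AudioToolbox

// Sketch: a small sampler wrapped for SwiftUI. "Instruments.sf2" is a hypothetical
// bundled SoundFont; General MIDI program 0 is acoustic piano.
final class NotePlayer: ObservableObject {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        do {
            if let url = Bundle.main.url(forResource: "Instruments", withExtension: "sf2") {
                try sampler.loadSoundBankInstrument(at: url,
                                                    program: 0,
                                                    bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                    bankLSB: UInt8(kAUSampler_DefaultBankLSB))
            }
            try engine.start()
        } catch {
            print("Sampler setup failed: \(error)")
        }
    }

    func play(note: UInt8) {
        sampler.startNote(note, withVelocity: 100, onChannel: 0)
    }

    func stop(note: UInt8) {
        sampler.stopNote(note, onChannel: 0)
    }
}

struct KeyButton: View {
    @StateObject private var player = NotePlayer()

    var body: some View {
        Button("C4") { player.play(note: 60) } // 60 = middle C
    }
}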
Hi,
I'm testing AVMIDIPlayer in order to replace classes built on AVAudioEngine with callback functions that send MIDI events.
to test, I use an NSMutableData filled with:
the MIDI header
a track for time signature
a track containing a few midi events.
I then create an instance of the AVMIDIPlayer using the data
Everything works fine for some instruments (00 … 20, or 90), but not for other instruments (60, 70, …).
The MIDI header and the time-signature track are based on the MIDI.org sample,
https://midi.org/standard-midi-files-specification
RP-001_v1-0_Standard_MIDI_Files_Specification_96-1-4.pdf
the midi events are:
UInt8 trkEvents[] = {
    0x00, 0xC0, instrument,  // Tubular bell
    0x00, 0x90, 0x4C, 0xA0,  // Note 4C
    0x81, 0x40, 0x48, 0xB0,  // TS + Note 48
    0x00, 0xFF, 0x2F, 0x00}; // End
for (UInt8 i=0; i<3; i++) {
    printf("0x%X ", trkEvents[i]);
}
printf("\n");
[_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];
A template application is used to change the instrument in an NSTextField.
I was wondering if specifics are required for some instruments?
The interface header:
#import <AVFoundation/AVFoundation.h>
NS_ASSUME_NONNULL_BEGIN
@interface TestMIDIPlayer : NSObject
@property (retain) NSMutableData *midiTempData;
@property (retain) NSURL *midiTempURL;
@property (retain) AVMIDIPlayer *midiPlayer;
- (void)createTest:(UInt8)instrument;
@end
NS_ASSUME_NONNULL_END
The implementation:
#pragma mark -
typedef struct _MThd {
char magic[4]; // = "MThd"
UInt8 headerSize[4]; // 4 Bytes, MSB first. Always = 00 00 00 06
UInt8 format[2]; // 16 bit, MSB first. 0; 1; 2 Use 1
UInt8 trackCount[2]; // 16 bit, MSB first.
UInt8 division[2]; //
}MThd;
MThd MThdMake(void);
void MThdPrint(MThd *mthd) ;
typedef struct _MIDITrackHeader {
char magic[4]; // = "MTrk"
UInt8 trackLength[4]; // Ignore, because it is occasionally wrong.
} Track;
Track TrackMake(void);
void TrackPrint(Track *track) ;
#pragma mark - C Functions
MThd MThdMake(void) {
MThd mthd = {
"MThd",
{0, 0, 0, 6},
{0, 1},
{0, 0},
{0, 0}
};
MThdPrint(&mthd);
return mthd;
}
void MThdPrint(MThd *mthd) {
char *ptr = (char *)mthd;
for (int i=0;i<sizeof(MThd); i++, ptr++) {
printf("%X", *ptr);
}
printf("\n");
}
Track TrackMake(void) {
Track track = {
"MTrk",
{0, 0, 0, 0}
};
TrackPrint(&track);
return track;
}
void TrackPrint(Track *track) {
char *ptr = (char *)track;
for (int i=0;i<sizeof(Track); i++, ptr++) {
printf("%X", *ptr);
}
printf("\n");
}
@implementation TestMIDIPlayer
- (id)init {
self = [super init];
printf("%s %p\n", __FUNCTION__, self);
if (self) {
_midiTempData = nil;
_midiTempURL = [[NSURL alloc]initFileURLWithPath:@"midiTempUrl.mid"];
_midiPlayer = nil;
[self createTest:0x0E];
NSLog(@"_midiTempData:%@", _midiTempData);
}
return self;
}
- (void)dealloc {
[_midiTempData release];
[_midiTempURL release];
[_midiPlayer release];
[super dealloc];
}
- (void)createTest:(UInt8)instrument {
/* MIDI Header */
[_midiTempData release];
_midiTempData = nil;
_midiTempData = [[NSMutableData alloc]initWithCapacity:1024];
MThd mthd = MThdMake();
MThd *ptrMthd = &mthd;
ptrMthd->trackCount[1] = 2;
ptrMthd->division[1] = 0x60;
MThdPrint(ptrMthd);
[_midiTempData appendBytes:ptrMthd length:sizeof(MThd)];
/* Track Header
Time signature */
Track track = TrackMake();
Track *ptrTrack = &track;
ptrTrack->trackLength[3] = 0x14;
[_midiTempData appendBytes:ptrTrack length:sizeof(track)];
UInt8 trkEventsTS[]= {
0x00, 0xFF, 0x58, 0x04, 0x04, 0x04, 0x18, 0x08, // Time signature 4/4; 18; 08
0x00, 0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20, // tempo 0x7A120 = 500000
0x83, 0x00, 0xFF, 0x2F, 0x00 }; // End
[_midiTempData appendBytes:trkEventsTS length:sizeof(trkEventsTS)];
/* Track Header
Track events */
ptrTrack->trackLength[3] = 0x0F;
[_midiTempData appendBytes:ptrTrack length:sizeof(track)];
UInt8 trkEvents[] = {
0x00, 0xC0, instrument, // Tubular bell
0x00, 0x90, 0x4C, 0xA0, // Note 4C
0x81, 0x40, 0x48, 0xB0, // TS + Note 48
0x00, 0xFF, 0x2F, 0x00}; // End
for (UInt8 i=0; i<3; i++) {
printf("0x%X ", trkEvents[i]);
}
printf("\n");
[_midiTempData appendBytes:trkEvents length:sizeof(trkEvents)];
[_midiTempData writeToURL:_midiTempURL atomically:YES];
dispatch_async(dispatch_get_main_queue(), ^{
if (!_midiPlayer.isPlaying)
[self midiPlay];
});
}
- (void)midiPlay {
NSError *error = nil;
_midiPlayer = [[AVMIDIPlayer alloc]initWithData:_midiTempData soundBankURL:nil error:&error];
if (_midiPlayer) {
[_midiPlayer prepareToPlay];
[_midiPlayer play:^{
printf("Midi Player ended\n");
[_midiPlayer stop];
[_midiPlayer release];
_midiPlayer = nil;
}];
}
}
@end
Call from AppDelegate
- (IBAction)actionInstrument:(NSTextField*)sender {
[_testMidiplayer createTest:(UInt8)sender.intValue];
}
After investing more than a week into converting a bunch of Audio Unit projects into app + appex + framework, I now have them all loading correctly in-process in the demo host app that is part of Xcode's template.
However, Logic Pro adamantly refuses to load them in-process.
Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic?
The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
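For reference, this is how I verify in-process loading in my own test host (a Swift sketch; the component type and IDs are placeholders for my plug-in, not anything Logic-specific):

import AVFoundation
import AudioToolbox

let desc = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                     componentSubType: 0x64656d6f,      // placeholder subtype
                                     componentManufacturer: 0x44656d6f, // placeholder manufacturer
                                     componentFlags: 0,
                                     componentFlagsMask: 0)

// Request in-process loading explicitly; the host decides whether to honor it.
AVAudioUnit.instantiate(with: desc, options: [.loadInProcess]) { avAudioUnit, error in
    if let avAudioUnit {
        print("Loaded \(avAudioUnit.name), in-process: \(avAudioUnit.auAudioUnit.isLoadedInProcess)")
    } else {
        print("Instantiation failed: \(String(describing: error))")
    }
}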
Hi, I just released an app which is live. I have a strange issue: while the audio files in the app play fine on my device, some users are unable to hear them. One friend said it played yesterday but not today. Any idea why? The files are mp3; I see them in the Build Phases and in the project, obviously. Here's the audio view code, thank you!
import AVFoundation
struct MeditationView: View {
@State private var player: AVAudioPlayer?
@State private var isPlaying = false
@State private var selectedMeditation: String?
var isiPad = UIDevice.current.userInterfaceIdiom == .pad
let columns = [GridItem(.flexible()),GridItem(.flexible())]
let tracks = ["Intro":"intro.mp3",
"Peace" : "mysoundbath1.mp3",
"Serenity" : "mysoundbath2.mp3",
"Relax" : "mysoundbath3.mp3"]
var body: some View {
VStack{
VStack{
VStack{
Image("dhvani").resizable().aspectRatio(contentMode: .fit)
.frame(width: 120)
Text("Enter the world of Dhvani soundbath sessions, click lotus icon to play.")
.font(.custom("Times New Roman", size: 20))
.lineLimit(nil)
.multilineTextAlignment(.leading)
.fixedSize(horizontal: false, vertical: true)
.italic()
.foregroundStyle(Color.ashramGreen)
.padding()
}
LazyVGrid(columns:columns, spacing:10){
ForEach(tracks.keys.sorted(),id:\.self){ track in
Button {
self.playMeditation(named: tracks[track]!)
} label: {
Image("lotus")
.resizable()
.frame(width: 40,height: 40)
.background(Color.ashramGreen)
.cornerRadius(10)
}
Text(track)
.font(.custom("Times New Roman", size: 22))
.foregroundStyle(Color.ashramGreen)
.italic()
}
}
HStack(spacing:20) {
Button(action: { self.togglePlayPause() }) {
Image(systemName: isPlaying ? "playpause.fill" : "play.fill")
.resizable()
.frame(width: 20, height: 20)
.foregroundColor(Color.ashramGreen)
}
Button(action: {
self.stopMeditation()
}) {
Image(systemName: "stop.fill")
.resizable()
.frame(width: 20, height: 20)
.foregroundColor(Color.ashramGreen)
}
}
}.padding()
.background(Color.ashramBeige)
.cornerRadius(20)
Spacer()
//video play
VStack{
Text("Chant")
.font(.custom("Times New Roman", size: 24))
.foregroundStyle(Color.ashramGreen)
.padding(5)
WebView(urlString: "https://www.youtube.com/embed/ny3TqP9BxzE") .frame(height: isiPad ? 400 : 200)
.cornerRadius(10)
.padding()
Text("Courtesy Sri Ramanasramam").font(.footnote).italic()
}
}.background(Color.ashramBeige)
}
//View
func playMeditation(named name: String) {
if let url = Bundle.main.url(forResource: name, withExtension: nil) {
do {
player = try AVAudioPlayer(contentsOf: url)
player?.play()
isPlaying = true
} catch {
print("Error playing meditation")
}
}
}
func togglePlayPause() {
if let player = player {
if player.isPlaying {
player.pause()
isPlaying = false
} else {
player.play()
isPlaying = true
}
}
}
func stopMeditation() {
player?.stop()
isPlaying = false
}
}
#Preview {
MeditationView()
}
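One thing I notice (an assumption on my part, since the view never configures the audio session): AVAudioPlayer runs under the default .soloAmbient category, which is silenced by the ring/silent switch, so affected users might simply have the switch on. Something like this before playback would rule that out:

import AVFoundation

// Sketch: use the .playback category so audio isn't muted by the silent switch.
func configureAudioSession() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session configuration failed: \(error)")
    }
}

Calling configureAudioSession() once before playMeditation(named:) creates the player would be enough to test this theory.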
I'm developing a visionOS app. I want to know how to play spatial audio other than through RealityKit. Likewise, on iOS or macOS, how can I play spatial audio outside of RealityKit?
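In case it helps to make the question concrete, here is a minimal sketch of spatial audio with plain AVAudioEngine and no RealityKit; the positions and format are placeholders:

import AVFoundation

// An AVAudioEnvironmentNode spatializes any mono source connected to it.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(player)

// Mono source -> environment -> main mixer; the environment node needs a mono input to spatialize.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)
engine.connect(player, to: environment, format: monoFormat)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

// Position the source in 3D space relative to the listener.
player.renderingAlgorithm = .HRTFHQ
player.position = AVAudio3DPoint(x: 2, y: 0, z: -1)
environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)

// Then start the engine, schedule a buffer or file on the player, and call player.play().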
Hi,
I am looking for a good way to play sounds at a high frequency.
At the moment I am using AVAudioEngine: I create a couple of AVAudioPlayerNodes, and for each sound I need to play I create an AVAudioPCMBuffer.
When the app needs to play a sound, I get the matching AVAudioPCMBuffer, take the first available AVAudioPlayerNode, and feed the buffer to it.
The timing for a metronome app has to be very precise, because if it is off by about 16 ms the user can hear that it is not playing at the right interval. At low tempos this works without any problems, but at high tempos it gets worse.
Maybe someone has an idea how I can improve my method.
It's a plugin for Flutter.
import AVFoundation
class FastSoundPlayer {
private var audioPlayers: [SoundPlayer?] = []
private var sounds: [String: Sound] = [:]
private var engine = AVAudioEngine()
let session = AVAudioSession.sharedInstance()
init() {
do {
try session.setCategory(AVAudioSession.Category.playback, mode: AVAudioSession.Mode.default, options: [AVAudioSession.CategoryOptions.mixWithOthers])
try session.setActive(true)
createSoundPlayers(count: 20)
try engine.start()
} catch {
print("Error starting audio engine: \(error.localizedDescription)")
}
}
// Selector method to handle applicationDidBecomeActiveNotification
func applicationDidBecomeActive() {
// Reinitialize AVAudioEngine and reattach all nodes
do {
engine.reset()
objc_sync_enter(audioPlayers)
audioPlayers.removeAll()
createSoundPlayers(count: 20)
objc_sync_exit(audioPlayers)
try engine.start()
} catch {
print("Error starting audio engine: \(error.localizedDescription)")
}
}
func createSoundPlayers(count: Int) {
for _ in 0..<count {
let player = SoundPlayer()
engine.attach(player.player)
engine.connect(player.player, to: engine.mainMixerNode, format: nil)
audioPlayers.append(player)
}
}
func load(sound: Data, name: String) {
let sound = Sound(soundData: sound)
sounds[name] = sound
}
func play(name: String) {
if !engine.isRunning {
applicationDidBecomeActive()
}
guard let sound = sounds[name] else {
print("Sound not found")
return
}
if let player = getAvailablePlayer() {
player.play(sound: sound)
}
}
func getAvailablePlayer() -> SoundPlayer? {
for player in audioPlayers {
if !player!.isPlaying {
return player
}
}
return nil
}
}
class SoundPlayer {
let player = AVAudioPlayerNode()
var isPlaying = false
init() {
player.volume = 1.0
}
func play(sound: Sound) {
player.scheduleBuffer(sound.sound!, at: nil, options: .interrupts, completionCallbackType: .dataPlayedBack) { _ in
self.complete()
}
if (player.engine != nil && player.engine!.isRunning) {
player.play()
isPlaying = true
}
}
func complete() {
isPlaying = false
}
}
class Sound {
var sound: AVAudioPCMBuffer?
init(soundData: Data) {
do {
let temporaryURL = FileManager.default.temporaryDirectory.appendingPathComponent("tempSound.wav")
try soundData.write(to: temporaryURL)
// Create AVAudioFile from the temporary file URL
let audioFile = try AVAudioFile(forReading: temporaryURL)
// Use the file's own processing format; reading into an Int16 buffer would make audioFile.read(into:) throw a format mismatch
let format = audioFile.processingFormat
// Create AVAudioPCMBuffer
guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(audioFile.length)) else {
// Failed to create PCM buffer
self.sound = nil
return
}
// Read audio file into PCM buffer
try audioFile.read(into: pcmBuffer)
// Assign the created AVAudioPCMBuffer to the sound property
self.sound = pcmBuffer
} catch {
print("Error loading sound file: \(error.localizedDescription)")
self.sound = nil
}
}
}
Thanks!
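One direction I'm experimenting with (a sketch, not yet in the plugin): instead of scheduling each click "now", compute a sample-accurate AVAudioTime for every click so the interval does not depend on when the call happens:

import AVFoundation

// Sketch: schedule each metronome click at an explicit sample time instead of "now".
func scheduleClicks(player: AVAudioPlayerNode, buffer: AVAudioPCMBuffer, bpm: Double, count: Int) {
    let sampleRate = buffer.format.sampleRate
    let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)

    // Sample times are in the player's own timeline, which starts at 0 when play() is called,
    // so every click lands on an exact frame boundary regardless of when this code runs.
    for beat in 0..<count {
        let when = AVAudioTime(sampleTime: AVAudioFramePosition(beat) * framesPerBeat,
                               atRate: sampleRate)
        player.scheduleBuffer(buffer, at: when, options: [])
    }
    player.play()
}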
I'm trying to implement AirPlay in my app. I can successfully play back sound and trigger the AirPlay selector sheet. If the target device is a Bluetooth-only device I can connect with no problem and stream the audio to it, but if the target is an AirPlay-specific device like a HomePod or an Apple TV, when I select it I get a spinning icon indicating that it is trying to connect, and eventually it times out and stops without connecting.
I don't believe it is an AirPlay audio issue, because if I go to a different app (for example a podcast app), select my HomePod for output, and then switch back to my app, my audio streams correctly to the HomePod. On top of that, my icon changes color to indicate that it is connected via AirPlay, and it correctly shows that state. But I cannot then disconnect it using the AirPlay selector.
The issue appears to be on the AirPlay selection side, which I have spent several days attempting to troubleshoot, mostly using ChatGPT to suggest alternative code to work around the issue. Most of that has focused on the audio player section, but that doesn't really seem to be where the problem is.
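One thing I'm considering trying (an assumption on my part, not something I've confirmed fixes this): HomePod and Apple TV targets use AirPlay 2, which is supposed to be paired with the long-form audio route sharing policy:

import AVFoundation

// Sketch: opt the session into AirPlay 2 long-form audio routing.
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}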
I have a SwiftUI app - (https://youtu.be/VbAfUk_eYl0?si=JxUBh0Bpb-vc1E1U) - which I thought was almost ready for release - a manager for airdropped audio files from Logic Pro or other music creation applications. It uses AVAudioEngine and AVAudioPlayerNode to play audio, and the MediaPlayer API to integrate with car audio and similar, all of which works well.
It does not currently have an explicit CarPlay integration (and I'm slightly horrified at the amount of work that is going to require).
Yesterday I had the good or bad luck of getting a loaner car with CarPlay while mine is being repaired, and lo and behold, when connected to the vehicle via CarPlay there is no audio output in the vehicle at all. The Now Playing panel correctly shows the information my app provides about the currently playing song; the player node believes it is playing, and the AVAudioSession is configured as it should be. But there is no sound.
Obviously I cannot ship it in this state.
I've tried fiddling with the parameters the AVAudioSession is configured with, in case there was some parameter that was preventing audio output, to no avail - currently:
var options = AVAudioSession.CategoryOptions()
options.insert(.allowAirPlay)
options.insert(.allowBluetooth)
options.insert(.allowBluetoothA2DP)
try session.setCategory(.playback, mode: .default, options: options)
try? session.setPreferredIOBufferDuration(0.002) // ~96 samples at 44.1kHz
try? session.setPrefersNoInterruptionsFromSystemAlerts(true)
try? session.setPrefersInterruptionOnRouteDisconnect(false)
try session.setActive(true, options: [.notifyOthersOnDeactivation])
All diagnostics within the app show the player operating correctly - files are played and flushed; AVAudioPlayerNodeCompletionCallbacks are called when they should be. But the output is not audible in the vehicle.
I would much prefer to ship this app without full-blown CarPlay integration, but with working audio when connected via CarPlay, and work on full CarPlay integration for the next release.
Is there some secret handshake I am just missing to make this work?
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to monitor the audio in real time, but I can't work out how to select the input device; I only ever hear my built-in microphone.
So far I'm using AVCaptureAudioPreviewOutput to hear the audio in real time, but I think it has noticeable delay.
On iOS this is easy with AVAudioEngine, but on macOS, not so much...
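For reference, the approach I'm currently trying on macOS (a sketch; device ID discovery is omitted) is to point the engine's input node at a specific capture device via its underlying audio unit:

import AVFoundation
import CoreAudio
import AudioToolbox

// Sketch: set kAudioOutputUnitProperty_CurrentDevice on the input node's audio unit
// before installing a tap or starting the engine.
func selectInputDevice(_ deviceID: AudioDeviceID, for engine: AVAudioEngine) -> Bool {
    guard let audioUnit = engine.inputNode.audioUnit else { return false }
    var device = deviceID
    let status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &device,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    return status == noErr
}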
Hello,
I’m new here. I'm developing an iOS app and I’d like to know whether it is possible to detect if a phone call is being recorded by another app running in the background.
I’ve already reviewed the documentation for CallKit and AVAudioSession, but I couldn’t find anything related. My expectation was that iOS might provide some callback or API to indicate if a call is being recorded (third-party apps), but so far I haven’t found a way.
My questions are:
Does iOS expose any API to detect if a call is being recorded?
If not, is there any indirect method that complies with Apple's policies (e.g., microphone usage events) that can be relied upon?
Or is this something that iOS explicitly prevents for privacy reasons?
I'm looking for solutions that align with Apple's policies and would be accepted under the App Store Review Guidelines.
Thanks in advance for any guidance.
AVAudioSessionCategoryOptionAllowBluetooth is marked as deprecated in iOS 8 in the iOS 26 beta 5 SDK, even though this option was not deprecated in iOS 18.6. I think the annotation is a mistake and the deprecation actually happens in iOS 26. Am I right?
It seems that the substitute for this option is AVAudioSessionCategoryOptionAllowBluetoothHFP. The documentation does not make clear whether the behaviour is exactly the same or whether any difference should be expected. Has anyone used this option in iOS 26? Should I expect any difference from the current behaviour of AVAudioSessionCategoryOptionAllowBluetooth?
Thank you.
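In case it's useful, this is how I'm planning to branch for now (a sketch; it assumes the Swift name mirrors the new Objective-C constant AVAudioSessionCategoryOptionAllowBluetoothHFP, which I haven't confirmed):

import AVFoundation

func configureSession() throws {
    let options: AVAudioSession.CategoryOptions
    if #available(iOS 26.0, *) {
        // Assumed Swift spelling of the new constant; verify against the iOS 26 SDK.
        options = [.allowBluetoothHFP, .allowBluetoothA2DP]
    } else {
        options = [.allowBluetooth, .allowBluetoothA2DP]
    }
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: options)
}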