Hi,
I would like to be able to create a playlist from my app, or add songs to an existing playlist selected in the user's music library. How can I do that?
Is there an API in MusicKit to do that?
The Shazam app is doing something similar with the My Shazam Songs playlist...
Thanks
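Not an official answer, but for context: before the new MusicKit framework, playlist creation was exposed through the Media Player framework via MPMediaLibrary. A sketch under that assumption; the playlist name and the product ID are placeholder values:

```swift
import MediaPlayer

// Sketch: create (or fetch) an app-owned playlist in the user's
// library, then append a catalog song by its store product ID.
// "My App Mix" and the product ID are placeholder values.
func addSongToAppPlaylist() {
    let metadata = MPMediaPlaylistCreationMetadata(name: "My App Mix")
    metadata.descriptionText = "Songs added from my app"

    // Persist this UUID: passing the same one later returns the same playlist.
    let playlistUUID = UUID()
    MPMediaLibrary.default().getPlaylist(with: playlistUUID,
                                         creationMetadata: metadata) { playlist, error in
        guard let playlist = playlist else {
            print("Could not get playlist: \(String(describing: error))")
            return
        }
        playlist.addItem(withProductID: "1234567890") { error in
            if let error = error {
                print("Could not add song: \(error)")
            }
        }
    }
}
```

This path requires the NSAppleMusicUsageDescription Info.plist key and media library authorization; whether the newer MusicKit framework itself exposes an equivalent is exactly the open question here.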
Post not yet marked as solved
Would like to play with the example project shown in session wwdc21-10187.
Post not yet marked as solved
Hi,
My account was approved for the com.apple.developer.playable-content entitlement, but that entitlement is now deprecated and I want to switch to the new one, com.apple.developer.carplay-audio.
I'm having some problems making the transition. Do I need to submit a new request to Apple for the new entitlement?
Thanks.
Post not yet marked as solved
Hi,
I want to implement ShazamKit in my project.
But I have some problems.
I use AVCaptureSession to take photos in my app, and that prevents me from using ShazamKit.
I tried three different approaches:
1. Use an AVAudioEngine while my AVCaptureSession is running, but I didn't obtain any result from Shazam.
2. Use ShazamKit after stopping my AVCaptureSession, but this causes some problems and some crashes.
3. Use the buffer of my AVCaptureSession to capture audio directly, without using AVAudioEngine.
This is the code that I use with AVAudioEngine:
do {
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = self.audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)

    // Tap the microphone input and feed each buffer to ShazamKit.
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        try? self.signatureGenerator.append(buffer, at: nil)
        self.session.matchStreamingBuffer(buffer, at: nil)
    }

    self.audioEngine.prepare()
    try self.audioEngine.start()
} catch {
    print("Failed to start audio engine: \(error)")
}
I can choose between two ways to do this: use the AVCaptureSession output and pass it to ShazamKit, or use an AVAudioSession after stopping the AVCaptureSession.
So I have two questions:
Can I use a CMSampleBufferRef from AVCaptureSession buffer in a SHSession?
And if the answer is yes how?
How can I prevent this error if I want to use an AVAudioSession after I stopped my AVCaptureSession?
[aurioc] AURemoteIO.cpp:1117 failed: -10851 (enable 1, outf< 2 ch, 0 Hz, Float32, deinterleaved> inf< 2 ch, 0 Hz, Float32, deinterleaved>)
[avae] AVAEInternal.h:76 required condition is false: [AVAEGraphNode.mm:834:CreateRecordingTap: (IsFormatSampleRateAndChannelCountValid(format))]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'
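Not an official answer, but on the first question: SHSession has no CMSampleBuffer-based API, so the usual approach is converting the capture output's sample buffer into an AVAudioPCMBuffer first. A sketch, assuming the buffers come from an AVCaptureAudioDataOutput delegate (the helper name is hypothetical, and resampling may still be needed to match the formats ShazamKit accepts):

```swift
import AVFoundation

// Hypothetical helper: convert an audio CMSampleBuffer from
// AVCaptureAudioDataOutput into the AVAudioPCMBuffer that
// SHSession.matchStreamingBuffer(_:at:) accepts.
func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let description = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description),
          let format = AVAudioFormat(streamDescription: asbd) else { return nil }

    let frameCount = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return nil }
    buffer.frameLength = frameCount

    // Copy the PCM payload of the sample buffer into the AVAudioPCMBuffer.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer, at: 0, frameCount: Int32(frameCount),
        into: buffer.mutableAudioBufferList)
    return status == noErr ? buffer : nil
}
```

Each converted buffer could then be passed along as `session.matchStreamingBuffer(buffer, at: nil)`, which avoids running an AVAudioEngine tap alongside the capture session at all.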
Thanks
Post not yet marked as solved
Hi, I had my app developed and have found that when the iPhone locks the screen, it stops playing the track and doesn't recognise that a sound track is playing. Is there something my developer needs to add, or is it an iOS security issue? Would love any recommendations, please.
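Not a definitive diagnosis, but the usual cause is that the app lacks the audio background mode, so iOS suspends playback when the screen locks. A minimal sketch, assuming the "Audio, AirPlay, and Picture in Picture" background capability (UIBackgroundModes: audio) is enabled for the target:

```swift
import AVFoundation

// Sketch: with the UIBackgroundModes "audio" capability enabled,
// a .playback session keeps audio running after the screen locks.
func configurePlaybackSession() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}
```

Without both the capability and a .playback category, this is expected iOS behavior rather than a bug.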
Post not yet marked as solved
Hi,
I'm attempting to call AudioComponentFindNext() from an iOS application (built with JUCE) to get a list of all available plug-ins.
The issue is that the function only returns the generic system plug-ins and misses any third-party installed plug-ins.
This currently happens when calling from within another AUv3 plug-in, though I have also seen it from within a normal iOS app (run on an iPad Air 4); at the moment it is working fine from an iOS app.
I've tried setting microphone access and the inter-app audio capability, as suggested in similar forum posts, but it has not solved my problem.
Any advice would be very appreciated
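Not a confirmed fix, but one thing worth checking is whether the higher-level discovery API sees the third-party units. A sketch that lists registered effect components via AVAudioUnitComponentManager (the zeroed fields act as wildcards):

```swift
import AVFoundation
import AudioToolbox

// Sketch: AVAudioUnitComponentManager searches the component registry
// and also surfaces third-party Audio Units registered on the device.
func listInstalledEffects() {
    let description = AudioComponentDescription(
        componentType: kAudioUnitType_Effect,
        componentSubType: 0,          // 0 = match any subtype
        componentManufacturer: 0,     // 0 = match any manufacturer
        componentFlags: 0,
        componentFlagsMask: 0)

    let components = AVAudioUnitComponentManager.shared().components(matching: description)
    for component in components {
        print(component.manufacturerName, component.name)
    }
}
```

If this list is also missing the third-party units from inside the AUv3 extension, that would point at extension sandboxing rather than at the JUCE wrapper.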
Thanks
Hello folks,
Now I'm trying to implement a continuous loop playback function with AVAudioPlayer. After implementing the sample code below, I checked memory usage: over a one-hour loop, memory usage kept increasing from start to end.
My app is expected to run for 10 hours or more, so I think I need to solve this memory leak.
Does someone know how to avoid this kind of memory leak?
Is my implementation (the code design, the @escaping async function, etc.) not a good or typical way to do continuous audio loop playback, or am I using AVAudioPlayer incorrectly?
If someone knows, it will save my life.
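The sample code didn't come through with the post, but a common cause of steadily growing memory in loop playback is creating a new player, or scheduling a new completion closure, for every iteration. As a sketch, a single AVAudioPlayer with numberOfLoops = -1 loops indefinitely without per-cycle allocation (the file name is a placeholder):

```swift
import AVFoundation

// Sketch: one player instance looping forever; nothing is
// re-allocated per loop iteration. "loop.m4a" is a placeholder.
final class LoopPlayer {
    private var player: AVAudioPlayer?

    func start() {
        guard let url = Bundle.main.url(forResource: "loop", withExtension: "m4a") else { return }
        do {
            let player = try AVAudioPlayer(contentsOf: url)
            player.numberOfLoops = -1  // a negative value means loop indefinitely
            player.prepareToPlay()
            player.play()
            self.player = player       // keep a strong reference while playing
        } catch {
            print("Failed to start loop: \(error)")
        }
    }

    func stop() {
        player?.stop()
        player = nil
    }
}
```

If the actual code restarts playback from a didFinishPlaying delegate callback or an @escaping closure, checking for a retain cycle there with the Xcode memory graph would be the next step.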
Best regards,
Post not yet marked as solved
Hello,
I made a post earlier comparing the performance of the latest TensorFlow releases with Apple silicon support: https://developer.apple.com/forums/thread/687654.
In my testing the GitHub alpha greatly outperforms the current release.
I provided an installation guide for the GitHub alpha in that post.
sounddevice: https://pypi.org/project/sounddevice/
My goal is to get the Python library sounddevice working in either of the virtual environments created by the two different TensorFlow releases for Apple silicon; preferably the GitHub alpha, since it's much faster.
The two releases are the current release (https://developer.apple.com/metal/tensorflow-plugin/), which uses a conda virtual environment, and the GitHub alpha release, which can be set up using the installer script.
GitHub alpha venv errors
Installation:
First I make an environment following the installation guide I provided in my first post (linked above).
I activate the virtual environment
I install sounddevice using the following command:
$ python3 -m pip install sounddevice
When I try to import sounddevice I get the following errors:
Traceback (most recent call last):
File "/Users/sadedwar/code/fun/ga-synth/venv/lib/python3.8/site-packages/sounddevice.py", line 72, in <module>
_lib = _ffi.dlopen(_libname)
OSError: cannot load library '/usr/local/lib/libportaudio.dylib': dlopen(/usr/local/lib/libportaudio.dylib, 2): no suitable image found. Did find:
/usr/local/lib/libportaudio.dylib: mach-o, but wrong architecture
/usr/local/Cellar/portaudio/19.7.0/lib/libportaudio.2.dylib: mach-o, but wrong architecture
tensorflow-metal PluggableDevice errors
Installation:
exactly as described in: https://developer.apple.com/metal/tensorflow-plugin/
I install python-sounddevice from: https://anaconda.org/conda-forge/python-sounddevice
importing works fine
When I try to run the following code:
myrecording = sd.rec(int(duration * samplerate), samplerate=samplerate, channels=channels)
it yields this error:
Traceback (most recent call last):
File "/Users/sadedwar/code/fun/ga-synth/makeDatasets.py", line 41, in <module>
render_dataset(make_simple_dataset(100))
File "/Users/sadedwar/code/fun/ga-synth/makeDatasets.py", line 37, in render_dataset
data, samplerate = audioRecorder.play_and_rec()
File "/Users/sadedwar/code/fun/ga-synth/audioRecorder.py", line 29, in play_and_rec
recording, samplerate = rec_mono_16bit_8kHz(duration=0.1)
File "/Users/sadedwar/code/fun/ga-synth/audioRecorder.py", line 17, in rec_mono_16bit_8kHz
myrecording = sd.rec(int(duration * samplerate), samplerate=samplerate, channels=channels)
File "/Users/sadedwar/miniforge3/envs/machinelearning/lib/python3.9/site-packages/sounddevice.py", line 274, in rec
ctx.start_stream(InputStream, samplerate, ctx.input_channels,
File "/Users/sadedwar/miniforge3/envs/machinelearning/lib/python3.9/site-packages/sounddevice.py", line 2573, in start_stream
self.stream = StreamClass(samplerate=samplerate,
File "/Users/sadedwar/miniforge3/envs/machinelearning/lib/python3.9/site-packages/sounddevice.py", line 1415, in __init__
_StreamBase.__init__(self, kind='input', wrap_callback='array',
File "/Users/sadedwar/miniforge3/envs/machinelearning/lib/python3.9/site-packages/sounddevice.py", line 836, in __init__
def callback_ptr(iptr, optr, frames, time, status, _):
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks
Any help is appreciated, thank you.
Post not yet marked as solved
Hey guys,
I'm working on an App that plays Apple Music tracks. Therefore I'm using
MPMusicPlayerController.applicationMusicPlayer
everything is working great so far - including setting the
musicPlayer.currentPlaybackRate
which is kind of important for my use case.
Now I'm trying to set up the lock screen controls for my app correctly, especially because using the default ones resets the currentPlaybackRate to 1.0 after play/pausing. I also have to sync my internal play state when "pause" is pressed on the lock screen.
Here is what I wrote:
class AudioPlayerViewModel {
    ...
    private let commandCenter = MPRemoteCommandCenter.shared()

    init(album: LibraryAlbum? = nil) {
        ...
        self.setupRemoteCommandCenter()
    }

    ...

    func setupRemoteCommandCenter() {
        debugPrint("setupRemoteCommandCenter()")

        commandCenter.previousTrackCommand.isEnabled = false
        commandCenter.previousTrackCommand.addTarget { event in
            debugPrint("remote previousTrackCommand")
            self.previousTrack()
            return .success
        }

        commandCenter.nextTrackCommand.isEnabled = false
        commandCenter.nextTrackCommand.addTarget { event in
            debugPrint("remote nextTrackCommand")
            self.nextTrack()
            return .success
        }

        commandCenter.pauseCommand.isEnabled = false
        commandCenter.pauseCommand.addTarget { event in
            debugPrint("remote pauseCommand")
            self.isPlaying = false
            self.pausePlayback()
            return .success
        }

        commandCenter.playCommand.isEnabled = false
        commandCenter.playCommand.addTarget { (event) -> MPRemoteCommandHandlerStatus in
            debugPrint("remote playCommand")
            self.isPlaying = true
            self.continuePlayback()
            return .success
        }
    }
}
I call setupRemoteCommandCenter() once(!) before playing anything in MPMusicPlayerController.applicationMusicPlayer.
My problem is that these handlers never get called: the lock screen controls always remain the default ones. My code has no effect, but it also throws no errors.
I've found several posts about this, and the Apple documentation (https://developer.apple.com/documentation/mediaplayer/handling_external_player_events_notifications) states that this should be possible.
What am I doing wrong?
Any help would be greatly appreciated.
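Not an authoritative answer, but two things are worth checking. First, every command in the snippet above is set to isEnabled = false, and a disabled command does not deliver events to its targets. Second, MPMusicPlayerController.applicationMusicPlayer manages the Now Playing controls itself, which can override custom handlers regardless. A minimal sketch with the commands enabled:

```swift
import MediaPlayer

// Sketch: commands must be enabled for their targets to fire.
func setupRemoteCommands(play: @escaping () -> Void,
                         pause: @escaping () -> Void) {
    let center = MPRemoteCommandCenter.shared()

    center.playCommand.isEnabled = true
    center.playCommand.addTarget { _ in
        play()
        return .success
    }

    center.pauseCommand.isEnabled = true
    center.pauseCommand.addTarget { _ in
        pause()
        return .success
    }
}
```

If the handlers still never fire with the commands enabled, the applicationMusicPlayer behavior is the more likely culprit.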
Post not yet marked as solved
Where and how would I start to build a DJ app?
How can I find out what the problem is?
Every time I start audio, listen to it with the iPad/iPhone display turned off, and then wake the display after 10-15 minutes, the app crashes.
Here are the first lines of the crash report:
Hardware Model: iPad8,12
Process: VOH-App [16336]
Path: /private/var/containers/Bundle/Application/5B2CF582-D108-4AA2-B30A-81BA510B7FB6/VOH-App.app/VOH-App
Identifier: com.voiceofhope.VOH
Version: 7 (1.0)
Code Type: ARM-64 (Native)
Role: Non UI
Parent Process: launchd [1]
Coalition: com.voiceofhope.VOH [740]
Date/Time: 2021-08-18 22:51:24.0770 +0200
Launch Time: 2021-08-18 22:36:50.4081 +0200
OS Version: iPhone OS 14.7.1 (18G82)
Release Type: User
Baseband Version: 2.05.01
Report Version: 104
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_PROTECTION_FAILURE at 0x000000016d2dffb0
VM Region Info: 0x16d2dffb0 is in 0x16d2dc000-0x16d2e0000; bytes after start: 16304 bytes before end: 79
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
CG raster data 11cad0000-11d814000 [ 13.3M] r--/r-- SM=COW
GAP OF 0x4fac8000 BYTES
---> STACK GUARD 16d2dc000-16d2e0000 [ 16K] ---/rwx SM=NUL ... for thread 0
Stack 16d2e0000-16d3dc000 [ 1008K] rw-/rwx SM=PRV thread 0
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [16336]
Triggered by Thread: 0
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libswiftCore.dylib 0x00000001a8028360 swift::MetadataCacheKey::operator==+ 3773280 (swift::MetadataCacheKey) const + 4
1 libswiftCore.dylib 0x00000001a801ab8c _swift_getGenericMetadata+ 3718028 (swift::MetadataRequest, void const* const*, swift::TargetTypeContextDescriptor<swift::InProcess> const*) + 304
2 libswiftCore.dylib 0x00000001a7ffbd00 __swift_instantiateCanonicalPrespecializedGenericMetadata + 36
Here is the full crash report:
VOH-App 16.08.21, 20-22.crash
Post not yet marked as solved
Hi,
How can I pause/play the now playing song with the MediaRemote framework?
Thanks!
Post not yet marked as solved
I use AVSpeechSynthesizer to pronounce some text in German. Sometimes it works just fine, and sometimes it doesn't, for some reason unknown to me. There is no error, because the speak() method doesn't throw; the only thing I am able to observe is the following message logged in the console:
_BeginSpeaking: couldn't begin playback
I tried to find an API in AVSpeechSynthesizerDelegate to register a callback for when an error occurs, but I have found none.
The closest match was this (but it appears to be available only on macOS, not iOS):
https://developer.apple.com/documentation/appkit/nsspeechsynthesizerdelegate/1448407-speechsynthesizer?changes=_10
Below you can find how I initialize and use the speech synthesizer in my app:
class Speaker: NSObject, AVSpeechSynthesizerDelegate {
    class func sharedInstance() -> Speaker {
        struct Singleton {
            static var sharedInstance = Speaker()
        }
        return Singleton.sharedInstance
    }

    let audioSession = AVAudioSession.sharedInstance()
    let synth = AVSpeechSynthesizer()

    override init() {
        super.init()
        synth.delegate = self
    }

    func initializeAudioSession() {
        do {
            try audioSession.setCategory(.playback, mode: .spokenAudio, options: .duckOthers)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
            // Errors are silently ignored here.
        }
    }

    func speak(text: String, language: String = "de-DE") {
        guard !self.synth.isSpeaking else { return }
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == language }.first!
        utterance.voice = voice
        self.synth.speak(utterance)
    }
}
The audio session initialization is run just once, during app startup.
Afterwards, speech is synthesized by running the following code:
Speaker.sharedInstance().speak(text: "Lederhosen")
The problem is that I have no way of knowing whether the speech synthesis succeeded: the UI shows the "speaking" state, but nothing is actually being spoken.
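There is indeed no error callback on iOS, but the delegate does report lifecycle events, which at least makes the silent-failure case observable: if didStart never arrives after speak(_:), playback failed. A sketch of those callbacks in a standalone delegate:

```swift
import AVFoundation

// Sketch: the iOS delegate has no error hook, but didStart /
// didFinish / didCancel make a silent failure detectable.
final class SpeechObserver: NSObject, AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didStart utterance: AVSpeechUtterance) {
        print("started: \(utterance.speechString)")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        print("finished: \(utterance.speechString)")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didCancel utterance: AVSpeechUtterance) {
        print("cancelled: \(utterance.speechString)")
    }
}
```

Starting a timeout when speak(_:) is called and cancelling it in didStart would turn the missing callback into an explicit failure signal for the UI.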
Post not yet marked as solved
I'm using the MacBook Pro (2020 series).
https://developer.dolby.com/platforms/apple/macos/overview/
The above page says that Dolby Atmos is supported on the built-in speakers, but I don't know how to play it.
I can't find any settings for Dolby Atmos playback on the MacBook Pro, so how can I play Dolby Atmos from a macOS application written by myself?
Execution environment:
MacBook Pro (2020) Big Sur 11.5.2
CPU: Apple M1
Xcode: 12.5.1
Post not yet marked as solved
Hey,
I am trying to figure out how I can display the currently playing sources of audio in my Xcode project.
I believe this became possible in the new Big Sur update thanks to Mac Catalyst.
How can I do this on the Mac?
I'm new to this; can someone guide me, please?
Post not yet marked as solved
Good day community,
For more than half a year we have been facing a crash with the following call stack:
Crashed: AVAudioSession Notify Thread
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
0   libEmbeddedSystemAUs.dylib     InterruptionListener(void*, unsigned int, unsigned int, void const*)
1   libEmbeddedSystemAUs.dylib     InterruptionListener(void*, unsigned int, unsigned int, void const*)
2   AudioToolbox                   AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3   AudioToolbox                   HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4   AudioToolbox                   ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5   AudioToolbox                   ASCallbackReceiver_AudioSessionPingMessage + 632
6   AudioToolbox                   _XAudioSessionPingMessage + 44
7   libAudioToolboxUtility.dylib   mshMIGPerform + 264
8   CoreFoundation                 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9   CoreFoundation                 __CFRunLoopDoSource1 + 444
10  CoreFoundation                 __CFRunLoopRun + 1888
11  CoreFoundation                 CFRunLoopRunSpecific + 424
12  AVFAudio                       GenericRunLoopThread::Entry(void*) + 156
13  AVFAudio                       CAPThread::Entry(CAPThread*) + 204
14  libsystem_pthread.dylib        _pthread_start + 156
15  libsystem_pthread.dylib        thread_start + 8
We use the Wwise audio framework as our audio playback API. We reported the problem to Audiokinetic's support, but it seems that the problem is not on their side.
We also used the FMOD sound engine earlier and had the same issue.
At this time we see around 100 crash events every day, which makes us upset. It looks like it started with iOS 13.
My main problem is that I don't communicate with the AudioToolbox or AVFAudio APIs directly, but use third-party sound engines instead.
I believe I am not the only one who has faced this problem.
There is also a discussion at https://forum.unity.com/threads/ios-12-crash-audiotoolbox.719675/
The last message deserves special attention:
https://zhuanlan.zhihu.com/p/370791950
where Jeffrey Zhuang did some research. This might be helpful for Apple's support team.
Any help is highly appreciated.
Best regards,
Sergey.
Post not yet marked as solved
(original question on stack overflow)
Safari requires that a user gesture occurs before the playing of any audio.
However, the user's response to getUserMedia does not appear to count as a user gesture. Or perhaps I have that wrong, and there is some way to trigger playback?
This question ("Why can't JavaScript .play() audio files on iPhone Safari?") details the many attempts to work around the need for a user gesture, but it seems like Apple has closed most of the loopholes. For whatever reason, Safari does not consider acceptance of the iOS camera/mic usage dialog to be a user gesture, and there is no way to make camera capture count as one.
Is there something I'm missing? Is it impossible to play an audio file after capturing the camera? Or is there some way to respond to the camera being captured by playing an audio file?
Post not yet marked as solved
I have my first app ready and crash free (I think!) using AudioKit. While coding it I used the develop branch. I assume I should submit it with the main-branch packages?
Trouble is, I updated my iPad to iOS 15 (yesterday), so I then had to move to Xcode 13 and ended up having a lot of broken AudioKit code with the main branch, as well as a couple of issues with the develop branch, which I managed to fix.
This is my first app submission, so I'd like to get it right - excuse my newbie idiocy.
It seems like it may have been a bad idea to move to iOS 15 and Xcode 13 right now. Should I go back to 12?
The main question, though, is: which third-party framework branches should be used in a final app release?
Post not yet marked as solved
Something broke in iOS 15 in my app; the same code works fine with iOS 14.8 and below. The actual issue: when I play audio in my app, then go to the notification bar, pause the audio, and then play it again from the notification bar itself, the same audio plays twice. One copy resumes from where I paused it, and the other plays the same audio from the beginning.
When the issue happens, these are the logs I am getting:
Ignoring setPlaybackState because application does not contain entitlement com.apple.mediaremote.set-playback-state for platform
2021-09-24 21:40:06.597469+0530 BWW[2898:818107] [rr] Response: updateClientProperties<A4F2E21E-9D79-4FFA-9B49-9F85214107FD> returned with error <Error Domain=kMRMediaRemoteFrameworkErrorDomain Code=29 “Could not find the specified now playing player” UserInfo={NSLocalizedDescription=Could not find the specified now playing player}> for origin-iPhone-1280262988/client-com.iconicsolutions.xstream-2898/player-(null) in 0.0078 seconds
I've been stuck with this issue for two days. I have tried everything I can think of, but I can't figure out why it only happens on iOS 15.
Any help will be greatly appreciated.
Hoping for guidance on how to prevent my app from stopping/pausing music playing from the Apple Music app. I would prefer that users can choose to listen to their own music while playing the game.
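For what it's worth, whether other apps' audio keeps playing is governed by the app's AVAudioSession category. A minimal sketch, assuming the game's own sounds don't need to be exclusive:

```swift
import AVFoundation

// Sketch: the .ambient category mixes with (and never interrupts)
// other audio such as Apple Music; alternatively, .playback can be
// combined with the .mixWithOthers option.
func allowBackgroundMusic() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.ambient)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session configuration failed: \(error)")
    }
}
```

The default .soloAmbient category silences other audio when the session activates, which matches the behavior described here.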