I was wondering if there are any example download projects of the PHASE audio framework? I was watching the WWDC 2021 video but there was no example code to download. The examples within the video were pretty verbose -- do not want to freeze a frame and type all that by hand.
I am attempting to replace some old OpenAL code from a few years ago with an alternate solution. All the OpenAL code shows deprecation messages when I build in Xcode.
The generated header PHASE documentation is kind of sparse and somewhat boiler plate with no examples.
Thanks in advance.
Post not yet marked as solved
Facing strange issue in two devices, iPhone 6s and 12 mini. Not getting audio from the media when ringer is off, while playing media. Is this device specific issue, any settings issue, OS issue or ultimately app issue. But this works fine with you tube and other media apps.
Post not yet marked as solved
When I use AVFoundation to record video on my App, the video can only save both video and audio less than 15seconds, if the video is more than 15seconds, the audio is missing.
My iOS is 15 and my Xcode is 13.0
Post not yet marked as solved
We have a syndicated radioshow goin on 21 years here in Reno, NV. Wondering if we could make an app to simulcast the show online? The show is on an FM radio station here. What documentation would we have to provide to prove we are legit? Thanks so much in advance, people always ask about hearing the show outside of the FM radio range & an app would be very cool.
Post not yet marked as solved
Hi all,
I have an app that is playing music from http live streams using AVPlayer. Have been trying to figure out how to use ShazamKit to recognise the music playing but I just can't figure out how to do it :-( Works well with local files and microphone recordings, but how do I get the data from a stream that is currently playing??? Feels like I tried everything...
Have tried to install an MTAudioProcessingTap but it doesn't seem to work on streaming assets even though I can get hold of the proper AVAssetTrack containing the audio. No callback with data are received? Bug?
I can open the streaming url and just save the bytes to disk and that's fine, but I'm not in sync with what is playing in AVPlayerItem so the recognition isn't working with the same audio data as the user is currently hearing. Hmmm.
Any suggestions and ideas are welcome. It would be such a nice feature for my app so I'm really looking forward to solving this.
Thx in advance / Jörgen, Bitfield
Post not yet marked as solved
HI
I'm a composer who has used Logic since it ran on an Atari, and was delighted by the update of Logic that includes Dolby mixing particularly as I have been working with sound spatialisation for many years - mostly live, in concerts of music I did at the Royal College of Music in London and around the world - classics of electronic music like Stockhausen and Jonathan Harvey and generally using Max/Ircam software which doesn't always actually work(!).
Anyway, I want to create ambient type music which rotates the sounds around the listener, and decoding to binaural and listening on Airpods sounds very good. But. If the head tracking could be employed to keep the image of this 3d soundcape fixed I think (know) the illusion would be so much more pronounced. So is there anyway that this can be incorporated into the listening experience (from the Music app on your phone) - presumably extra data in the file? I'm really not an engineer in that way.
Now even more fiddly, and probably a pipe dream, - I'm using the Leslie Cabinet plugin (native to Logic) to create a very lovely sound by spinning the sound of the Tanpura (The Indian Classical stringed drone instrument).
What could potentially be mind bending is if the output of the Lesile wasn't just stereo, but actually surround, and that this sound could similarly whirl around the head of the listener, MAINTAINING an fixed reference postiton (is that what I mean?) as the listener moves their head - so the surround leslie output could be directly "inserted" into the space and the Atmos system would, so to speak, know where the rotating drum of the Leslie is at all times.
Sometimes though I'm using quite fast rotations - hard to know what the speed is as the knob on the Leslie seems to use arbitrary units (ahem, something more scientific would help!), but max speed sounds like about 30Hz - almost audio itself which creates extaordinary effects.
At the moment I'm doing a rough fake by automating a surround panner, and I even tried mapping in a rotary controller, but actually that creates circles within circles, and it can't match the actual experience you might get in a performance of something like Harvey's 4th String Quartet where you're surrounded by actual speakers and the quatrtet is whizzing around you almost sounding like a cloud of bees (last movement -amazing).
Thing is we're close, and the listening quality of your products here (the Leslie, the Binaural render and the Airpods themselves) really is exceptional - so much better than stuff I (or other people) have made in say Max or Ableton Live. I'm very impressed so far but I feel we could push it to "the next level" as you Americans say.
I hope that explains the sort of thing I would like to work with? I'm sure it's very niche, and feel free to tell me to piss off, lol, but your ads always claim that you're really into making cool stuff. This is cool (says the 55yo balding guy).
I really think this way of listening with Aipods is going to become massive - I'm no gamer, but the implications for the immersive sound on those (and for watching films) is huge too (I'm sure you have someone working on it) I'm just a composer writing profoundly uncommercial, rather poetic, classical electronic music ;).
I've attached a rough BInaural mix of this thing I'm working on with the Tanpuras, Temple bells and some improvising musicians i know - once you hear the sound I think it will become much more clear what I'm talking about - and hopefully you like the music and not just effect! Very unfinished, and very hot off the press, so to speak.
I've also added a screen grab of my current workaround in Logic and a cool picture of Stockhausen making it work in !959 (!) before we ever dreamed of having tiny computers in our ears. If you can't help I may have to actually build a version of that spinning table, which would be a pain and expensive - I've spent more than enough on your products over the years. ;)
https://www.dropbox.com/s/uh9a4nx7psiv9qb/Tanpura%20Extended%20with%20Hannah%20Dolby%20Atmos%20Version%20%28A%29.mp3?dl=0
Ok it won't let me submit the pic, ugh...
Thanks for taking the time, let me know what you think, cheers,
Michael Oliva
Post not yet marked as solved
Hello!
There are a few similar open-source projects that implement support for creating macOS M1 guest Virtual Machine on macOS M1 Host, for example:
https://github.com/jspahrsummers/Microverse
https://github.com/KhaosT/MacVM
But both of them share a common problem: any audio from guest VM (system sounds or youtube videos in Safari or Firefox) is played late on host by approximately 0.3 seconds.
Is there any way how we can remove this latency, so it is possible to hear real-time audio from guest VM?
Regards, Eugene.
Post not yet marked as solved
I'm trying to play an audio content built from NSData inside a library (.a). It works properly when my code is inside an app. But it is not working when in a library, I get no error and no sound playing.
NSError * errorAudio = nil;
NSError * errorFile;
// Clear all cache
NSArray* tmpDirectory = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:NSTemporaryDirectory() error:NULL];
for (NSString *file in tmpDirectory) {
[[NSFileManager defaultManager] removeItemAtPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), file] error:NULL];
}
// Set temporary directory and temporary file
NSURL * tmpDirURL = [NSURL fileURLWithPath:NSTemporaryDirectory() isDirectory:YES];
NSURL * soundFileURL = [[tmpDirURL URLByAppendingPathComponent:@"temp"] URLByAppendingPathExtension:@"wav"];
[[NSFileManager defaultManager] createDirectoryAtURL:tmpDirURL withIntermediateDirectories:NO attributes:nil error:&errorFile];
// Write NSData to temporary file
NSString *path= [soundFileURL path];
[audioToPlay writeToFile:path options:NSDataWritingAtomic error:&errorFile];
if (errorFile) {
// Error while writing NSData
} else {
// Init audio player
self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:&errorAudio];
if (errorAudio) {
// Audio player could not be initialized
} else {
// Audio player was initialized correctly
[audioPlayer prepareToPlay];
[audioPlayer stop];
[audioPlayer setCurrentTime:0];
[audioPlayer play];
}
}
I don't check errorFile intros piece of code, but when debugging I can see that value is nil.
My header file
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@property(nonatomic, strong) AVAudioPlayer * audioPlayer;
My m file
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@synthesize audioPlayer;
I've been checking for dozens of posts but cannot find any solution, it always works properly in an app, but not in a library. Any help would be greatly appreciated.
I'm trying to construct what I would consider a tracklist of an Album from MusicKit for Swift, but the steps I've had to take don't feel right to me, so I feel like I am missing something. The thing is that I don't just want a bag of tracks, I want their order (trackNumber) AND what disc they're on (discNumber), if more than one disc.
At the moment, I can get to the tracks for the album with album.with([.tracks]) to get a (MusicItemCollection<Track>), but each Track DOES NOT have a discNumber. Which means, in multi disc Albums all the tracks are munged together.
To get to the discNumber's, I've had to get an equivalent Song using the same ID, which DOES contain the discNumber, as well as the trackNumber. But this collection is not connected to the Album strongly, so I've had to create a containing struct to hold both the Album (which contains Track's) and some separate Song's, making my data model slightly more complex.
(Note that you don't appear to be able to do album.with([.songs]) to skip getting to a Song via a Track.)
I can't get my head around this structure - what is the difference between a Track and a Song?
I would think that the difference is that a Track is a specific representation on a specific Album, and a Song is a more general representation that might be shared amongst more Album's (like compilations). But therefore, surely discNumber should actually be on the Track as it is specific to the Album, and arguably Song shouldn't even have trackNumber of it's own?
In any case, can anyone help with what is the quickest way to get get the trackNumber's and discNumbers' of each actual track on an Album (starting with an MusicItemID) please?
var songs = [Song]()
let detailedAlbum = try await album.with([.tracks])
if let tracks = detailedAlbum.tracks {
for track in tracks {
let resourceRequest = MusicCatalogResourceRequest<Song>(matching: \.id, equalTo: track.id)
async let resourceResponse = resourceRequest.response()
guard let song = try await resourceResponse.items.first else { break }
songs.append(song)
}
}
Post not yet marked as solved
This issue has a blocking impact on our ability to serve our product on any iOS devices (since Web Audio APIs are not supported on any other browsers than Safari on iOS) and Safari browser on desktop and iOS in general. And one of our customer is currently heavily impacted because of this limitation in Safari. Currently Safari built on WebKit has a limitation that it cannot provide access to raw audio data via AudioContext for HLS playback, which works on mp4 files. This is supported by EVERY OTHER MAJOR BROWSER EXCEPT SAFARI, which is concerning because we will need to force users to not use our application on safari desktop, and we simply CANNOT SERVE ANY IPHONE AND IPAD USERS which is a BLOCKER for us given that more than half of our users use iOS based devices. And of course this is clearly a feature that should’ve been in place already in Safari, which is currently lagging behind in comparison to other browsers. The W3C specification already supports this and all major browsers have already implemented and supported HLS streams to be used with AudioContext.
We’d like to re-iterate the importance and urgency of this (https://bugs.webkit.org/show_bug.cgi?id=231656) for us, and this has been raised multiple times by other developers as well, so certainly this will help thousands of other Web developers to bring HLS based applications to life on Safari and iOS ecosystem.
Can we please get the visibility on what is going to be the plan and timelines for HLS support with AudioContext in Safari? Critical part of our business and our customer’s products depend on this support in Safari.
We're using new webkitAudioContext() in Safari 15.0 on MacBook and iOS Safari on iPhone and iPad to create AudioContext instance, and we're creating ScriptProcessorNode and attaching it to the HLS/m3u8 source create using audioContext. createMediaElementSource(). The onaudioprocess callback gets called with the audio data, but no data is processed and instead we get 0’s.
If you also connect Analyser node to the same audio source create using audioContext. createMediaElementSource(), analyser.getByteTimeDomainData(dataArray) populates no data in the data but onaudioprocess on the ScriptProcessorNode on the same source
What has been tried:
We confirmed that the stream being used is the only stream in the tab and createMediaElementSource() was only called once to get the stream.
We confirmed that if the stream source is MP4/MP3 it works with no issues and data is received in onaudioprocess, but when modifing the source to HLS/m3u8 it does not work
We also tried using MediaRecorder with HLS/m3u8 as the stream source but didn’t get any events or data
We also tried to create two AudioContext’s, so the first AudioContext will be the source passing the createMediaElementSource as the destination to the other Audio Context and then pass it to ScriptProcessorNode, but Safari does not allow more than one output.
Currently none of the scenarios we tried works and this is a major blocker to us and for our customers.
Code sample used to create the ScriptProcessorNode:
const AudioContext = window.AudioContext || window.webkitAudioContext;
audioContext = new AudioContext();
// Create a MediaElementAudioSourceNode
// Feed the HTML Video Element 'VideoElement' into it
const audioSource = audioContext.createMediaElementSource(VideoElement);
const processor = audioContext.createScriptProcessor(2048, 1, 1);
processor.connect(audioContext.destination);
processor.onaudioprocess = (e) => {
// Does not get called when connected to external microphone
// Gets called when using internal MacBook microphone
console.log('print audio buffer', e);
}
The exact same behavior is also observed on iOS Safari on iPhone and iPad.
We are asking for your help on this matter ASAP.
Thank you!
Post not yet marked as solved
When I use PHASE on my iPad, the test app is in landscape mode left or right only. But it seems like phase is in portrait mode. That is, 90 degrees off with a normal pointed at me. Is this a case where I need to use the world transform and detect view orientation notifications? Or does phase automatically handle it? I notice in simulator when I rotate the app, the speakers on the mac pro are always fixed which is what I was expecting. Or maybe its my imagination ... sounds like portrait on my device but I'm in landscape. I do have supported interface orientations set in my plist. Actually, its kind of annoying having the speakers on one side of the iPad.
Post not yet marked as solved
Hi,
I'm able to play songs from the Apple Music catalog (using the catalogId) by adding an item to the QueueProviderBuilder as shown below
queueProviderBuilder.items(MediaItemType.SONG, "1590036021");
playerController.prepare(queueProviderBuilder.build(), true);
However, I'm unable to play a User uploaded song using MediaItemType.UPLOADED_AUDIO I'm unsure what to pass in as the "itemId" as there's no 'catalogId' on the Apple Music API Response for a user's own uploaded song to iCloud. I have managed to obtain the URL of the uploaded song but that doesn't work either. Any ideas?
Post not yet marked as solved
I've got volume in my implementation. Too much hurricane force volume. Though, the consistent problem is the volume is blasting when I create a ambient or channel mixer. (not a point or volumetric source. eg. calm breeze sound) I set the level on the mixer and nothing seems to happen. I'd like to set the volume lower.
Though, on the spatial mixer, if I set the gain, rolloff and direct path level on the source node ( a point or volumetric source), then the spatial mixer case appears to work and no blasting audio.
I've been following the wwdc examples. ( watched it about 4 times now) It appears I should not use the source node with the ambient and channel mixers? That seems to be only an option adding the parameter to the spatial mixer. The ambient mixer seems to only want the listener and a quaternion direction. ( I normalized to 1 )
If I set the calibration to relative spl on the sampler node but that always seems to cause blasting audio.
I added the sound assets with dynamic using wav format at 32 bits and 44.1 khz.
Also, are there any examples of the meta parameters? Is that how I could dynamically adjust the level? Think there was a passing reference to it in the wwdc video.
Any pointers would be appreciated. I wonder if I'm making consistent assumptions on how phase works. I try to set up as much as possible before I start the engine. ( especially adding children nodes. )
Hey all,
It seems like there isn't support for MPMusicPlayerController on MacOS. I was trying to leverage MusicKit for a personal MacOS app with Apple Music playback.
I was wondering:
Is there an alternative approach to using MPMusicPlayerController on MacOS where I can play Apple Music based entities in a similar style (e.g., player.setqueue(with: applesongid))?
Are there known plans to bring MPMusicPlayerController to MacOS, or known reason why it's not supported?
Best!
Post not yet marked as solved
The web audio panner node processing webRTC stream has no spatial effect on iOS safari 15.1.1.
I move the positions of panner node or listener to no effect at all.
Stream:
webRTC p2p stream , audio only
Browser:
iOS safari 15.1.1
DEMO
This is my demo code: https://github.com/random-vincent/webRTCP2Pdemo-spatial-audio
This is a Demo based on WebRTC Team GitHub.
webRTC Stream -> MediaStream source node -> panner node -> destination
Usage:
Clone this repo
Run an https service in the root directory of this project
Open the url on browser
Click call to establish a webrtc p2p connection.
Click init to initialize the spatial audio.
Change positions and forwards of panner node and listener
Post not yet marked as solved
What is the first version of iOS and tvOS to support HLS Steering? Is the AppleTV+ service using HLS Steering to manage multiple CDNs?
Post not yet marked as solved
I have a game that is built as a PWA and the minute I add sounds that overlap, everything goes to pot on mobile. The sounds work perfectly on desktop, but on an iPhone X the behavior is drawing and sounds get randomly delayed and serialized. So even though on the desktop the visual of a block dropping is accompanied by the sound of a block dropping, on mobile I get one or the other first, and if I do 16 in a row, I'll get some set of each before or after one another (3 sounds, 8 visual animations, 6 sounds, 5 visual animations, 2 sound, 3 visual animations).
I'd rather not use a dependency like Howler.js, but have decided to do so.
I've tried both DOM audio elements and new Audio() objects in javascript. Nothing seems to make it better. Thoughts?
To see this in action - https://staging.likeme.games
Post not yet marked as solved
Hello.
When trying to add more than 8 nodes to the graph, the AUGraphInitialize function returns an error code -10877 (kAudioUnitErr_InvalidElement).
What is wrong? How can I debug this InvalidElement?
var acd: AudioComponentDescription = AudioComponentDescription()
acd.componentManufacturer = kAudioUnitManufacturer_Apple
acd.componentType = kAudioUnitType_MusicDevice
acd.componentSubType = kAudioUnitSubType_MIDISynth
var units = []
for i in 0..<number {
var instrumentNode = AUNode()
AUGraphAddNode(graph, &acd, &instrumentNode)
var instrumentUnit: AudioUnit!
AUGraphNodeInfo(graph, instrumentNode, nil, &instrumentUnit)
AUGraphConnectNodeInput(graph, instrumentNode, 0, mixerNode, UInt32(i))
units.append(instrumentUnit)
}
AUGraphConnectNodeInput(graph, mixerNode, 0, outNode, 0)
AUGraphInitialize(graph)
AUGraphStart(graph)
Post not yet marked as solved
I have several (hundreds, thousands) of files locally on my machine, when i add them to Music, then play them. They skip at some point. I have no clue what this is but have read there might be a corrupt index somewhere that needs to be rebuilt. The files themselves are definitely not corrupt bc I can listen to them just fine in finder or any other player / app.
MacOS 12.2 Beta (21D5025f)
Mac mini (M1, 2020)
Post not yet marked as solved
I'm working on Group Activities for our video app.
When I start a video from Apple TV, it's fine to sync it with other user's iPhone device. but inverse case, it's not working. And in some cases, I saw "Unsupported Activity : The active SharePlay activity is not supported on this Apple TV"
What did I miss or wrong something?