Integrate music and other audio content into your apps.

Posts under Audio tag

92 Posts
Post not yet marked as solved
0 Replies
153 Views
For creating custom iOS notification sounds, are there any Apple-sanctioned/suggested specifications for loudness (LUFS) or true peak? How about sample rate? I've heard before that iOS' native sample rate is 48 kHz — is this true?
Post not yet marked as solved
0 Replies
229 Views
How can you add a live audio player in Xcode with an interactive UI to control the audio, so that playback keeps going when the user exits the app or turns the device's screen off? Is there a framework or API that will work for this? Thanks! Really need help with this…. 🤩
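For reference, the usual starting point for this kind of player (a sketch, not from the post) is AVAudioSession with the .playback category plus the "audio" background mode; both are assumptions about the poster's setup:

```swift
import AVFoundation

// Sketch: configure the shared audio session for background playback.
// Also requires an "audio" entry under UIBackgroundModes in Info.plist.
func configureBackgroundAudio() throws {
    let session = AVAudioSession.sharedInstance()
    // .playback keeps audio running when the app is backgrounded
    // or the screen is locked.
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}
```

For the interactive controls on the lock screen, MPRemoteCommandCenter and MPNowPlayingInfoCenter (MediaPlayer framework) are the usual companions to this session setup.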
Post not yet marked as solved
0 Replies
235 Views
I have to process interviews that were recorded on iPhone and sent to me in .smrc format. I have been unable to find a file converter able to read .smrc and convert to .mp3 or comparable. Is there a way to make these files audible? The header of one of these files starts like this: bplist00‘ ûüX$versionX$objectsY$archiverT$top � Ü†Ø ( $%&-89:;<=>IMPSVY_bepsvy| äçêìñóòôùU$null“ V$classZNS.objectsÄ ´ Help would be greatly appreciated; the interviews cannot be done again.
Post not yet marked as solved
0 Replies
292 Views
Hey, I am trying to decode AMR_WB audio on iOS. For this I am using the settings below:

var asbd = AudioStreamBasicDescription()
asbd.mSampleRate = Float64(sampleRate)
asbd.mFormatID = kAudioFormatAMR_WB
asbd.mFormatFlags = 0
asbd.mFramesPerPacket = 320
asbd.mChannelsPerFrame = UInt32(channels)
asbd.mBitsPerChannel = 16 * UInt32(MemoryLayout<UInt8>.size)
asbd.mReserved = 0
asbd.mBytesPerFrame = 2
asbd.mBytesPerPacket = asbd.mBytesPerFrame * asbd.mFramesPerPacket
let _audioFormat = AVAudioFormat(streamDescription: &asbd)!
return _audioFormat

But I encounter the following error:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (1885696621), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x283fb1920 {Error Domain=NSOSStatusErrorDomain Code=1885696621 "(null)" UserInfo={AVErrorFourCharCode='perm'}}}

As per the documentation found here, the format looks to be supported, but I cannot work out what permission to give the application for this to work. Any help will be appreciated.
Post not yet marked as solved
0 Replies
920 Views
My team is responsible for maintaining a web application that uses an iframe to load various web pages that support interaction and audio playback. During use of our application this iframe may load up to 20 different pages that play audio and interact with our users. For about 3-8 out of every 100 users, the audio will abruptly stop. If the page is reloaded, the audio will begin playing for a few seconds, then stop again. The only way to reliably fix the audio playback is to double-tap and swipe Safari out of view and then reload our application. Things we have checked and tried:
- Volume is at maximum
- Volume is not muted
- Tablet is active and never enters sleep when detected
- We have confirmed there are not any connectivity issues
- Audio files are completely loaded without error
- Audio context state registers as playing
- Audio gain controls are at default
The issue surfaced after upgrading to iOS 15.x and was not reported on earlier versions of Safari.
Post not yet marked as solved
0 Replies
285 Views
Is there a way for me to programmatically query whether my AVAudioSession is able to play even when the app is minimized or the screen is locked? I need this to debug background audio permissions, as my AVAudioSession keeps getting paused when the app goes into the background and resumes once it returns to the foreground. Moreover, when I try to call setActive for AVAudioSession in didEnterBackground, it gives me error code 561015905, which says it is permission related. My Info.plist already has

<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>

added to it.
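For context: 561015905 is the four-char code '!pla' (AVAudioSession.ErrorCode.cannotStartPlaying), which typically means the session was activated at a moment the system disallows, not a missing entitlement. One common pattern (a sketch, an assumption about the poster's flow, not a confirmed fix) is to configure and activate the session while still in the foreground, rather than from didEnterBackground:

```swift
import AVFoundation

// Sketch: activate the session up front. With the .playback category and
// the "audio" background mode, a session that is actively playing is not
// suspended when the app moves to the background.
func prepareSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setActive(true)   // call before backgrounding, not after
}
```

Calling setActive(true) from within didEnterBackground, or while no audio is playing, is exactly the situation that tends to produce '!pla'.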
Post not yet marked as solved
0 Replies
364 Views
Hi, I've been asked to develop software for macOS that monitors some streams 24/7 and logs the aired music. I'd like to use ShazamKit, but I don't know whether I have to do something in particular in order to use it in a commercial app, and I don't know whether with so many requests I can hit some threshold (it could be 10 simultaneous streams, it could be 100 simultaneous streams, I don't know at the moment). Any info about that?
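As a starting point, matching one stream against Shazam's catalog with SHSession looks roughly like the sketch below (one session per monitored stream is an assumption; it says nothing about commercial licensing or rate limits, which only Apple can clarify):

```swift
import ShazamKit
import AVFoundation

// Sketch: one SHSession per monitored stream; feed it PCM buffers
// tapped from whatever engine is decoding that stream.
final class StreamMatcher: NSObject, SHSessionDelegate {
    private let session = SHSession()

    override init() {
        super.init()
        session.delegate = self
    }

    // Call for each audio buffer pulled from the stream.
    func process(_ buffer: AVAudioPCMBuffer, at time: AVAudioTime?) {
        session.matchStreamingBuffer(buffer, at: time)
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        // Log the aired track.
        if let item = match.mediaItems.first {
            print(item.title ?? "?", "-", item.artist ?? "?")
        }
    }
}
```

Each SHSession match is a network request to Shazam's service, so whatever throttling applies would apply per request rather than per stream.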
Post marked as solved
1 Reply
377 Views
TL;DR: A C function can't open an MP3 file in the app's documents directory; am I missing any sort of permissions?

I am trying to create an app to play music through the BASS audio library for C/C++, and while I have it playing music, I cannot seem to have it open local files. To create a stream from a file to play in this library, you use the BASS_StreamCreateFile() function, to which you pass a URL for the file to use. But even though I can verify the URL I'm passing is correct and the file is visible in the Files app, it throws error code 2, "Cannot open file". However, when I use BASS_StreamCreateURL() and pass in a URL from the internet, it works perfectly, so I have to assume the problem has something to do with file permissions. Here is the C function in which I am creating these streams:

int createStream(const char* url) {
    //HSTREAM stream = BASS_StreamCreateURL("https://vgmsite.com/soundtracks/legend-of-zelda-ocarina-of-time-original-sound-track/fticxozs/68%20-%20Gerudo%20Valley.mp3", 0, 0, NULL, 0);
    HSTREAM stream = BASS_StreamCreateFile(false, url, 0, 0, 0);
    if (stream == 0) {
        printf("Error at createStream, error code: %i\n", BASS_ErrorGetCode());
        return 0;
    } else {
        return stream;
    }
}

The commented-out line is the working stream created from a URL. And here is the URL I am passing in:

guard let documentsURL = fileManager.urls(for: .documentDirectory, in: .userDomainMask).first else { return }
gerudoValleyURL = documentsURL.appendingPathComponent("GerudoValley.mp3")
stream = createStream(gerudoValleyURL.absoluteString)

I can confirm that the MP3 "GerudoValley.mp3" is in the app's documents directory in the Files app. Is there anything I could do to allow this C function to open MP3s from the app's documents directory? The exact MP3 from that link is already there.
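One likely culprit (an assumption, not confirmed in the post): absoluteString on a file URL yields a "file://…" URL string, while an fopen-style C API like BASS_StreamCreateFile expects a plain filesystem path. Passing url.path instead may resolve error code 2; the fallback directory below is only there to make the sketch self-contained:

```swift
import Foundation

// Assumption: the file lives in the app's documents directory.
// (Fallback to the temp dir only so this sketch runs anywhere.)
let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask).first
    ?? URL(fileURLWithPath: NSTemporaryDirectory())
let gerudoValleyURL = documentsURL.appendingPathComponent("GerudoValley.mp3")

// absoluteString carries the "file://" scheme, which fopen-style
// C APIs cannot open:
print(gerudoValleyURL.absoluteString)

// .path is a plain filesystem path, suitable for the C function:
let cPath = gerudoValleyURL.path
// stream = createStream(cPath)
print(cPath)
```

Swift bridges the String to the `const char*` parameter automatically, so only the string's contents need to change, not the call site.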
Post not yet marked as solved
0 Replies
245 Views
Hello, I would like to create a VoIP app, and I want to build a sound visualizer for the voice of the other party. Example: https://medium.com/swlh/swiftui-create-a-sound-visualizer-cadee0b6ad37 The call function was implemented using CallKit (based on Twilio's quickstart): https://jp.twilio.com/docs/voice/sdks/ios/get-started https://github.com/twilio/voice-quickstart-ios After importing AVFoundation, I need to route the AVAudioEngine mixer, but I have no idea how to put the voice into the mixer. Any guidance is appreciated! Thank you!
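One hedged approach: tap an AVAudioEngine node and reduce each buffer to an RMS level for the visualizer. Where the remote voice actually surfaces depends on Twilio's audio device configuration; the sketch below assumes it is reachable on the engine's input node, which may not hold for a stock CallKit/Twilio setup:

```swift
import AVFoundation

// Sketch: tap a node and compute an RMS level per buffer for a visualizer.
let engine = AVAudioEngine()
let node = engine.inputNode   // assumption: the voice arrives here
let format = node.outputFormat(forBus: 0)

node.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let n = Int(buffer.frameLength)
    // Root-mean-square over the buffer -> one level value per callback.
    var sum: Float = 0
    for i in 0..<n { sum += samples[i] * samples[i] }
    let rms = sqrt(sum / Float(max(n, 1)))
    DispatchQueue.main.async {
        // feed `rms` into the SwiftUI visualizer bars
        _ = rms
    }
}
try? engine.start()
```

If Twilio only hands over the decoded audio through its own callbacks, the same RMS computation can be run directly on those buffers instead of on an engine tap.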
Post not yet marked as solved
1 Reply
224 Views
Hello! I have an iPhone XS Max. Until the iOS 15 update I could listen to music over Bluetooth in my car without a problem, but ever since the update, whenever I turn off the car and then turn it on again, the music stutters/skips and is very choppy, making it impossible to listen. If I then forget the device and re-pair it, everything works fine until I turn off the car, and then it's the same process again. I don't want to have to pair my phone to my car every time I enter the car, and this only started to happen with the iOS 15 update. Any solution? Thanks!
Post not yet marked as solved
0 Replies
277 Views
Specifically, I am trying to set .constrainsSeekingForwardInPrimaryContent when creating an AVPlayerInterstitialEvent, but it has no effect on iOS. I don't have access to tvOS at the moment to try it there, but according to the docs it should work on iOS 15 and later.

let event = AVPlayerInterstitialEvent(
    primaryItem: App.player!.currentItem!,
    identifier: ad.podId,
    time: CMTime(seconds: ad.timeOffsetSec!, preferredTimescale: 1),
    templateItems: adPodTemplates,
    // sadly on iOS these restrictions seem to simply not work
    restrictions: [
        .constrainsSeekingForwardInPrimaryContent,
        .requiresPlaybackAtPreferredRateForAdvancement
    ],
    resumptionOffset: .zero,
    playoutLimit: .invalid)

Thoughts?
Post not yet marked as solved
1 Reply
652 Views
I'm working on Group Activities for our video app. When I start a video from Apple TV, it syncs fine with other users' iPhones, but the inverse case is not working. And in some cases I saw "Unsupported Activity: The active SharePlay activity is not supported on this Apple TV." What did I miss, or what am I doing wrong?
Post not yet marked as solved
1 Reply
235 Views
I have several (hundreds, thousands of) files locally on my machine; when I add them to Music and then play them, they skip at some point. I have no clue what this is, but I have read there might be a corrupt index somewhere that needs to be rebuilt. The files themselves are definitely not corrupt, because I can listen to them just fine in Finder or any other player/app. macOS 12.2 Beta (21D5025f), Mac mini (M1, 2020).
Post not yet marked as solved
0 Replies
173 Views
Hello. When trying to add more than 8 nodes to the graph, the AUGraphInitialize function returns error code -10877 (kAudioUnitErr_InvalidElement). What is wrong? How can I debug this InvalidElement?

var acd = AudioComponentDescription()
acd.componentManufacturer = kAudioUnitManufacturer_Apple
acd.componentType = kAudioUnitType_MusicDevice
acd.componentSubType = kAudioUnitSubType_MIDISynth
var units: [AudioUnit] = []
for i in 0..<number {
    var instrumentNode = AUNode()
    AUGraphAddNode(graph, &acd, &instrumentNode)
    var instrumentUnit: AudioUnit!
    AUGraphNodeInfo(graph, instrumentNode, nil, &instrumentUnit)
    AUGraphConnectNodeInput(graph, instrumentNode, 0, mixerNode, UInt32(i))
    units.append(instrumentUnit)
}
AUGraphConnectNodeInput(graph, mixerNode, 0, outNode, 0)
AUGraphInitialize(graph)
AUGraphStart(graph)
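One likely cause (an assumption, not confirmed by the post): the multichannel mixer's input element count defaults to 8, so connecting a ninth input targets a bus that does not exist, which surfaces as kAudioUnitErr_InvalidElement at initialization. Raising the bus count before making the connections may help; `graph`, `mixerNode`, and `number` below are taken from the post's code:

```swift
import AudioToolbox

// Sketch: widen the mixer's input bus count before wiring the inputs.
var mixerUnit: AudioUnit?
AUGraphNodeInfo(graph, mixerNode, nil, &mixerUnit)

var busCount = UInt32(number)   // e.g. 16 instruments instead of the default 8
AudioUnitSetProperty(mixerUnit!,
                     kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input,
                     0,
                     &busCount,
                     UInt32(MemoryLayout<UInt32>.size))
```

This must run after AUGraphOpen (so the mixer's AudioUnit exists) but before AUGraphInitialize.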
Post not yet marked as solved
1 Reply
405 Views
I have a game built as a PWA, and the minute I add sounds that overlap, everything goes to pot on mobile. The sounds work perfectly on desktop, but on an iPhone X the drawing and sounds get randomly delayed and serialized. So even though on desktop the visual of a block dropping is accompanied by the sound of a block dropping, on mobile I get one or the other first, and if I do 16 in a row, I'll get some set of each before or after one another (3 sounds, 8 visual animations, 6 sounds, 5 visual animations, 2 sounds, 3 visual animations). I'd rather not use a dependency like Howler.js, but have decided to do so. I've tried both DOM audio elements and new Audio() objects in JavaScript. Nothing seems to make it better. Thoughts? To see this in action: https://staging.likeme.games
Post not yet marked as solved
1 Reply
287 Views
The Web Audio panner node processing a WebRTC stream has no spatial effect on iOS Safari 15.1.1. I move the positions of the panner node or listener to no effect at all.

Stream: WebRTC p2p stream, audio only
Browser: iOS Safari 15.1.1

Demo: this is my demo code, based on the WebRTC team's GitHub: https://github.com/random-vincent/webRTCP2Pdemo-spatial-audio
Audio graph: WebRTC stream -> MediaStream source node -> panner node -> destination

Usage:
1. Clone this repo
2. Run an https service in the root directory of this project
3. Open the URL in the browser
4. Click "call" to establish a WebRTC p2p connection
5. Click "init" to initialize the spatial audio
6. Change positions and forwards of the panner node and listener
Post marked as solved
1 Reply
278 Views
Hey all, it seems there isn't support for MPMusicPlayerController on macOS. I was trying to leverage MusicKit for a personal macOS app with Apple Music playback. I was wondering: Is there an alternative approach to MPMusicPlayerController on macOS where I can play Apple Music entities in a similar style (e.g., player.setQueue(with: appleSongID))? Are there known plans to bring MPMusicPlayerController to macOS, or a known reason why it's not supported? Best!
Post not yet marked as solved
1 Reply
334 Views
I've got volume in my implementation. Too much, hurricane-force volume. The consistent problem is that the volume is blasting when I create an ambient or channel mixer (not a point or volumetric source, e.g. a calm breeze sound). I set the level on the mixer and nothing seems to happen. I'd like to set the volume lower. On the spatial mixer, though, if I set the gain, rolloff, and direct path level on the source node (a point or volumetric source), then the spatial mixer case appears to work and there is no blasting audio.

I've been following the WWDC examples (watched the video about 4 times now). It appears I should not use the source node with the ambient and channel mixers? That seems to be an option only when adding the parameter to the spatial mixer. The ambient mixer seems to want only the listener and a quaternion direction (I normalized to 1). If I set the calibration to relative SPL on the sampler node, that always seems to cause blasting audio. I added the sound assets as dynamic, using WAV format at 32 bits and 44.1 kHz.

Also, are there any examples of the meta parameters? Is that how I could dynamically adjust the level? I think there was a passing reference to them in the WWDC video. Any pointers would be appreciated. I wonder if I'm making consistent assumptions about how PHASE works. I try to set up as much as possible before I start the engine (especially adding child nodes).