Integrate music and other audio content into your apps.

Posts under Audio tag

92 Posts
Post not yet marked as solved
0 Replies
182 Views
A simple AVPlayer sample in Swift for iOS 15.4.1, with an interstitial specified via an EXT-X-DATERANGE tag. The interstitial is displayed as expected, but no notifications are generated for either AVPlayerInterstitialEventMonitor.currentEventDidChangeNotification or .eventsDidChangeNotification. Tested on both a simulator and a device. Suggestions?
Posted Last updated
.
Post not yet marked as solved
1 Reply
932 Views
I am working on a simple VoIP application for iOS and need to use the Opus codec. As far as I know, Opus is only available in C, and there seems to be no way in Xcode to compile it for the iOS platform. How can I use Opus in my iOS project? Do I need to compile everything from source using CMake? And if so, does Xcode have CMake support?
Posted
by nmd007.
Last updated
.
Post not yet marked as solved
1 Reply
370 Views
Hi, I am facing a really strange issue. I have a WordPress website in which I have used the HTML5 audio tag. Whenever I play audio on Safari, there is a 2-3 second delay before it starts; on other browsers it works fine. I have done a lot of research and tried many JS scripts, but nothing worked. Can you please check and let me know a solution? Page link: dev.one-button.de/create-your-song-with-name/ Thanks
Posted
by sumit12.
Last updated
.
Post not yet marked as solved
0 Replies
172 Views
I am running the iOS 15.5 developer beta and have noticed that Spatial Audio has stopped working for me on the latest beta. Everything is turned on in my settings, even in accessibility settings, and there is no head tracking. I have tried it with my AirPods Max and AirPods Pro, and even two different iPhone 13s, with no luck. Has anyone else seen this issue? Thank you!
Posted
by Brayden99.
Last updated
.
Post not yet marked as solved
2 Replies
400 Views
I have an app that has been rejected under "Guideline 5.2.3 - Legal". The content of the app is a Shoutcast audio server stream that broadcasts voice recordings only. There is no music or copyrighted material besides the voice audio files, which are owned by the creator and a partner in this app. There is also an HTML/JavaScript photo gallery of images taken by the partner, mostly of public payphones in NY. How do we best provide documentation of ownership of this content made by us, especially the voice-only custom content?
Posted
by startkey.
Last updated
.
Post not yet marked as solved
0 Replies
247 Views
macOS Core Audio buffer playback produces annoying noise between the correct sound. I'm trying to play valid .wav data through the buffer. Why am I playing a .wav? It has valid data. What I'm trying to achieve is to understand how to write correctly to the sound buffer. I'm porting a music engine to macOS ....

#include <string.h>
#include <math.h>
#include <unistd.h>
#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

FILE *fp;

typedef struct TwavHeader {
    char RIFF[4];
    uint32_t RIFFChunkSize;
    char WAVE[4];
    char fmt[4];
    uint32_t Subchunk1Size;
    uint16_t AudioFormat;
    uint16_t NumOfChan;
    uint32_t SamplesPerSec;
    uint32_t bytesPerSec;
    uint16_t blockAlign;
    uint16_t bitsPerSample;
    char Subchunk2ID[4];
    uint32_t Subchunk2Size;
} TwavHeader;

typedef struct SoundState {
    bool done;
} SoundState;

void auCallback(void *inUserData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
    buffer->mAudioDataByteSize = 1024*4;
    int numToRead = buffer->mAudioDataByteSize / sizeof(float) * 2;
    void *p = malloc(numToRead);
    fread(p, numToRead, 1, fp);
    void *myBuf = buffer->mAudioData;
    for (int i = 0; i < numToRead / 2; i++) {
        uint16_t w = *(uint16_t *)&(p[i*sizeof(uint16_t)]);
        float f = ((float)w / (float)0x8000) - 1.0;
        *(float *)&(myBuf[i*sizeof(float)]) = f;
    }
    free(p);
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

void checkError(OSStatus error) {
    if (error != noErr) {
        printf("Error: %d", error);
        exit(error);
    }
}

int main(int argc, const char * argv[]) {
    printf("START\n");
    TwavHeader theHeader;
    fp = fopen("/Users/kirillkranz/Documents/mytralala-code/CoreAudioTest/unreal.wav", "r");
    fread(&theHeader, sizeof(TwavHeader), 1, fp);
    printf("%i\n", theHeader.bitsPerSample);

    AudioStreamBasicDescription auDesc = {};
    auDesc.mSampleRate = theHeader.SamplesPerSec;
    auDesc.mFormatID = kAudioFormatLinearPCM;
    auDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    auDesc.mBytesPerPacket = 8;
    auDesc.mFramesPerPacket = 1;
    auDesc.mBytesPerFrame = 8;
    auDesc.mChannelsPerFrame = 2;
    auDesc.mBitsPerChannel = 32;

    AudioQueueRef auQueue = 0;
    AudioQueueBufferRef auBuffers[2] = {};

    // our persistent state for sound playback
    SoundState soundState = {};
    soundState.done = false;

    OSStatus err;
    // most of the 0 and nullptr params here are for compressed sound formats etc.
    err = AudioQueueNewOutput(&auDesc, &auCallback, &soundState, 0, 0, 0, &auQueue);
    checkError(err);

    // generate buffers holding at most 1/16th of a second of data
    uint32_t bufferSize = auDesc.mBytesPerFrame * (auDesc.mSampleRate / 16);
    err = AudioQueueAllocateBuffer(auQueue, bufferSize, &(auBuffers[0]));
    checkError(err);
    err = AudioQueueAllocateBuffer(auQueue, bufferSize, &(auBuffers[1]));
    checkError(err);

    // prime the buffers
    auCallback(&soundState, auQueue, auBuffers[0]);
    auCallback(&soundState, auQueue, auBuffers[1]);
    // enqueue for playing
    AudioQueueEnqueueBuffer(auQueue, auBuffers[0], 0, 0);
    AudioQueueEnqueueBuffer(auQueue, auBuffers[1], 0, 0);
    // go!
    AudioQueueStart(auQueue, 0);

    char rxChar[10];
    scanf("%s", &rxChar);
    printf("FINISH");
    fclose(fp);
    // be nice even if it doesn't really matter at this point
    if (auQueue) AudioQueueDispose(auQueue, true);
}

what do I do wrong?
Posted
by Key-Real.
Last updated
.
Post not yet marked as solved
0 Replies
173 Views
Push notification audio sounds work fine on other devices, but on two iPads the sounds are not working. The two models on which the audio is not working are: iPad (MHNF3LL) and iPad (MK7M3LL). Both iPads are on iOS 15.3.1. Are there any particular settings needed in the app to make the custom audio sound audible?
Posted
by Rajshree.
Last updated
.
Post not yet marked as solved
0 Replies
167 Views
Hi, We connect our audio playback device to a Mac through UAC2 (USB Audio Class 2). Our device is displayed as "Playback Inactive" in the sound device list on the Mac. Except for this, everything else works fine; we can play tracks from the Mac on our device. It works fine when we connect our device to a Windows 10 PC, where it is displayed as "DSM", the name of our device. Do we need to register our device with the Mac software system, or is something our device reports to the Mac over USB not recognized by the Mac? Thanks in advance
Posted
by JiwenQi.
Last updated
.
Post not yet marked as solved
0 Replies
153 Views
For creating custom iOS notification sounds, are there any Apple-sanctioned or suggested specifications for loudness (LUFS) or true peak? How about sample rate? I've heard before that iOS's native sample rate is 48 kHz — is this true?
Posted
by LMoo.
Last updated
.
Post not yet marked as solved
1 Reply
1.1k Views
Dear all, I found the announced built-in sound classifier pretty amazing. I would appreciate it if you could point me to a link or a document that lists all 300 sound classes mentioned in https://developer.apple.com/videos/play/wwdc2021/10036/. Thank you
Posted Last updated
.
Post not yet marked as solved
63 Replies
12k Views
While I have the iOS 14 developer beta on my iPhone and the corresponding beta on my Mac, a recurring problem has been happening: when I connect my AirPods to my computer, it only recognizes one and uses only one for audio, whereas on my phone both work fine. So the problem is not the AirPods. I hope it is resolved in the next beta update.
Posted
by Ruden.
Last updated
.
Post not yet marked as solved
0 Replies
229 Views
How can you add a live audio player in Xcode with an interactive UI to control the audio, so that the user can exit the app or turn their device off and it will keep playing? Is there a framework or API that will work for this? Thanks! Really need help with this…. 🤩
Posted Last updated
.
Post not yet marked as solved
0 Replies
484 Views
Hi all, I have an app that plays music from HTTP Live Streams using AVPlayer. I have been trying to figure out how to use ShazamKit to recognize the music playing, but I just can't figure out how to do it :-( It works well with local files and microphone recordings, but how do I get the data from a stream that is currently playing? Feels like I've tried everything... I have tried to install an MTAudioProcessingTap, but it doesn't seem to work on streaming assets, even though I can get hold of the proper AVAssetTrack containing the audio. No callback with data is received. Bug? I can open the streaming URL and just save the bytes to disk, and that's fine, but then I'm not in sync with what is playing in the AVPlayerItem, so the recognition isn't working with the same audio data the user is currently hearing. Hmmm. Any suggestions and ideas are welcome. It would be such a nice feature for my app, so I'm really looking forward to solving this. Thx in advance / Jörgen, Bitfield
Posted
by jogga.
Last updated
.
Post not yet marked as solved
0 Replies
739 Views
I would like my app to pause currently playing audio (from other applications) for a certain amount of time, and then resume it. I'm able to simulate the pause/play media key, but the problem is that if the system is not playing audio and I simulate the key press, it starts playing audio, which is a big no-no. So I need to detect whether audio is playing. Some things I have explored: kAudioDevicePropertyDeviceIsRunningSomewhere will detect whether a device is running that can play audio, but that doesn't mean it actually is playing audio. AVAudioSession.isOtherAudioPlaying is exactly what I'm looking for, but I'm not working on a Catalyst app, so it's not available to me. I came across this command, but it is not very accurate; if I pause music, it still returns true for the next 30 seconds or so: if [[ "$(pmset -g | grep ' sleep')" == *"coreaudiod"* ]]; then echo audio is playing; else echo no audio playing; fi Any ideas?
Posted Last updated
.
Post not yet marked as solved
2 Replies
1.1k Views
Hello Everyone, I am working on an app that will receive some audio files sent from a device over Bluetooth to an iPhone, and the iPhone should play those audio files. The device is currently using A2DP Bluetooth audio profile. Here, the device is the A2DP source and the iPhone should be the A2DP sink. During my research, I found out that Apple doesn't allow iPhone to act as an A2DP sink. My question is, which other Bluetooth audio profiles can be used to make the iPhone act as an audio sink or receiver? Thanks, PC
Posted Last updated
.
Post not yet marked as solved
0 Replies
235 Views
I have to process interviews recorded on an iPhone and sent to me in .smrc format. I have been unable to find a file converter able to read .smrc and convert it to mp3 or comparable. Is there a way to make these files audible? The header of one of these files starts like this: bplist00‘ ûüX$versionX$objectsY$archiverT$top � Ü†Ø ( $%&-89:;<=>IMPSVY_bepsvy| äçêìñóòôùU$null“ V$classZNS.objectsÄ ´ Help would be greatly appreciated; the interviews cannot be done again.
Posted
by Wagapple.
Last updated
.
Post not yet marked as solved
1 Reply
652 Views
I'm working on Group Activities for our video app. When I start a video from Apple TV, it syncs fine with other users' iPhone devices, but in the inverse case it's not working. And in some cases I saw "Unsupported Activity: The active SharePlay activity is not supported on this Apple TV". What did I miss, or what am I doing wrong?
Posted
by Wontai.
Last updated
.
Post not yet marked as solved
0 Replies
292 Views
Hey, I am trying to decode AMR_WB audio on iOS. For this I am using the settings below:

var asbd = AudioStreamBasicDescription()
asbd.mSampleRate = Float64(sampleRate)
asbd.mFormatID = kAudioFormatAMR_WB
asbd.mFormatFlags = 0
asbd.mFramesPerPacket = 320
asbd.mChannelsPerFrame = UInt32(channels)
asbd.mBitsPerChannel = 16 * UInt32(MemoryLayout<UInt8>.size)
asbd.mReserved = 0
asbd.mBytesPerFrame = 2
asbd.mBytesPerPacket = asbd.mBytesPerFrame * asbd.mFramesPerPacket
let _audioFormat = AVAudioFormat(streamDescription: &asbd)!
return _audioFormat

But I encounter the following error:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (1885696621), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x283fb1920 {Error Domain=NSOSStatusErrorDomain Code=1885696621 "(null)" UserInfo={AVErrorFourCharCode='perm'}}}

Now, as per the documentation found here, the format looks to be supported, but I am unable to determine what permission to give the application for this to work. Any help will be appreciated.
Posted Last updated
.
Post not yet marked as solved
0 Replies
920 Views
My team is responsible for maintaining a web application that uses an iframe to load various web pages that support interaction and audio playback. During use of our application, this iframe may load up to 20 different pages that play audio and interact with our users. For about 3-8 out of 100 users, the audio abruptly stops. If the page is reloaded, the audio begins playing for a few seconds and then stops again. The only way to reliably fix audio playback is to double-tap and swipe Safari out of view and then reload our application. Things we have checked and tried:
Volume is at maximum
Volume is not muted
Tablet is active and never enters sleep when detected
We have confirmed there are no connectivity issues
Audio files are completely loaded without error
Audio context state registers as playing
Audio gain controls are at default
The issue surfaced after upgrading to iOS 15.x and was not reported on earlier versions of Safari
Posted
by waterford.
Last updated
.