Integrate music and other audio content into your apps.

Posts under Audio tag

87 Posts
Post not yet marked as solved
0 Replies
60 Views
Hello 👋, after people updated to iOS 16 I started getting reports that my tuner app no longer works for some of them. The mic works, and so does everything else except the pitch detection. There seems to be a connection with the device it is running on: it appears not to work on the iPhone 11 Pro. It does work in the iPhone 11 Pro simulator, though, and on every other device I tested with. Any hints on how I could debug this further without having an iPhone 11 Pro? Without being able to reproduce the bug it is impossible to fix... No compiler warnings or anything 🫤 Maybe someone has an idea how to deal with this 😎 - have there been changes in iOS 16 that could be related? Thx 🙏
Posted Last updated
.
Post not yet marked as solved
2 Replies
149 Views
I have been trying to find the right way to develop a program that runs in the background on my MacBook Air (2021) and creates read-only text transcript files of the speech coming from the audio output: speakers, headphones, etc. In addition to transcribing the output, the speech from the original audio/video file would be transcribed into its own read-only text file, and further read-only files would record the origin of the speaker output and the origin of the file the audio is supposedly coming from. So each time I play a video or movie, whether it's YouTube, Netflix, Prime Video, Vimeo, etc., an instance would occur that creates four read-only text files. The directory could be created when the program is set up on my laptop, and I could change it if need be. I have read a bit about programming in Swift, and it seems overwhelming in the sense that this program could take more time than I expect to become fully functional. I also see no commercial value in this program/product, so it would essentially be an example of the many possibilities of Swift. I believe the program needs these specifications, but it could be written in a way I don't expect. If I were pointed toward the best book for developing this program, full of all the jargon I need to learn, I would be forever grateful to the Swift development team. I realize this program may seem unnecessary, so I am not expecting too much.
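A rough sketch of just the transcription piece, assuming the source audio is already available as a file on disk (capturing system audio output is a separate, harder problem and is not shown); the function name and locale are illustrative, using Apple's Speech framework:

import Speech
import Foundation

// Illustrative sketch: request authorization, transcribe an existing audio file,
// and save the text as a read-only file.
func transcribe(audioFileAt audioURL: URL, writingTextTo textURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else { return }
        let request = SFSpeechURLRecognitionRequest(url: audioURL)
        _ = recognizer.recognitionTask(with: request) { result, _ in
            guard let result = result, result.isFinal else { return }
            do {
                // Write the transcript, then mark the file read-only as described above.
                try result.bestTranscription.formattedString.write(to: textURL, atomically: true, encoding: .utf8)
                try FileManager.default.setAttributes([.posixPermissions: 0o444], ofItemAtPath: textURL.path)
            } catch {
                print("Failed to write transcript: \(error)")
            }
        }
    }
}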
Posted Last updated
.
Post not yet marked as solved
0 Replies
113 Views
I'm trying to record a phrase using AVSpeechSynthesizer.write. It appears to save "something", but when I try to play that back, nothing is played. Not sure where I'm going wrong (wrong file type, not actually saving a file, I don't have the play portion set correctly). Any help would be greatly appreciated. The "speakPhrase" function is just to prove that the AVSpeech is set up properly. That works just fine.

import AVFoundation
import Foundation

class Coordinator {
    let synthesizer: AVSpeechSynthesizer
    var player: AVAudioPlayer?

    init() {
        let synthesizer = AVSpeechSynthesizer()
        self.synthesizer = synthesizer
    }

    var recordingPath: URL {
        let soundName = "Finally.caf"
        // I've tried numerous file extensions. .caf was in an answer somewhere else.
        // I would think it would be .pcm, but that doesn't work either.
        // Local Directory
        let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
        return paths[0].appendingPathComponent(soundName)
    }

    func speakPhrase(phrase: String) {
        let utterance = AVSpeechUtterance(string: phrase)
        utterance.voice = AVSpeechSynthesisVoice(language: "en")
        synthesizer.speak(utterance)
    }

    func playFile() {
        print("Trying to play the file")
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
            try AVAudioSession.sharedInstance().setActive(true)

            player = try AVAudioPlayer(contentsOf: recordingPath, fileTypeHint: AVFileType.caf.rawValue)
            guard let player = player else { return }
            player.play()
        } catch {
            print("Error playing file.")
        }
    }

    func saveAVSpeechUtteranceToFile() {
        let utterance = AVSpeechUtterance(string: "This is speech to record")
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = 0.50

        synthesizer.write(utterance) { [self] (buffer: AVAudioBuffer) in
            guard let pcmBuffer = buffer as? AVAudioPCMBuffer else {
                fatalError("unknown buffer type: \(buffer)")
            }
            if pcmBuffer.frameLength == 0 {
                // Done
            } else {
                // append buffer to file
                do {
                    let audioFile = try AVAudioFile(forWriting: recordingPath, settings: pcmBuffer.format.settings, commonFormat: .pcmFormatInt16, interleaved: false)
                    try audioFile.write(from: pcmBuffer)
                } catch {
                    print(error.localizedDescription)
                }
            }
        }
    }
}
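For what it's worth, a commonly suggested variation (not a confirmed fix) is to open the AVAudioFile once, matching the common format the synthesizer's buffers actually use, and append every later buffer to that same file rather than recreating the file inside the callback; the class and property names below are illustrative:

import AVFoundation

final class SpeechRecorder {
    private let synthesizer = AVSpeechSynthesizer()
    private var outputFile: AVAudioFile?

    func record(_ text: String, to url: URL) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

        synthesizer.write(utterance) { [weak self] buffer in
            guard let self = self,
                  let pcmBuffer = buffer as? AVAudioPCMBuffer,
                  pcmBuffer.frameLength > 0 else { return }
            do {
                if self.outputFile == nil {
                    // Open the file once, matching the format the synthesizer delivers.
                    self.outputFile = try AVAudioFile(forWriting: url,
                                                      settings: pcmBuffer.format.settings,
                                                      commonFormat: pcmBuffer.format.commonFormat,
                                                      interleaved: pcmBuffer.format.isInterleaved)
                }
                try self.outputFile?.write(from: pcmBuffer)
            } catch {
                print("Failed to write buffer: \(error)")
            }
        }
    }
}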
Posted
by ShadowDES.
Last updated
.
Post not yet marked as solved
0 Replies
101 Views
Hi, I have multiple audio files and I want to decide which channel goes to which output. For example, how would I route four 2-channel audio files to an 8-channel output? Also, if I have an AVAudioPlayerNode playing a 2-channel track through headphones, can I flip the channels on the output for playback, i.e. swap left and right? I have read the following thread, which seeks to do something similar, but it is from 2012 and I do not quite understand how it would apply today. Many thanks, I am a bit stumped.
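For the second question (swapping left and right), one hedged sketch is to read the file into a buffer, swap the two channels' samples, and schedule the swapped buffer on the player node; the helper name is illustrative, and this does not cover the 8-channel routing case:

import AVFoundation

// Illustrative helper: load a stereo file, swap its two channels in place, then
// schedule the swapped buffer on an AVAudioPlayerNode.
func scheduleFlipped(fileURL: URL, on player: AVAudioPlayerNode) throws {
    let file = try AVAudioFile(forReading: fileURL)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)

    // Only meaningful for 2-channel, non-interleaved float buffers.
    if buffer.format.channelCount == 2, let channels = buffer.floatChannelData {
        for frame in 0..<Int(buffer.frameLength) {
            let left = channels[0][frame]
            channels[0][frame] = channels[1][frame]
            channels[1][frame] = left
        }
    }
    player.scheduleBuffer(buffer, completionHandler: nil)
}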
Posted
by jaolan.
Last updated
.
Post not yet marked as solved
0 Replies
179 Views
I am trying to figure out why there is no audio on my iPhone or iPad; my code works on other devices. I am on an iPad running iPadOS 15.3.1 and I debug it from my computer using Safari. Video works, and both video and audio work on Android, Chrome, etc. This is an audio-only problem on iOS. From my WebRTC code I have an HTML5 audio track like this:

<audio muted="false" autoplay="1" id="xxxx"></audio>

When debugging, I connect my iPad and run this volume check:

document.getElementById('***').volume

It returns 1, so the volume is at its loudest (HTML5 audio volume ranges from 0 to 1).

document.getElementById('***').ended

The ended property returns false. Next I try to run the play() function:

$('#***')[0].play()
  .then((resp) => {
    console.log("Success");
    console.log(resp)
  })
  .catch(error => {console.log(error)})

It executes the success handler, but there is still no sound. What could be causing this issue on iOS and Safari only?
Posted
by BingeWave.
Last updated
.
Post not yet marked as solved
0 Replies
107 Views
I am creating a web app using canvas 2D rendering and the Web Audio API that uses a good amount of memory for audio and textures. On iOS and iPadOS I get this logged in the console after a while:

kernel EXC_RESOURCE -> com.apple.WebKit.GPU[12432] exceeded mem limit: ActiveSoft 200 MB (non-fatal)

It seems Safari is holding onto my resources even though they have been dereferenced in JS, but that's another issue. When this happens, all my canvases containing rendered text disappear and the audio stops playing. The main canvas seems to be kept and renders as normal. The text canvases can be regenerated fairly quickly; however, from that point on I am not able to play audio anymore unless I recreate the AudioContext. The odd thing is that if I inspect the AudioContext after this event, it still has the state "running". I would expect it to be "suspended" if the OS did something to it. I am not sure whether this is intended or a bug in Safari. If I could somehow detect from JS that this happened, I could just recreate the AudioContext, but I am not sure that is possible. Does anybody have experience with this or any other advice?
Posted
by orjandh.
Last updated
.
Post not yet marked as solved
1 Reply
546 Views
I have a USB audio sound card, and it works well on iPad and iPhone: audio plays through the sound card. Now I want to send some control commands to the sound card. I think this would be done through the USB HID interface, but I do not know how. Can anybody give me some guidance? My app is designed for iOS, not macOS. The app should send volume and mute commands to the sound card, which are USB Audio Class requests.
Posted
by HaosenL.
Last updated
.
Post marked as solved
3 Replies
234 Views
I'm playing audio in my app. I can play the audio in background mode in the simulator, but when I test the app on a real device, the audio stops as soon as the app goes into the background. Xcode 14, iOS 16, device: iPhone 11. I have the Audio background mode enabled. This is the code in the view's .onAppear():

let path = Bundle.main.path(forResource: "timer.mp3", ofType: nil)!
let url = URL(fileURLWithPath: path)
timerEffect = try AVAudioPlayer(contentsOf: url) // @State var timerEffect: AVAudioPlayer?
timerEffect?.numberOfLoops = -1
timerEffect?.play()

Can anyone help me? Thanks!
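One thing worth checking (an assumption, not a confirmed diagnosis): on a real device, background playback generally also needs an active AVAudioSession with the .playback category, alongside the Audio background mode; a minimal sketch:

import AVFoundation

// Minimal sketch: an active .playback session to accompany the Audio background mode.
func configurePlaybackSession() {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Failed to configure audio session: \(error)")
    }
}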
Posted Last updated
.
Post not yet marked as solved
0 Replies
177 Views
In the existing version of my app, GuitarParrot, it was necessary to use '.defaultToSpeaker' to get sound to play from the speaker rather than the receiver. Now this same setting is disabling the microphone. The problem doesn't occur on iPads, but I have confirmed it on an iPhone 7S and an iPhone XR. I am in the process of updating my app to be compliant with iOS 15 and Xcode 13. The use of '.defaultToSpeaker' is killing the mic input. If I remove this setting, the mic input works as desired, but sound is sent to the receiver speaker. I have hunted through dozens of audio code samples, yet I am not finding anyone doing this differently than I am. Here is the statement I'm using to initialize the session. What am I doing wrong?

let audioSession: AVAudioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(AVAudioSession.Category.playAndRecord,
                             mode: AVAudioSession.Mode.gameChat,
                             options: [.defaultToSpeaker, .mixWithOthers])
try audioSession.setActive(true)

I have tried many variations (different modes, adding Bluetooth). The only thing that lets the mic work is removing '.defaultToSpeaker' from the options. Any help is greatly appreciated. Extra detail: my app is written using Flutter, and all of the AVAudioSession initialization is done in a Swift plugin. I also downloaded the newly released demo from AudioKit, and it also defaults sound to the receiver speaker when run on an iPhone.
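One hedged alternative worth trying (not a confirmed fix) is to drop '.defaultToSpeaker' and instead override the output route to the speaker after activating the session:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // Keep playAndRecord for the mic, skip .defaultToSpeaker, and route output
    // to the speaker explicitly once the session is active.
    try session.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers])
    try session.setActive(true)
    try session.overrideOutputAudioPort(.speaker)
} catch {
    print("Audio session setup failed: \(error)")
}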
Posted Last updated
.
Post not yet marked as solved
0 Replies
131 Views
What is the SAR level of RF exposure for the AirPods Pro?
Posted
by clh.
Last updated
.
Post not yet marked as solved
1 Reply
209 Views
Hey all... I have read a lot of forums but have not yet found a way to implement this. I would like to have an app that records the sounds in the night, and if a sound reaches a certain noise level, some music will play. We have already created this for Android but have not found a way to implement it for iOS. Android version: https://play.google.com/store/apps/details?id=com.sicgames.muziekindenacht&hl=nl&gl=US Does anyone have any suggestions on how to handle this?
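A rough sketch of one possible approach on iOS, using AVAudioRecorder's metering to watch the input level and react when it crosses a threshold; the class name and threshold value are illustrative, and permissions and background operation are not covered:

import AVFoundation

// Illustrative sketch: meter the microphone and react when the level crosses a threshold.
final class NoiseMonitor {
    private var recorder: AVAudioRecorder?
    private var timer: Timer?
    private let thresholdDB: Float = -30  // assumed threshold in dBFS, tune as needed

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
        try session.setActive(true)

        // Record to /dev/null: only the meter values are needed, not the audio itself.
        let settings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatAppleLossless),
            AVSampleRateKey: 44100.0,
            AVNumberOfChannelsKey: 1
        ]
        recorder = try AVAudioRecorder(url: URL(fileURLWithPath: "/dev/null"), settings: settings)
        recorder?.isMeteringEnabled = true
        recorder?.record()

        timer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { [weak self] _ in
            guard let self = self, let recorder = self.recorder else { return }
            recorder.updateMeters()
            if recorder.averagePower(forChannel: 0) > self.thresholdDB {
                // Noise detected: start music playback here, e.g. with AVAudioPlayer.
            }
        }
    }
}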
Posted
by Tommy030.
Last updated
.
Post not yet marked as solved
11 Replies
4.9k Views
I am experiencing an issue where my Mac's speakers crackle and pop when running an app in the Simulator or even when previewing SwiftUI with Live Preview. I am using a 16" MacBook Pro (i9) and running Xcode 12.2 on Big Sur (11.0.1). Killing coreaudiod temporarily fixes the problem, but that is not much of a solution. Is anyone else having this problem?
Posted
by joltguy.
Last updated
.
Post not yet marked as solved
0 Replies
197 Views
Hello! I'm working on an iOS app that uses MPMusicPlayerController.systemMusicPlayer to play songs from Apple Music to the user. My app should be able to append songs to the Music player's queue based on messages it receives from a server. I have this working with a simple WebSocket connection between the app and the server, but as soon as the app enters the background the socket is automatically closed (which makes sense). Because the actual music playback is done by the Music app, I can't use the Background Audio background mode to keep my app alive. Is there a way around this? Things I have already considered (and why I don't think they will work):
- Remote Notifications are throttled too slow to be of any real use
- Background App Refresh is also too slow
- PushKit / VoIP (the app isn't a VoIP app)
- Playing "blank" or nearly silent audio over the actual audio, which seems too "hacky" and likely won't pass app review
- Using background location tracking (again, almost certainly won't pass review)
- Ditching systemMusicPlayer completely and using AVAudioPlayer with the Apple Music API (this would be reinventing the wheel a little bit and would force streaming even if the media was downloaded)
- Using applicationQueuePlayer and just forcing the user to stay in app (this would be a bad user experience imo, they should be able to listen in the background)
Any help would be appreciated, thank you!
Posted Last updated
.
Post not yet marked as solved
0 Replies
214 Views
Can anybody help me check why this code is not working in Safari (macOS)? The audio is playing, but there is no visualisation. I have tried all the hints I found, but still no luck. I should mention it works fine in Chrome (macOS). Thanks a lot for the help.

function getDataFromAudio() {
  var freqByteData = new Uint8Array(analyser.fftSize / 2);
  var timeByteData = new Uint8Array(analyser.fftSize / 2);
  analyser.getByteFrequencyData(freqByteData);
  analyser.getByteTimeDomainData(timeByteData);
  return { f: freqByteData, t: timeByteData }; // array of all 1024 levels
}

I see CodePen links are not enabled here. I posted on Stack Overflow.
Posted
by mrvii.
Last updated
.
Post not yet marked as solved
3 Replies
812 Views
I have an app that has been rejected with "Guideline 5.2.3 - Legal". The content of the app is a Shoutcast audio server stream that broadcasts voice recordings only. There is no music or copyrighted material besides the voice audio files, which are owned by the creator and a partner in this app. There is also an HTML/JavaScript photo gallery of images taken by the partner. These images are of public payphones, mostly in NY. How do we best provide documentation of ownership of this content we made ourselves, especially the voice-only custom content?
Posted
by startkey.
Last updated
.
Post marked as solved
4 Replies
1.2k Views
I have a working AUv3 AUAudioUnit app extension but I had to work around a strange issue: I found that the internalRenderBlock value is fetched and invoked before allocateRenderResources() is called. I have not found any documentation stating that this would be the case, and intuitively it does not make any sense. Is there something I am doing in my code that would be causing this to be the case? Should I *force* a call to allocateRenderResources() if it has not been called before internalRenderBlock is fetched? Thanks! Brad
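One defensive pattern (an assumption about how to cope, not official guidance) is to have the render block consult a flag that allocateRenderResources() sets, and report silence until resources exist, so an early fetch or invocation is harmless; the subclass name is illustrative and real-time-safety niceties are glossed over:

import AudioToolbox
import AVFoundation

// Illustrative subclass; production code would avoid capturing self on the render
// thread and would also zero the output buffers when reporting silence.
class GuardedAudioUnit: AUAudioUnit {
    private var resourcesAllocated = false

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        resourcesAllocated = true
    }

    override func deallocateRenderResources() {
        resourcesAllocated = false
        super.deallocateRenderResources()
    }

    override var internalRenderBlock: AUInternalRenderBlock {
        return { [weak self] actionFlags, timestamp, frameCount, outputBusNumber, outputData, realtimeEventListHead, pullInputBlock in
            guard let self = self, self.resourcesAllocated else {
                // Not ready yet: report silence and do no work.
                actionFlags.pointee.insert(.unitRenderAction_OutputIsSilence)
                return noErr
            }
            // ... normal rendering into outputData goes here ...
            return noErr
        }
    }
}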
Posted
by bradhowes.
Last updated
.
Post not yet marked as solved
1 Reply
2.4k Views
Hi, I am building a realtime drum practice tool that listens to the player's practice and provides visual feedback on their accuracy. I use AVAudioSourceNode and AVAudioSinkNode for playing audio and for listening to the player. Precise timing is the most important part of our app. To optimise audio latency I set preferredIOBufferDuration to 64/48000 sec (~1.33 ms). These preferences work fine with built-in or wired audio devices, where we can easily estimate the actual audio latency. However, we would also like to support AirPods (or other Bluetooth earbuds), and there it seems impossible to predict the actual audio latency.

let bufferSize: UInt32 = 64
let sampleRate: Double = 48000
let bufferDuration = TimeInterval(Double(bufferSize) / sampleRate)
try? session.setCategory(AVAudioSession.Category.playAndRecord, options: [.defaultToSpeaker, .mixWithOthers, .allowBluetoothA2DP])
try? session.setPreferredSampleRate(Double(sampleRate))
try? session.setPreferredIOBufferDuration(bufferDuration)
try? session.setActive(true, options: .notifyOthersOnDeactivation)

I use an iPhone 12 mini and AirPods 2 for testing. (The input always has to be the phone's built-in mic.)

let input = session.inputLatency // 2.438ms
let output = session.outputLatency // 160.667ms
let buffer = session.ioBufferDuration // 1.333ms
let estimated = input + output + buffer * 2 // 165.771ms

session.outputLatency returns about 160 ms for my AirPods. With the basic calculation above I estimate a latency of 165.771 ms, but when I measure the actual latency (the time difference between the heard and played sound) I get significantly different values. If I connect my AirPods and start playing immediately, the measured latency is about 215-220 ms at first, but it decreases continuously over time. After about 20-30 minutes of measuring, the actual latency is around 155-160 ms (close to the value the session returns). However, if I have been using my AirPods for a while before starting the measurement, the actual latency starts from about 180 ms (and decreases over time in the same way). On older iOS devices these differences are even larger. It feels like the Bluetooth connection needs to "warm up" or something. My questions would be:
- Is there any way to get a relatively constant audio latency with Bluetooth devices?
- I thought it might depend on the actual bandwidth, but I couldn't find anything on this topic. Can bandwidth change over time? Can I control it?
- I guess AirPods support the AAC codec. Is there any way to force them to use SBC? Does SBC work with lower latency?
- What is the best audio engine setting to support Bluetooth devices with the lowest latency?
- Any other suggestions?
Thank you
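One idea to try (not a guaranteed fix): re-read the session's latency values whenever the audio route changes, since the reported outputLatency for Bluetooth routes can differ from the value read right after connecting; a minimal sketch:

import AVFoundation

// Keep a reference so the observer can be removed later.
let routeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { _ in
    let session = AVAudioSession.sharedInstance()
    // Re-estimate round-trip latency from the current route's reported values.
    let estimated = session.inputLatency + session.outputLatency + session.ioBufferDuration * 2
    print("Estimated round-trip latency: \(estimated * 1000) ms")
}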
Posted Last updated
.
Post marked as solved
5 Replies
1.9k Views
I want to create my own custom audio recognition with ShazamKit. When opening the sample project I found the FoodMath.shazamsignature file. I believe there is a way to generate such a file from my own audio collection. How do I create the .shazamsignature file? Thanks.
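A minimal sketch of one way this can be done with SHSignatureGenerator, assuming the audio has already been read into an AVAudioPCMBuffer; the function name and output URL are illustrative:

import ShazamKit
import AVFoundation

// Illustrative helper: append PCM audio to a signature generator and write the
// resulting signature data to disk as a .shazamsignature file.
func writeSignature(from buffer: AVAudioPCMBuffer, to url: URL) throws {
    let generator = SHSignatureGenerator()
    try generator.append(buffer, at: nil)
    let signature = generator.signature()
    try signature.dataRepresentation.write(to: url)
}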
Posted
by rzkhilman.
Last updated
.
Post not yet marked as solved
0 Replies
355 Views
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.

let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0:  1 ch,  44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50:  3 ch,  44100 Hz, Float32, deinterleaved>

Is this expected? How can I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file. But when voice processing messes with the channel layout, I cannot rely on this anymore.
Posted
by smialek.
Last updated
.