Integrate music and other audio content into your apps.

Posts under Audio tag

86 Posts
Post marked as solved
1 Reply
93 Views
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method, but I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:

func startRecording() {
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    } catch {
        recording = false
        recordingFinished(success: false)
    }
}

The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and the irrelevant 8 kHz sample rate:

AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 1 ch, 8000 Hz, Int16

A subsequent attempt to load the file into AVAudioPlayer results in:

MP4_BoxParser.cpp:1089  DataSource read failed
MP4AudioFile.cpp:4365  MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105  OpenFromDataSource failed
AudioFileObject.cpp:80  Open failed

But that's not surprising given that the file is only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method. I've also tried constructing the recorder with

let audioFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil {
    print("Audio format failed.")
} else {
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...

with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:

AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
to: 1 ch, 44100 Hz, Int32
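A converter input of "0 ch ... 0 bits/channel" usually means the recorder never received a valid input stream. Two common causes that the snippet doesn't rule out are an audio session that was never configured for recording and missing microphone permission; that's a guess from the log, not a confirmed diagnosis. A minimal sketch of that setup, assuming NSMicrophoneUsageDescription is in Info.plist and reusing the poster's startRecording():

import AVFoundation

// Assumption: NSMicrophoneUsageDescription is present in Info.plist;
// without it, iOS denies the microphone and the recorder writes a stub file.
func prepareSessionThenRecord(_ startRecording: @escaping () -> Void) {
    let session = AVAudioSession.sharedInstance()
    do {
        // Recording needs a category with input enabled; the default
        // category (.soloAmbient) is playback-only.
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
        return
    }
    session.requestRecordPermission { granted in
        DispatchQueue.main.async {
            if granted { startRecording() }
        }
    }
}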
Post not yet marked as solved
0 Replies
80 Views
I first encountered this issue in my Spotify web app on 6 March 2024: a song will restart and/or jump to another song within the playlist, play for a bit, and occasionally restart a number of times. I never know when Spotify will restart or jump the song. There were no issues on Apple Music or on my iPhone and Apple Watch, and then it started happening today, 21 March 2024. I tried Googling to no avail and exhausted every solution with Spotify's care team (reinstalling, clearing the app's and the MacBook's caches, host files, etc., restarting my devices). I assume it's now an Apple software issue with macOS Sonoma? Please help, anyone / Apple!

Details:
Spotify version: 1.2.33.1039.g8ddb5918
MacBook Air M2 2022 with macOS Sonoma 14.4
iPhone 14 Pro
Posted by meag01.
Post not yet marked as solved
0 Replies
110 Views
I would like to detect audio input device state changes in the system logs. Right now I can detect activation using:

log show --info --predicate "process == 'coreaudiod' && category == 'access'"

But I'm unable to detect deactivation and have no idea which predicate to use.
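If the log route stays a dead end, an alternative (an assumption on my part, not something the coreaudiod logs provide) is to observe the device directly: Core Audio's kAudioDevicePropertyDeviceIsRunningSomewhere property changes whenever any process starts or stops using a device, which covers both activation and deactivation. A rough Swift sketch:

import CoreAudio

// Find the current default input device.
var deviceID = AudioObjectID(kAudioObjectUnknown)
var size = UInt32(MemoryLayout<AudioObjectID>.size)
var defaultInput = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultInputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)
AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                           &defaultInput, 0, nil, &size, &deviceID)

// Fires whenever any process starts or stops using the device.
var runningAddress = AudioObjectPropertyAddress(
    mSelector: kAudioDevicePropertyDeviceIsRunningSomewhere,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)
AudioObjectAddPropertyListenerBlock(deviceID, &runningAddress, DispatchQueue.main) { _, _ in
    var isRunning: UInt32 = 0
    var valueSize = UInt32(MemoryLayout<UInt32>.size)
    AudioObjectGetPropertyData(deviceID, &runningAddress, 0, nil, &valueSize, &isRunning)
    print(isRunning != 0 ? "input device active" : "input device idle")
}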
Posted by kcr.
Post marked as solved
2 Replies
412 Views
Some of the installers we ship suddenly became broken for users running the latest version of macOS. I found that the reason is that we install a Core Audio HAL driver, and because I wanted to avoid a system reboot I relaunched the Core Audio daemon from a pkg post-install script:

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod

With the OS update, the command fails if the computer has SIP enabled (which is the default):

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted

It would be super nice if either the change could be reverted, OR people in my situation could learn a workaround for how to hot-plug (and unplug) such a HAL driver.
Post not yet marked as solved
3 Replies
423 Views
We have an Angular/Ionic based app that has an audio playback feature. It appears that iPhone (but not iPad) users who upgrade to iOS 17.4 can no longer play audio in our app; iPad users who upgraded to 17.4 don't have the issue. We use the HTMLAudioElement for audio playback. It appears that in 17.4 it no longer fires the 'canplay' event that we listen for to start our playback. The other data-buffering events like 'loadeddata' are also not delivered. By changing the logic to listen for the 'loadstart' event, audio playback works, and then the remaining 'canplaythrough' and 'canplay' events are delivered. In other words, I need to start playback before any data-buffering status events are delivered; otherwise they never arrive. I am testing this against an audio delivery server on my own machine and have confirmed that the data is delivered correctly. Is anyone else experiencing a similar issue on iPhones with iOS 17.4?
Posted by drjvp.
Post not yet marked as solved
0 Replies
146 Views
I often find that basic actions in MusicKit are incredibly slow compared to Apple's Music app. I've tried different OS versions, devices, and networks, as well as Apple's sample code, throughout the last several years, and it is always the same. Does anyone else have this issue?
Post not yet marked as solved
0 Replies
138 Views
We develop virtual instruments for Mac/AU and are trying to get our AU plug-ins and our standalone player to work with audio workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads are running on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met. The processing thread itself stays on a P-core but has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
Post not yet marked as solved
0 Replies
171 Views
I've noticed that audio is locked into position with builds. It sometimes comes out the left ear, other times the right. I occasionally get it working in both, but the audio isn't moving with head tracking. I've seen other reports of this issue in the Unity support forums too. Any ideas on how to fix this? It's being reported in my app reviews as a negative.
Posted by Rave-TZ.
Post not yet marked as solved
1 Reply
296 Views
Hello everyone, I'm relatively new to iOS development, and I'm currently working on a Flutter plugin package. I want to use the AVFAudio package to load instrument sounds from an SF2 file into different channels. Specifically, I'd like to load individual instruments from the SF2 file onto separate channels. However, I've been struggling to find a way to achieve this. Could someone guide me on how to load SF2 instrument sounds into different channels using AVFAudio? I've tried various combinations of parameters (program number, soundbank MSB, and soundbank LSB), but none seem to work. If anyone has experience with AVFAudio and SF2 files, I'd greatly appreciate your help. Perhaps there's a proven approach or a way to determine the correct values for these parameters? Should I use a soundfont editor to inspect specific values within the SF2 file? Thank you in advance for any assistance! Best regards, Melih
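For what it's worth, one approach that has worked for this kind of setup (an assumption about your goal, not the only option AVFAudio offers) is to create one AVAudioUnitSampler per instrument and load a different preset from the same SF2 into each. The General MIDI melodic-bank constants from AudioToolbox are the usual values for bankMSB/bankLSB, and a soundfont editor such as Polyphone will show each preset's actual bank and program numbers. A minimal sketch; the file name and program numbers are hypothetical:

import AVFAudio
import AudioToolbox

let engine = AVAudioEngine()
// Hypothetical bundled soundfont; substitute your own SF2 file.
let sf2URL = Bundle.main.url(forResource: "Instruments", withExtension: "sf2")!

// One sampler per logical "channel", each loading its own preset.
func makeSampler(program: UInt8) throws -> AVAudioUnitSampler {
    let sampler = AVAudioUnitSampler()
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)
    try sampler.loadSoundBankInstrument(
        at: sf2URL,
        program: program,
        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),  // 0x79, GM melodic bank
        bankLSB: UInt8(kAUSampler_DefaultBankLSB))         // 0
    return sampler
}

let piano = try makeSampler(program: 0)     // program numbers depend on the SF2's presets
let strings = try makeSampler(program: 48)
try engine.start()
piano.startNote(60, withVelocity: 100, onChannel: 0)
strings.startNote(64, withVelocity: 100, onChannel: 0)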
Post not yet marked as solved
4 Replies
320 Views
I can't figure out how to get audio from my RealityKitContentBundle to play on Vision Pro... I have a scene in Reality Composer Pro called "WinterVivarium" which contains a 3D model of a tree, a particle emitter, a ChannelAudio entity, and an audio file (m4a) with 30 minutes of nature sounds. The 3D model and particle emitter load up just fine on my device, but I'm getting an error when I try to load the audio... Swift file below. When I run the app and this file gets called it throws the following error:

"Error loading winter vivarium model and/or audio: The operation couldn’t be completed. (RealityKit.__REAsset.LoadError error 2.)"

ChatGPT tells me error code 2 likely means "file not found" but I'm not sure on that one... Please help!

import SwiftUI
import RealityKit
import RealityKitContent

struct WinterVivarium: View {
    @State private var angle: Angle = .degrees(0)

    var body: some View {
        RealityView { content in
            let audioFilePath = "/Root/back-yard-feb-7am.m4a"
            let audioEntity = Entity()
            do {
                let entity = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
                content.add(entity)
                let resource = try await AudioFileResource.load(named: audioFilePath,
                                                                from: "WinterVivarium.usda",
                                                                in: RealityKitContent.RealityKitContentBundle)
                let audioController = audioEntity.playAudio(resource)
            } catch {
                print("Error loading winter vivarium model and/or audio: \(error.localizedDescription)")
            }
        }
    }
}

#Preview {
    WinterVivarium()
}
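Two things worth checking, both assumptions on my part rather than confirmed causes: Reality Composer Pro sanitizes asset names when it builds the USD hierarchy (dots and dashes typically become underscores, so the resource path may be "/Root/back_yard_feb_7am_m4a" rather than the original file name), and the code above plays the audio on a detached audioEntity that is never added to the scene. A sketch with both changes; the "ChannelAudio" entity name is a guess at what the audio source is called in the RCP hierarchy:

RealityView { content in
    do {
        let scene = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
        content.add(scene)

        // Assumed RCP-sanitized resource name (dots/dashes become underscores).
        let resource = try await AudioFileResource.load(named: "/Root/back_yard_feb_7am_m4a",
                                                        from: "WinterVivarium.usda",
                                                        in: realityKitContentBundle)

        // Play on an entity that is actually part of the scene.
        let source = scene.findEntity(named: "ChannelAudio") ?? scene
        _ = source.playAudio(resource)  // keep the controller if you need stop/fade
    } catch {
        print("Error loading winter vivarium model and/or audio: \(error)")
    }
}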
Post not yet marked as solved
0 Replies
184 Views
I am working on a radio app. This is my first time, and I have a problem with the lock screen audio card. According to the docs it looks OK, but could you please check why I cannot display the Now Playing card on the lock screen? Two code samples follow: 1. Now Playing, and 2. the logic for the current song and album art.

1. Now Playing

// Create a dictionary to hold the now playing information
var nowPlayingInfo: [String: Any] = [:]

// Set the title of the current song
nowPlayingInfo[MPMediaItemPropertyTitle] = currentSong

// If album art URL is available, fetch the image asynchronously
if let albumArtUrl = albumArtUrl {
    URLSession.shared.dataTask(with: albumArtUrl) { data, _, error in
        if let data = data, let image = UIImage(data: data) {
            // Create artwork object
            let artwork = MPMediaItemArtwork(boundsSize: image.size) { _ in image }
            // Update now playing info with artwork on the main queue
            DispatchQueue.main.async {
                nowPlayingInfo[MPMediaItemPropertyArtwork] = artwork
                MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
            }
        } else {
            // If there's an error fetching the album art, set now playing info without artwork
            MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
            print("Error retrieving album art data:", error?.localizedDescription ?? "Unknown error")
        }
    }.resume()
} else {
    // If album art URL is not available, set now playing info without artwork
    MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
}

2. Current song and album art logic

    let parts = currentSong.split(separator: "-", maxSplits: 1, omittingEmptySubsequences: true)
        .map { $0.trimmingCharacters(in: .whitespaces) }
    let titleWithExtra = parts.count > 1 ? parts[1] : ""
    let title = titleWithExtra.components(separatedBy: " (").first ?? titleWithExtra
    return title
}

func updateSongInfo() {
    let url = URL(string: "https://live.heartfm.com.tr/listen/heart_fm/currentsong")!
    URLSession.shared.dataTask(with: url) { data, response, error in
        if let data = data, let songString = String(data: data, encoding: .utf8) {
            DispatchQueue.main.async {
                self.currentSong = songString.trimmingCharacters(in: .whitespacesAndNewlines)
                self.updateAlbumArtUrl(song: self.currentSong)
            }
        }
    }.resume()
}

private func updateAlbumArtUrl(song: String) {
    let parts = song.split(separator: "-", maxSplits: 1, omittingEmptySubsequences: true)
        .map { $0.trimmingCharacters(in: .whitespaces) }
    let artist = parts.first ?? ""
    let titleWithExtra = parts.count > 1 ? parts[1] : ""
    let title = titleWithExtra.components(separatedBy: " (").first ?? titleWithExtra
    let artistAndTitle = artist.isEmpty || title.isEmpty ? song : "\(artist) - \(title)"
    let encodedArtistAndTitle = artistAndTitle.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? artistAndTitle
    albumArtUrl = URL(string: "https://www.heartfm.com.tr/ArtCover/\(encodedArtistAndTitle).jpg")
}
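Setting nowPlayingInfo alone is often not enough for the lock screen card; in my experience it also requires an active .playback audio session and at least one enabled remote command handler. A sketch of those two pieces (an assumption about what's missing here, since the session and command setup aren't shown in the post):

import AVFoundation
import MediaPlayer

func configureForLockScreen() {
    // 1. A playback session keeps the app eligible for the Now Playing card.
    try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
    try? AVAudioSession.sharedInstance().setActive(true)

    // 2. At least one remote command should be handled.
    let center = MPRemoteCommandCenter.shared()
    center.playCommand.addTarget { _ in
        // start the stream here
        return .success
    }
    center.pauseCommand.addTarget { _ in
        // pause the stream here
        return .success
    }
}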
Posted by lcyzkn.
Post not yet marked as solved
1 Reply
238 Views
There is a CustomPlayer class, and inside it MTAudioProcessingTap is used to modify the audio buffer. Say there are instances A and B of the CustomPlayer class. While both A and B are running, when A finishes its operation and is terminated, B's MTAudioProcessingTap process is stopped and its finalize callback fires, even though B still has work left to do. The same code in the same project does not behave this way on iOS 17.0 or lower: there, when A is terminated, B completes its task without any impact. What changes in iOS 17.1 produce this behavior? I'd appreciate an answer on how to avoid the issue.

let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                            flagsOut, &timeRange, numberFramesOut)
            if noErr == status {
                ....
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    guard noErr == status else { return }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
Posted by koggo2.
Post not yet marked as solved
0 Replies
229 Views
Is there a way for an FxPlug to access the source audio? Or do we need to make an AU plug-in, apply it to an audio source (either a video or an audio track), and feed the info to the FxPlug via shared memory? Is there an AU plug-in that lets external processes "listen" to the audio?
Posted by belisoful.
Post not yet marked as solved
0 Replies
229 Views
Hi all, I have created a Quick Look preview for my custom data type in my app. I use SwiftUI wrapped in UIKit for the preview. My issue is that when I try to play audio using AVAudioPlayer, I receive a status code -50 error. Does anyone know if there are separate permissions I need to request before being able to do this? Here are the errors I get while trying to set my audio session as active and play on the AVAudioPlayer. Thanks for your help and advice!

The operation couldn’t be completed. (OSStatus error -50.)
nwi_state: registration failed (9)
connection <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
auto-cancelling <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
throwing swix::exception: !(is_valid())
AQ_API_V2Impl.cpp:134  AudioQueueNew: <-AudioQueueNew failed -302
rebuilding null connection
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
connection <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
throwing swix::exception: !(is_valid())
auto-cancelling <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
AQ_API_V2Impl.cpp:134  AudioQueueNew: <-AudioQueueNew failed -302
Posted by emilea.
Post not yet marked as solved
0 Replies
224 Views
Each time you listen to music, you are streaming from a server frequently powered by coal or gas, rarely by green energy. As a developer on iOS, I request that Apple provide downloading of audio files in our audio apps. The goal is not to resell the audio or violate authors' rights; YouTube already does this. It is time to find tips and tricks to reduce energy consumption, especially in data broadcasting and the useless streaming of the same song again and again and again. Is it possible to change the API in accordance with this reality?
Posted by tinolayco.
Post not yet marked as solved
0 Replies
266 Views
I'm exploring enabling speech-to-command processing for a game, but would like to try to do a baseline of voice recognition within that, to allow two people in close proximity to interact without interfering with each other's voice commands to the system. (It's for an accessible game idea.)
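For the baseline recognition part, the Speech framework handles on-device command transcription; it does not do speaker separation, so keeping two nearby players apart would need something on top of it (that part is the open problem here, not something this sketch solves). A minimal sketch, assuming speech-recognition and microphone permissions have already been granted:

import Speech
import AVFoundation

final class CommandListener {
    // Assumes SFSpeechRecognizer.requestAuthorization has been granted and the
    // usage-description keys are present in Info.plist.
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let engine = AVAudioEngine()
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.requiresOnDeviceRecognition = true  // low latency for commands

        let input = engine.inputNode
        input.installTap(onBus: 0, bufferSize: 1024,
                         format: input.outputFormat(forBus: 0)) { buffer, _ in
            request.append(buffer)
        }
        engine.prepare()
        try engine.start()

        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let text = result?.bestTranscription.formattedString {
                print("heard:", text)  // map recognized text to game commands here
            }
        }
    }
}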
Posted by heckj.
Post not yet marked as solved
1 Reply
530 Views
Hi, I was trying to design the above UI, but using the code below with CPListImageRowItem:

func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                              didConnect interfaceController: CPInterfaceController) {
    self.interfaceController = interfaceController

    // Create a list row item with images
    let item = CPListImageRowItem(text: "Category",
                                  images: [UIImage(named: "cover.jpeg")!,
                                           UIImage(named: "cover2.jpeg")!,
                                           UIImage(named: "discover.jpeg")!,
                                           UIImage(named: "thumbnail.jpeg")!])

    // Create a list section
    let section = CPListSection(items: [item])

    // Create a list template with the section
    let listTemplate = CPListTemplate(title: "Your Template Title", sections: [section])

    // Set the template on the interface controller
    interfaceController.setRootTemplate(listTemplate, animated: true)
}

I get only the header and the image items; there is no way to set the detail text under the images. Can anyone help me out with this?
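As far as I know, CPListImageRowItem doesn't expose a per-image caption, so if each artwork needs a title underneath, one workaround (my assumption about the intended design, not a documented equivalent) is CPGridTemplate, whose grid buttons pair an image with title text:

import CarPlay
import UIKit

// Each CPGridButton renders its image with a title underneath.
let names = ["cover", "cover2", "discover", "thumbnail"]
let buttons: [CPGridButton] = names.map { name in
    CPGridButton(titleVariants: [name],
                 image: UIImage(named: "\(name).jpeg")!) { _ in
        // handle selection
    }
}
let gridTemplate = CPGridTemplate(title: "Category", gridButtons: buttons)
// interfaceController is the CPInterfaceController stored in the delegate above.
interfaceController.setRootTemplate(gridTemplate, animated: true)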
Posted by TarakJaan.
Post not yet marked as solved
0 Replies
364 Views
I have an app that is getting rejected from TestFlight because of this error:

ITMS-90683: Missing purpose string in Info.plist - Your app’s code references one or more APIs that access sensitive user data, or the app has one or more entitlements that permit such access. The Info.plist file for the “TurtleTuner.app” bundle should contain a NSCameraUsageDescription key with a user-facing purpose string explaining clearly and completely why your app needs the data. If you’re using external libraries or SDKs, they may reference APIs that require a purpose string. While your app might not use these APIs, a purpose string is still required. For details, visit: https://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy/requesting_access_to_protected_resources

The app does not use the camera, only the microphone. I cannot find references to the camera in any of the third-party libraries I'm using. What are some ways to troubleshoot this beyond looking for "camera" in the few dependencies?

For context, this commit allows the app to get through to TestFlight successfully:
https://github.com/tsargent/turtle-tuner/commit/67d4a52e62839ad6c2a49848bea9c408d983f17a

While the following commit, which reverts it, fails on TestFlight with the mentioned camera permission error:
https://github.com/tsargent/turtle-tuner/commit/c95b0b16c4e85d77e625d36b816ed53faa826cf5
Posted by twsargent.