Post not yet marked as solved
A: iPhone SE (2nd generation), iOS 16.5
Bluetooth device used: Shokz OpenRun S803
B: Any mobile device
A uses the Bluetooth microphone/speaker and makes a call to B from an iPhone app.
Mute A's headset (the Bluetooth device supports hardware mute).
While A is muted, B speaks.
Unmute A's headset.
Every time B speaks, B hears an echo.
Since no audio data arrives while the hardware mute is engaged, VPIO has no reference data to remove the echo signal.
Is there any way to resolve this echo in VoIP software that uses VPIO?
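One direction I'm considering (a minimal sketch, assuming an AVAudioEngine-based call pipeline rather than my actual code): mute the uplink in software instead of relying on the headset's hardware mute, so the voice-processing unit keeps running and its echo canceller never loses its reference signal.
import AVFoundation

// Minimal sketch (assumption: the call audio goes through AVAudioEngine with
// voice processing enabled on the input node). Muting in software keeps the
// IO running, so the echo canceller still receives reference data.
final class CallAudio {
    private let engine = AVAudioEngine()

    func start() throws {
        try engine.inputNode.setVoiceProcessingEnabled(true)
        // The tap stands in for "send captured voice to the network" in a real app.
        let format = engine.inputNode.outputFormat(forBus: 0)
        engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Hand `buffer` to the VoIP stack here.
        }
        try engine.start()
    }

    func setMuted(_ muted: Bool) {
        // Only the captured voice is suppressed; the render loop keeps going.
        engine.inputNode.isVoiceProcessingInputMuted = muted
    }
}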
Post not yet marked as solved
New in iOS 17, we can control the amount of 'other audio' ducking through AVAudioEngine. Is this also possible with AVAudioSession?
In my app I don't use voice input, but I do play voice audio while music from other apps plays in the background. Often the music either drowns out the voice if I use the .mixWithOthers option, or it's not loud enough if I use .duckOthers. It would be awesome to have the level of control that AVAudioEngine has.
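For reference, this is the AVAudioEngine-side API I'm referring to (a minimal sketch; it assumes voice processing is enabled on the input node, which my app doesn't otherwise need):
import AVFoundation

// iOS 17: fine-grained ducking of other audio, configured on the
// voice-processing input node rather than on the audio session.
let engine = AVAudioEngine()
do {
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Could not enable voice processing: \(error)")
}

let ducking = AVAudioVoiceProcessingOtherAudioDuckingConfiguration(
    enableAdvancedDucking: true,
    duckingLevel: .mid   // .default, .min, .mid or .max
)
engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration = ducking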
Post not yet marked as solved
Thank you for this new API.
Today, when using AUVoiceIO, voice gets processed ahead of rendering to ensure the echo canceller is capable of filtering it out from the input.
Will other audio be processed in the same way, for example rendered as mono at a 16 kHz sampling rate?
I'm asking because I'm wondering if this API will unlock the ability to use wide-band, stereo, high-quality other audio (for example, game audio) simultaneously with voice.
Thanks!
Post not yet marked as solved
I am getting an error in iOS 16. This error doesn't appear in previous iOS versions.
I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:
Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets
This is how the audio format and the callback is set:
// Set the audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 4000;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

// Set the output render callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
Note that the mSampleRate I set is 4000 Hz.
In iOS 15 I get an IO buffer duration (IOBufferDuration) of 0.02322 seconds and 93 frames in each callback. This is expected, because:
frames per callback = sample rate × buffer duration
4000 Hz × 0.02322 s ≈ 93 frames
However, in iOS 16 I am getting the aforementioned error in the callback.
Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets
Since the number of frames equals the number of packets, I am getting 1 or 2 frames per callback, while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal. Using the same relationship:
frames per callback = sample rate × buffer duration
2 frames ÷ 0.02322 s ≈ 86 Hz
That doesn't make any sense. The error appears for other sampling rates as well (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 Hz as my sampling rate.
I have also tried to set the sampling rate using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed; the sampling rate was still 44100 after calling that function.
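For completeness, here is roughly how I try to request the rate on the session (a hedged sketch of my setup; the hardware is free to reject 4000 Hz and report its own rate back):
import AVFoundation

// Hedged sketch, not my exact production code: request 4000 Hz and a matching
// IO buffer duration before activating the session, then read back what the
// hardware actually granted.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    try session.setPreferredSampleRate(4000)
    try session.setPreferredIOBufferDuration(93.0 / 4000.0) // ~0.02325 s
    try session.setActive(true)
} catch {
    print("AVAudioSession setup error: \(error)")
}
print("Granted sample rate: \(session.sampleRate)")
print("Granted IO buffer duration: \(session.ioBufferDuration)")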
Any help on this issue would be appreciated.
Post not yet marked as solved
How can I do FFT processing inside the AudioQueue output callback (AudioQueueOutputCallback)?
The callback looks like this:
static void audioQueueOutpuCallBack(void *input, AudioQueueRef inQueue, AudioQueueBufferRef outQueueBuffer)
{
    SYAudioQueue *aq = (__bridge SYAudioQueue *)input;
    dispatch_semaphore_wait(aq->m_mutex, DISPATCH_TIME_FOREVER);
    [aq enterQueue:inQueue withBuffer:outQueueBuffer];
    dispatch_semaphore_signal(aq->m_mutex);
}
I know that AVAudioEngine can do FFT processing on an AVAudioPCMBuffer.
Alternatively, how can I convert an AudioQueueBufferRef to an AVAudioPCMBuffer?
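One approach that might work (a hedged sketch; it assumes 16-bit interleaved PCM with an AVAudioFormat that matches the queue's format, and makePCMBuffer is just a hypothetical helper name): copy the queue buffer's bytes into a freshly allocated AVAudioPCMBuffer and run the FFT on that.
import AVFoundation

// Hedged sketch: copy the bytes of an AudioQueueBufferRef into an
// AVAudioPCMBuffer so it can be handed to FFT code that expects PCM buffers.
// Assumes an interleaved PCM format matching `format`.
func makePCMBuffer(from queueBuffer: AudioQueueBufferRef,
                   format: AVAudioFormat) -> AVAudioPCMBuffer? {
    let byteCount = Int(queueBuffer.pointee.mAudioDataByteSize)
    let bytesPerFrame = Int(format.streamDescription.pointee.mBytesPerFrame)
    guard bytesPerFrame > 0 else { return nil }
    let frameCount = AVAudioFrameCount(byteCount / bytesPerFrame)

    guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format,
                                           frameCapacity: frameCount) else { return nil }
    pcmBuffer.frameLength = frameCount

    // Copy the raw audio bytes into the PCM buffer's underlying AudioBufferList.
    let dst = pcmBuffer.mutableAudioBufferList.pointee.mBuffers
    memcpy(dst.mData, queueBuffer.pointee.mAudioData, byteCount)
    return pcmBuffer
}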
Post not yet marked as solved
Hello,
Here is an issue I encountered recently. Does anybody have feedback on this?
Issue encountered
AVAudioFile throws when opening WAV files and MPEG-DASH files that have a .mp3 extension, but works fine with many other tested combinations of format and extension (for example, an AIFF file with a .mp3 extension is read by AVAudioFile without error).
The Music app, AVAudioFile and ExtAudioFile all fail on the same files.
However, previewing an audio file in Finder (select the file and hit the space bar) works regardless of the file extension.
Why do I consider this an issue?
AVAudioFile seems to rely on the file extension sometimes, but not always, to guess the audio format, which leads to unexpected errors.
I would expect AVAudioFile to deal properly with wrong extensions for all supported audio formats.
⚠️ This behaviour can cause real trouble in iOS and macOS applications that use audio files provided by the user, which often have unreliable extensions.
I published some code to easily reproduce the issue:
https://github.com/ThomasHezard/AVAudioFileFormatIssue
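In the meantime, this is the kind of workaround I'm experimenting with (a hedged sketch, not a confirmed fix; openAudioFileIgnoringExtension is a hypothetical helper): if AVAudioFile rejects the file as-is, retry through temporary hard links with candidate extensions until one of them opens.
import AVFoundation

// Hypothetical workaround sketch: give the extension-based format guess a
// chance to match the real content by retrying via temporary hard links.
func openAudioFileIgnoringExtension(at url: URL) throws -> AVAudioFile {
    if let file = try? AVAudioFile(forReading: url) { return file }

    let candidates = ["wav", "mp3", "m4a", "aif", "caf"]
    for ext in candidates {
        let tmp = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(ext)
        try? FileManager.default.linkItem(at: url, to: tmp)
        defer { try? FileManager.default.removeItem(at: tmp) }
        if let file = try? AVAudioFile(forReading: tmp) { return file }
    }
    // Give up and surface the original error.
    return try AVAudioFile(forReading: url)
}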
Thank you everybody, have a great day 😎
Post not yet marked as solved
An error is reported when playing HTML5 audio or video elements in a WKWebView:
Error acquiring assertion: Error Domain=RBSAssertionErrorDomain Code=3 "Required client entitlement is missing" UserInfo={RBSAssertionAttribute=RBSDomainAttribute| domain:"com.apple.webkit" name:"MediaPlayback" sourceEnvironment:"(null)", NSLocalizedFailureReason=Required client entitlement is missing}
After this, the performance of the web view becomes very poor.
There is an audio element and a button in my HTML file. Clicking the button plays the audio.
<body>
    <button onclick="handleClick()">PLAY</button>
    <audio id="audio" src="https://ac-dev.oss-cn-hangzhou.aliyuncs.com/test-2022-music.mp3"></audio>
    <script>
        function handleClick() {
            document.getElementById("audio").play();
        }
    </script>
</body>
I create a WKWebView to load the HTML file in my demo app:
import UIKit
import WebKit

class ViewController: UIViewController, WKUIDelegate {
    var webView: WKWebView!

    override func loadView() {
        let config = WKWebViewConfiguration()
        config.preferences.javaScriptEnabled = true
        config.allowsInlineMediaPlayback = true
        webView = WKWebView(frame: .zero, configuration: config)
        webView.uiDelegate = self
        view = webView
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let myURL = URL(string: "https://ac-dev.oss-cn-hangzhou.aliyuncs.com/test-2022-py.html")
        let myRequest = URLRequest(url: myURL!)
        webView.load(myRequest)
    }
}
Click the button in the HTML page to play the audio, and you can see the error reported in Xcode:
iPadN[2133:855729] [assertion] Error acquiring assertion: Error Domain=RBSAssertionErrorDomain Code=3 "Required client entitlement is missing" UserInfo={RBSAssertionAttribute=RBSDomainAttribute| domain:"com.apple.webkit" name:"MediaPlayback" sourceEnvironment:"(null)", NSLocalizedFailureReason=Required client entitlement is missing}
To sum up, this error appears when playing audio or video in HTML. The app's performance then drops a lot, and interactive response becomes very slow.
Post not yet marked as solved
I am experiencing an issue where my Mac's speakers crackle and pop when running an app in the Simulator, or even when previewing SwiftUI with Live Preview.
I am using a 16" MacBook Pro (i9) and I'm running Xcode 12.2 on Big Sur (11.0.1).
Killing coreaudiod temporarily fixes the problem, but this is not much of a solution.
Is anyone else having this problem?
Post not yet marked as solved
Hello, after updating to iOS 16.4 I have major issues when trying to play music through my 2015 BMW, over both Bluetooth and USB. I have used other phones with earlier iOS versions and they work flawlessly, so I know it's 16.4. I have tried restarting, updating the BMW software, and disconnecting/reconnecting, but no luck, as it's certainly a 16.4 issue. The problems are as follows (for Apple Music, Spotify, SoundCloud, any audio streaming):
No album artwork
No song title
No album title
No ability to change songs unless on phone
When attempting to play a song, it only plays the first ~30 seconds or so before restarting at the first song in my library. This happens over, and over, and over again.
16.4 has made enjoying music in my car impossible. I have submitted 2 tickets in the Feedback app with no response, and when I try to contact Apple they just tell me to submit feedback and are unable to help. Hoping a dev or someone sees this and is able to fix it. Thank you.
Post not yet marked as solved
Hi,
Does anyone know if there are a lot of issues with requesting the Audio entitlements for CarPlay?
I know a lot of developers have requested these entitlements without a reply from Apple.
Thanks for helping me!
Andrea
Post not yet marked as solved
I am trying to figure out why there is no audio on my iPhone or iPad; my code works on other devices. I am on an iPad with iOS 15.3.1 and I test on my computer using Safari. Video works, and both video and audio work on Android, Chrome, etc. This is an audio-only problem on iOS.
From my WebRTC code I have an HTML5 audio track such as:
<audio muted="false" autoplay="1" id="xxxx"></audio>
When debugging, I connect my iPad and run this volume check:
document.getElementById('***').volume
And it returns the value 1, so the volume is at its loudest (HTML5 audio volume ranges from 0 to 1).
document.getElementById('***').ended
This returns false. Next I try to run the play() function:
$('#***')[0].play()
    .then((resp) => {
        console.log("Success");
        console.log(resp);
    })
    .catch(error => { console.log(error); });
And it executes the success response, but there is still no sound. What could be causing this issue on iOS and Safari only?
Post not yet marked as solved
To reproduce this error, first install a build of the app (for example, build 14). Then generate some internal audio files within the application. After that, if you update to a new build, for example build 15, you will notice that some of the audio files are not persisted.
I made a video on YouTube to explain my point:
https://youtu.be/fbZ5okq2ddo
I am a Flutter developer. The problem is that after the build update, some data persists while other data does not. The program generates several audio files during normal use. Some of these audio files are recorded directly with the microphone, while others are generated by concatenating pre-existing audio files. Interestingly, it is the audio files generated by concatenation that do not persist after the build update.
Here is the address of one of the audio files in the set that is not persisting:
/var/mobile/Containers/Data/Application/F7288BFF-6A62-49BF-961C-615C17DAE0FE/Library/Application Support/guerreirodafogueiragmailcom_autocura_folder/transMentaltmenv4yT2uHbj8G9p6T.mp3
And here is the address of one of the audio files that is persisting:
/var/mobile/Containers/Data/Application/F7288BFF-6A62-49BF-961C-615C17DAE0FE/Library/Application Support/guerreirodafogueiragmailcom_autocura_folder/revoltas.mp3
Other data, such as lists of strings and images, are not lost.
Any suggestion to solve this issue?
The iPhone used to run the app: iPhone SE, iOS 15.7.5
My Xcode version is 14.2 (14C18).
My Flutter doctor result:
[✓] Flutter (Channel stable, 3.0.5, on macOS 13.2.1 22D68 darwin-arm, locale pt-BR)
[✓] Android toolchain - develop for Android devices (Android SDK version 32.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 14.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2021.2)
[✓] VS Code (version 1.77.0)
[✓] Connected device (3 available)
[✓] HTTP Host Availability
Post not yet marked as solved
My application's music/media player is defined in my app as:
@State var musicPlayer = MPMusicPlayerController.applicationMusicPlayer
I want to change the playback rate and increase it slightly, but I'm unsure how to do this. The documentation is here, which states it's part of the MPMediaPlayback protocol, but I'm unsure how to get this to work.
I tried doing the following:
self.musicPlayer.currentPlaybackRate(1.2)
But I just get the following error:
Cannot call value of non-function type 'Float'
When I play a song (self.musicPlayer.play()), how can I set the playback rate at the same time please?
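For what it's worth, here is the form I now think the API expects (a small sketch, not yet verified on a device): currentPlaybackRate is a settable property from MPMediaPlayback, so it is assigned rather than called.
import MediaPlayer

// Sketch: assign the property instead of calling it like a function,
// right after starting playback.
let musicPlayer = MPMusicPlayerController.applicationMusicPlayer
musicPlayer.play()
musicPlayer.currentPlaybackRate = 1.2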
Post not yet marked as solved
Hello
We have published the Voix Audio Recorder application. The app seems to work, but we have a very strange issue on one particular iPad: the application requests permission for microphone access, yet as you can see in the attached images, the microphone permission is completely missing from the permission list. On similar iPad devices we have not noticed this issue.
What could be the cause of this issue, and how can we avoid it?
Post not yet marked as solved
Hi everyone! I am writing a script to help organize all of the songs in my library, and part of this workflow is to call the Get All Library Songs API. To do this, I tried both creating the requests.get call myself and using a modified fork of an Apple Music Python repo which does work with other APIs (for example, to search for music). The problem is that in both cases I always get a 403 error saying that authentication is required. Here is some info about a request I made:
In addition to the above modifications, I tried running my Python script both standalone and wrapped in an Automator app (I created a shell script to run the Python script, and then used Automator to wrap it in an application called test), and both resulted in the same error.
I verified that the app has access to Media & Apple Music here (and logged out and back into my personal Apple Music account to see if anything changed).
However, my Apple Music account doesn't show the app at all, nor is there a prompt to allow access for it to my account either:
My test app's Info.plist file shows the corresponding flag to enable Apple Music access as well:
Can anyone help identify what could be happening here? Is there an explicit permission that I may need to enable to unblock me? Is there an issue with this API specifically?
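For comparison, here is the equivalent request sketched in Swift (hedged; my actual script is in Python, and the tokens below are placeholders). My current guess is that the /v1/me/ library endpoints also want a Music-User-Token header on top of the developer token, which my requests don't send yet:
import Foundation

// Hedged sketch with placeholder tokens: library endpoints under /v1/me/
// take both the developer token (Authorization header) and a Music User
// Token (Music-User-Token header).
let developerToken = "<signed developer JWT>"   // placeholder
let musicUserToken = "<music user token>"       // placeholder

var request = URLRequest(url: URL(string: "https://api.music.apple.com/v1/me/library/songs")!)
request.httpMethod = "GET"
request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
request.setValue(musicUserToken, forHTTPHeaderField: "Music-User-Token")

URLSession.shared.dataTask(with: request) { data, response, error in
    if let http = response as? HTTPURLResponse {
        print("Status: \(http.statusCode)")
    }
    if let data = data, let body = String(data: data, encoding: .utf8) {
        print(body)
    }
}.resume()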
Thanks!
Post not yet marked as solved
Hi,
I am using WKWebView to display some dynamic content made with the Cocos Creator game engine. For sound to play without user interaction, I was setting the mediaTypesRequiringUserActionForPlayback flag of WKWebViewConfiguration to '0', i.e. no user interaction required to play audio/video.
This setting was working fine and sound autoplay worked up until iOS 16.0; after updating my iPad to iOS 16.3.1, the autoplay of sound stopped working. It can only be played if I call AudioContext.resume() on a user interaction.
Can anyone advise how to get audio autoplay working correctly again?
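For reference, this is essentially the WKWebViewConfiguration I'm using (a trimmed-down sketch of my setup):
import WebKit

// Sketch of the configuration described above: no user gesture required for
// media playback, inline playback allowed. This worked up to iOS 16.0.
let config = WKWebViewConfiguration()
config.allowsInlineMediaPlayback = true
config.mediaTypesRequiringUserActionForPlayback = []
let webView = WKWebView(frame: .zero, configuration: config)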
Thanks
Post not yet marked as solved
What this is about:
I have an iOS "Guitar Effect" app that gets audio signal from input, process it and plays the result audio back to user via output. The app dosn't work with BuiltIn microphone of iOS device (because of feedback) - users have to connect guitar via special device: either analog like iRig or digital like iRig HD.
Starting from iOS 16 I face a weird behaviour of the AVAudioSession that breaks my app. In iOS 16 the input of the AVAudioSession Route is always MicrophoneBuiltIn - no matter if I connect any external microphones like iRig device or headphones with microphone. Even if I try to manually switch to external microphone by assigning the preferredInput for AVAudioSession it doesn't change the route - input is always MicrophoneBuiltIn. In iOS 15 and earlier iOS automatically change the input of the route to any external microphone you attach to the iOS device. And you may control the input by assigning preferredInput property for AVAudioSession.
This is an smallest example project to reproduce the issue.
Project Structure:
This is a very small project created to reproduce the issue; all the code is in the ViewController class.
I create a playAndRecord AVAudioSession and subscribe to the routeChangeNotification notification:
NotificationCenter.default.addObserver(self, selector: #selector(handleRouteChange), name: AVAudioSession.routeChangeNotification, object: nil)

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers)
    try audioSession.setActive(true, options: [])
} catch {
    print("AVAudioSession init error: \(error)")
}
When I get a notification, I print the list of available audio inputs, the preferred input and the current audio route:
@objc func handleRouteChange(notification: Notification) {
    print("\nHANDLE ROUTE CHANGE")
    print("AVAILABLE INPUTS: \(AVAudioSession.sharedInstance().availableInputs ?? [])")
    print("PREFERRED INPUT: \(String(describing: AVAudioSession.sharedInstance().preferredInput))")
    print("CURRENT ROUTE: \(AVAudioSession.sharedInstance().currentRoute)\n")
}
I have a button that displays an alert with the list of all available audio inputs, providing a way to set each input as preferred:
@IBAction func selectPreferredInputClick(_ sender: UIButton) {
    let inputs = AVAudioSession.sharedInstance().availableInputs ?? []
    let title = "Select Preferred Input"
    let message = "Current Preferred Input: \(String(describing: AVAudioSession.sharedInstance().preferredInput?.portName))\nCurrent Route Input \(String(describing: AVAudioSession.sharedInstance().currentRoute.inputs.first?.portName))"
    let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
    for input in inputs {
        alert.addAction(UIAlertAction(title: input.portName, style: .default) { _ in
            print("\n\(title)")
            print("\(message) New Preferred Input: \(input.portName)\n")
            do {
                try AVAudioSession.sharedInstance().setPreferredInput(input)
            } catch {
                print("Set Preferred Input Error: \(error)")
            }
        })
    }
    alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
    present(alert, animated: true)
}
iOS 16 Behaviour:
I launch the app without any external mics attached and initialize the AVAudioSession. Then I attach the iRig device (which is basically an external microphone) and get the following result: MicrophoneWired appears in the list of available inputs, but the input of the route is still MicrophoneBuiltIn.
Then I tried to change the preferredInput of the AVAudioSession, first to MicrophoneWired, then to MicrophoneBuiltIn, and then to MicrophoneWired again.
No matter what the preferredInput is, the input device of the AVAudioSession route is MicrophoneBuiltIn. Sorry for the image; the forum doesn't allow me to post the log message:
iOS 15 Behaviour:
Everything is different (and much better) in iOS 15. I launch the app without any external mics attached and initialize the AVAudioSession. Then I attach the iRig device (which is basically an external microphone) and get the following result:
The input of the AVAudioSession route is MicrophoneWired. Then I change the preferred input of the AVAudioSession and it works fine: the input of the route matches the preferred input of the AVAudioSession. Sorry for the image; the forum doesn't allow me to post the log message:
Conclusion:
Please let me know if there is any way to make the behaviour of iOS 16 the same as on iOS 15 and below. I searched the iOS 16 release notes and didn't find any mention of AVAudioSession. If there is no way to do this, please let me know what the proper way is to manage the input source of the AVAudioSession route. Any advice is highly appreciated.
Is it possible to completely remove the before/next buttons from the CPNowPlayingTemplate?
I've already tried setting live-streaming mode, but the buttons are still there.
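The closest thing I've tried so far is disabling the track-skip remote commands (a hedged attempt; I'm not sure whether this removes the buttons or merely disables them):
import MediaPlayer

// Hedged attempt: the CarPlay now-playing transport controls follow
// MPRemoteCommandCenter, so turning the skip commands off is the closest
// knob I know of. This may only gray the buttons out rather than remove them.
let commandCenter = MPRemoteCommandCenter.shared()
commandCenter.nextTrackCommand.isEnabled = false
commandCenter.previousTrackCommand.isEnabled = false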
Post not yet marked as solved
I want to add music to my app to recreate the '80s era. Do I understand correctly that if I use music by artists who were real and famous at the time, I will be rejected for copyright infringement?
If so, a follow-up question: can I then use music that is available without a license, crediting the author?
This is the most sensitive subject of my app, so I would be glad if you could clarify it. Thank you!
I've created a CPListTemplate with dynamic list entries from an API. All entries are different music channels. In my ForEach that generates the list entries, I've set the .isPlaying property to true and the playingIndicatorLocation to .trailing.
If I play a channel, the isPlaying indicator is animated on every list entry. I've set different metadata in MPNowPlayingInfoCenter, and it is correct inside the CPNowPlayingTemplate. Do I need to set some identifier somewhere so my list template knows which channel is currently playing? If so, where can I set this?
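For context, this is roughly how I imagine the item construction would look if only the current channel were flagged (a sketch; Channel and currentChannelID are hypothetical stand-ins for my own model):
import CarPlay

// Sketch only. `Channel` is a hypothetical stand-in for my channel model.
struct Channel {
    let id: String
    let name: String
}

// Build the list items, marking only the channel that is actually playing.
func makeItems(channels: [Channel], currentChannelID: String?) -> [CPListItem] {
    channels.map { channel in
        let item = CPListItem(text: channel.name, detailText: nil)
        item.playingIndicatorLocation = .trailing
        item.isPlaying = (channel.id == currentChannelID)
        item.handler = { _, completion in
            // Start playback of `channel` here, then rebuild/refresh the
            // section so the playing indicator moves to this item.
            completion()
        }
        return item
    }
}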