Some of the installers we ship suddenly became broken for users running the latest version of OS X. I found that the reason is that we install a Core Audio HAL driver, and because I wanted to avoid a system reboot, I relaunched the coreaudiod daemon from a pkg post-install script:
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
With the OS update, that command now fails if the computer has SIP enabled (which is the default):
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted
It would be great if either:
the change could be reverted, OR
there were a known workaround for how to hot-plug (and unplug) such a HAL driver.
The CoreAudio framework has a process class property kAudioProcessPropertyDevices, which is used to obtain an array of AudioObjectIDs that represent the devices currently used by the process for output.
But this property behaves incorrectly. Specifically, if a process switches from one microphone to another while streaming, this property returns the output device ID as the input device ID.
Steps to reproduce:
run FaceTime
select "MacBook Pro Microphone" as an input device from the FaceTime menu
select "MacBook Pro Speaker" as an output device from the FaceTime menu
start a call
get kAudioProcessPropertyDevices for Input scope: returns ID1 - "MacBook Pro Microphone" [CORRECT]
get kAudioProcessPropertyDevices for Output scope: returns ID2 - "MacBook Pro Speaker" [CORRECT]
change the input device in the FaceTime menu to any other microphone ("AirPods Pro" - ID3)
get kAudioProcessPropertyDevices for Input scope: returns ID2 "MacBook Pro Speaker" but should be ID3 "AirPods Pro" [INCORRECT]
get "kAudioProcessPropertyDevices" for Output scope: returns ID2 "MacBook Pro Speaker" [CORRECT]
Monitoring the property change for kAudioProcessPropertyDevices could provide a means to track audio streaming processes, but its current flaw renders it unusable.
So I'm curious if the macOS developers plan to address this issue in future releases, or if anyone can come up with a reliable alternative for identifying processes and associated audio devices being used for playback or recording.
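For reference, here is roughly how I read the property (a sketch in Swift; it assumes the process object ID has already been obtained, for example from kAudioHardwarePropertyProcessObjectList, and it omits detailed error handling):
import CoreAudio

// Sketch: read kAudioProcessPropertyDevices for one scope of a process object.
// `processObject` is assumed to come from kAudioHardwarePropertyProcessObjectList.
func devicesUsed(by processObject: AudioObjectID,
                 scope: AudioObjectPropertyScope) -> [AudioObjectID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioProcessPropertyDevices,
        mScope: scope,                            // kAudioObjectPropertyScopeInput or ...Output
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(processObject, &address, 0, nil, &dataSize) == noErr,
          dataSize > 0 else { return [] }
    var deviceIDs = [AudioObjectID](repeating: 0,
                                    count: Int(dataSize) / MemoryLayout<AudioObjectID>.size)
    guard AudioObjectGetPropertyData(processObject, &address, 0, nil, &dataSize, &deviceIDs) == noErr else {
        return []
    }
    return deviceIDs
}
This is the query whose Input-scope result goes stale after the FaceTime microphone switch in the steps above.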
Hi
Hopefully a simple question. I just reached for AudioObjectShow() to help debug stuff, and it does not appear to work on audio devices or audio streams: it prints nothing for them. It does work on kAudioObjectSystemObject; I've not explored what else it does or does not work on. I could not find any other posts about this. Is it expected to work? On all audio objects? I'm on macOS 14.4.
Here is a simple demo. AudioObjectShow() prints out info for kAudioObjectSystemObject but then prints nothing as we loop through the audio devices in the system (and the same goes for all streams on all these devices, but I'm not showing that here).
#include <stdio.h>
#include <stdlib.h>
#include <CoreAudio/AudioHardware.h>

static const AudioObjectPropertyAddress devicesAddress = {
    kAudioHardwarePropertyDevices,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

static const AudioObjectPropertyAddress nameAddress = {
    kAudioObjectPropertyName,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

int main(int argc, const char *argv[]) {
    UInt32 size;
    AudioObjectID *audioDevices;

    // Fetch the list of all audio devices in the system.
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size);
    audioDevices = (AudioObjectID *) malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size, audioDevices);
    UInt32 nDevices = size / sizeof(AudioObjectID);

    printf("--- AudioObjectShow(kAudioObjectSystemObject):\n");
    AudioObjectShow(kAudioObjectSystemObject);

    for (UInt32 i = 0; i < nDevices; i++) {
        printf("-------------------------------------------------\n");
        printf("audioDevices[%u] = 0x%x\n", i, audioDevices[i]);

        // Print the device name so we know which device this object ID is.
        CFStringRef cfString = NULL;
        size = sizeof(cfString);
        AudioObjectGetPropertyData(audioDevices[i], &nameAddress, 0, NULL, &size, &cfString);
        CFShow(cfString);
        if (cfString) CFRelease(cfString);

        // Does AudioObjectShow() give us anything for this device?
        printf("--- AudioObjectShow(audioDevices[%u]=0x%x):\n", i, audioDevices[i]);
        AudioObjectShow(audioDevices[i]);
        printf("---\n");
    }
    free(audioDevices);
    return 0;
}
Start of output...
AudioObjectID: 0x1
Class: Audio System Object
Name: The Audio System Object
-------------------------------------------------
audioDevices[0] = 0xd2
Darryl’s iPhone Microphone
--- AudioObjectShow(audioDevices[0]=0xd2):
---
-------------------------------------------------
audioDevices[1] = 0x41
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[1]=0x41):
---
-------------------------------------------------
audioDevices[2] = 0x3b
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[2]=0x3b):
---
-------------------------------------------------
audioDevices[3] = 0x5d
BlackHole 16ch
--- AudioObjectShow(audioDevices[3]=0x5d):
---
-------------------------------------------------
As a straightforward example, I've taken Apple's MV-HEVC sample project and added two lines.
First, after the AVAssetWriterInput is created:
frameInput.performsMultiPassEncodingIfSupported = true
Second, after the call to multiviewWriter.startWriting():
print("canPerformMultiplePasses: \(frameInput.canPerformMultiplePasses)")
Which prints true.
This leads me to believe that the first encoding pass should proceed as normal (even though I haven't handled the logic for the completion of the first pass, etc.).
However, I receive this error when the code attempts to appendTaggedBuffers to the AVAssetWriterInputTaggedPixelBufferGroupAdaptor:
Fatal error: Failed to append tagged buffers to multiview output
Am I missing a step? Or is the multi-pass encoding only supported for standard sample/pixel buffers (and not tagged buffers)?
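For context, this is roughly how I understand the multi-pass handshake could be handled with the AVAssetWriterInput pass APIs (a sketch only; the queue and the frame re-delivery loop are placeholders, not code from the sample project):
import AVFoundation

// Sketch: respond to each pass the writer requests for this input.
func configureMultipassHandling(for frameInput: AVAssetWriterInput, on passQueue: DispatchQueue) {
    frameInput.respondToEachPassDescription(on: passQueue) {
        guard let pass = frameInput.currentPassDescription else {
            // nil means the writer needs no further passes for this input.
            frameInput.markAsFinished()
            return
        }
        // Re-deliver source frames covering pass.sourceTimeRanges through the usual
        // requestMediaDataWhenReady(on:using:) loop, then end this pass with
        // frameInput.markCurrentPassAsFinished().
        _ = pass.sourceTimeRanges
    }
}
My actual question is whether this path works at all when the frames are tagged buffers going through AVAssetWriterInputTaggedPixelBufferGroupAdaptor.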
I want to give the user the option for my text-to-speech app to use a voice that they choose - but when I call "AVSpeechSynthesisVoice.speechVoices()", I get a list of all possible voices.
Each voice has properties like .gender, .quality, and .voiceTraits (which provides .isNoveltyVoice and .isPersonalVoice), but I can't seem to find a property that tells me whether the voice has been downloaded (and can be used) or not.
How can I tell which voices the user has downloaded so I can present a list of what is currently available to use?
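For what it's worth, this is the listing code I'm starting from (a sketch; it shows everything speechVoices() returns for the user's language, which is exactly where I get stuck on the downloaded-or-not question):
import AVFoundation

// Sketch: list the voices reported for the user's current language.
// Nothing here tells me whether a given voice is actually downloaded on the device.
let languageCode = AVSpeechSynthesisVoice.currentLanguageCode()
let voicesForLanguage = AVSpeechSynthesisVoice.speechVoices()
    .filter { $0.language == languageCode }
for voice in voicesForLanguage {
    print(voice.name, voice.identifier, voice.quality.rawValue)
}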
Also - I know I can't download voices for the user - but am I able to use a deep link to open the correct place for the user to download the voices? (i.e. Settings > Accessibility > Spoken Content > Voices > English)
Thanks in advance!
I'm trying to use AVPlayer to capture frames from a livestream that is playing remotely. Eventually I want to convert these frames to UIImages to be displayed. The code I have right now is not working because the pixel buffer never gets an actual value for some reason. When I print itemTime, its value is continuously 0, which I think might be a potential cause of this issue. I would appreciate any help with getting this to work.
import SwiftUI
import UIKit
import CoreImage
import RealityKit
import RealityKitContent
import AVFoundation
import AVKit

class ViewController: UIViewController {
    // {webrtc stream link} is a placeholder for the actual stream URL.
    let player = AVPlayer(url: URL(string: {webrtc stream link})!)
    let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)])

    override func viewDidLoad() {
        print("doing viewDidLoad")
        super.viewDidLoad()
        player.currentItem!.add(videoOutput)
        player.play()
        let displayLink = CADisplayLink(target: self, selector: #selector(displayLinkDidRefresh(link:)))
        displayLink.add(to: RunLoop.main, forMode: RunLoop.Mode.common)
    }

    @objc func displayLinkDidRefresh(link: CADisplayLink) {
        let itemTime = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
        if videoOutput.hasNewPixelBuffer(forItemTime: itemTime) {
            if let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) {
                print("pixelBuffer \(pixelBuffer)") // yay, pixel buffer
                let image = CIImage(cvImageBuffer: pixelBuffer) // or maybe CIImage?
                print("CIImage \(image)")
            }
        }
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
            let viewcontroller = ViewController()
            viewcontroller.viewDidLoad()
        }
    }
}
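For the eventual UIImage conversion mentioned above, this is the kind of helper I have in mind (a sketch; the shared CIContext is an assumption so a new context isn't created per frame):
import UIKit
import CoreImage

let sharedCIContext = CIContext()

// Sketch: render a CVPixelBuffer into a UIImage via Core Image.
func makeUIImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = sharedCIContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}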
Hi Apple developers,
Is it possible to display lyrics synchronized with a song in my app after the song has been identified?
Just like the feature available in the general Shazam app when clicking on the middle-top icon (with music note in it) after Shazam'ing a song.
The app I'm developing is intended for the Deaf and hard-of-hearing and therefore I would love to be able to show the song lyrics to make the app accessible.
I would greatly appreciate your help because I can't find this in the documentation. Many thanks in advance!
I see unexpected behavior when using AudioObjectGetPropertyData() to get the Channel Number Name or the Channel Category Name for the iPhone Microphone or the MacBook Pro Microphone audio devices. I am running macOS 14.4 Sonoma on a Intel MacBook Pro 15" 2019.
I have a test program that loops though all audio devices on a system, and all channels on each device. It uses AudioObjectGetPropertyData() to get the device name and manufacturer name and then iterate over the input and output channels getting Channel Number Name, Channel Name and Channel Category.
I would expect some of these values (as Channel Name frequently is) to be empty CFStrings, or for others to return FALSE from AudioObjectHasProperty() if the driver does not implement the property. And that is how things behave for most devices...
... except for the MacBook Pro Microphone and iPhone Microphone devices. There AudioObjectHasProperty() returns TRUE, but then an AudioObjectGetPropertyData() call with the exact same AudioObjectPropertyAddress returns the error code 'WHAT'.
It took me a little while to realize the error code being returned was 'WHAT' and not 'what', and I added a modified checkError() function here to capture that and more.
So what surprised me is:
If AudioObjectHasProperty() returns TRUE then I expect that the matching AudioObjectGetPropertyData() works.
and
What the heck is 'WHAT'? I assume it is supposed to mean 'what' aka kAudioHardwareUnspecifiedError. Why is that actual error value not returned?
Are there other places that return 'WHAT' or capitalized versions of these standard OSStatus CoreAudio errors?
The example program is not complex but is too long for here so it's on GitHub at https://github.com/Darryl-Ramm/Wot
Here is some output from that program showing the unexpected behavior:
output.txt
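For anyone who doesn't want to open the repo, the core of the per-channel query looks roughly like this (a Swift sketch of what the test program does; the deviceID and the input scope are assumptions):
import CoreAudio

// Sketch: ask for the Channel Number Name of one input channel of a device.
// For the built-in/iPhone microphone devices, HasProperty says TRUE but
// GetPropertyData then fails with the 'WHAT' status described above.
func channelNumberName(device: AudioObjectID, channel: UInt32) -> CFString? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioObjectPropertyElementNumberName,
        mScope: kAudioObjectPropertyScopeInput,
        mElement: channel)                      // channels are elements 1...N; 0 is the main element
    guard AudioObjectHasProperty(device, &address) else { return nil }
    var name: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = AudioObjectGetPropertyData(device, &address, 0, nil, &size, &name)
    return status == noErr ? name : nil
}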
Hello,
I want to play movie files (e.g. mp4, mov) in a Vision Pro app, and I want to present the video in a panoramic, curved view (like the albums app > panorama picture > panorama button) in my app.
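For reference, basic flat playback with RealityKit's VideoPlayerComponent looks roughly like this (a sketch; "movie.mp4" is a placeholder resource name). The panoramic, curved presentation is the part I'm unsure about, since it would presumably need custom curved geometry or a textured mesh:
import RealityKit
import AVFoundation
import Foundation

// Sketch: flat video playback on an entity via VideoPlayerComponent.
func makeVideoEntity() -> Entity? {
    guard let url = Bundle.main.url(forResource: "movie", withExtension: "mp4") else { return nil }
    let player = AVPlayer(url: url)
    let entity = Entity()
    entity.components.set(VideoPlayerComponent(avPlayer: player))
    player.play()
    return entity
}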
Apple has introduced a way to disable Reactions by default with the key NSCameraReactionEffectGesturesEnabledDefault. I have checked and unchecked Reactions several times; how can I simulate the default-behaviour scenario? Is there any setting or command-line utility to reset my Mac mini to the default state for the Reactions gesture?
Hi. Can anyone who's into cosmology give hints as to how I might depict and animate dark matter for the VisionPro?
I first encountered this issue in my Spotify web app on 6 March 2024, where a song will restart and/or jump to another song within the playlist, play for a bit, and occasionally restart a number of times.
I never know when Spotify will restart or skip the song. There were no issues on Apple Music or on my iPhone and Apple Watch, and then it started happening today, 21 March 2024.
I tried Googling but to no avail, and I have exhausted all solutions with Spotify's care team (re-installing, clearing the app's and the MacBook's caches, host files, etc., restarting my devices).
I assume it's now an Apple software issue with macOS Sonoma? Please help, anyone / Apple!
Details:
The Spotify version I'm running is 1.2.33.1039.g8ddb5918
MacBook Air M2 2022 with macOS Sonoma 14.4
iPhone 14 Pro
Hello,
We are trying to use audio-calling functionality in visionOS, with no success since the visionOS update. We do not use CallKit for this flow.
We set the AudioSession as followed:
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord mode:AVAudioSessionModeVoiceChat options: (AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionAllowBluetoothA2DP | AVAudioSessionCategoryOptionMixWithOthers) error:&error_];
We are creating our Audio unit as followed:
AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;
AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),"couldn't create a new instance of Apple Voice Processing IO.");
UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, audioUnitElementIOInput, &one_, sizeof(one_)), "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, audioUnitElementIOOutput, &one_, sizeof(one_)), "could not enable output on Apple Voice Processing IO");
IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);
bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel, CAStreamBasicDescription::kPCMFormatInt16, isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");
UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");
UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, &propSize_), "couldn't get max frames per slice on Apple Voice Processing IO");
AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &renderCallbackStruct_, sizeof(renderCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");
AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Input, 0, &inputCallbackStruct_, sizeof(inputCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");
And as soon as we try to start the AudioUnit we have the following error:
PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]: Error Domain=com.apple.coreaudio.phase Code=1346924646 "failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6" UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}
We do not use the PHASE framework on our side, and the error is not clear to us nor documented anywhere.
We also tried an AudioUnit that only does speaker output, which works perfectly, but as soon as we try to record from an AudioUnit, the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording.
We suppose that somehow an IO VoIP audio unit is running inside PHASE that prevents us from stopping/killing it when we try to create our own, and that stops the whole flow.
It used to work on visionOS 1.0.1.
Regards,
Summit-tech
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:
func startRecording()
{
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    }
    catch
    {
        recording = false
        recordingFinished(success: false)
    }
}
The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 8000 Hz, Int16
A subsequent attempt to load the file into AVAudioPlayer results in:
MP4_BoxParser.cpp:1089 DataSource read failed
MP4AudioFile.cpp:4365 MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105 OpenFromDataSource failed
AudioFileObject.cpp:80 Open failed
But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method.
I've also tried constructing the recorder with
let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil
{
    print("Audio format failed.")
}
else
{
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...
with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 44100 Hz, Int32
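For completeness, here is the audio-session setup I would expect to pair with this (a sketch of the standard AVAudioSession record configuration; it isn't shown in my code above, and whether it explains the 0-bits-per-channel converter error is exactly what I'm unsure about):
import AVFoundation

// Sketch: category, activation, and permission before calling record().
func prepareAudioSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
    session.requestRecordPermission { granted in
        // Only start the AVAudioRecorder once permission has been granted.
        print("record permission granted: \(granted)")
    }
}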
We have a local video file whose data keeps updating every second from a web socket connection. An AVAssetItem is created from this local file, and an AVFragmentedAssetMinder is used to monitor changes to the asset with mindingInterval: 2.
We are observing for AVAssetDurationDidChange and AVAssetContainsFragmentsDidChange notifications
NotificationCenter.default.addObserver(self, selector: #selector(onVideoUpdate), name: .AVAssetDurationDidChange, object: nil)
NotificationCenter.default.addObserver(self, selector: #selector(assetContainsFragments), name: .AVAssetContainsFragmentsDidChange, object: nil)
These notifications are not called on iOS 17 iPhone devices. They are called in the iOS 17 simulator and on devices running earlier iOS versions.
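For reference, a condensed sketch of the setup described above (block-based observation with object set to the asset, whereas the original uses selectors and object: nil; localFileURL is a placeholder):
import AVFoundation

// Sketch: monitor a growing fragmented MP4 and observe duration changes.
let localFileURL = URL(fileURLWithPath: "/path/to/growing.mp4")   // placeholder
let asset = AVFragmentedAsset(url: localFileURL)
let minder = AVFragmentedAssetMinder(asset: asset, mindingInterval: 2.0)

let durationObserver = NotificationCenter.default.addObserver(forName: .AVAssetDurationDidChange,
                                                              object: asset, queue: .main) { _ in
    // Fires in the iOS 17 simulator and on earlier iOS devices, but not on iOS 17 devices.
    print("duration is now \(asset.duration.seconds)")
}
_ = durationObserver   // keep the token alive for as long as observation is needed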
Hi all, I'm working on an app that involves measuring the heading of one iPhone relative to another iPhone. I need to be able to record audio simultaneously from at least two of the built-in data sources.
Does anyone know how I can achieve this? I've found that, when using the .measurement mode for an AVAudioSession, the stereo polar pattern is not available. Also, I see that it doesn't seem possible to select multiple data sources.
Is there something I'm missing? If this is not possible, why not?
Hey all!
I'm building a Camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.)
When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that, compared to the stock iOS camera app, the audio is mono, not stereo.
I wonder how recording in stereo audio works, are there any guides or documentation available for that?
Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?
This is my Audio Session code:
func configureAudioSession(configuration: CameraConfiguration) throws {
    ReactLogger.log(level: .info, message: "Configuring Audio Session...")
    // Prevent iOS from automatically configuring the Audio Session for us
    audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
    let enableAudio = configuration.audio != .disabled

    // Check microphone permission
    if enableAudio {
        let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if audioPermissionStatus != .authorized {
            throw CameraError.permission(.microphone)
        }
    }

    // Remove all current inputs
    for input in audioCaptureSession.inputs {
        audioCaptureSession.removeInput(input)
    }
    audioDeviceInput = nil

    // Audio Input (Microphone)
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio input...")
        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            throw CameraError.device(.microphoneUnavailable)
        }
        let input = try AVCaptureeDeviceInput(device: microphone)
        guard audioCaptureSession.canAddInput(input) else {
            throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
        }
        audioCaptureSession.addInput(input)
        audioDeviceInput = input
    }

    // Remove all current outputs
    for output in audioCaptureSession.outputs {
        audioCaptureSession.removeOutput(output)
    }
    audioOutput = nil

    // Audio Output
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio Data output...")
        let output = AVCaptureAudioDataOutput()
        guard audioCaptureSession.canAddOutput(output) else {
            throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
        }
        output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
        audioCaptureSession.addOutput(output)
        audioOutput = output
    }
}
This is how I activate the audio session just before I start recording:
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers,
                                          .allowBluetoothA2DP,
                                          .defaultToSpeaker,
                                          .allowAirPlay])
if #available(iOS 14.5, *) {
    // prevents the audio session from being interrupted by a phone call
    try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
    // allow system sounds (notifications, calls, music) to play while recording
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()
audioCaptureSession.startRunning()
And this is how I set up the AVAssetWriter:
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono.
Is there anything I'm missing here?
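The only lead I've found so far is the AVAudioSession stereo polar-pattern route used for the built-in microphones, which looks roughly like this (a sketch; front/back orientation handling is simplified, and I'm not sure how it interacts with AVCaptureAudioDataOutput):
import AVFoundation

// Sketch: ask the built-in microphone's data source for a stereo polar pattern.
func enableStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let dataSource = builtInMic.dataSources?.first(where: {
              $0.supportedPolarPatterns?.contains(.stereo) == true
          }) else {
        return // stereo capture not available on this device/route
    }
    try dataSource.setPreferredPolarPattern(.stereo)
    try builtInMic.setPreferredDataSource(dataSource)
    try session.setPreferredInput(builtInMic)
    // Match the stereo orientation to the UI orientation if needed:
    try session.setPreferredInputOrientation(.portrait)
}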
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example: if the audio's frame position is >= some point and < some other point, trigger some code.
So I'm looking at - (void)installTapOnBus:(AVAudioNodeBus)bus bufferSize:(AVAudioFrameCount)bufferSize format:(AVAudioFormat * __nullable)format block:(AVAudioNodeTapBlock)tapBlock;
I have the frame positions calculated ahead of time (predetermined before the audio is scheduled; I have already made all the necessary computations). So I just need to fire code at certain points during playback:
[playerNode installTapOnBus:bus
                 bufferSize:bufferSize
                     format:format
                      block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType)
{
    // some code is here, not important to this question.
}];
The problem I'm having is figuring out at what point in the full buffer I am within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not the entire audio buffer I scheduled).
Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results but I'd rather avoid using a timer if possible and use sample time.
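One thing that seems to close the gap (a sketch below in Swift; triggerFrame is a placeholder for my precomputed frame positions) is converting the tap's node time into the player's own timeline with playerTime(forNodeTime:), which, as I understand it, is relative to when the player started playing rather than to the tapped chunk:
import AVFoundation

// Sketch: inside the tap block, map node time to the player's sample timeline.
func installTriggerTap(on playerNode: AVAudioPlayerNode,
                       format: AVAudioFormat,
                       triggerFrame: AVAudioFramePosition) {
    playerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { _, when in
        guard let playerTime = playerNode.playerTime(forNodeTime: when) else {
            return // player is not playing yet
        }
        // sampleTime here counts frames in the player's playback timeline,
        // not frames within the tapped chunk.
        if playerTime.sampleTime >= triggerFrame {
            // fire the event for this point in the scheduled audio
        }
    }
}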
Hello, I am currently developing an application for audiogram testing. What methods can I use to obtain the dB values of headphone levels in real-time?
Sometimes when I'm putting on or taking off clothes, I accidentally bump the digital crown of my Apple Watch or AirPods Max, and then the volume suddenly becomes very loud, which has been bothering me for a long time.
I followed the instructions in https://support.apple.com/zh-sg/guide/iphone/iphb71f9b54d/ios, but I couldn't find the relevant settings. The option the system offers is "Reduce Loud Audio", rather than a way to cap the volume itself (iOS 17.4).
I searched, but I couldn't find any related apps in the App Store. I asked the AI and it provided a relevant solution, so I want to learn Swift and create an app myself (I've only been learning for less than a week). Here's the solution provided by the AI:
The general idea is to listen for the AVAudioSession routeChange notification through NotificationCenter, then use MPVolumeView to get the slider and set the slider's value to enforce a volume limit.
However, when I debugged it, I found that it didn't work even after setting it. I would like to ask where the problem might be and how I should adjust it?
@objc func setMaximumVolume() {
    if !enableMaxvolume {
        return
    }

    let volumeView = MPVolumeView()
    if let slider = volumeView.subviews.first as? UISlider {
        slider.value = Float(self.maximumVolume / 100)
        print("setMaximumVolume: \(slider.value)")
    }
}
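One variation I've seen suggested (I can't confirm it's the fix) is that the MPVolumeView has to live in the window's view hierarchy and the slider write has to be deferred slightly; something like the following sketch, where the off-screen placement and the 0.1 s delay are assumptions:
import MediaPlayer
import UIKit

// Sketch of the commonly suggested variation: keep an MPVolumeView installed
// in the window, and set the slider value after a short delay.
let hiddenVolumeView = MPVolumeView(frame: CGRect(x: -1000, y: -1000, width: 1, height: 1))

func installHiddenVolumeView(in window: UIWindow) {
    window.addSubview(hiddenVolumeView)   // off-screen, but part of the hierarchy
}

func applyMaximumVolume(_ value: Float) {
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
        if let slider = hiddenVolumeView.subviews.compactMap({ $0 as? UISlider }).first {
            slider.value = value   // expected range 0.0 ... 1.0
        }
    }
}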