Post not yet marked as solved
This just seems like a useful thing to have when rendering audio. For example, let's say you have an effect that pitches audio up/down. That typically requires that you know the sample rate of the incoming audio. The way I do this right now is just to save the sample rate after the AUAudioUnit's render resources have been allocated, but being provided this info on a per-render-callback basis seems more useful.
Another use case is for AUAudioUnits on the input chain. Since the format for connections must match the hardware format, you can no longer explicitly set the format you expect the audio to arrive in. You can check the sample rate on the AVAudioEngine's input node or on the AVAudioSession singleton, but when you are working with the audio from within the render callback, you don't want to be calling those methods because they may block. This is especially true when using AVAudioSinkNode, where you don't have the ability to set the sample rate before the underlying node's render resources are allocated.
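For what it's worth, the caching approach can be kept render-safe with a pattern like this (a minimal sketch; the class and method names are illustrative, not an Apple API): capture the rate once when render resources are allocated, and read only the cached copy from the render block.

```swift
// Minimal sketch of the caching pattern (names are illustrative, not an
// Apple API). The rate is written once, off the render thread, so the
// render block never calls into AVAudioSession or AVAudioEngine.
final class RenderState {
    private(set) var sampleRate: Double = 44_100

    // Call from allocateRenderResources(), where it is safe to query
    // the output bus format (e.g. outputBusses[0].format.sampleRate).
    func cacheSampleRate(_ hardwareRate: Double) {
        sampleRate = hardwareRate
    }

    // Example render-thread use: the per-sample phase increment for an
    // oscillator at `hz`, computed with no blocking calls.
    func phaseIncrement(forFrequency hz: Double) -> Double {
        hz / sampleRate
    }
}
```

The point is only that the lookup the render callback performs is a plain read of previously cached state, which is why per-render-callback delivery of the rate would remove the bookkeeping entirely.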
Am I missing something here, or does this actually seem useful?
Post not yet marked as solved
The GarageBand app can import both MIDI files and recorded audio files into a single player for playback.
My app needs the same feature, but I don't know how to implement it.
I have tried AVAudioSequencer, but it can only load and play MIDI files.
I have tried AVPlayer and AVPlayerItem, but they don't seem to load MIDI files.
So how can I combine a MIDI file and an audio file into a single AVPlayerItem, or anything else that can play them together?
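One way this is commonly approached (a hedged sketch, not a complete app): instead of AVPlayerItem, drive both files from a single AVAudioEngine. AVAudioSequencer renders the MIDI through the engine (here via an AVAudioUnitSampler), while an AVAudioPlayerNode plays the recorded audio. The file URLs are placeholders.

```swift
import AVFoundation

// Hedged sketch: play a MIDI file and an audio file together by driving
// both from one AVAudioEngine. Keep all returned objects alive while
// playback is in progress.
func playTogether(midiURL: URL, audioURL: URL) throws
    -> (AVAudioEngine, AVAudioSequencer, AVAudioPlayerNode) {
    let engine = AVAudioEngine()

    // Instrument for the MIDI tracks.
    let sampler = AVAudioUnitSampler()
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)

    // Player for the recorded audio file.
    let player = AVAudioPlayerNode()
    engine.attach(player)
    let audioFile = try AVAudioFile(forReading: audioURL)
    engine.connect(player, to: engine.mainMixerNode,
                   format: audioFile.processingFormat)

    // Sequencer bound to the same engine; route its tracks to the sampler.
    let sequencer = AVAudioSequencer(audioEngine: engine)
    try sequencer.load(from: midiURL, options: [])
    for track in sequencer.tracks {
        track.destinationAudioUnit = sampler
    }

    try engine.start()
    player.scheduleFile(audioFile, at: nil)
    player.play()
    sequencer.prepareToPlay()
    try sequencer.start()
    return (engine, sequencer, player)
}
```

This trades the single-item convenience of AVPlayerItem for sample-accurate shared clocking: both sources render through the same engine output, which is essentially what GarageBand's player does internally.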
Post not yet marked as solved
This is the crash log from Firebase.
Fatal Exception: NSInvalidArgumentException
*** -[AVAssetWriter addInput:] Format ID 'lpcm' is not compatible with file type com.apple.m4a-audio
But I can't reproduce the crash.
This is the demo code:
Does anyone know where the problem is?
let normalOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsNonInterleaved: false,
    AVLinearPCMIsFloatKey: false,
    AVLinearPCMIsBigEndianKey: false
]
let writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: normalOutputSettings)
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory() + UUID().uuidString + ".m4a")
self.writer = try! AVAssetWriter(outputURL: outputURL, fileType: .m4a)
writer?.add(writerInput)
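For reference, the exception says the linear PCM settings are not a valid payload for the com.apple.m4a-audio file type: an MPEG-4 audio container expects a compressed format such as AAC. A hedged sketch of a combination that should be accepted (the bit rate is an illustrative choice); alternatively, keep the LPCM settings and write to a .wav or .caf file type instead.

```swift
import AVFoundation

// AAC output settings compatible with an .m4a container. Guarding with
// canAdd(_:) avoids the NSInvalidArgumentException crash if a
// settings/file-type combination is still rejected at runtime.
func makeM4AWriter() throws -> AVAssetWriter {
    let aacSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2,
        AVEncoderBitRateKey: 128_000   // illustrative bit rate
    ]
    let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()
        + UUID().uuidString + ".m4a")
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .m4a)
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: aacSettings)
    if writer.canAdd(input) {
        writer.add(input)
    }
    return writer
}
```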
Post not yet marked as solved
I'm writing a macOS audio unit hosting app using the AVAudioUnit and AUAudioUnit APIs. I'm trying to use the NSView cacheDisplay(in:to:) function to capture an image of a plugin's view:
func viewToImage(viewToCapture: NSView) -> NSImage? {
    var image: NSImage? = nil
    if let rep = viewToCapture.bitmapImageRepForCachingDisplay(in: viewToCapture.bounds) {
        viewToCapture.cacheDisplay(in: viewToCapture.bounds, to: rep)
        image = NSImage(size: viewToCapture.bounds.size)
        image!.addRepresentation(rep)
    }
    return image
}
This works OK when a plugin is instantiated with the .loadInProcess option. If the plugin is instantiated with the .loadOutOfProcess option, the resulting bitmapImageRep is blank.
I'd much rather be loading plugins out-of-process for the enhanced stability. Is there any trick I'm missing to be able to capture the contents of the NSView from an out-of-process audio unit?
Post not yet marked as solved
macOS Core Audio buffer playback produces annoying noise mixed in with the correct sound.
I want to play valid .wav data through the buffer.
Why a .wav? Because it contains known-valid data.
What I'm trying to achieve is to understand how to write correctly to the sound buffer. I'm porting a music engine to macOS.
#include <string.h>
#include <math.h>
#include <unistd.h>
#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

FILE *fp;

typedef struct TwavHeader {
    char     RIFF[4];
    uint32_t RIFFChunkSize;
    char     WAVE[4];
    char     fmt[4];
    uint32_t Subchunk1Size;
    uint16_t AudioFormat;
    uint16_t NumOfChan;
    uint32_t SamplesPerSec;
    uint32_t bytesPerSec;
    uint16_t blockAlign;
    uint16_t bitsPerSample;
    char     Subchunk2ID[4];
    uint32_t Subchunk2Size;
} TwavHeader;

typedef struct SoundState {
    bool done;
} SoundState;

void auCallback(void *inUserData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
    buffer->mAudioDataByteSize = 1024 * 4;
    int numToRead = buffer->mAudioDataByteSize / sizeof(float) * 2;
    void *p = malloc(numToRead);
    fread(p, numToRead, 1, fp);
    void *myBuf = buffer->mAudioData;
    for (int i = 0; i < numToRead / 2; i++) {
        uint16_t w = *(uint16_t *)&(p[i * sizeof(uint16_t)]);
        float f = ((float)w / (float)0x8000) - 1.0;
        *(float *)&(myBuf[i * sizeof(float)]) = f;
    }
    free(p);
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

void checkError(OSStatus error) {
    if (error != noErr) {
        printf("Error: %d", error);
        exit(error);
    }
}

int main(int argc, const char * argv[]) {
    printf("START\n");
    TwavHeader theHeader;
    fp = fopen("/Users/kirillkranz/Documents/mytralala-code/CoreAudioTest/unreal.wav", "r");
    fread(&theHeader, sizeof(TwavHeader), 1, fp);
    printf("%i\n", theHeader.bitsPerSample);

    AudioStreamBasicDescription auDesc = {};
    auDesc.mSampleRate       = theHeader.SamplesPerSec;
    auDesc.mFormatID         = kAudioFormatLinearPCM;
    auDesc.mFormatFlags      = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    auDesc.mBytesPerPacket   = 8;
    auDesc.mFramesPerPacket  = 1;
    auDesc.mBytesPerFrame    = 8;
    auDesc.mChannelsPerFrame = 2;
    auDesc.mBitsPerChannel   = 32;

    AudioQueueRef auQueue = 0;
    AudioQueueBufferRef auBuffers[2] = {};

    // our persistent state for sound playback
    SoundState soundState = {};
    soundState.done = false;

    OSStatus err;
    // most of the 0 and nullptr params here are for compressed sound formats etc.
    err = AudioQueueNewOutput(&auDesc, &auCallback, &soundState, 0, 0, 0, &auQueue);
    checkError(err);

    // generate buffers holding at most 1/16th of a second of data
    uint32_t bufferSize = auDesc.mBytesPerFrame * (auDesc.mSampleRate / 16);
    err = AudioQueueAllocateBuffer(auQueue, bufferSize, &(auBuffers[0]));
    checkError(err);
    err = AudioQueueAllocateBuffer(auQueue, bufferSize, &(auBuffers[1]));
    checkError(err);

    // prime the buffers
    auCallback(&soundState, auQueue, auBuffers[0]);
    auCallback(&soundState, auQueue, auBuffers[1]);
    // enqueue for playing
    AudioQueueEnqueueBuffer(auQueue, auBuffers[0], 0, 0);
    AudioQueueEnqueueBuffer(auQueue, auBuffers[1], 0, 0);

    // go!
    AudioQueueStart(auQueue, 0);

    char rxChar[10];
    scanf("%s", &rxChar);
    printf("FINISH");
    fclose(fp);

    // be nice even it doesn't really matter at this point
    if (auQueue)
        AudioQueueDispose(auQueue, true);
}
What am I doing wrong?
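One likely culprit in the callback: 16-bit PCM WAV samples are signed (int16_t), but the code reinterprets them as uint16_t and maps [0, 65535] onto [-1, 1). For signed data this flips and offsets the waveform, which sounds like loud noise. (Two other things worth checking: the priming calls already enqueue each buffer, so the explicit AudioQueueEnqueueBuffer calls in main enqueue them a second time, and the code assumes a canonical 44-byte header, though many .wav files carry extra chunks before the data chunk.) The correct and buggy conversions side by side, shown as a small Swift sketch for brevity:

```swift
// Converts one signed 16-bit PCM sample to a float in [-1.0, 1.0) —
// the conversion the C callback should perform after reading the
// sample as a *signed* 16-bit integer.
func floatSample(fromPCM16 sample: Int16) -> Float {
    return Float(sample) / 32768.0
}

// The buggy unsigned version for comparison: reinterpreting the same
// bits as UInt16 and mapping [0, 65535] onto [-1, 1) scrambles the sign.
func buggyFloatSample(fromBits bits: UInt16) -> Float {
    return Float(bits) / Float(0x8000) - 1.0
}
```

For example, a silent sample (0) should map to 0.0, but the buggy version maps it to -1.0 (a full-scale DC offset), and it maps every negative sample into the positive half of the range.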
Post not yet marked as solved
Hello,
Is there any way to create a call-recording app without distributing it through the App Store, and just use it as a local beta app on my iPhone only?
If there is a way to do it, can you briefly explain how?
All of the App Store recording apps store my calls on their servers, and I don't want that; I just want to save my calls locally on my iPhone.
Thanks!
Post not yet marked as solved
We're seeing the following when running auvaltool:

Values: Minimum = 5.000000, Default = 0.000000, Maximum = 300.000000
Flags: Expert Mode, Readable,
WARNING: use -strict_DefP flag
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Parameter's Published defaultvalue does not fall with [min, max] range *
* This will fail using -strict option. Will fail in future auval version *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
-parameter PASS

It seems AUParameterNode cannot be constructed with a default value and has no property or method for setting one, so what does this mean?
Does it mean we also have to iterate over all the parameters using the old-fashioned, long-winded Toolbox calls?
Or does it mean nothing at all?
The "Will fail in future auval version" warning is worrying!
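In case it helps: AUParameterTree indeed publishes no explicit default-value field, and my understanding (an assumption, not documented behavior) is that auval reports the parameter's current value at validation time as its default. A commonly suggested workaround is therefore to set each parameter to its intended in-range default right after building the tree. A sketch with illustrative identifier, name, and address:

```swift
import AudioToolbox

// Sketch of the workaround: create the parameter, then set its value to
// the intended default so the published default falls inside [min, max]
// instead of the implicit 0.0 that auval is warning about.
let attack = AUParameterTree.createParameter(
    withIdentifier: "attack",
    name: "Attack",
    address: 0,
    min: 5.0,
    max: 300.0,
    unit: .milliseconds,
    unitName: nil,
    flags: [.flag_IsReadable, .flag_IsWritable],
    valueStrings: nil,
    dependentParameters: nil)
attack.value = 50.0   // an in-range default

let tree = AUParameterTree.createTree(withChildren: [attack])
```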
Post not yet marked as solved
AVAudioUnitTimePitch.latency is 0.09s on my debug devices.
It introduces a small delay when rendering audio with AVAudioEngine.
I just want to change the pitch while the audio is playing.
So how can I avoid this latency?
Post not yet marked as solved
I built a simple recorder on my website for users to record and play back audio. It works on ALL desktop browsers (including Safari), but when I use any browser on my iPhone, the mic is active at the opposite times: it turns on when it should be idle and off when it should be recording.
The flow is: ask permission > user allows mic access > user presses record > records audio > saves and plays back
On iPhone, what happens is that after the user grants permission, the mic goes active (shown by the mic icon in the Safari browser), and then once the user presses record, the mic is disabled.
I am using getUserMedia within a React.js app.
Why is it doing this?
Post not yet marked as solved
I am working on a function that inserts audio files into comments in MS Word for Mac. I can insert MS Word, MS Excel, and picture files, but not audio files. It was also a problem for MS Word in Windows 11, but Microsoft eventually fixed it.
Post not yet marked as solved
I am trying to build an app on my iPhone XS using Xcode 14.0 beta 2. The first time I connected the device, it correctly showed up among the devices and prompted me to enable Developer Mode under Privacy & Security. I did, and the iPhone restarted, but once that happened the iPhone disappeared from the targets, so I could not build to it. For some strange reason my iPad showed up instead, but when I tried to build to it, Xcode of course urged me to connect it.
So it seems Xcode sees the device that is not connected and does not see the device that is connected.
Can anyone make sense of this and possibly suggest a solution?