All that being said, for your high-level goal of recording 10-15 seconds of audio you might find this api simpler to use: https://developer.apple.com/documentation/avfaudio/avaudiorecorder/1389378-record
As I will be recording unknown lengths, and trying to use "background" samples to remove background noise, I am guessing that sticking with these same functions for recording would be wiser?
The size of the incoming buffers. The implementation may choose another size.
Does this also apply within a single call to .installTap? In other words, once the tap is running, will I start getting different buffer sizes from the same instance?
So now, when I create the table I am registering it as such
func createTblView() -> UITableView {
    let myTable = MyTableView(frame: CGRect(x: 0, y: 0, width: 0, height: 0))
    myTable.register(UITableViewCell.self, forCellReuseIdentifier: "reuse")
    myTable.backgroundColor = .white
    return myTable
}
And within:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // let cell = UITableViewCell()
    let cell = tableView.dequeueReusableCell(withIdentifier: "reuse")
    var config = cell!.defaultContentConfiguration()
    // ... additional config code below
}
This is just a quick sample; obviously I have to add code because cell is now optional, "reuse" is just a test name, etc.
When I look at cell in the debugger I do see:
_reuseIdentifier NSTaggedPointerString * "reuse" 0x84c7f8963e3e2abe
It all appears correct, but since the returned cell is an optional (it wasn't before I switched to tableView.dequeueReusableCell), I just want to make sure I am implementing this correctly.
Thanks for all your help.
I jumped to the definition in Xcode and found the following:
@available(iOS 14.0, tvOS 14.0, *)
public var contentConfiguration: UIContentConfiguration?
@available(iOS 14.0, tvOS 14.0, *)
public func defaultContentConfiguration() -> UIListContentConfiguration
So I tried casting the contentConfiguration to a UIListContentConfiguration, and it works.
While it works, I would appreciate it if someone would tell me if I am still somehow doing it incorrectly. Perhaps I am configuring the cell incorrectly to begin with?
Here is the working code:
let cell = myClientList.cellForRow(at: myInvsList.indexPathForSelectedRow!)
let config = cell?.contentConfiguration as? UIListContentConfiguration
dbClass.curMasterinvList = config?.text ?? ""
No; with just

let config = cell?.contentConfiguration
dbClass.curMasterinvList = config?.text

I get the compile error:
Value of type 'UIContentConfiguration' has no member 'text'
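This is a plain Swift typing issue rather than a UIKit one: the stored property is declared as the protocol type, so members that exist only on the concrete type are invisible until you downcast. A minimal Foundation-only sketch of the same situation (`Configuration` and `ListConfiguration` are hypothetical stand-ins for UIContentConfiguration and UIListContentConfiguration):

```swift
import Foundation

protocol Configuration {}                  // like UIContentConfiguration

struct ListConfiguration: Configuration {  // like UIListContentConfiguration
    var text: String?
}

// The property is stored under the protocol type...
let stored: Configuration = ListConfiguration(text: "Acme Corp")

// stored.text   // error: value of type 'Configuration' has no member 'text'

// ...so a conditional downcast is needed before 'text' is visible.
let text = (stored as? ListConfiguration)?.text ?? ""
print(text)
```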
For anyone running into this problem, it's a permission issue. Answer is here.
I thought I replied, but anyway:
I found this answer, and it has many solutions. The one that worked for me was "doing a hard restart of my iPhone." Ashamed I didn't think of trying that first.
This is what worked for me: I went to this answer, which offered several solutions. The one that worked for me was restarting the iPhone, which I am a little ashamed I didn't try on my own.
So it appears that I was misunderstanding where the fault/error lay.
I had changed an attribute in Core Data from String to Int. Since I had no data yet, I assumed the existing entity would simply be overwritten. That was not the case.
So essentially I just have to delete the app from any device I am running it on, then rebuild.
Closing because, again, I asked a stupid question.
So I created the code below, and the weird thing was that the output read:
going in...
and we're out
Need to ask user
So essentially it blows right through the semaphore. However, if I declare the semaphore with 0, then the first semaphore.wait() is successful, and the program freezes because the userAlert permission box never pops up.
What is going on here?
print("going in...")
let semaphore = DispatchSemaphore(value: 1)
DispatchQueue.global(qos: .userInitiated).async {
    let mediaAuthorizationStatus = AVCaptureDevice.authorizationStatus(for: .audio)
    switch mediaAuthorizationStatus {
    case .denied:
        print(".denied")
    case .authorized:
        print("authorized")
    case .restricted:
        print("restricted")
    case .notDetermined:
        print("Need to ask user")
        semaphore.wait()
        AVCaptureDevice.requestAccess(for: .audio, completionHandler: { (granted: Bool) in
            if granted {
                semaphore.signal()
            } else {
                semaphore.signal()
            }
        })
    @unknown default:
        print("unknown")
    }
    print("\(semaphore.debugDescription)")
}
semaphore.wait()
print("and we're out")
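For what it's worth, the usual shape of this pattern is to create the semaphore with a count of 0, start the asynchronous request first, and only then wait; the signal happens inside the completion handler. A Foundation-only sketch, with `simulatedRequestAccess` as a hypothetical stand-in for AVCaptureDevice.requestAccess(for:completionHandler:) (and note that in a real app you would not block the main thread like this, since the permission alert needs the main run loop):

```swift
import Foundation

// Stand-in for the async permission request: answers "granted" after a delay.
func simulatedRequestAccess(completion: @escaping (Bool) -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
        completion(true)   // pretend the user tapped "Allow"
    }
}

let semaphore = DispatchSemaphore(value: 0)  // 0: wait() blocks until signal()
var granted = false

// Start the async work FIRST...
simulatedRequestAccess { ok in
    granted = ok
    semaphore.signal()   // release the waiter only once the answer arrives
}

// ...then wait. With an initial value of 1 this wait() would "blow right
// through", because wait() merely decrements the count and 1 -> 0 succeeds.
semaphore.wait()
print("granted: \(granted)")
```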
I was informed that this wouldn't work. No matter where the instance of the class was generated, its methods would still run on the main queue by default.
It's kind of kludgy but someone gave me the answer here:
https://stackoverflow.com/questions/70616820/my-writing-the-installtap-buffer-to-an-avaudiofile-seems-to-fail-data-wise/70618216?noredirect=1#comment124895854_70618216
You need to let your AVAudioFile go out of scope (nil it at some point); that's how you call AVAudioFile's close() method, which presumably finishes writing out the header information.
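If it helps, this is ordinary ARC behavior rather than anything AVAudioFile-specific: releasing the last strong reference runs deinit deterministically, and deinit is where a file-wrapping type can flush and close its underlying file. A Foundation-only sketch (`Writer` is a hypothetical stand-in for AVAudioFile):

```swift
import Foundation

final class Writer {
    static var didClose = false

    func write(_ sample: Float) {
        // buffer the sample...
    }

    deinit {
        // A file-wrapper type can finish writing headers / flush here.
        Writer.didClose = true
    }
}

var file: Writer? = Writer()
file?.write(0.5)
file = nil   // last strong reference gone -> deinit runs immediately
print("closed: \(Writer.didClose)")
```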
This is a rewriting of my original question, once I realized I was reading the flow incorrectly.
Working on a speech-to-text demo, which works. But I am still trying to learn the flow of Swift. While I may be describing it incorrectly, I think of the closure passed to node.installTap as a C callback function: when the buffer is full, the code within the closure is called.
From what I interpret here, every time the buffer becomes full, the closure from within the node.installTap runs.
What I can't figure out is what triggers the closure within:
task = speechRecognizer?.recognitionTask(with: request, resultHandler: {})
The entire demo below works; I am just trying to figure out how the AVAudioEngine knows when to call that second closure. Is there some connection?
func startSpeechRecognition() {
    let node = audioEngine.inputNode
    let recordingFormat = node.outputFormat(forBus: 0)
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in
        self.request.append(buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        // ...
    }
    guard let myRecognition = SFSpeechRecognizer() else {
        // ...
        return
    }
    if !myRecognition.isAvailable {
        // ...
    }
    task = speechRecognizer?.recognitionTask(with: request, resultHandler: { (response, error) in
        guard let response = response else {
            if error != nil {
                print("\(String(describing: error.debugDescription))")
            } else {
                print("problem in response")
            }
            return
        }
        let message = response.bestTranscription.formattedString
        print("\(message)")
    })
}
I'm glad I asked this, but it turns out I do NOT understand the flow. How is
task = speechRecognizer?.recognitionTask(with: request, resultHandler: { (response, error) in guard let response = response else {
always being called multiple times? Does it have a connection to node.installTap?
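One way to picture the connection, sketched without the Speech framework: the tap closure only feeds buffers into the request object, and the recognizer observes that request, invoking its result handler each time newly appended audio yields a new (partial) transcription; that is why the handler fires repeatedly. `MockRequest` and `MockRecognizer` below are hypothetical stand-ins for SFSpeechAudioBufferRecognitionRequest and a recognition task, not the real API:

```swift
import Foundation

final class MockRequest {
    var onAppend: ((String) -> Void)?
    // The installTap closure calls this, just like request.append(buffer).
    func append(_ buffer: String) { onAppend?(buffer) }
}

final class MockRecognizer {
    func recognitionTask(with request: MockRequest,
                         resultHandler: @escaping (String) -> Void) {
        var transcript = ""
        // Subscribe to the request: every appended buffer can produce a
        // new partial result, so resultHandler fires multiple times.
        request.onAppend = { buffer in
            transcript += buffer
            resultHandler(transcript)
        }
    }
}

let request = MockRequest()
let recognizer = MockRecognizer()
var calls = 0
recognizer.recognitionTask(with: request) { partial in
    calls += 1
    print("partial result: \(partial)")
}

// Simulate the installTap closure firing as buffers fill:
request.append("hello ")
request.append("world")
print("handler fired \(calls) times")
```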