The behavior is the same for different microphones, whether connected via a 3.5 mm jack, USB, or Bluetooth.
The code gets access to the microphone (connected to the 3.5 mm audio jack) and starts an audio capture session; at the same moment, the microphone-in-use icon appears. Capture from the audio device (microphone) continues for a few seconds, then the session stops and the icon disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start another audio capture session, and the icon appears again. After a few seconds, access to the microphone stops, the capture session stops, and the icon disappears.
Next, we perform the same actions, but after the first session stops we unplug the microphone from the connector and plug it back in before starting the second session. In this case, the second access attempt begins and the running program reports no errors, but the microphone-in-use icon is not displayed, and this is the problem. After the program exits and is restarted, the icon is displayed again.
This problem is only the tip of the iceberg: after reconnecting the microphone, it is not possible to record sound from it at all until the program is restarted.
Is this normal behavior for the AVFoundation framework? Is there a way to make access to the microphone work correctly after it is reconnected, so that the usage indicator is displayed? What additional actions should the programmer perform in this case? Is this behavior described anywhere in the documentation?
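For reference, a minimal Swift sketch of one possible approach: observe the AVCaptureDeviceWasConnected / AVCaptureDeviceWasDisconnected notifications and rebuild the device input when the microphone reappears. The MicReconnectionObserver helper name and the rebuild logic are assumptions for illustration only, not confirmed to fix the indicator issue:

import AVFoundation

final class MicReconnectionObserver {
    private let session: AVCaptureSession
    private var input: AVCaptureDeviceInput?

    init(session: AVCaptureSession) {
        self.session = session
        // The notification object for both notifications is the AVCaptureDevice itself.
        NotificationCenter.default.addObserver(self, selector: #selector(deviceWasConnected(_:)),
                                               name: .AVCaptureDeviceWasConnected, object: nil)
        NotificationCenter.default.addObserver(self, selector: #selector(deviceWasDisconnected(_:)),
                                               name: .AVCaptureDeviceWasDisconnected, object: nil)
    }

    @objc private func deviceWasDisconnected(_ note: Notification) {
        guard let device = note.object as? AVCaptureDevice, device.hasMediaType(.audio) else { return }
        // Remove the stale input so the session does not keep referencing the unplugged device.
        session.beginConfiguration()
        if let input, input.device == device {
            session.removeInput(input)
            self.input = nil
        }
        session.commitConfiguration()
    }

    @objc private func deviceWasConnected(_ note: Notification) {
        guard let device = note.object as? AVCaptureDevice, device.hasMediaType(.audio) else { return }
        // Recreate the input from the freshly connected device object rather than reusing the old one.
        session.beginConfiguration()
        if let newInput = try? AVCaptureDeviceInput(device: device), session.canAddInput(newInput) {
            session.addInput(newInput)
            input = newInput
        }
        session.commitConfiguration()
    }
}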
Below is the code to demonstrate the described behavior.
I am also attaching an example of the microphone usage indicator icon.
Computer description: MacBook Pro 13-inch (2020), Intel Core i7, macOS Sequoia 15.1.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#import <AVFoundation/AVFoundation.h>
#import <Foundation/NSString.h>
#import <Foundation/NSURL.h>
AVCaptureSession* m_captureSession = nullptr;
AVCaptureDeviceInput* m_audioInput = nullptr;
AVCaptureAudioDataOutput* m_audioOutput = nullptr;
std::condition_variable conditionVariable;
std::mutex mutex;
bool responseToAccessRequestReceived = false;
void receiveResponse()
{
std::lock_guard<std::mutex> lock(mutex);
responseToAccessRequestReceived = true;
conditionVariable.notify_one();
}
void waitForResponse()
{
std::unique_lock<std::mutex> lock(mutex);
conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
}
void requestPermissions()
{
responseToAccessRequestReceived = false;
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted)
{
const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
std::cout << "Request completion handler granted: " << (int)granted << ", status: " << status << std::endl;
receiveResponse();
}];
waitForResponse();
}
void timer(int timeSec)
{
for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining)
{
std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(1));
}
}
bool updateAudioInput()
{
[m_captureSession beginConfiguration];
if (m_audioOutput)
{
AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
[m_captureSession removeConnection:lastConnection];
}
if (m_audioInput)
{
[m_captureSession removeInput:m_audioInput];
[m_audioInput release];
m_audioInput = nullptr;
}
AVCaptureDevice* audioInputDevice = [AVCaptureDevice deviceWithUniqueID: [NSString stringWithUTF8String: "BuiltInHeadphoneInputDevice"]];
if (!audioInputDevice)
{
std::cout << "Error input audio device creating" << std::endl;
return false;
}
// Pass a nil-initialized NSError pointer; AVFoundation fills it in only on failure.
// (Pre-allocating the error object made the check below fire even on success.)
NSError* error = nil;
m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
if (error)
{
const auto code = [error code];
NSString* domain = [error domain];
std::cout << "Device input error: " << code << " " << (domain ? [domain UTF8String] : "(no domain)") << std::endl;
}
if (m_audioInput && [m_captureSession canAddInput:m_audioInput]) {
[m_audioInput retain];
[m_captureSession addInput:m_audioInput];
}
else
{
std::cout << "Failed to create audio device input" << std::endl;
return false;
}
if (!m_audioOutput)
{
m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput])
{
[m_captureSession addOutput:m_audioOutput];
}
else
{
std::cout << "Failed to add audio output" << std::endl;
return false;
}
}
[m_captureSession commitConfiguration];
return true;
}
void start()
{
std::cout << "Starting..." << std::endl;
const bool updatingResult = updateAudioInput();
if (!updatingResult)
{
std::cout << "Error, while updating audio input" << std::endl;
return;
}
[m_captureSession startRunning];
}
void stop()
{
std::cout << "Stopping..." << std::endl;
[m_captureSession stopRunning];
}
int main()
{
requestPermissions();
m_captureSession = [[AVCaptureSession alloc] init];
start();
timer(5);
stop();
timer(10);
start();
timer(5);
stop();
}
Posts under the Media Library tag:
We are trying to build a simple image capture app using AVFoundation and AVCaptureDevice.
Custom settings are used for the exposure point and bias.
But when an image is captured using the front camera, the image captured from the app and the one from the native camera do not match.
The image captured from the app covers more area than the native app's, and the tilt angle also differs between the two images.
So is there any way to capture an image exactly the same as the native camera using AVFoundation and AVCaptureDevice?
(Attached images: "Native" and "Custom" capture results.)
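For context, a minimal Swift sketch of how the exposure point and bias settings mentioned above are typically applied; the applyCustomExposure name and the concrete point and bias values are placeholders, not our actual settings:

import AVFoundation
import CoreGraphics

// Applies an exposure point of interest and an exposure bias to a capture device.
// The point (0.5, 0.5) and the 0.5 EV bias below are illustrative values only.
func applyCustomExposure(to device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        if device.isExposurePointOfInterestSupported {
            // The point is in the normalized (0,0)-(1,1) coordinate space of the device.
            device.exposurePointOfInterest = CGPoint(x: 0.5, y: 0.5)
        }
        if device.isExposureModeSupported(.continuousAutoExposure) {
            device.exposureMode = .continuousAutoExposure
        }
        // Clamp the bias to the range the device reports as supported.
        let bias = max(device.minExposureTargetBias, min(device.maxExposureTargetBias, 0.5))
        device.setExposureTargetBias(bias, completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}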
I want my app to allow the user to search for certain words in a video file and in the transcript of that video. I found a Transcript class, but I don't remember which framework it is in. Would someone point me in the right direction? Which framework and classes should I use?
MPMusicPlayerController's nowPlayingItem no longer seems to be able to change a song. The code used to work, but it appears to be broken on iOS 16, 17, and now the iOS 18 beta.
When newSong() is triggered, the song restarts, but it does not change songs. Instead I get the following error: Failed to set now playing item error=<MPMusicPlayerControllerErrorDomain.5 "Unable to play item <MPConcreteMediaItem: 0x9e9f0ef70> 206357861099970620" {}>.
The documentation seems to indicate I'm doing things correctly.
class MusicPlayer {
var songTwo: MPMediaItem?
let player = MPMusicPlayerController.applicationMusicPlayer
func start() async {
await MPMediaLibrary.requestAuthorization()
let myPlaylistsQuery = MPMediaQuery.playlists()
let playlists = myPlaylistsQuery.collections!.filter { $0.items.count > 2}
let playlist = playlists.first!
let songOne = playlist.items.first!
songTwo = playlist.items[1]
player.setQueue(with: playlist)
play(songOne)
}
func newSong() {
guard let songTwo else { return }
play(songTwo)
}
private func play(_ song: MPMediaItem) {
player.stop()
player.nowPlayingItem = song
player.prepareToPlay()
player.play()
}
}
Hello,
We have a music app that reads MPMediaItem.
We get items using MPMediaQuery, but we realized that some tracks downloaded from Apple Music were fetched too: not all downloaded tracks, only those that were played recently.
Of course, since these tracks are protected with DRM, we can't play them in our player.
It's strange to get them in our query, because we added predicates specifically so as not to fetch protected assets and iCloud items:
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyHasProtectedAsset)
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyIsCloudItem)
To be sure, we added a second check on each item we fetched:
extension MPMediaItem {
public func isValid() -> Bool {
return self.assetURL != nil && !self.isCloudItem && !self.hasProtectedAsset
}
}
But we still get these items, and their hasProtectedAsset attribute always returns false.
I don't know if it's a bug, but since we can't detect these items as tracks downloaded from Apple Music, we can't either:
filter them out so they are not added to our application library,
OR
switch to an MPMusicPlayerController.applicationMusicPlayer to allow the user to play them.
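For reference, here is a minimal Swift sketch of how the query, the two predicates above, and the isValid() check fit together. The fetchPlayableItems name and the use of MPMediaQuery.songs() are illustrative; our actual query may differ:

import MediaPlayer

// Fetch library items while asking MediaPlayer to exclude protected and iCloud-only items,
// then apply the isValid() check from the extension above as a second pass.
func fetchPlayableItems() -> [MPMediaItem] {
    let query = MPMediaQuery.songs()
    query.addFilterPredicate(MPMediaPropertyPredicate(value: false,
                                                      forProperty: MPMediaItemPropertyHasProtectedAsset))
    query.addFilterPredicate(MPMediaPropertyPredicate(value: false,
                                                      forProperty: MPMediaItemPropertyIsCloudItem))
    let items = query.items ?? []
    // Drop anything without a local asset URL, or still flagged as a cloud or protected item.
    return items.filter { $0.isValid() }
}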
So I meant to make a shared album but made a shared library instead. That being said, I deleted the shared album, but my family cannot remove it from their phones.
I'm trying to create a new playlist on the device that appears in Apple Music, and to add a selection of MP3s to it from within a small iOS app.
The MP3s are either a stream of bytes or a flat file already stored on the device (the app itself generates these; they aren't downloaded, they are created in the app and then stored in its local storage space).
The idea is that the created tracks can show up in a specific playlist on the device.
Now, there appears to be some conflict as to which framework I need to use.
I've found MPMediaPlayer, which appears to allow me to create a playlist using the GetPlaylist call, although the documentation on this seems pretty sparse and there aren't many examples I can find of how to use it.
It looks like a UUID is passed in, but there is no documentation on what this UUID is or where it comes from. If I want to create a new playlist, I presume I need to generate a UUID and then store it locally in order to be able to access that playlist again later, yes?
There's an AddItem call which looks like it's how you add a track to a playlist, but there's no documentation on how you generate an entry. The documentation for this function talks about a Product ID without describing what the Product ID is or where it needs to come from. Is this a GUID? Is it a name or description? Does it have to be unique? I'm assuming the Product ID refers to what is being added to the playlist, but the documentation is sadly lacking in explaining what it refers to. Is it a media item, or is that what is created when whatever entity the Product ID refers to is added to the playlist?
I'm assuming I can create an NSURL for the stored file that is actually the MP3 sample, but what I do with it in order to actually add it as a playlist entry is unknown. I'm sure there is a mechanism to do this; it's just not clear what it is.
There's a lack of understanding or explanation of what the process is here, and some illumination would be helpful.
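For reference, a minimal Swift sketch of the two calls described above, assuming they are MPMediaLibrary.getPlaylist(with:creationMetadata:) and MPMediaPlaylist.addItem(withProductID:). The stored-UUID convention, the playlist name, and the productID value are illustrative guesses, and this does not cover adding a locally generated MP3 file, which is the part that remains unclear:

import MediaPlayer

// Creates (or re-opens) a playlist identified by a UUID stored locally by the app,
// then attempts to add an item by product ID. The UUID persistence and the
// hypothetical product ID string are assumptions, not documented requirements.
func addTrackToAppPlaylist(productID: String) {
    let key = "appPlaylistUUID"
    let uuid: UUID
    if let saved = UserDefaults.standard.string(forKey: key),
       let restored = UUID(uuidString: saved) {
        uuid = restored
    } else {
        uuid = UUID()
        UserDefaults.standard.set(uuid.uuidString, forKey: key)
    }

    let metadata = MPMediaPlaylistCreationMetadata(name: "My App Tracks")
    MPMediaLibrary.default().getPlaylist(with: uuid, creationMetadata: metadata) { playlist, error in
        guard let playlist else {
            print("getPlaylist failed: \(String(describing: error))")
            return
        }
        playlist.addItem(withProductID: productID) { error in
            print("addItem finished, error: \(String(describing: error))")
        }
    }
}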
Running in a Mac (Catalyst) target or Apple Silicon (designed for iPad).
Just accessing the playbackStoreID from the MPMediaItem shows this error in the console:
-[ITMediaItem valueForMPMediaEntityProperty:]: Unhandled MPMediaEntityProperty subscriptionStoreItemAdamID.
The value returned is always “”.
This works as expected on iOS and iPadOS, returning a valid playbackStoreID.
import SwiftUI
import MediaPlayer
@main
struct PSIDDemoApp: App {
var body: some Scene {
WindowGroup {
Text("playbackStoreID demo")
.task {
let authResult = await MPMediaLibrary.requestAuthorization()
if authResult == .authorized {
if let item = MPMediaQuery.songs().items?.first {
let persistentID = item.persistentID
let playbackStoreID = item.playbackStoreID // <--- Here
print("Item \(persistentID), \(playbackStoreID)")
}
}
}
}
}
}
Xcode 15.1, also tested with Xcode 15.3 beta 2.
macOS Sonoma 14.3.1
FB13607631