Hey there, I'm trying to display all of the user's albums using the MediaPlayer framework. Many albums return nil for artwork, but I know artwork exists because it shows up in the default Music app. There doesn't seem to be much rhyme or reason to what shows up and what doesn't: all downloaded albums display artwork, and some cloud album artwork displays as well. Here's the code I'm using to debug this.
let query = MPMediaQuery.albums()
if let albumCollections = query.collections {
    albums = albumCollections
}
for album in albums {
    let artwork = album.representativeItem?.artwork
    print(artwork, artwork?.image(at: CGSize(width: 100, height: 100)))
}
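A variant I plan to try next scans every item in each album rather than only the representativeItem, in case artwork is attached to a different item (a debugging sketch only):

for album in albums {
    // Look for the first item in the album that actually carries artwork.
    let itemArtwork = album.items.lazy.compactMap { $0.artwork }.first
    print(album.representativeItem?.albumTitle ?? "?",
          itemArtwork?.image(at: CGSize(width: 100, height: 100)) as Any)
}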
Any help would be greatly appreciated. Thanks!
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug in the device so I know that it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone.
I want to stream audio from the microphone and have it rendered to the speakers on my MacBook Pro.
I want to capture sound data from the microphone and forward it to a live streaming API.
To my mind, from what I've read, I need AVAudioEngine to do this but I'm having trouble determining from the documentation just how to go about it on macOS. It seems that there is a lot more information for iOS or iPadOS but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now.
Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do using whatever is available in AVFoundation?
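For context, the direction I've been exploring (unverified; the assumption that AVCaptureDevice.uniqueID matches the Core Audio device UID is mine) is to translate the capture device's ID into an AudioDeviceID and point the engine's input unit at it:

import AVFoundation
import AudioToolbox
import CoreAudio

// Sketch (macOS): route a specific input device into AVAudioEngine.
func attach(captureDevice: AVCaptureDevice, to engine: AVAudioEngine) throws {
    // 1. Translate the device UID into an AudioDeviceID.
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyTranslateUIDToDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid = captureDevice.uniqueID as CFString
    var deviceID = AudioDeviceID(kAudioObjectUnknown)
    var size = UInt32(MemoryLayout<AudioDeviceID>.size)
    try withUnsafeMutablePointer(to: &uid) { uidPtr in
        let status = AudioObjectGetPropertyData(
            AudioObjectID(kAudioObjectSystemObject), &address,
            UInt32(MemoryLayout<CFString>.size), uidPtr, &size, &deviceID)
        guard status == noErr else {
            throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
        }
    }

    // 2. Point the engine's input unit (AUHAL) at that device before starting.
    if let inputUnit = engine.inputNode.audioUnit {
        AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global, 0,
                             &deviceID, UInt32(MemoryLayout<AudioDeviceID>.size))
    }

    // Goal 1: monitor the mic through the default output.
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: nil)
    // Goal 2: tap the input and forward buffers to the streaming API.
    engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
        // forward `buffer` to the live-streaming API here
    }
    try engine.start()
}

Would something along these lines be the intended approach, or is there a more direct bridge?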
Hello,
I tried to build the AVCam sample application for iOS 17 and run it on a MacBook (Designed for iPad) with macOS 14.3 (Sonoma).
https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc
When building and testing with Xcode 15.2, the AVCam application crashes systematically when choosing the target "My Mac (Designed for iPad)".
In fact, a SIGABRT signal is received in a thread dealing with the "portrait effect":
Thread 19 Queue : com.apple.portrait.effect_init (serial)
Is this a known bug? Is there a workaround for this case?
Best regards
An external webcam is detected by AVCam, but its preview and capture are systematically upside down (the built-in FaceTime HD camera may behave the same).
Is this a known bug? Is there a workaround for this case?
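A stopgap I'm experimenting with, not a real fix: rotating the preview connection by 180 degrees where supported (previewLayer here stands for the sample's AVCaptureVideoPreviewLayer; videoRotationAngle requires macOS 14 / iOS 17):

// Flip an upside-down preview via the connection's rotation angle.
if let connection = previewLayer.connection,
   connection.isVideoRotationAngleSupported(180) {
    connection.videoRotationAngle = 180
}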
I'm looking for a sample code project on integrating Spatial Audio into my app, Tunda Island, a music-centered friend-making and dating app. I have gone as far as purchasing the book "Exploring MusicKit" by Rudrank Riyam, but to no avail.
I am trying to use the Speech Synthesizer to speak the pronunciation of a word in British English rather than play a local audio file which I had before. However, I keep getting this in the debugger:
#FactoryInstall Unable to query results, error: 5 Unable to list voice folder Unable to list voice folder Unable to list voice folder IPCAUClient.cpp:129 IPCAUClient: bundle display name is nil Unable to list voice folder
Here is my code; any suggestions?
func playSampleAudio() {
    let speechSynthesizer = AVSpeechSynthesizer()
    let speechUtterance = AVSpeechUtterance(string: currentWord)

    // Search for a voice with a British English accent.
    let voices = AVSpeechSynthesisVoice.speechVoices()
    var foundBritishVoice = false
    for voice in voices {
        if voice.language == "en-GB" {
            speechUtterance.voice = voice
            foundBritishVoice = true
            break
        }
    }
    if !foundBritishVoice {
        print("British English voice not found. Using default voice.")
    }

    // Configure the utterance's properties as needed.
    speechUtterance.rate = AVSpeechUtteranceDefaultSpeechRate
    speechUtterance.pitchMultiplier = 1.0
    speechUtterance.volume = 1.0

    // Speak the word.
    speechSynthesizer.speak(speechUtterance)
}
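One thing I noticed while re-reading: the synthesizer is a local constant, and an AVSpeechSynthesizer that is deallocated stops speaking, so it may be worth holding it as a property. Also, a more compact way to request the voice, for anyone reproducing (assuming an en-GB voice is installed):

let utterance = AVSpeechUtterance(string: currentWord)
utterance.voice = AVSpeechSynthesisVoice(language: "en-GB") // falls back to the default voice if nil
speechSynthesizer.speak(utterance)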
Running in a Mac (Catalyst) target or on Apple Silicon (Designed for iPad), just accessing the playbackStoreID from an MPMediaItem shows this error in the console:
-[ITMediaItem valueForMPMediaEntityProperty:]: Unhandled MPMediaEntityProperty subscriptionStoreItemAdamID.
The value returned is always “”.
This works as expected on iOS and iPadOS, returning a valid playbackStoreID.
import SwiftUI
import MediaPlayer

@main
struct PSIDDemoApp: App {
    var body: some Scene {
        WindowGroup {
            Text("playbackStoreID demo")
                .task {
                    let authResult = await MPMediaLibrary.requestAuthorization()
                    if authResult == .authorized {
                        if let item = MPMediaQuery.songs().items?.first {
                            let persistentID = item.persistentID
                            let playbackStoreID = item.playbackStoreID // <--- Here
                            print("Item \(persistentID), \(playbackStoreID)")
                        }
                    }
                }
        }
    }
}
Xcode 15.1, also tested with Xcode 15.3 beta 2.
macOS Sonoma 14.3.1
FB13607631
HELP! How can I play a spatial video in my own Vision Pro app the way the official Photos app does? Following the official developer documentation, I've used the AVKit API to play a spatial video in the Xcode Vision Pro simulator. The video plays, but it looks different from what Photos shows: in Photos the edges of the video appear soft and feathered, while in my own app the edges are hard and clear.
How can I play a spatial video in my own app with the same effect as in Photos?
How do I add AudioToolbox in Xcode 15.2?
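In case the question is about linking: AudioToolbox is a system framework, so an import is normally all you need (or add it explicitly under the target's General > "Frameworks, Libraries, and Embedded Content" with the + button). A quick smoke test (sound ID 1104, the keyboard click, is just an illustrative choice):

import AudioToolbox

// If this compiles, links, and plays a click, AudioToolbox is available.
AudioServicesPlaySystemSound(SystemSoundID(1104))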
Hello, we are embedding a PHPickerViewController with UIKit (adding the view controller as a child, embedding its view, and calling didMoveToParent) in our app, using the compact mode. We are disabling the following capabilities: .collectionNavigation, .selectionActions, .search. A simplified version of the setup is sketched below.
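(The helper name embedPicker is ours, for illustration only.)

import PhotosUI
import UIKit

// Embed a compact PHPickerViewController as a child view controller.
func embedPicker(in parent: UIViewController) {
    var config = PHPickerConfiguration()
    config.mode = .compact
    config.disabledCapabilities = [.collectionNavigation, .selectionActions, .search]

    let picker = PHPickerViewController(configuration: config)
    parent.addChild(picker)
    picker.view.frame = parent.view.bounds
    parent.view.addSubview(picker.view)
    picker.didMove(toParent: parent)
}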
One of our users using iOS 17.2.1 and iPhone 12 encountered a crash with the following stacktrace:
Crashed: com.apple.main-thread
0 libsystem_kernel.dylib 0x9fbc __pthread_kill + 8
1 libsystem_pthread.dylib 0x5680 pthread_kill + 268
2 libsystem_c.dylib 0x75b90 abort + 180
3 PhotoFoundation 0x33b0 -[PFAssertionPolicyCrashReport notifyAssertion:] + 66
4 PhotoFoundation 0x3198 -[PFAssertionPolicyComposite notifyAssertion:] + 160
5 PhotoFoundation 0x374c -[PFAssertionPolicyUnique notifyAssertion:] + 176
6 PhotoFoundation 0x2924 -[PFAssertionHandler handleFailureInFunction:file:lineNumber:description:arguments:] + 140
7 PhotoFoundation 0x3da4 _PFAssertFailHandler + 148
8 PhotosUI 0x22050 -[PHPickerViewController _handleRemoteViewControllerConnection:extension:extensionRequestIdentifier:error:completionHandler:] + 1356
9 PhotosUI 0x22b74 __66-[PHPickerViewController _setupExtension:error:completionHandler:]_block_invoke_3 + 52
10 libdispatch.dylib 0x26a8 _dispatch_call_block_and_release + 32
11 libdispatch.dylib 0x4300 _dispatch_client_callout + 20
12 libdispatch.dylib 0x12998 _dispatch_main_queue_drain + 984
13 libdispatch.dylib 0x125b0 _dispatch_main_queue_callback_4CF + 44
14 CoreFoundation 0x3701c __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
15 CoreFoundation 0x33d28 __CFRunLoopRun + 1996
16 CoreFoundation 0x33478 CFRunLoopRunSpecific + 608
17 GraphicsServices 0x34f8 GSEventRunModal + 164
18 UIKitCore 0x22c62c -[UIApplication _run] + 888
19 UIKitCore 0x22bc68 UIApplicationMain + 340
20 WorkAngel 0x8060 main + 20 (main.m:20)
21 ??? 0x1bd62adcc (Missing)
Please share if you have any ideas as to what might have caused that, or what to look at in such a case. I haven't been able to reproduce this myself unfortunately.
Is AVQT capable of measuring the encoding quality of PQ- or HLG-based content, beyond SDR? If so, how can I leverage it? If not, is there a roadmap or timeline for enabling this in the tool?
I can't figure out how to get audio from my RealityKitContentBundle to play on Vision Pro...
I have a scene in Reality Composer Pro called "WinterVivarium" which contains a 3D model of a tree, a particle emitter, a ChannelAudio entity, and an audio file (m4a) with 30 minutes of nature sounds.
The 3D model and particle emitter load up just fine on my device, but I'm getting an error when I try to load the audio.
My Swift file is below. When I run the app and this code gets called, it throws the following error:
"Error loading winter vivarium model and/or audio: The operation couldn’t be completed. (RealityKit.__REAsset.LoadError error 2.)"
ChatGPT tells me error code 2 likely means "file not found" but I'm not sure on that one...
Please help!
import SwiftUI
import RealityKit
import RealityKitContent

struct WinterVivarium: View {
    @State private var angle: Angle = .degrees(0)

    var body: some View {
        RealityView { content in
            let audioFilePath = "/Root/back-yard-feb-7am.m4a"
            let audioEntity = Entity()
            do {
                let entity = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
                content.add(entity)

                let resource = try await AudioFileResource.load(named: audioFilePath,
                                                                from: "WinterVivarium.usda",
                                                                in: realityKitContentBundle)
                // The entity playing the audio must be part of the scene.
                content.add(audioEntity)
                _ = audioEntity.playAudio(resource)
            } catch {
                print("Error loading winter vivarium model and/or audio: \(error.localizedDescription)")
            }
        }
    }
}

#Preview {
    WinterVivarium()
}
Dear Sirs,
I've written an audio driver based on AudioDriverKit.
In my audio callback function I'm receiving calls with the IO operations IOUserAudioIOOperationWriteEnd and IOUserAudioIOOperationBeginRead as expected: I see IOUserAudioIOOperationWriteEnd operations during playback in an application like VLC or the browser, and I see IOUserAudioIOOperationBeginRead when recording in Audacity, etc.
But when I open System Settings, go to Sound, and select my driver as input, I also see calls with IOUserAudioIOOperationWriteEnd, which seem to carry the just-read input data. I can also observe this when starting up Teams. I think the purpose is to add the (mic) input to the output as well, so you have the chance to listen to yourself.
Nevertheless, I'd like to fully avoid this, but I don't see a way to distinguish between playback audio data and input audio data inside this callback. How could I do this?
Or, even better, is there a switch that would completely turn off these callbacks that forward the input to the output?
Thanks and best regards,
Johannes
Dear Sirs,
when writing an AudioServerPlugin I can use the host's WriteToStorage/CopyFromStorage functions to save and restore custom properties across a restart of the machine. Are there corresponding functions for an audio driver based on AudioDriverKit? What would be the recommended way to save and restore properties so that they are available again after a reboot?
Thanks and best regards,
Johannes
Hi,
The Apple-provided sample program “SideBySideToMVHEVC” removes the audio track from our videos, resulting in unusable content.
We would appreciate advice on:
the code changes required to keep the audio track,
or
Apple tools that permit re-inserting the audio track after “SideBySideToMVHEVC” (a sketch of this second approach follows below).
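A minimal sketch of the re-insertion idea, assuming we have the converted MV-HEVC file plus the original side-by-side source, and assuming (unverified) that a passthrough export preserves the MV-HEVC video track:

import AVFoundation

// Copy the audio track from the original source into the converted MV-HEVC
// movie, exporting with passthrough so neither track is re-encoded.
func mergeAudio(from sourceURL: URL, intoMVHEVCAt videoURL: URL, output: URL) async throws {
    let composition = AVMutableComposition()
    let videoAsset = AVURLAsset(url: videoURL)
    let audioAsset = AVURLAsset(url: sourceURL)

    if let videoTrack = try await videoAsset.loadTracks(withMediaType: .video).first,
       let slot = composition.addMutableTrack(withMediaType: .video,
                                              preferredTrackID: kCMPersistentTrackID_Invalid) {
        let duration = try await videoAsset.load(.duration)
        try slot.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                 of: videoTrack, at: .zero)
    }
    if let audioTrack = try await audioAsset.loadTracks(withMediaType: .audio).first,
       let slot = composition.addMutableTrack(withMediaType: .audio,
                                              preferredTrackID: kCMPersistentTrackID_Invalid) {
        let duration = try await audioAsset.load(.duration)
        try slot.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                 of: audioTrack, at: .zero)
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = output
    export.outputFileType = .mov
    await export.export()
}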
Sincerely,
Olaf
I'm currently working on an iPad application that uses a third-party SDK to scan a driver's license and then lets the user take a picture of themselves. However, when the user is directed to the self-photo view, the AVCaptureSession preview freezes. The app as a whole does not freeze, only the preview. I believe this is an issue with the OS, because it only happens on 9th-generation iPads; all the other iPads work fine. Has anyone else seen this issue? Also, is there any way to see logs from the AVCaptureSession so I can tell what is happening, and perhaps detect the freeze and restart the session?
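For the logging/restart angle, a sketch I'm considering that just observes the session's notifications (here `session` stands for the already-configured AVCaptureSession):

import AVFoundation

// Log capture-session events and attempt a restart after an interruption ends.
func observeCaptureSession(_ session: AVCaptureSession) {
    let center = NotificationCenter.default
    center.addObserver(forName: AVCaptureSession.runtimeErrorNotification,
                       object: session, queue: .main) { note in
        print("Capture runtime error:", note.userInfo?[AVCaptureSessionErrorKey] ?? "unknown")
    }
    center.addObserver(forName: AVCaptureSession.wasInterruptedNotification,
                       object: session, queue: .main) { note in
        print("Capture interrupted:", note.userInfo ?? [:])
    }
    center.addObserver(forName: AVCaptureSession.interruptionEndedNotification,
                       object: session, queue: .main) { _ in
        // startRunning blocks, so restart off the main thread.
        DispatchQueue.global(qos: .userInitiated).async {
            if !session.isRunning { session.startRunning() }
        }
    }
}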
I often find that basic actions in MusicKit are incredibly slow compared to Apple's Music app. I've tried different OS versions, devices, and networks, as well as Apple's sample code, over the last several years, and it is always the same. Does anyone else have this issue?
I'm creating an app that uses AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QRCode.
After updating to iPadOS 17.4, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices.
The following devices are experiencing this issue.
iPad (7th Gen)
iPad (6th Gen)
iPad Pro (10.5)
iPad Pro (12.9 2nd Gen)
This issue has not occurred on any other devices I have; it may affect only devices with model numbers of the form "iPad7,x".
I tried running the AVFoundation sample code from the Apple Developer site on the devices above. The same problem still occurs.
https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required after iPadOS 17.4?
Or is there some problem on the OS side?
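For reference, the relevant part of my setup follows the standard pattern (simplified sketch; the class name is mine):

import AVFoundation

final class ScannerController: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func configure() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        session.addOutput(output) // add before setting metadataObjectTypes
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.qr]

        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // Never called on the affected iPad7,x devices after 17.4.
        print(metadataObjects)
    }
}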
Hi all, I am a graduate student looking into making MV-HEVC videos streamable. May I ask whether it is possible to encode MV-HEVC videos for the HLS (HTTP Live Streaming) protocol?
I've been trying to use Apple's HLS tools to segment a spatial video recorded on Vision Pro:
mediafilesegmenter -iso-fragmented -t 4 -f sp_video-1-vp spatial-video-by-vp.MOV
But the output HLS playlist doesn't look like the format Apple proposed in the WWDC video. For example, the EXT-X-VERSION attribute is 7 instead of 12, and there is no REQ-VIDEO-LAYOUT=CH-STEREO attribute, which should be the key indicator of the spatial video type.
From what the WWDC video showcased, I assume Apple's HLS tools support it; maybe my usage is just not correct. Curious what you all think, thank you! (The kind of playlist entry I expected is sketched below.)
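Roughly the multivariant-playlist entry I expected based on the WWDC session (hand-written illustration; the bandwidth and resolution values are placeholders):

#EXT-X-VERSION:12
#EXT-X-STREAM-INF:BANDWIDTH=20000000,RESOLUTION=1920x1080,REQ-VIDEO-LAYOUT=CH-STEREO
prog_index.m3u8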
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances.
Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:
Configuration
Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality
Taking a photo
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoSettings.photoQualityPrioritization to .quality
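In code, those steps amount to roughly the following (simplified sketch; the helper names are mine):

import AVFoundation

// Configuration: pick a format that supports 8064x6048 and raise the output limits.
func configureFor48MP(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) throws {
    if let format = device.formats.first(where: { f in
        f.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    }
    photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    photoOutput.maxPhotoQualityPrioritization = .quality
}

// Taking a photo: request the full dimensions in the per-capture settings.
func take48MPPhoto(photoOutput: AVCapturePhotoOutput, delegate: any AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    settings.photoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}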
As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4.
Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug?
Hi everyone! Are there any plans, or existing alternatives, to include the date a track was added to a playlist in Apple Music's API[1]? This functionality exists on Spotify[2] (their "added_at" attribute), and it would be helpful for ordering tracks retrieved from playlists. Thanks in advance for any help!
[1]https://developer.apple.com/documentation/applemusicapi/get_a_catalog_playlist_s_relationship_directly_by_name
[2]https://developer.spotify.com/documentation/web-api/reference/get-playlists-tracks