I am an artist (a singer-songwriter) and I use the Photos app to manage albums related to my various creative projects. These are some big issues that I am surprised were never taken into account, or were perhaps overlooked:
Missing search bar when adding photos to albums: why is there no search bar when adding a photo to one of hundreds of albums? (Artists like me like to organize things into different albums and folders.)
I can no longer search for albums by name after the iOS 18 update, which was previously very helpful for quickly locating them.
Albums can be arranged and moved within the same folder, but there is no way to move albums between different folders; the only way is to create a new album in the destination folder, select and transfer everything, and delete the old album.
Hi!
I recently updated to the latest 18.2 Beta version of iOS on my iPhone 15 Pro Max. Could you please guide me on how to locate and utilize the Image Search feature powered by Apple Intelligence?
Just a little detail: I went on YouTube and the instruction was to hold the camera action button on the iPhone 16 and image search appears.
So far, I haven’t been able to replicate these results on my iPhone 15 Pro Max. This is a great capability and I’d really like to try it out.
“Live long and prosper.” -Spock
-Jordan
I was able to obtain the depth map image using AVCapturePhotoOutput from the delegate method
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
I convert the depth map to the kCVPixelFormatType_DepthFloat32 format and read the pixel values of the depth map using the code below:
func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    var convertedDepthMap: [[Float32]] = Array(
        repeating: Array(repeating: 0, count: width),
        count: height
    )
    // Lock the buffer for read-only access while copying values out.
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    let floatBuffer = unsafeBitCast(
        CVPixelBufferGetBaseAddress(depthMap),
        to: UnsafeMutablePointer<Float32>.self
    )
    for row in 0 ..< height {
        for col in 0 ..< width {
            if floatBuffer[width * row + col].isFinite {
                convertedDepthMap[row][col] = floatBuffer[width * row + col]
            }
        }
    }
    CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
    return convertedDepthMap
}
Is this the right way to access the float values from a depth map, and what unit are they in? Sometimes the depth values are around 0.7 when I hold the device close to the subject, around 15 to 30 cm away.
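For reference, a minimal sketch (assuming the same delegate photo as above): AVDepthData exposes its pixel format through depthDataType, and depth maps ('hdep'/'fdep') are expressed in meters while disparity maps ('hdis'/'fdis') are expressed in 1/meters, which affects how a value like 0.7 should be read.
import AVFoundation

// Sketch only: `photo` is assumed to be the AVCapturePhoto from the delegate above.
func describeDepthUnits(of photo: AVCapturePhoto) {
    guard let depthData = photo.depthData else { return }
    switch depthData.depthDataType {
    case kCVPixelFormatType_DepthFloat16, kCVPixelFormatType_DepthFloat32:
        print("Pixel values are depth, in meters")       // 'hdep' / 'fdep'
    case kCVPixelFormatType_DisparityFloat16, kCVPixelFormatType_DisparityFloat32:
        print("Pixel values are disparity, in 1/meters") // 'hdis' / 'fdis'
    default:
        print("Unexpected depth data type")
    }
}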
I'm trying to capture a depth map image using the TrueDepth camera on an iPhone 15 Plus. I was able to set up the AVCaptureSession with an AVCaptureDeviceInput for builtInTrueDepthCamera and an AVCapturePhotoOutput with isDepthDataDeliveryEnabled set to true. I also manually set the activeDepthDataFormat of the AVCaptureDevice to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32. Finally, I enabled isDepthDataDeliveryEnabled, embedsDepthDataInPhoto, embedsPortraitEffectsMatteInPhoto, and embedsSemanticSegmentationMattesInPhoto in AVCapturePhotoSettings before capturing the photo using the capturePhoto(with: photoSettings, delegate: self) method.
I checked by manually printing the activeDepthDataFormat of the AVCaptureDevice. Before setting it, the default is
Optional('dpth'/'hdis' 640x 480, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
After forcing it to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32, the format is
Optional('dpth'/'hdep' 160x 120, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
But when I receive the captured photo in
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
The depth map is
Optional(hdis 640x480 (high/abs) calibration:{intrinsicMatrix: [2723.07 0.00 2016.00 | 0.00 2723.07 1512.00 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm, distortionCenter:{2016.00,1512.00}, ref:{4032x3024}})
Here it shows hdis instead of hdep. Why is it capturing a disparity map instead of a true depth map?
The depth quality is high and the depth data accuracy is absolute.
Here is my code
import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var resultLbl: UILabel!

    private var session = AVCaptureSession()
    private var captureDevice: AVCaptureDevice?
    private var inputDevice: AVCaptureDeviceInput?
    private var photoOutput: AVCapturePhotoOutput?
    private var photoSettings: AVCapturePhotoSettings?
    private var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        self.setupCaptureSession()
    }

    func setupCaptureSession() {
        captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .unspecified)
        guard let captureDevice else {
            print("ERROR::UNABLE TO SET TRUE DEPTH CAMERA")
            return
        }
        session.beginConfiguration()
        do {
            inputDevice = try AVCaptureDeviceInput(device: captureDevice)
            guard let inputDevice else {
                print("ERROR: UNABLE TO SET UP INPUT DEVICE")
                return
            }
            if session.canAddInput(inputDevice) {
                session.addInput(inputDevice)
            }
        } catch {
            print(error)
        }
        photoOutput = AVCapturePhotoOutput()
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        if session.canAddOutput(photoOutput) {
            session.addOutput(photoOutput)
        }
        session.sessionPreset = .photo
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        print("IS DEPTH ENABLED:: \(photoOutput.isDepthDataDeliveryEnabled)")
        session.commitConfiguration()

        let availableFormats = captureDevice.activeFormat.supportedDepthDataFormats
        let depthFormat = availableFormats.filter { format in
            let pixelFormatType =
                CMFormatDescriptionGetMediaSubType(format.formatDescription)
            return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
                    pixelFormatType == kCVPixelFormatType_DepthFloat32)
        }.first

        session.beginConfiguration()
        try! captureDevice.lockForConfiguration()
        captureDevice.activeDepthDataFormat = depthFormat
        captureDevice.unlockForConfiguration()
        session.commitConfiguration()

        self.setupPreviewLayer()
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
        cameraPreviewLayer?.videoGravity = .resizeAspectFill
        if let cameraPreviewLayer {
            self.previewView.layer.addSublayer(cameraPreviewLayer)
            cameraPreviewLayer.frame = self.previewView.bounds
        }
        DispatchQueue.global(qos: .userInteractive).async {
            self.session.startRunning()
        }
    }

    @IBAction func captureBtnPressed(_ sender: Any) {
        photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        guard let photoSettings else {
            print("ERROR: UNABLE TO SETUP PHOTO SETTINGS")
            return
        }
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        photoSettings.embedsDepthDataInPhoto = true
        photoSettings.embedsPortraitEffectsMatteInPhoto = true
        photoSettings.embedsSemanticSegmentationMattesInPhoto = true
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        print(photo.depthData)
        switch photo.depthData?.depthDataQuality {
        case .low:
            print("Depth quality is low")
        case .high:
            print("Depth quality is high")
        case nil:
            print("Depth quality is nil")
        }
        switch photo.depthData?.depthDataAccuracy {
        case .relative:
            print("Depth accuracy is relative")
        case .absolute:
            print("Depth accuracy is absolute")
        case nil:
            print("Depth accuracy is nil")
        }
        if let imageData = photo.fileDataRepresentation() {
            if let image = UIImage(data: imageData) {
                UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
            }
        }
    }
}
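For the hdis-vs-hdep question above, note that AVDepthData can be converted between disparity and depth representations. A minimal sketch (not the original code) that normalizes whatever the capture delivers to kCVPixelFormatType_DepthFloat32 before reading values:
import AVFoundation

// Sketch: convert delivered AVDepthData (possibly disparity, 'hdis') to DepthFloat32.
func depthFloat32Data(from photo: AVCapturePhoto) -> AVDepthData? {
    guard var depthData = photo.depthData else { return nil }
    if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
    return depthData
}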
When trying to edit some Live Photos, calling PHLivePhotoEditingContext.saveLivePhoto results in the following error:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12815), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x300d05380 {Error Domain=NSOSStatusErrorDomain Code=-12815 "(null)"}}
I was able to replicate it on my device by taking a new Live Photo. I'm not sure what's wrong with that one specifically; not all Live Photos reproduce the issue.
I've submitted FB15880825 with a sysdiagnose and a Photos Diagnostics as well. Any ideas what's going on here? It's impacting multiple customers. Thanks!
I have an app that allows the user to change a photo's EXIF metadata. To do this, I request a content editing input, get the full-size image, modify its properties, create a content editing output, write the output image to the rendered content URL, then call performChanges on the PHPhotoLibrary, creating an asset change request for that asset and setting its content editing output. This works as expected for regular photos, but Live Photos get converted to regular photos (the Live Photo is turned off).
To address this, I do something similar by changing the properties of the .photo image in the Live Photo. I detect when the content editing input has a Live Photo, create a Live Photo editing context, set a frame processor that, when the frame type is photo, returns the frame's image after applying the updated properties, then create the content editing output and save the Live Photo to that output. It modifies the Live Photo successfully, but the metadata is not updated: if you get the full-size image again, the properties are the original ones, and if you inspect the EXIF metadata with an app like Metapho, it remains unchanged. What am I doing wrong here? Thanks!
let imageURL = contentEditingInput.fullSizeImageURL!
let inputImage = CIImage(contentsOf: imageURL, options: [.applyOrientationProperty: true])!
var metadata: [AnyHashable: Any] = inputImage.properties

// Edit the metadata as desired...

let editingContext = PHLivePhotoEditingContext(livePhotoEditingInput: contentEditingInput)!
editingContext.frameProcessor = { frame, error -> CIImage? in
    // Edit only the still photo
    if frame.type == .photo {
        return frame.image.settingProperties(metadata)
    }
    return frame.image
}

let contentEditingOutput = try await withCheckedThrowingContinuation { continuation in
    let editingOutput = PHContentEditingOutput(contentEditingInput: contentEditingInput)
    editingOutput.adjustmentData = adjustmentData
    editingContext.saveLivePhoto(to: editingOutput) { success, error in
        if success {
            continuation.resume(returning: editingOutput)
        } else {
            continuation.resume(throwing: error!)
        }
    }
}

try await PHPhotoLibrary.shared().performChanges {
    let request = PHAssetChangeRequest(for: asset)
    request.contentEditingOutput = contentEditingOutput
}
I tried several times to use the PhotosPickerItem loadTransferable function with the goal of receiving a progress value, especially when loading large video media.
However, the Progress object returned by the function always has isIndeterminate == true, so there is no progress value to observe.
Is there some way to make it work? Some configuration I might have overlooked? Or is it just not working?
I might have to revert to the UIKit photo picker because of this.
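For context, a minimal sketch of the call shape described above, using the completion-handler overload of loadTransferable that returns a Progress and observing fractionCompleted via KVO. The helper name load(item:) is illustrative, and, as reported above, the returned Progress may stay indeterminate:
import PhotosUI

// Illustrative helper, not a fix: observe the Progress returned by loadTransferable.
func load(item: PhotosPickerItem) -> NSKeyValueObservation {
    let progress = item.loadTransferable(type: Data.self) { result in
        switch result {
        case .success(let data):
            print("Loaded \(data?.count ?? 0) bytes")
        case .failure(let error):
            print("Load failed: \(error)")
        }
    }
    // With picker items this has been observed to stay indeterminate, so it may never update.
    return progress.observe(\.fractionCompleted) { progress, _ in
        print("fractionCompleted: \(progress.fractionCompleted)")
    }
}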
I have a webview that loads videos, and we would like to be able to play them fullscreen, so we enable the fullscreen preference from the documentation. However, when it is set to true, entering fullscreen on a video and then pausing it makes the entire video player disappear.
You can exit fullscreen and try to fullscreen the video player again, but then the entire app view disappears and you see your desktop background (or whatever is currently behind the app). This behavior is consistent across multiple websites in the current app. I have set up a sample project you can test here.
The main error that appears in the console is the following; I have not been able to find a solution, so maybe I am simply missing something. I am on macOS Sequoia 15.2.
Attempting to update all DD element frames, but the bounds or contentsRect are invalid. Bounds: X: 0.00 Y: 0.00, W: 0.00 H: 0.00, contentsRect: X: 0.00 Y: 0.00, W: 1.00 H: 1.00 , skipping
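For reference, a minimal sketch of the setup being described (the isElementFullscreenEnabled preference on WKPreferences); this only reproduces the configuration and is not a fix for the disappearing player:
import WebKit

// Sketch of the fullscreen preference referred to above.
func makeVideoWebView() -> WKWebView {
    let configuration = WKWebViewConfiguration()
    configuration.preferences.isElementFullscreenEnabled = true
    return WKWebView(frame: .zero, configuration: configuration)
}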
We have had the same video player in our app for at least five years with few issues, but the iOS 18 update has resulted in videos that our users downloaded for offline viewing now playing back at 2x speed.
While customizing and using our ImagePicker, we found that metadata is not reflected correctly, so we are reporting it.
The situation is as follows.
The time or time zone of an image is changed in the Photos app: for example, an image with an actual capture date of 2024:11:08 08:27:44 has its time zone changed so that it becomes 2024:11:07 17:27:44.
Image data is then extracted from the PHAsset using PHImageManager, and the metadata is obtained from this image data.
The time zone information exposed in the EXIF tags does not reflect the time or time zone that was changed in the Photos app.
let asset: PHAsset = ...
....
let options = PHImageRequestOptions()
options.isSynchronous = true
options.version = .current
options.deliveryMode = .highQualityFormat
options.resizeMode = .none
options.normalizedCropRect = .zero
options.isNetworkAccessAllowed = true
options.progressHandler = { progress, error, _, _ in }
PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { imageData, uti, orientation, info in
    let cgImageSource = CGImageSourceCreateWithData(imageData! as CFData, nil)
    let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource!, 0, nil) as? Dictionary<String, Any>
    let exif = properties!["{Exif}"]
    let dictionary = exif as? Dictionary<String, Any>
}
Metadata Check
In this case, the change is reflected in the creationDate of the PHAsset, so it can be somewhat compensated for by forcibly replacing the metadata; a sketch of this follows below.
However, because PHAsset does not include time zone information, when the time zone is also changed it's impossible to calculate the correct time for that time zone.
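A rough sketch of that compensation, assuming the EXIF dictionary has already been extracted as in the PHImageManager snippet above; the function name is illustrative, and the original time zone is still lost because PHAsset does not expose it:
import Photos
import ImageIO

// Illustrative only: overwrite the EXIF capture date with PHAsset.creationDate.
func compensatedExif(_ exif: [String: Any], with asset: PHAsset) -> [String: Any] {
    var exif = exif
    if let creationDate = asset.creationDate {
        let formatter = DateFormatter()
        formatter.dateFormat = "yyyy:MM:dd HH:mm:ss" // EXIF date format
        exif[kCGImagePropertyExifDateTimeOriginal as String] = formatter.string(from: creationDate)
    }
    return exif
}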
PHPicker
This issue is resolved when using the PHPickerResult provided by PHPicker.
extension PhotosPickerViewController: PHPickerViewControllerDelegate {
    public func picker(_ picker: PHPickerViewController,
                       didFinishPicking results: [PHPickerResult]) {
        .....
        for result in results {
            let identifier = UTType.image.identifier
            if result.itemProvider.hasItemConformingToTypeIdentifier(identifier) {
                result.itemProvider.loadDataRepresentation(forTypeIdentifier: identifier) { data, error in
                    guard let data = data,
                          let cgImageSource = CGImageSourceCreateWithData(data as CFData, nil),
                          let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource, 0, nil) as? Dictionary<String, Any>,
                          let exif = properties["{Exif}"],
                          let dictionary = exif as? Dictionary<String, Any>
                    else {
                        return
                    }
                }
            }
        }
    }
}
Metadata Check
Question
I wonder why this happens, and whether this is normal behavior.
If I use a custom picker instead of the system picker that Apple provides, is there any way to work around this situation?
There are various microphones that can be connected via the 3.5 mm jack, via USB, or via Bluetooth; the behavior is the same with all of them.
The code below gets access to a microphone (connected to the 3.5 mm audio jack) and starts an audio capture session, at which point the microphone-in-use icon appears. Capture from the audio device continues for a few seconds, then the session stops and the icon disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start another audio capture session, and the icon appears again. After a few seconds, access to the microphone stops, the capture session stops, and the icon disappears.
Now repeat the same steps, but after the first session stops, unplug the microphone from the connector and plug it back in before starting the second session. In this case the second access attempt begins and the code returns no errors, but the microphone-in-use icon is not displayed, and that is the problem. After the program is terminated and restarted, the icon is displayed again.
This is only the tip of the iceberg: in practice, sound cannot be recorded from the microphone after reconnecting it until the program is restarted.
Is this normal behavior of the AVFoundation framework? Is there a way to make access work correctly after the microphone is reconnected, so that the usage indicator is displayed? What additional steps should the programmer take in this case? Is this behavior documented anywhere?
Below is the code to demonstrate the described behavior.
I am also attaching an example of the microphone usage indicator icon.
Computer description: MacBook Pro 13-inch 2020 Intel Core i7 macOS Sequoia 15.1.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

#include <AVFoundation/AVFoundation.h>
#include <Foundation/NSString.h>
#include <Foundation/NSURL.h>

AVCaptureSession* m_captureSession = nullptr;
AVCaptureDeviceInput* m_audioInput = nullptr;
AVCaptureAudioDataOutput* m_audioOutput = nullptr;

std::condition_variable conditionVariable;
std::mutex mutex;
bool responseToAccessRequestReceived = false;

void receiveResponse()
{
    std::lock_guard<std::mutex> lock(mutex);
    responseToAccessRequestReceived = true;
    conditionVariable.notify_one();
}

void waitForResponse()
{
    std::unique_lock<std::mutex> lock(mutex);
    conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
}

void requestPermissions()
{
    responseToAccessRequestReceived = false;
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted)
    {
        const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
        std::cout << "Request completion handler granted: " << (int)granted << ", status: " << status << std::endl;
        receiveResponse();
    }];
    waitForResponse();
}

void timer(int timeSec)
{
    for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining)
    {
        std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

bool updateAudioInput()
{
    [m_captureSession beginConfiguration];
    if (m_audioOutput)
    {
        AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
        [m_captureSession removeConnection:lastConnection];
    }
    if (m_audioInput)
    {
        [m_captureSession removeInput:m_audioInput];
        [m_audioInput release];
        m_audioInput = nullptr;
    }
    AVCaptureDevice* audioInputDevice = [AVCaptureDevice deviceWithUniqueID:[NSString stringWithUTF8String:"BuiltInHeadphoneInputDevice"]];
    if (!audioInputDevice)
    {
        std::cout << "Error creating input audio device" << std::endl;
        return false;
    }
    // m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:nil];
    NSError *error = nil;
    m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
    if (error)
    {
        const auto code = [error code];
        const auto domain = [error domain];
        const char* domainC = domain ? [domain UTF8String] : "(null)";
        std::cout << code << " " << domainC << std::endl;
    }
    if (m_audioInput && [m_captureSession canAddInput:m_audioInput])
    {
        [m_audioInput retain];
        [m_captureSession addInput:m_audioInput];
    }
    else
    {
        std::cout << "Failed to create audio device input" << std::endl;
        return false;
    }
    if (!m_audioOutput)
    {
        m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput])
        {
            [m_captureSession addOutput:m_audioOutput];
        }
        else
        {
            std::cout << "Failed to add audio output" << std::endl;
            return false;
        }
    }
    [m_captureSession commitConfiguration];
    return true;
}

void start()
{
    std::cout << "Starting..." << std::endl;
    const bool updatingResult = updateAudioInput();
    if (!updatingResult)
    {
        std::cout << "Error while updating audio input" << std::endl;
        return;
    }
    [m_captureSession startRunning];
}

void stop()
{
    std::cout << "Stopping..." << std::endl;
    [m_captureSession stopRunning];
}

int main()
{
    requestPermissions();
    m_captureSession = [[AVCaptureSession alloc] init];
    start();
    timer(5);
    stop();
    timer(10);
    start();
    timer(5);
    stop();
}
Does anyone know how to resolve this, or how long it takes to get access to Playground?
I have an app that allows you to edit your photos. To preserve HDR, I edit both the SDR image and gain map image, like so:
let sdrImage = CIImage(data: data, options: [.applyOrientationProperty: true])
let gainMapImage = CIImage(data: data, options: [.applyOrientationProperty: true, .auxiliaryHDRGainMap: true])
// edit them...
try CIContext().writeHEIFRepresentation(of: sdrImage, to: url, format: .RGBA8, colorSpace: colorSpace, options: [.hdrGainMapImage: gainMapImage])
I also support editing the still photo in Live Photos. To do this, you create a PHLivePhotoEditingContext, set the frameProcessor block (which gives you a CIImage that I edit when the frame.type is .photo), then create a PHContentEditingOutput and call saveLivePhoto. I'm not seeing any way to preserve HDR here. Interestingly, the frame processor is called twice with the .photo frame type, but I don't see any difference between those images. How can I edit a gain map image to preserve HDR in the still photo of a Live Photo?
There are many usability issues and glitches.
When reviewing video clips in the Photos app, there is no way to get a full-screen view and pause and scroll forward and backward; it only shows the shrunken view.
When zooming in on a video and scrolling forward and backward, it only shows part of the clip, and you cannot scroll to the end of the clip. This must be a glitch in the software.
Was any of this app tested for usability beforehand? iOS 18, in general, may be one of the worst releases I've experienced.
This may all have been said many times already, but my goodness, fix the Photos app immediately!!
It’s impossible to enjoy video playback with this update. 1) Starting with the play/pause location, it is not friendly to your hand; 2) How are we supposed to step back and forth just a few frames? I was thinking there must be a hidden button somewhere to re-enable the old playback, because this is absurd.
3) Video playback with the buttons enabled makes the video smaller; that's ridiculous.
I have noticed a problem when a PHAsset creation request is made with the resource type PHAssetResourceType.photoProxy.
let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .photoProxy, data: photoData, options: nil)
creationRequest.location = location
creationRequest.isFavorite = true
After successfully saving the resulting asset through PHPhotoLibrary.shared().performChanges, I could verify it in the Photos app.
I noticed that the created photo was initially marked as Favorite and that the location was added to the info as expected. The title of the image changes from "Today" to "" too.
Next, the photo was refreshed, and location data was purged. However, the title remains unchanged and displays the .
This refresh was also observed in code: the PHPhotoLibraryChangeObserver protocol's func photoLibraryDidChange(_ changeInstance: PHChange) receives a change notification. The same asset has changed, and there is no location information anymore; the isFavorite information persists correctly.
After debugging for a few hours, I discovered that changing the resource type to .photo fixes this issue. Location data is not removed in the Photos app, and no refresh callback is seen in func photoLibraryDidChange(_ changeInstance: PHChange).
I initially used .photoProxy because in my AVCapturePhotoCaptureDelegate implementation class I always get the callback in func photoOutput(_ output: AVCapturePhotoOutput, didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?, error: Error?). That is where I capture the photo data, as photoData = deferredPhotoProxy?.fileDataRepresentation(); a sketch of this callback follows.
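For context, here is a minimal sketch of that delegate callback (the class name is illustrative); the proxy's fileDataRepresentation() is the data that is then added with .photoProxy:
import AVFoundation

// Illustrative delegate: capture the deferred photo proxy's data for later saving.
final class ProxyCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    var photoData: Data?

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil else { return }
        // AVCaptureDeferredPhotoProxy is an AVCapturePhoto subclass, so
        // fileDataRepresentation() is available here.
        photoData = deferredPhotoProxy?.fileDataRepresentation()
    }
}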
In the WWDC 24 session "Use HDR for dynamic image experiences in your app" it's noted this is how you save edits for Adaptive HDR:
SDR + HDR: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])
SDR + Gain: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])
This won't compile because the format argument is missing. What format should be used?
In the WWDC 23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use.
If relevant, I'm editing photos from the user's photo library, so the image was probably taken on iPhone but perhaps not. Thanks!
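For what it's worth, the call compiles once a CIFormat is supplied. A sketch with .RGBA16 as an assumed placeholder (the WWDC 23 session's other candidates were RGBAf, RGBAh, and RGB10), not a confirmed recommendation:
import CoreImage

// Sketch only: .RGBA16 is an assumption, not Apple's stated answer.
func writeAdaptiveHDR(sdrImage: CIImage, gainImage: CIImage,
                      to url: URL, colorSpace p3Space: CGColorSpace) throws {
    let context = CIContext()
    try context.writeHEIFRepresentation(of: sdrImage,
                                        to: url,
                                        format: .RGBA16,
                                        colorSpace: p3Space,
                                        options: [.hdrGainMapImage: gainImage])
}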
Hey all!
In my personal quest to make future-proof apps by moving to Swift 6, one of my apps has a problem when setting an artwork image in MPNowPlayingInfoCenter.
Here's what I'm using to set the metadata
func setMetadata(title: String? = nil, artist: String? = nil, artwork: String? = nil) async throws {
    let defaultArtwork = UIImage(named: "logo")!
    var nowPlayingInfo = [
        MPMediaItemPropertyTitle: title ?? "***",
        MPMediaItemPropertyArtist: artist ?? "***",
        MPMediaItemPropertyArtwork: MPMediaItemArtwork(boundsSize: defaultArtwork.size) { _ in
            defaultArtwork
        }
    ] as [String: Any]

    if let artwork = artwork {
        guard let url = URL(string: artwork) else { return }
        let (data, response) = try await URLSession.shared.data(from: url)
        guard (response as? HTTPURLResponse)?.statusCode == 200 else { return }
        guard let image = UIImage(data: data) else { return }
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size) { _ in
            image
        }
    }

    MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
}
the app crashes when hitting
MPMediaItemPropertyArtwork: MPMediaItemArtwork(boundsSize: defaultArtwork.size) { _ in
    defaultArtwork
}
or
nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size) { _ in
    image
}
Commenting out these two makes the app work again.
Again, I have no clue why.
Thanks in advance
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734).
On iOS 17 this gave the warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()

        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
I'm wondering if there is an alternative method to set the now playing artwork?
I have an HDR10+ encoded video that plays back on the Apple Vision Pro if loaded as a .mov, but when that video is encoded using the latest (1.23b) Apple HLS tools to generate an fMP4, the resulting m3u8 cannot be played back on the Apple Vision Pro and I only get a "Cannot Open" error.
To generate the m3u8, I'm just calling mediafilesegmenter (with -iso-fragmented) and then variantplaylistcreator. This completes with no errors, but the m3u8 plays back on the Mac using VLC and not on the Apple Vision Pro.
The relevant part of the m3u8 is:
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=40022507,BANDWIDTH=48883974,VIDEO-RANGE=PQ,CODECS="ec-3,hvc1.1.60000000.L180.B0",RESOLUTION=4096x4096,FRAME-RATE=24.000,CLOSED-CAPTIONS=NONE,AUDIO="audio1",REQ-VIDEO-LAYOUT="CH-STEREO"
{{url}}
Has anyone been able to use the HLS tools to generate fMP4s of MV-HEVC videos with HDR10?