We are currently in the process of migrating our application from using ALAssetsLibrary to PHPhotoLibrary to ensure compatibility with the latest versions of iOS. However, we have noticed a discrepancy in the file sizes of images obtained using PHPhotoLibrary compared to those obtained using ALAssetsLibrary.
Specifically, we would like to understand the following points:
1. Reason for File Size Differences:
What are the reasons for the difference in file sizes between images obtained using ALAssetsLibrary and those obtained using PHPhotoLibrary?
Could you provide detailed information on the settings and options in PHPhotoLibrary that affect the size and quality of the images?
2. Optimal Settings:
What are the optimal settings in PHPhotoLibrary to obtain images with the same quality and file size as those obtained using ALAssetsLibrary?
If possible, could you provide code examples or recommended option settings?
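In the meantime, here is a minimal sketch of one setting that often explains such differences: PHImageManager can return either the unmodified original bytes or a re-rendered/adjusted version, and requesting the original is usually the closest match to what ALAssetsLibrary's full-resolution representation returned. This is only a sketch; asset is a placeholder for a PHAsset you have already fetched.

import Photos

let options = PHImageRequestOptions()
options.version = .original              // unadjusted original data, not a re-rendered version
options.deliveryMode = .highQualityFormat
options.isNetworkAccessAllowed = true    // allow iCloud originals to be downloaded
options.isSynchronous = false

PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, dataUTI, _, _ in
    if let data {
        print("size in bytes:", data.count, "type:", dataUTI ?? "unknown")
    }
}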
Hi. I want to read ADPCM encoded audio data, coming from an external device to my Mac via serial port (/dev/cu.usbserial-0001) as 256-byte chunks, and feed it into an audio player. So far I am using Swift and SwiftSerial (GitHub - mredig/SwiftSerial: A Swift Linux and Mac library for reading and writing to serial ports) to get the data via serialPort.asyncBytes() into an AsyncStream, but I am struggling to understand how to feed the stream to an AVAudioPlayer or similar. I am new to Swift and macOS audio development, so any help to get me on the right track is greatly appreciated. Thanks
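For the playback side, here is a minimal sketch of one possible approach, assuming the ADPCM chunks are decoded to PCM samples by your own code first; the decodeADPCM step and the 16 kHz mono format are placeholders, not part of SwiftSerial or AVFoundation.

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
// Assumption: mono float PCM at 16 kHz; match this to the device's real format.
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 16_000, channels: 1, interleaved: false)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try engine.start()
player.play()

// Wrap each decoded chunk in an AVAudioPCMBuffer and queue it on the player node.
func schedule(samples: [Float]) {
    guard !samples.isEmpty,
          let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples.count)) else { return }
    buffer.frameLength = AVAudioFrameCount(samples.count)
    samples.withUnsafeBufferPointer { src in
        buffer.floatChannelData![0].update(from: src.baseAddress!, count: samples.count)
    }
    player.scheduleBuffer(buffer)
}

// Hypothetical usage with the serial stream: for each 256-byte chunk,
// decode it with your ADPCM decoder (decodeADPCM is a placeholder) and call schedule(samples:).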
Hello.
We are trying to get audio volume from microphone.
We have 2 questions.
1. Can anyone tell me about AVAudioEngine.InputNode.volume?
AVAudioEngine.InputNode.volume
We expected it to return 0 in silence and a float value up to 1.0 depending on the input level, but it seems to always return 1.0 (the default value).
In which cases does it return, say, 0.5 or 0?
Sample code is below. Microphone works correctly.
// instance member
private var engine: AVAudioEngine!
private var node: AVAudioInputNode!
// start method
self.engine = .init()
self.node = engine.inputNode
engine.prepare()
try! engine.start()
// volume getter
print(self.node.volume)
2. What is the best practice to get audio volume from the microphone? (See the sketch after the requirements below.)
Requirements are:
Without AVAudioRecorder (we use the audio for streaming).
It should withstand high-frequency access.
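A minimal sketch of one common metering approach, which does not rely on InputNode.volume: install a tap on the input node and compute an RMS level for each captured buffer.

import AVFoundation
import Accelerate

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Compute an RMS level (in dBFS) for every captured buffer.
input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))
    let level = 20 * log10(max(rms, .leastNonzeroMagnitude))
    print("input level: \(level) dBFS")
}

engine.prepare()
try engine.start()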
Testing info
device: iPhone XR
OS version: iOS 18
Best Regards.
Hello, I will use AVFoundation's AVAssetWriter and AVPlayer for H.264 and H.265 encoding and decoding in my app. It will be used commercially, so I would like to know if I need to pay any licensing fees for H.264 and H.265 encoding and decoding.
I have an app that allows you to edit your photos. To preserve HDR, I edit both the SDR image and gain map image, like so:
let sdrImage = CIImage(data: data, options: [.applyOrientationProperty: true])
let gainMapImage = CIImage(data: data, options: [.applyOrientationProperty: true, .auxiliaryHDRGainMap: true])
// edit them...
try CIContext().writeHEIFRepresentation(of: sdrImage, to: url, format: .RGBA8, colorSpace: colorSpace, options: [.hdrGainMapImage: gainMapImage])
I also support editing the still photo in Live Photos. To do this you create a PHLivePhotoEditingContext, set the frameProcessor block which gives you a CIImage that I edit when the frame.type is .photo, then you create a PHContentEditingOutput and call saveLivePhoto. I’m not seeing any way to preserve HDR here. Interestingly the frame processor is called twice with .photo frame.type, but I don’t see any difference between these images. How can I edit a gain map image to preserve HDR in the still photo of a Live Photo?
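For reference, here is a minimal sketch of the frame-processor flow described above, assuming input is the PHContentEditingInput for the Live Photo and applyEdit is a placeholder for the CIImage edit; it shows where the still frame is edited, but it does not by itself answer how to carry the gain map through.

import Photos
import CoreImage

func editLivePhoto(input: PHContentEditingInput, applyEdit: @escaping (CIImage) -> CIImage) {
    guard let context = PHLivePhotoEditingContext(livePhotoEditingInput: input) else { return }

    context.frameProcessor = { frame, _ in
        var image = frame.image
        if frame.type == .photo {
            image = applyEdit(image)   // only the still photo frame is edited here
        }
        return image
    }

    let output = PHContentEditingOutput(contentEditingInput: input)
    context.saveLivePhoto(to: output, options: nil) { success, error in
        print("saved:", success, error?.localizedDescription ?? "")
    }
}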
My iPhone 15 Plus suddenly turns black and a loading icon keeps spinning. Then it goes away and I can use the phone again; it only lasts a few seconds.
I have updated to the iOS 18.1 beta; could this be the issue? Is my phone broken?
I have tried restarting my phone.
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects.
I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep).
Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734).
On iOS 17 this gave the warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()

        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
I'm wondering if there is an alternative method to set the now playing artwork?
I am making an app that can take two videos and then stitch them together on the screen (one video on the top half and the other on the bottom half).
This is achieved with AVMutableComposition, and then I am using AVAssetExportSession to export an mp4 file:
guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPreset1920x1080) else {
    return
}
export.exportAsynchronously {
    ....
When the two input videos are around 1GB each, starting the export session immediately increases memory usage by ~2GB, as if it moves the input files into memory immediately (my guess), and at some point my app is killed for using too much memory.
Is there a way to avoid this upfront memory usage and/or avoid getting killed?
I observe significant performance differences when encoding a video in mp4 format (H264). The code I use is standard (using AVAssetWriter, AVAssetWriterInput...).
Here is what I notice when I run the same code on different platforms:
On an iPhone, the video is encoded in 3 seconds (iPhone 13, 14, 15, 16, Pro...).
On a Mac equipped with an M2 Pro, the video is encoded in 50 seconds.
On a Mac equipped with an Intel processor (2.3 GHz Intel Xeon W, 18 cores), the video is encoded in 2 minutes.
The encoding on an iPhone is very fast thanks to hardware acceleration. However, I don't understand why I don't get similar performance on a Mac with an M2 Pro, which also has a dedicated hardware acceleration component (the H.264 media engine).
Is hardware acceleration disabled on a Mac?
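One way to check whether a hardware H.264 encoder is actually available on the Mac in question is to ask VideoToolbox for one explicitly. This is only a diagnostic sketch; the 1920x1080 dimensions are arbitrary, and AVAssetWriter normally picks the encoder on its own.

import VideoToolbox

var session: VTCompressionSession?
// Require (not just prefer) a hardware-accelerated encoder; this key is macOS-only.
let spec = [kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder as String: true] as CFDictionary

let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1920,
    height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: spec,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session
)
print(status == noErr ? "hardware H.264 encoder is available" : "no hardware encoder (status \(status))")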
See Configuration Details at the end of this message.
Despite numerous attempts, I have been unable to determine the correct syntax to fetch photo albums from my iPad Pro 13-inch using Xcode and Swift.
All the photo albums were synced to the iPad Pro 13-inch using the latest version of Apple iTunes for Windows from an external Western Digital G-Drive hard drive (no iCloud). All synced albums appear under "From My Mac" on the iPad. I only want to access each album's photo and video count.
See the sample code snippet below. I have tried multiple subtype options and album types without success. Zero albums are always returned, despite there being around 3,900 albums in the iPad Pro's photo library. Authorization to the photo library does not appear to be the problem.
PHPhotoLibrary.requestAuthorization { status in
    if status == .authorized {
        let result = PHAssetCollection.fetchAssetCollections(with: .album, subtype: .any, options: nil)
        if result.count == 0 {
            print("No albums found.")
            return
        }
    }
}
Any help or suggestions would greatly be appreciated.
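In case it helps, here is a minimal sketch of one thing worth trying, based on the assumption that iTunes-synced "From My Mac" albums are exposed with the .albumSyncedAlbum subtype rather than matching a plain .album fetch; the per-album count comes from fetching the assets in each collection.

import Photos

let synced = PHAssetCollection.fetchAssetCollections(with: .album, subtype: .albumSyncedAlbum, options: nil)
print("Synced albums found:", synced.count)

// Count the items in each synced album.
synced.enumerateObjects { collection, _, _ in
    let itemCount = PHAsset.fetchAssets(in: collection, options: nil).count
    print(collection.localizedTitle ?? "Untitled", "-", itemCount, "items")
}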
Configuration Details
iPad Pro 13-inch (M4) (iPad16,6)
iPadOS = 17.7
iCloud = Turned Off
iPad Pro Photo Library Albums = 3900
iPad Pro Photo Library Photos = 118000
iPad Pro Photo Library Videos = 4800
macOS = Sonoma 14.6.1
Xcode Version = 16.0
Swift Version = 5.0 (Xcode Default)
Microsoft Windows 10 Pro Version = 22H2
Apple iTunes for Windows = 12.13.4.4
In the WWDC 24 session "Use HDR for dynamic image experiences in your app" it's noted this is how you save edits for Adaptive HDR:
SDR + HDR: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])
SDR + Gain: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])
This won't compile because the format argument is missing. What format should be used?
In the WWDC 23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use.
If relevant, I'm editing photos from the user's photo library, so the image was probably taken on iPhone but perhaps not. Thanks!
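For reference, here is a sketch of the call with a format supplied, reusing the names from the snippet above; .RGBA8 is an assumption on my part (an 8-bit SDR base image, with the HDR carried by the gain map), not something the session confirms, and a deeper format such as .RGB10 may be preferable if you need more precision.

import CoreImage

// Assumes sdrImage, gainImage, url, and p3Space are defined as in the snippet above.
let context = CIContext()
try context.writeHEIFRepresentation(
    of: sdrImage,
    to: url,
    format: .RGBA8,                        // assumption; see note above
    colorSpace: p3Space,
    options: [.hdrGainMapImage: gainImage]
)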
Hi,
I'm working on an integration with the Apple Music Feed, but over the last day, I've been getting a 500 Upstream Service Error on 99% of the API calls I make.
Remarkably, retrying the same endpoint 20-30 times sometimes gives the correct response, but mostly it's a 500 error.
Just to give an example, this:
GET https://api.media.apple.com/v1/feed/album/latest
Returns this generic error:
{
  "errors": [
    {
      "id": "U4ARRA2QDCGYKYRI2IPEJVBTHY",
      "title": "Upstream Service Error",
      "detail": "Call to get metadata for album feed failed",
      "status": "500",
      "code": "50001"
    }
  ]
}
The same goes for other feeds like song, artist, and so on.
Before today, I did get the same error message sometimes, but a few retries would solve the issue.
Any insight on what's happening and/or an ETA on fixing it?
Thank you
Can anyone please guide me on how to use SFCustomLanguageModelData.CustomPronunciation?
I am following the below example from WWDC23
https://wwdcnotes.com/documentation/wwdcnotes/wwdc23-10101-customize-ondevice-speech-recognition/
To use this kind of custom pronunciation, we need the X-SAMPA string of the specific word.
There are tools available on the web to do this:
Word to IPA: https://openl.io/
IPA to X-SAMPA: https://tools.lgm.cl/xsampa.html
But these tools do not seem to produce the same kind of X-SAMPA strings used in the demo; for example, "Winawer" is converted to "w I n aU @r" there.
The online tools instead give "/wI"nA:w@r/".
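For what it's worth, here is a minimal sketch based on the WWDC23 example; the locale, identifier, version, and output path are placeholders, and the phoneme string follows the space-separated X-SAMPA style shown in the session rather than the slash-delimited output of the online tools.

import Speech

let data = SFCustomLanguageModelData(locale: Locale(identifier: "en_US"),
                                     identifier: "com.example.app",
                                     version: "1.0") {
    // Space-separated X-SAMPA phonemes, as in the WWDC23 demo.
    SFCustomLanguageModelData.CustomPronunciation(grapheme: "Winawer",
                                                  phonemes: ["w I n aU @r"])
}
try await data.export(to: URL(fileURLWithPath: "/tmp/CustomLMData.bin"))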
AVAudioRecorder leaves a completely useless chunk of file if a crash happens while recording.
I need to be able to recover. I'm thinking of streaming the recording to disk. I know that is possible with AVAudioEngine, but I also know that API is a headache that will lead to unexpected crashes unless you're lucky or you're the person who built it.
Does Apple have a recommended strategy for failsafe audio recordings? I'm thinking of chunking recordings using many instances of AVAudioRecorder and then stitching those chunks together.
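Not an official recommendation, but here is a minimal sketch of the stream-to-disk idea with AVAudioEngine: every captured buffer is appended to an AVAudioFile as it arrives, so a crash loses at most the tail of the recording (recovering the file may still require fixing up its header, depending on the container). outputURL is a placeholder.

import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)
// outputURL is a placeholder for wherever the file should be written.
let file = try AVAudioFile(forWriting: outputURL, settings: format.settings)

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    do {
        try file.write(from: buffer)   // append each buffer as soon as it arrives
    } catch {
        print("write failed:", error)
    }
}

engine.prepare()
try engine.start()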
Hi folks:
I've been creating .reality files out of Reality Composer for over a year. Some of the files are up to 500 MB and, prior to the last month, they opened fine as AR projected experiences on even basic iPhones and iPads. Now, I think since iOS 18, a 64 MB file will open as an AR experience, but files from roughly 350 MB up don't open. Files just opens a window displaying the name of the file, the fact that it's a .reality file, and the file size, but it no longer opens into either an AR or Object display of the .reality scene. Has there been a new file size limit put on .reality files that Files will open, or what else is going on here? I have a client who was about to launch an experience based on the .reality file I can no longer open. Please help.
I had no luck compiling the sample code provided by Apple with Xcode 16.0 beta 5.
ScreenCaptureKit demo (https://developer.apple.com/documentation/screencapturekit/capturing_screen_content_in_macos)
The part that is failing is:
streamOutput.capturedFrameHandler = { continuation.yield($0) }
And the error message is
Sending '$0' risks causing data races
Task-isolated '$0' is passed as a 'sending' parameter; Uses in callee may race with later task-isolated uses
Could someone explain why this is an issue and how to avoid it?
Thanks in advance!
I add 3 controls to the AVCaptureSession and then remove them all. The number of controls in the session is indeed 0, but the Camera Control button still shows the previous 3 controls. Going from 3 to 2 or from 3 to 1 updates correctly; 3 to 0 does not, while 0 to 3 works.
if (self.captureControl.zoom) {
if (self.zoomScaleControl) {
self.zoomScaleControl.enabled = false;
[_session removeControl:self.zoomScaleControl];
}
AVCaptureSlider *zoomSlider = [self.captureControl.zoom fetchCaptureSlider];
[zoomSlider setActionQueue:dispatch_get_main_queue() action:^(float zoomFactor) {
@strongify(self);
if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeZoomScale:)]) {
[self.dataOutputDelegate videoCaptureSession:self tryChangeZoomScale:zoomFactor];
}
}];
self.zoomScaleControl = zoomSlider;
} else {
if (self.zoomScaleControl) {
self.zoomScaleControl.enabled = false;
[_session removeControl:self.zoomScaleControl];
}
self.zoomScaleControl = nil;
}
if (self.captureControl.exposure) {
if (self.exposureBiasControl) {
self.exposureBiasControl.enabled = false;
[_session removeControl:self.exposureBiasControl];
}
AVCaptureSlider *exposureSlider = [self.captureControl.exposure fetchCaptureSlider];
[exposureSlider setActionQueue:dispatch_get_main_queue() action:^(float bias) {
@strongify(self);
if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeExposureBias:)]) {
[self.dataOutputDelegate videoCaptureSession:self tryChangeExposureBias:bias];
}
}];
self.exposureBiasControl = exposureSlider;
} else {
if (self.exposureBiasControl) {
self.exposureBiasControl.enabled = false;
[_session removeControl:self.exposureBiasControl];
}
self.exposureBiasControl = nil;
}
if (self.captureControl.len) {
if (self.lenControl) {
self.lenControl.enabled = false;
[_session removeControl:self.lenControl];
}
ORLenCaptureControlCustomModel *len = self.captureControl.len;
AVCaptureIndexPicker *picker = [len fetchCaptureSlider];
[picker setActionQueue:dispatch_get_main_queue() action:^(NSInteger selectedIndex) {
@strongify(self);
if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:didChangeLenIndex:datas:)]) {
[self.dataOutputDelegate videoCaptureSession:self didChangeLenIndex:selectedIndex datas:self.captureControl.len.indexDatas];
}
}];
self.lenControl = picker;
} else {
if (self.lenControl) {
self.lenControl.enabled = false;
[_session removeControl:self.lenControl];
}
self.lenControl = nil;
}
if ([_session canAddControl:self.zoomScaleControl]) {
[_session addControl:self.zoomScaleControl];
} else {
self.zoomScaleControl = nil;
}
if ([_session canAddControl:self.lenControl]) {
[_session addControl:self.lenControl];
} else {
self.lenControl = nil;
}
if ([_session canAddControl:self.exposureBiasControl]) {
[_session addControl:self.exposureBiasControl];
} else {
self.exposureBiasControl = nil;
}
if (_session.controlsDelegate == nil) {
[_session setControlsDelegate:self queue:GetCaptureControlQueue()];
}
We captured a spatial video with an iPhone 15 Pro.
When we try to export the video with AVAssetExportSession and AVAssetExportPresetMVHEVC960x960, it always goes to the failed state and exportSession.error?.localizedDescription yields an "Operation Stopped" error.
The implementation is straightforward; other HEVC files export fine. This problem occurs only with the MV-HEVC file.
func exportSpatialVideo(videoFilePath: String, outputUrl: URL) {
    let url = URL(fileURLWithPath: videoFilePath)
    let asset = AVAsset(url: url)
    print(asset.description)
    print(asset.tracks.first?.mediaType.rawValue ?? "no track")

    let preset = AVAssetExportPresetMVHEVC960x960
    let exportSession = AVAssetExportSession(asset: asset, presetName: preset)!
    exportSession.outputURL = outputUrl
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.outputFileType = AVFileType.mov
    exportSession.exportAsynchronously {
        switch exportSession.status {
        case .unknown:
            print("Unknown Error")
        case .waiting:
            print("waiting ...")
        case .exporting:
            print("exporting ...")
        case .completed:
            print("completed.")
        case .failed:
            print("failed. \(String(describing: exportSession.error?.localizedDescription))")
        case .cancelled:
            print("cancelled.")
        @unknown default:
            print("unknown status")
        }
    }
}
Is there any solution for this?
In iOS, when I use AVPlayerViewController to play back a slow motion video, it has a "ramp-up" stage at the start and a "ramp-down" stage at the end, and the video plays at the normal speed (i.e. not slow motion) during these stages.
My question is: are these non-slow-motion stages defined in the video file itself (e.g. some kind of metadata)? Or is it just a playback approach used by AVPlayerViewController?
Thanks!
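One way to investigate, if the video comes from the photo library: slow-motion videos are typically vended as an AVComposition whose track segments carry scaled time mappings, which is where ramps of this kind can live. A minimal sketch, with phAsset as a placeholder for the video's PHAsset.

import Photos
import AVFoundation

let options = PHVideoRequestOptions()
options.version = .current

PHImageManager.default().requestAVAsset(forVideo: phAsset, options: options) { avAsset, _, _ in
    guard let composition = avAsset as? AVComposition else {
        print("Plain asset - no composition-level ramp information")
        return
    }
    // Each segment maps a source time range onto a (possibly stretched) target range.
    for track in composition.tracks(withMediaType: .video) {
        for segment in track.segments {
            let mapping = segment.timeMapping
            print("source \(mapping.source.duration.seconds)s -> target \(mapping.target.duration.seconds)s")
        }
    }
}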