Hey, I’m building a camera app where I apply real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I scale down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but in the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur itself. Is there a more efficient way to do this?
The simplified chain is like this (a code sketch follows the list):
1. Scale down the viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
2. Scale up the depthMap CVPixelBuffer to match the viewFinder size (CIFilter.lanczosScaleTransform)
3. Create CIImages from both CVPixelBuffers
4. Apply the variable depth blur (CIFilter.maskedVariableBlur)
5. Scale up the final image to the Metal view size (CIFilter.lanczosScaleTransform)
6. Render the CIImage to an MTKView using CIRenderDestination
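For concreteness, here is a stripped-down sketch of that chain. The buffer names, scale factors, and blur radius are placeholders, not my real values:

import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

// Sketch of the chain above; scale factors and the blur radius are
// placeholder values.
func blurredFrame(viewFinderBuffer: CVPixelBuffer,
                  depthMapBuffer: CVPixelBuffer) -> CIImage? {
    let viewFinder = CIImage(cvPixelBuffer: viewFinderBuffer)
    let depthMap = CIImage(cvPixelBuffer: depthMapBuffer)

    // 1. Scale down the viewfinder image so the blur runs on fewer pixels.
    let downscale = CIFilter.lanczosScaleTransform()
    downscale.inputImage = viewFinder
    downscale.scale = 0.5

    // 2. Scale up the depth map to match the downscaled viewfinder.
    let depthScale = CIFilter.lanczosScaleTransform()
    depthScale.inputImage = depthMap
    depthScale.scale = 2.0

    // 3-4. Apply the variable blur, driven by the depth mask.
    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = downscale.outputImage
    blur.mask = depthScale.outputImage
    blur.radius = 10

    // 5. Scale the result back up to the Metal view size.
    let upscale = CIFilter.lanczosScaleTransform()
    upscale.inputImage = blur.outputImage
    upscale.scale = 2.0
    return upscale.outputImage
}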
From some research, I wonder if scaling the CVPixelBuffer using the Accelerate framework would be faster? Also, instead of scaling the final image, perhaps I could offload that step to the Metal view?
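For reference, this is roughly the Accelerate variant I had in mind — just a sketch, assuming 32-bit-per-pixel buffers (e.g. BGRA) and a pre-allocated destination CVPixelBuffer. As I understand it, vImage runs on the CPU, so it would trade GPU time for CPU time rather than being free:

import Accelerate
import CoreVideo

// Sketch: scale one 4-channel 8-bit CVPixelBuffer into another using
// vImage. Both buffers are assumed to be pre-allocated.
func scale(_ source: CVPixelBuffer, into destination: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    defer {
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
        CVPixelBufferUnlockBaseAddress(destination, [])
    }

    var src = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(source),
        height: vImagePixelCount(CVPixelBufferGetHeight(source)),
        width: vImagePixelCount(CVPixelBufferGetWidth(source)),
        rowBytes: CVPixelBufferGetBytesPerRow(source))
    var dst = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(destination),
        height: vImagePixelCount(CVPixelBufferGetHeight(destination)),
        width: vImagePixelCount(CVPixelBufferGetWidth(destination)),
        rowBytes: CVPixelBufferGetBytesPerRow(destination))

    // Scaling treats all four channels alike, so ARGB8888 also covers BGRA.
    let error = vImageScale_ARGB8888(&src, &dst, nil,
                                     vImage_Flags(kvImageHighQualityResampling))
    assert(error == kvImageNoError)
}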
Any pointers greatly appreciated!
Dear All,
Since installing the iOS 18 public beta, I can't send music from my iPhone to my old AirPort Express Gen 1 (unable to connect). Is this a general problem?
Thanks for your feedback,
Patrick
On the iOS 18 public beta, users are reporting degraded photo quality for photos taken in the Camera app, along with an error message about failing to produce a high-resolution image.
I am looking for a way to know how much of the text remains (i.e., a progress bar) when synthesizer.speak is called. I looked at this but it does not seem to provide any progress. Is there any way to get the progress?
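For what it's worth, the closest thing I've found so far is AVSpeechSynthesizerDelegate's willSpeakRangeOfSpeechString callback, which reports the character range about to be spoken. A sketch of turning that into an approximate fraction (character-based, so only a rough proxy for time):

import AVFoundation

// Sketch: approximate progress from the character range the synthesizer
// is about to speak.
final class ProgressDelegate: NSObject, AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        let total = (utterance.speechString as NSString).length
        guard total > 0 else { return }
        let fraction = Double(characterRange.location + characterRange.length) / Double(total)
        print("Approximate progress: \(Int(fraction * 100))%")
    }
}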
I have an HDR10+ encoded video that plays back on the Apple Vision Pro if loaded as a .mov, but when that video is encoded using the latest (1.23b) Apple HLS tools to generate an fMP4, the resulting m3u8 cannot be played back on the Apple Vision Pro; I only get a "Cannot Open" error.
To generate the m3u8, I'm just calling mediafilesegmenter (with -iso-fragmented) and then variantplaylistcreator. This completes with no errors, and the m3u8 will play back on the Mac using VLC but not on the Apple Vision Pro.
The relevant part of the m3u8 is:
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=40022507,BANDWIDTH=48883974,VIDEO-RANGE=PQ,CODECS="ec-3,hvc1.1.60000000.L180.B0",RESOLUTION=4096x4096,FRAME-RATE=24.000,CLOSED-CAPTIONS=NONE,AUDIO="audio1",REQ-VIDEO-LAYOUT="CH-STEREO"
{{url}}
Has anyone been able to use the HLS tools to generate fMP4s of MV-HEVC videos with HDR10?
I have this code:
import AVFoundation

class SpeechSynthesizerDelegate: NSObject, AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("Speech finished.")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {
        print("Speech canceled.")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didStart utterance: AVSpeechUtterance) {
        print("Speech started.")
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didPause utterance: AVSpeechUtterance) {
        print("Speech paused.")
    }
    ...
that I try to use like this:
let synthesizer = AVSpeechSynthesizer()
let delegate = SpeechSynthesizerDelegate()
synthesizer.delegate = delegate
but when I call
synthesizer.speak(utterance)
the delegate methods are not being called. I am running this on macOS Ventura. How can I fix this?
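One thing I'm aware might matter here, though I haven't confirmed it's the cause: AVSpeechSynthesizer only keeps a weak reference to its delegate, so the delegate object must be kept alive by a strong reference for as long as callbacks are expected. A sketch of that arrangement:

import AVFoundation

// Sketch: hold the delegate strongly for the synthesizer's lifetime,
// since AVSpeechSynthesizer's delegate property is weak.
final class SpeechController {
    let synthesizer = AVSpeechSynthesizer()
    let speechDelegate = SpeechSynthesizerDelegate() // strong reference lives here

    init() {
        synthesizer.delegate = speechDelegate
    }

    func speak(_ text: String) {
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
}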
I am running the code sample here https://developer.apple.com/documentation/avfoundation/speech_synthesis/ in a REPL on macOS Ventura:
import AVFoundation
// Create an utterance.
let utterance = AVSpeechUtterance(string: "The quick brown fox jumped over the lazy dog.")
// Configure the utterance.
utterance.rate = 0.57
utterance.pitchMultiplier = 0.8
utterance.postUtteranceDelay = 0.2
utterance.volume = 0.8
// Retrieve the British English voice.
let voice = AVSpeechSynthesisVoice(language: "en-GB")
// Assign the voice to the utterance.
utterance.voice = voice
// Create a speech synthesizer.
let synthesizer = AVSpeechSynthesizer()
// Tell the synthesizer to speak the utterance.
synthesizer.speak(utterance)
It runs without errors, but I don't hear any sound, and the call to synthesizer.speak returns immediately. How can I fix this? Note that I am running in a REPL, so the synthesizer is not going out of scope and being deallocated.
I have a user who is reporting an error and has been kind enough to share screen recordings to help diagnose it. I am not experiencing this error, nor am I able to replicate it on other devices I've tried, so I'm stuck trying to fix it. His device and the other devices tested were all running iOS 17.5.1. Any details on the cause of this error, or potential workarounds I could use to resolve it, would be greatly appreciated.
try await ApplicationMusicPlayer.shared.play()
throws:
The operation couldn't be completed (MPMusicPlayerControllerErrorDomain error 6.)
MusicAuthorization.currentStatus is .authorized
ApplicationMusicPlayer.shared.isPreparedToPlay is false
ApplicationMusicPlayer.shared.queue.currentEntry is nil (I've noticed this to be the case even when I am able to successfully play as well)
Queue was loaded using ApplicationMusicPlayer.shared.queue = [album] but I also tried ApplicationMusicPlayer.shared.queue = ApplicationMusicPlayer.Queue(album:startingAt:) and it made no difference. album.playParameters are correct. He experiences the error when attempting to play any album.
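For reference, a minimal sketch of the flow that fails for him, assuming album is a MusicKit Album fetched earlier:

import MusicKit

// Sketch of the reported flow; `album` is assumed to be a MusicKit Album
// fetched elsewhere (e.g. via a catalog or library request).
func playAlbum(_ album: Album) async throws {
    let player = ApplicationMusicPlayer.shared
    player.queue = [album]
    try await player.prepareToPlay()
    try await player.play()
}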
Any and all help is truly appreciated. The Feedback Assistant report I filed has gone unanswered.
Hi,
I'm developing an iOS app that accepts an audio signal as input, with the goal of analyzing the signal.
For my experiments I purchased a cheap ADC/DAC produced by Sabrent.
It works well, but its sampling rate is 44.1 kHz and I need something higher (at least 96 kHz).
I've been looking around, but I mostly find DACs meant for connecting headphones.
Can any of you suggest an ADC/DAC, preferably not too expensive, with a sampling rate of at least 96 kHz that works with iPhones?
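As an aside, here is the sketch I plan to use to verify what the hardware actually delivers once connected — setPreferredSampleRate is only a request, so the effective rate has to be read back after activation:

import AVFoundation

// Sketch: request 96 kHz and read back what the hardware actually
// provides. setPreferredSampleRate is a request, not a guarantee.
func configureCaptureSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record, mode: .measurement, options: [])
    try session.setPreferredSampleRate(96_000)
    try session.setActive(true)
    print("Hardware sample rate: \(session.sampleRate) Hz")
}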
When I use the exportPresetsCompatibleWithAsset interface on iOS 17, everything works fine. However, after I upgraded my iPhone XR to iOS 18, calling exportPresetsCompatibleWithAsset to transcode 4K assets returns success, but when I try to save the result to the album, I get error 3302.
Is it because this interface was already deprecated in iOS 16?
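For context, this is the call I mean, sketched with asset standing in for the 4K asset being transcoded:

import AVFoundation

// Sketch: list export presets compatible with a given asset.
// `asset` is a placeholder for the 4K asset mentioned above.
func logCompatiblePresets(for asset: AVAsset) {
    let presets = AVAssetExportSession.exportPresets(compatibleWith: asset)
    print("Compatible presets: \(presets)")
}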
MPMusicPlayerController's nowPlayingItem no longer seems to be able to change a song. The code used to work, but it appears to be broken on iOS 16, 17, and now the iOS 18 beta.
When newSong is triggered, the song restarts but does not change. Instead I get the following error: Failed to set now playing item error=<MPMusicPlayerControllerErrorDomain.5 "Unable to play item <MPConcreteMediaItem: 0x9e9f0ef70> 206357861099970620" {}>.
The documentation seems to indicate I’m doing things correctly.
import MediaPlayer

class MusicPlayer {
    var songTwo: MPMediaItem?
    let player = MPMusicPlayerController.applicationMusicPlayer

    func start() async {
        await MPMediaLibrary.requestAuthorization()
        let myPlaylistsQuery = MPMediaQuery.playlists()
        let playlists = myPlaylistsQuery.collections!.filter { $0.items.count > 2 }
        let playlist = playlists.first!
        let songOne = playlist.items.first!
        songTwo = playlist.items[1]
        player.setQueue(with: playlist)
        play(songOne)
    }

    func newSong() {
        guard let songTwo else { return }
        play(songTwo)
    }

    private func play(_ song: MPMediaItem) {
        player.stop()
        player.nowPlayingItem = song
        player.prepareToPlay()
        player.play()
    }
}
Hi,
For our application's use case, we need to remove autofocus completely.
We don't need autofocus at all. Is there any way to remove or disable autofocus completely in an iOS application?
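To illustrate what we have been attempting, a sketch along the lines of locking the focus mode — assuming the relevant AVCaptureDevice is already configured:

import AVFoundation

// Sketch: stop continuous refocusing by locking the focus mode.
// Support must be checked per device.
func disableAutofocus(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isFocusModeSupported(.locked) {
        device.focusMode = .locked
    }
}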
Are there any plans to give developers access to the iPhone 15 series' 24 MP photo capture? I wonder whether third-party apps can support it, or only the built-in Camera app.
Music app stops playing when switching to the background
In apps that play music or audio files, playback stops when the user moves to the Home Screen or switches to another app while our app is running.
Our app does not contain any code that stops playback when it moves to the background.
We are guessing that some users experience this and others do not.
We usually guide users to reboot their devices and try again.
How can this phenomenon be addressed in code?
Or is this a bug or error in the OS?
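For completeness, a sketch of the usual prerequisites for background playback as I understand them — the .playback session category plus the "audio" entry under UIBackgroundModes in Info.plist (the plist entry is assumed, not shown here):

import AVFoundation

// Sketch: the .playback category keeps audio running in the background,
// provided Info.plist also declares "audio" under UIBackgroundModes.
func configureBackgroundPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
}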
I'm trying to set a specific start time for the song using ApplicationMusicPlayer.shared.playbackTime, but it is not working:
musicPlayer.playbackTime = 10
try await musicPlayer.prepareToPlay()
try await musicPlayer.play()
Good morning,
I'm trying to use MusicKit functionality to fetch the last played songs and put them into a local DB, to be played back later. Following the guide on developer.apple.com, I created the required AppServices integration.
Below is a minimal working version of what I'm doing:
func requestMusicAuthorization() async {
    let status = await MusicAuthorization.request()
    switch status {
    case .authorized:
        isAuthorizedForMusicKit = true
        error = nil
    case .restricted:
        error = NSError(domain: "Music access is restricted", code: -10)
    case .notDetermined:
        break
    case .denied:
        error = NSError(domain: "Music access is denied", code: -10)
    @unknown default:
        break
    }
}
On the SwiftUI ContentView there's something like this:
.onAppear {
    Task {
        await requestMusicAuthorization()
        if MusicManager.shared.isAuthorizedForMusicKit {
            do {
                let request = MusicRecentlyPlayedRequest<Song>()
                let response = try await request.response()
                let songs: [Song] = response.items.map { $0 }
                // do some CloudKit handling with songs...
                print("Recent songs: \(songs)")
            } catch {
                NSLog(error.localizedDescription)
            }
        }
    }
}
Everything seems to work fine, but my console log is full of garbage like this:
MSVEntitlementUtilities - Process MyMusicApp PID[33633] - Group: (null) - Entitlement: com.apple.accounts.appleaccount.fullaccess - Entitled: NO - Error: (null)
Attempted to register account monitor for types client is not authorized to access: {(
"com.apple.account.iTunesStore"
)}
Is there something I'm missing? Should I ignore that and go forward with my implementation? Any help is really appreciated.
I'm looking for a sample code project on integrating Spatial Audio into my app, Tunda Island, a music-focused friend-making and dating app. I have gone as far as purchasing the book "Exploring MusicKit" by Rudrank Riyam, but to no avail.
Hello Apple Community,
I am developing an iOS app and would like to add a feature that allows users to play and organize Audible.com files within the app. Does Audible or the App Store provide any API or SDK for third-party apps to access and manage Audible content? If so, could you please provide some guidance on how to integrate it into my app?
Thank you for your assistance!
Best regards,
Yes it labs
I am creating a locked camera capture extension that lets you take a video with an overlay image on top of it. I'm using AVMutableComposition to achieve that. It works perfectly in my main app, but when initializing AVMutableComposition in the locked camera extension, it always returns nil.
Is this expected?
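For reference, a sketch of the initialization in question. (In Swift, AVMutableComposition() itself is non-optional; it's addMutableTrack(withMediaType:preferredTrackID:) that returns an optional, which may be where the nil appears.)

import AVFoundation

// Sketch: AVMutableComposition() is non-optional in Swift;
// addMutableTrack(withMediaType:preferredTrackID:) returns an optional.
let composition = AVMutableComposition()
let videoTrack = composition.addMutableTrack(
    withMediaType: .video,
    preferredTrackID: kCMPersistentTrackID_Invalid)
print("Video track created: \(videoTrack != nil)")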
I run a tracking company and have my own tracking platform. I'm looking for a solution that uses a tag device for animals, like AirTags, but running on my platform.
Is there a way to allow my platform to interface with the Find My network to get the location data of my tags?