Hi there,
I want to set the iPhone camera to "S mode" (shutter priority, in camera terminology): a semi-automatic exposure mode in which the shutter speed is fixed or set manually.
However, setting the shutter speed manually disables auto exposure.
So, is there a way to keep auto exposure on while restricting the shutter speed?
Also, I would like to keep a low frame rate, e.g. 30 fps. Would I be able to set the shutter speed independently of the frame rate?
Here's the code for setting up the camera
Best,
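For reference, the closest AVFoundation gets to shutter priority (as far as I know) is to leave exposureMode in continuous auto exposure and cap the shutter range the algorithm may use via activeMaxExposureDuration; the frame rate is configured independently through the frame-duration properties. A minimal sketch, where the 1/60 s shutter cap and 30 fps are illustrative values, not recommendations:

```swift
import AVFoundation

// Sketch: cap the auto-exposure algorithm's shutter at 1/60 s while
// keeping a fixed 30 fps. Shutter cap and frame rate are assumptions.
func configureShutterPriority(device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Keep full auto exposure: ISO (and shutter, up to the cap) stay automatic.
    device.exposureMode = .continuousAutoExposure

    // Restrict the longest shutter duration auto exposure may choose.
    device.activeMaxExposureDuration = CMTime(value: 1, timescale: 60)

    // Frame rate is independent of the shutter: pin it at 30 fps.
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)
}
```

Note that this only caps the shutter rather than pinning it exactly; pinning it exactly requires setExposureModeCustom(duration:iso:), which, as noted above, turns off automatic ISO.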
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
I'm having a crash in an app that plays videos when the user activates closed captions.
I was able to reproduce the issue in an empty project. The crash happens when the AVPlayerLayer is used to instantiate an AVPictureInPictureController.
This is the example project where I reproduced the crash:
struct ContentView: View {
    var body: some View {
        VStack {
            VideoPlaylistView()
        }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .background(Color.black.ignoresSafeArea())
    }
}
class VideoPlaylistViewModel: ObservableObject {
    // Test with other videos
    var player: AVPlayer? = AVPlayer(url: URL(string:"https://d2ufudlfb4rsg4.cloudfront.net/newsnation/WIpkLz23h/adaptive/WIpkLz23h_master.m3u8")!)
}
struct VideoPlaylistView: View {
    @StateObject var viewModel = VideoPlaylistViewModel()
    var body: some View {
        ScrollView {
            VideoCellView(player: viewModel.player)
                .onAppear {
                    viewModel.player?.play()
                }
        }
        .scrollTargetBehavior(.paging)
        .ignoresSafeArea()
    }
}
struct VideoCellView: View {
    let player: AVPlayer?
    @State var isCCEnabled: Bool = false
    var body: some View {
        ZStack {
            PlayerView(player: player)
                .accessibilityIdentifier("Player View")
        }
        .containerRelativeFrame([.horizontal, .vertical])
        .overlay(alignment: .bottom) {
            Button {
                player?.currentItem?.asset.loadMediaSelectionGroup(for: .legible) { group,error in
                    if let group {
                        let option = !isCCEnabled ? group.options.first : nil
                        player?.currentItem?.select(option, in: group)
                        isCCEnabled.toggle()
                    }
                }
            } label: {
                Text("Close Captions")
                    .font(.subheadline)
                    .foregroundStyle(isCCEnabled ? .red : .primary)
                    .buttonStyle(.bordered)
                    .padding(8)
                    .background(Color.blue.opacity(0.75))
            }
            .padding(.bottom, 48)
            .accessibilityIdentifier("Button Close Captions")
        }
    }
}
import Foundation
import UIKit
import SwiftUI
import AVFoundation
import AVKit
struct PlayerView: UIViewRepresentable {
    let player: AVPlayer?
    func updateUIView(_ uiView: UIView, context: UIViewRepresentableContext<PlayerView>) {
    }
    func makeUIView(context: Context) -> UIView {
        let view = PlayerUIView()
        view.playerLayer.player = player
        view.layer.addSublayer(view.playerLayer)
        view.layer.backgroundColor = UIColor.red.cgColor
        
        view.pipController = AVPictureInPictureController(playerLayer: view.playerLayer)
        view.pipController?.requiresLinearPlayback = true
        view.pipController?.canStartPictureInPictureAutomaticallyFromInline = true
        view.pipController?.delegate = view
        return view
    }
}
class PlayerUIView: UIView, AVPictureInPictureControllerDelegate {
    let playerLayer = AVPlayerLayer()
    var pipController: AVPictureInPictureController?
    override init(frame: CGRect) {
        super.init(frame: frame)
    }
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer.frame = bounds
        playerLayer.backgroundColor = UIColor.green.cgColor
    }
    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: any Error) {
        print("Error starting Picture in Picture: \(error.localizedDescription)")
    }
}
class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -> Bool {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playback, mode: .moviePlayback)
            try audioSession.setActive(true)
        } catch {
            print("ERR: \(error.localizedDescription)")
        }
        return true
    }
}
UI test that makes the app crash:
final class VideoPlaylistSampleUITests: XCTestCase {
    func testCrashiOS26ToggleCloseCaptions() throws {
        let app = XCUIApplication()
        app.launch()
        let videoPlayer = app.otherElements["Player View"]
        XCTAssertTrue(videoPlayer.waitForExistence(timeout: 30))
        let closeCaptionButton = app.buttons["Button Close Captions"]
        for _ in 0..<2000 {
            closeCaptionButton.tap()
        }
    }
}
                    
                  
                
                    
                      Hi,
I have an app that displays tens of short (<1 MB) MP4 videos, stored on a remote server, in a vertical UICollectionView with horizontally scrollable sections.
I'm caching all MP4 files on disk after downloading, and I also have an in-memory cache that holds a limited number (around 30) of players. The players I'm using are simple views that wrap an AVPlayerLayer and its AVPlayerItem, along with a few additional UI components.
Scrolling performance was good before iOS 26, but with its release I noticed significant stuttering during scrolling while creating players with a file URL. It happens even if I use the same disk-cached video file for every cell as a test.
I also started getting this kind of log messages after the players are deinitialized:
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1107
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
There's also another log message that I see occasionally, but I don't know what triggers it.
<< FigXPC >> signalled err=-16152 at <>:1683
Is there anyone else that experienced this kind of problem with the latest release?
Also, I'm wondering what the best way to resolve the issue is. I could increase the size of the in-memory cache to something large like 100, but I'm not sure that's an acceptable solution because:
1. There would be 100 player instances in memory at all times.
2. There would still be stuttering during the initial load of the videos from the web.
Any help is appreciated!
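One pattern sometimes used for feeds like this (my suggestion, not something established in this thread) is to keep a small fixed pool of AVPlayer instances and swap AVPlayerItems with replaceCurrentItem(with:), so player construction never happens mid-scroll. A sketch, where PlayerPool and its pool size are hypothetical names:

```swift
import AVFoundation

// Sketch: a fixed pool of reusable players; the pool size is an assumption.
final class PlayerPool {
    private var available: [AVPlayer]

    init(poolSize: Int = 6) {
        available = (0..<poolSize).map { _ in AVPlayer() }
    }

    // Hand out an idle player loaded with the given (cached) file URL.
    func checkout(url: URL) -> AVPlayer? {
        guard let player = available.popLast() else { return nil }
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        return player
    }

    // Return a player when its cell goes offscreen.
    func checkin(_ player: AVPlayer) {
        player.pause()
        player.replaceCurrentItem(with: nil)
        available.append(player)
    }
}
```

Cells would check a player out in willDisplay and check it back in from didEndDisplaying, so only the comparatively cheap AVPlayerItem is created per cell.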
                    
                  
                
                    
I tested the accuracy of the depth map on iPhone 12, 13, 14, 15, and 16, and found that the depth-map variance on models newer than the iPhone 12 is significantly greater than on the iPhone 12 itself.
Enabling depth filtering causes the depth data to be affected by the previous frame, adding unnecessary noise, especially while the phone is moving.
This is unfriendly to high-precision reconstruction. I tried adding depth-map smoothing in post-processing to compensate for the large deviation, but the results are still poor.
Is there any depth map smoothing solutions already announced by Apple?
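For anyone rolling their own post-processing: a per-pixel exponential moving average is a cheap temporal filter whose history weight is explicit, so motion bleed can be traded against noise. A sketch over depth frames stored as flat row-major arrays (the layout and the alpha value are assumptions):

```swift
// Sketch: temporal exponential moving average over depth frames stored as
// flat row-major Float arrays. `alpha` weights the new frame; a higher
// alpha means less history and less motion bleed. The default is a guess.
struct DepthSmoother {
    private var history: [Float]?
    let alpha: Float

    init(alpha: Float = 0.6) {
        self.alpha = alpha
    }

    mutating func smooth(_ frame: [Float]) -> [Float] {
        // First frame (or resolution change): nothing to blend with yet.
        guard let prev = history, prev.count == frame.count else {
            history = frame
            return frame
        }
        var out = [Float](repeating: 0, count: frame.count)
        for i in frame.indices {
            out[i] = alpha * frame[i] + (1 - alpha) * prev[i]
        }
        history = out
        return out
    }
}
```

In a real pipeline you would also gate the blend per pixel on the depth confidence map, so low-confidence pixels do not pollute the history.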
                    
                  
                
                    
                      What options do I have if I don't want to use Blackmagic's Camera ProDock as the external Sync Hardware, but instead I want to create my own USB-C hardware accessory which would show up as an AVExternalSyncDevice on the iPhone 17 Pro?
Which protocol does my USB-C device have to implement to show up as an eligible clock device in AVExternalSyncDevice.DiscoverySession?
                    
                  
                
                    
On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in Xcode using C99, and when I run it, I get audio for a split second, and then nothing, indefinitely.
Any ideas what could be going wrong?
Here's a minimum code example to demonstrate:
#include <AudioToolbox/AudioToolbox.h>
#include <assert.h>  // assert
#include <stdbool.h> // bool
#include <stdint.h>
#include <unistd.h>  // sleep
#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128
// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(float))
void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame)
    {
        for (uint32_t channel = 0; channel < channelCount; ++channel)
        {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    
    isInverted = !isInverted;
}
AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };
void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}
int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);
    
    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;
    
    coreAudioQueue = NULL;
    
    OSStatus result;
    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    
    if (result != noErr)
    {
        assert(!"AudioQueueNewOutput failed!");
        abort();
    }
    
    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr)
        {
            assert(!"AudioQueueAllocateBuffer failed!");
            abort();
        }
    }
    
    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }
    AudioQueueStart(coreAudioQueue, NULL);
    sleep(10); // some time to hear the audio
    
    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    
    return 0;
}
                    
                  
                
                    
I donate an INPlayMediaIntent to the system (the donation succeeds), but it does not show up in Control Center.
My code is as follows:
let mediaItems = mediaItems.map { $0.inMediaItem }
let intent = if #available(iOS 13.0, *) {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true,
                      playbackQueueLocation: .now,
                      playbackSpeed: nil,
                      mediaSearch: nil)
} else {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true)
}
intent.suggestedInvocationPhrase = "播放音乐" // "Play music"
let interaction = INInteraction(intent: intent, response: nil)
interaction.donate { error in
    if let error = error {
        print("Intent donation failed: \(error.localizedDescription)")
    } else {
        print("Intent donation succeeded ✅")
    }
}
                    
                  
                
                    
                      I'm writing some camera functionality that uses AVCaptureVideoDataOutput.
I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput.
My question is then, if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread? For example, does [AVCaptureSession stopRunning] imply a blocking call until all pending frame-callbacks are done?
I have a more practical example below, showing how the background thread accesses something owned by the foreground thread, but I wonder when/how it's safe to clean up that resource.
I have a setup similar to the following:
// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("qt_avf_camera_queue", nullptr);
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...
// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...; 
MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];
// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];
AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate
                                          queue:queue];
[captureSession addOutput:captureVideoDataOutput];
[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need manual barrier?
And then in MySampleBufferDelegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
               fromConnection:(AVCaptureConnection *)connection
{
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
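On the flush question: I don't believe stopRunning is documented to wait for in-flight delegate callbacks, but since the delegate queue is serial, synchronously dispatching an empty block after stopRunning acts as a barrier: every callback enqueued before it has finished by the time the sync call returns. A Swift sketch of the idea:

```swift
import Dispatch

// Sketch: drain a serial delegate queue before tearing down shared state.
// The empty sync block cannot start until every block enqueued before it
// (i.e. any pending didOutputSampleBuffer callbacks) has completed.
func drain(_ delegateQueue: DispatchQueue) {
    delegateQueue.sync { }
}
```

After [captureSession stopRunning] followed by draining the queue, freeing frameMetaData should be safe with respect to callbacks already enqueued; whether stopRunning itself prevents any further enqueues is the part I would still want confirmed.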
                    
                  
                
                    
                      I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume.
However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer.
What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations.
Disabling and re-enabling the audio session is essential to how my application functions.
I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
                    
                  
                
                    
                      Hello,
Does anyone have a recipe on how to raycast VNFaceLandmarkRegion2D points obtained from a frame's capturedImage?
More specifically, how to construct the "from" parameter of the frame's raycastQuery from a VNFaceLandmarkRegion2D point?
Do the points need to be flipped vertically? Is there any other transformation that needs to be performed on the points prior to passing them to raycastQuery?
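For context: Vision reports normalized coordinates with a lower-left origin, while the capturedImage (and most UIKit-side APIs) use an upper-left origin, so a vertical flip is generally needed. A pure-math sketch of the conversion (whether the raycast query expects normalized or pixel coordinates is the part to verify against the docs):

```swift
import CoreGraphics

// Sketch: convert a Vision normalized point (origin at lower-left) to a
// normalized point with an upper-left origin, then optionally to pixels.
func flipVisionPoint(_ p: CGPoint) -> CGPoint {
    CGPoint(x: p.x, y: 1 - p.y)
}

func toPixels(_ normalized: CGPoint, width: CGFloat, height: CGFloat) -> CGPoint {
    CGPoint(x: normalized.x * width, y: normalized.y * height)
}
```

Also note that VNFaceLandmarkRegion2D's normalizedPoints are relative to the face's bounding box, not the whole image; pointsInImage(imageSize:) maps them into image coordinates for you, though the result still uses Vision's lower-left origin.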
                    
                  
                
                    
                      My Environment:
Device: Mac (Apple Silicon, arm64)
OS: macOS 15.6.1
Description:
I'm developing a music app and have encountered an issue where I cannot update the playbackState in MPNowPlayingInfoCenter after my app loses audio focus to another app. Even though my app correctly calls [MPNowPlayingInfoCenter defaultCenter].playbackState = .paused, the system's Now Playing UI (Control Center, Lock Screen, AirPods controls) does not reflect this change. The UI remains stuck until the app that currently holds audio focus also changes its playback state.
I've observed this same behavior in other third-party music apps from the App Store, which suggests it might be a system-level issue.
Steps to Reproduce:
Use the two most popular music apps in the Chinese App Store, NetEase Cloud Music and QQ Music (let's call them App A and App B):
Start playback in App A.
Start playback in App B. (App B now has audio focus, and App A is still playing).
Attempt to pause App A via the system's Control Center or its own UI.
Observed Behavior: App A's audio stream stops, but in the system's Now Playing controls, App A still appears to be playing. The progress bar continues to advance, and the pause button becomes unresponsive.
If you then pause App B, the Now Playing UI for App A immediately corrects itself and displays the proper "paused" state.
My Questions:
Is there a specific procedure required to update MPNowPlayingInfoCenter when an app is not the current "Now Playing" application?
Is this a known issue or expected behavior in macOS?
Are there any official workarounds or solutions to ensure the UI updates correctly?
                    
                  
                
                    
                      Hi everyone,
I noticed that Apple recently added a few new beta sample code projects related to video encoding:
Encoding video for low-latency conferencing
Encoding video for live streaming
While experimenting with H.264 encoding, I came across some questions regarding certain configurations:
When I enable kVTVideoEncoderSpecification_EnableLowLatencyRateControl, is it still possible to use kVTCompressionPropertyKey_VariableBitRate? In my tests, I get an error.
It also seems that kVTVideoEncoderSpecification_EnableLowLatencyRateControl cannot be used together with kVTCompressionPropertyKey_ConstantBitRate when encoding H264. Is that expected?
When using kVTCompressionPropertyKey_ConstantBitRate with kVTCompressionPropertyKey_MaxKeyFrameInterval set to 2, the encoder outputs only keyframes, and the frame size keeps increasing, which doesn’t seem like the intended behavior.
Regarding the following code from the sample:
let byteLimit = (Double(bitrate) / 8) * 1.5 as CFNumber
let secLimit = Double(1.0) as CFNumber
let limitsArray = [byteLimit, secLimit] as CFArray // at most byteLimit bytes per 1-second window
err = VTSessionSetProperty(session, key: kVTCompressionPropertyKey_DataRateLimits, value: limitsArray)
This DataRateLimits setting doesn’t seem to have any effect in my tests. Whether I set it or not, the values remain unchanged.
Since the documentation on developer.apple.com/documentation
doesn’t clearly explain these cases, I’d like to ask if anyone has insights or recommendations about the proper usage of these settings.
Thanks in advance!
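For what it's worth, my understanding is that the low-latency encoder path supports only kVTCompressionPropertyKey_AverageBitRate as its rate-control property, which would be consistent with the errors you see for VariableBitRate and ConstantBitRate. A hedged sketch of that combination (the dimensions and bitrate are placeholders, and this reflects my reading rather than documented guarantees):

```swift
import VideoToolbox

// Sketch: a low-latency H.264 session driven by AverageBitRate, which in
// my experience is the rate-control knob the low-latency path accepts.
func makeLowLatencySession(width: Int32, height: Int32) -> VTCompressionSession? {
    // Request the low-latency encoder via the encoder specification.
    let spec = [kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true] as CFDictionary
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: nil,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: spec,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session else { return nil }
    // Average bit rate instead of Variable/ConstantBitRate on this path.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate,
                         value: 2_000_000 as CFNumber)
    return session
}
```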
                    
                  
                
                    
I wanted to know when iOS 26 will be officially released.
                    
                  
                
Topic: Media Technologies
SubTopic: General
                    
Hello, I want to know whether MusicKit has any restrictions that would prevent a mobile app from manipulating audio with an EQ on tracks coming from Apple Music. I wouldn't modify the actual track structure/data, of course, just the audio output.
                    
                  
                
                    
I am trying to debug the AAX version of my plug-in (a MIDI effect) in Pro Tools, but I am getting the following error in the Mac console when attempting to load it:
dlsym cannot find symbol g_dwILResult in CFBundle etc..
I used Xcode 16.4 to build the plugin.
Has anybody come across the same or a similar message?
Best,
Achillefs
Axart Labs
                    
                  
                
                    
                      Hello,
I'm trying to determine the best/recommended AVAudioSession configuration (i.e category, mode, and options) for the following use-case.
Essentially, I'd like to switch between periods of playing an audio file and recognizing speech. The audio file is typically speech, and I don't intend for playback and speech recognition to occur simultaneously. I'd like the user to still be able to interact with Siri, and I'd like it to work with CarPlay, where navigation prompts can occur.
I assume the category to use is .playAndRecord, but I'm not sure whether it's better to set that once for the entire lifecycle, or to use .playback for audio file playback and switch to .playAndRecord for speech recognition. I'm also not sure of the best mode and options to set. Any suggestions would be appreciated.
Thanks.
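As a starting point only (an assumption, not an official recommendation): keeping a single .playAndRecord category for the whole lifecycle avoids the route changes and interruption churn that come with switching categories per phase. A minimal sketch:

```swift
import AVFoundation

// Sketch: one category for the whole lifecycle; the mode and options here
// are assumptions to be tuned for the app, not recommendations.
func configureSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playAndRecord,
        mode: .spokenAudio,            // speech-oriented processing
        options: [.allowBluetooth,     // headset / hands-free routes
                  .defaultToSpeaker])  // avoid receiver playback on iPhone
    try session.setActive(true)
}
```

With the default (non-mixable) behavior, Siri and navigation prompts interrupt your audio via the normal interruption notifications, which is usually the desired CarPlay behavior.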
                    
                  
                
                    
                      Hello,
I am trying to access the Apple Music Feed API, but I am receiving a 401 Unauthorized error whenever I try to access it.
I have tried using my own code to generate a JWT and call the API directly (that code can call the standard Apple Music API successfully).
> GET /v1/feed/song/latest HTTP/2
> Host: api.media.apple.com
> user-agent: insomnia/2023.5.8
> authorization: Bearer [REDACTED]
> accept: */*
< HTTP/2 401 
< content-type: application/json; charset=utf-8
< content-length: 0
< x-apple-jingle-correlation-key: AV5IOHBNM2UUJVOFQ4HZ2TGF6Q
< x-daiquiri-instance: daiquiri:10001:daiquiri-all-shared-ext-7bb7c9b9bb-r459v:7987:25RELEASE91:daiquiri-amp-kubernetes-shared-ext-ak8s-prod-pv4-amp-daiquiri-ingress-prod
and also the Apple provided Python example code, which gives me authentication errors too.
$ python3 ./apple_music_feed_example.py --key-id NMBH[...] --team-id 3TNZ[...] --secret-key-file-path "/Users/foxt/Documents/am-feed/NMBH[...].p8" --out-dir .
running....
INFO:__main__:Sending requests to https://api.media.apple.com
INFO:__main__:Getting the latest export for feed artist
Exception: Authentication Failed. Did you provide the correct team id, key id, and p8 file?
Does this API need to be enabled on my account separately from the main Apple Music API? The documentation reads as if anyone with an Apple Developer Program membership can use this API, and I did not see any information regarding other requirements.
                    
                  
                
                    
                      Because I want to control the grid size and number of HEIC images myself, I decided to perform HEVC encoding manually and then generate the HEIC image. Previously, I used VTCompressionSession to accomplish this task, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18 — in other words, it was able to generate correct HEVC encoding, and its CMFormatDescription should also have been correct, since I relied on it to generate the decoderConfig; otherwise, the final image would have decoding issues.
However, it no longer generates a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator; it only fails on real hardware. The abnormal result is that the image is completely black, although its dimensions are still correct.
After my troubleshooting, I suspect that the encoding behavior of VTCompressionSession has been modified on iOS 26, which causes the final hvc1 encoding I pass in to be incorrect.
I created a VTCompressionSession using the following configuration.
var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: Int32(frameSize.width),
    height: Int32(frameSize.height),
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)
let properties: [CFString: Any] = [
    kVTCompressionPropertyKey_AllowFrameReordering: false,
    kVTCompressionPropertyKey_AllowTemporalCompression: false,
    kVTCompressionPropertyKey_RealTime: false,
    kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
    kVTCompressionPropertyKey_ProfileLevel: profileLevel,
    kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) {
    VTCompressionSessionInvalidate(newSession)
}
Then use the following code to encode each Grid of the image.
let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        try check(status, VideoToolboxErrorDomain)
        if let sampleBuffer {
            let encodedImage = try self.encodedImage(from: sampleBuffer)
            // handle encodedImage
        }
    }
try check(status, VideoToolboxErrorDomain)
If I try to display this abnormal image in the app, my console outputs the following errors, so it can be inferred that the issue probably occurs during decoding.
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
It needs to be emphasized again that this code used to work fine in the past, and the issue only occurs on an iOS 26 physical device. I noticed that iOS 26 has introduced many new properties, but I’m not sure whether some of these new properties must be set in the new system, and there’s no information about this in the official documentation.
                    
                  
                
                    
I am working on an application to detect when an input audio device is being used. Basically, I want to know which application is using the microphone (built-in or external).
This app runs on macOS. For versions starting from Sonoma I can use this code:
int getAudioProcessPID(AudioObjectID process)
{
    pid_t pid = -1;
    if (@available(macOS 14.0, *)) {
        constexpr AudioObjectPropertyAddress prop {
            kAudioProcessPropertyPID,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMain
        };
        UInt32 dataSize = sizeof(pid);
        OSStatus error = AudioObjectGetPropertyData(process, &prop, 0, nullptr, &dataSize, &pid);
        if (error != noErr) {
            return -1;
        }
    } else {
        // Pre-Sonoma code goes here
    }
    return pid;
}
which works.
However, kAudioProcessPropertyPID was added in macOS SDK 14.0.
Does anyone know how to achieve the same functionality on previous versions?
                    
                  
                
                    
                      hi,
Is there an Audio Unit logo I can show on my website? I would love to show that my application is able to host Audio Unit plugins.
regards, Joël