Since the release of iOS 26, I have been getting new reports of people making home screen apps of website pages with camera access: pictures they take come out rotated 90 degrees from what they should be.
I have tested it myself and was able to reproduce the issue quite easily on iPhone 13, 15, and 16, both regular and Pro versions.
This affects all cameras when using them with navigator.mediaDevices.getUserMedia({...})...
I just recently updated to iOS 26 beta (23A5336a) to test an app I am developing.
I'm running an MLModel loaded from a .mlmodelc file.
On the current iOS version, 18.6.2, the model runs as expected with no issues.
However, on iOS 26 I now get an error when performing inference, where I pass a camera frame into the model.
Below is the error I see when I attempt to run an inference.
At the bottom it says "Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model". Does this indicate I need to convert my model or something? I don't understand, since it runs as normal on iOS 18.
Any help getting this to run again would be greatly appreciated.
Thank you,
processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3 :
[Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error}
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
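One diagnostic I plan to try (a sketch; it assumes the failure is specific to the Neural Engine path) is forcing the model onto CPU/GPU via MLModelConfiguration:

import CoreML

// Diagnostic sketch: if inference succeeds with the Neural Engine excluded,
// the iOS 26 failure is ANE-specific rather than a broken model.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU // bypass the ANE
let url = Bundle.main.url(forResource: "faceParsing", withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: url, configuration: config)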
Hello everyone,
I'm looking for a definitive clarification on how to completely disable all video stabilization, including the hardware OIS, using AVFoundation. The goal is to achieve a completely raw, unstabilized video feed, which is crucial when using external equipment like gimbals to avoid conflicting stabilization motions.
My research points to using the AVCaptureConnection property preferredVideoStabilizationMode and setting it to AVCaptureVideoStabilizationMode.off.
The documentation for the .off case states:
A mode that doesn’t stabilize video capture.
This description is slightly ambiguous. It's unclear whether this only affects software-level stabilization (EIS, EIS+OIS, etc.) or whether it guarantees the complete deactivation of the physical OIS module. For professional video applications, this is a critical distinction.
So, I'd like to ask the community:
Has anyone been able to definitively confirm that setting preferredVideoStabilizationMode to .off also disables the hardware OIS? Are there any known tests or documentation that prove this behavior?
Is there an alternative or more direct method to ensure the OIS module is physically inactive during video capture?
What is the community's best practice for ensuring absolutely no stabilization is applied to the video pipeline?
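For reference, here is the minimal configuration in question (a sketch; session setup is omitted):

import AVFoundation

// Sketch: request no stabilization on the video connection. Whether this
// also powers down the physical OIS module is exactly what is unclear.
let movieOutput = AVCaptureMovieFileOutput()
if let connection = movieOutput.connection(with: .video),
   connection.isVideoStabilizationSupported {
    connection.preferredVideoStabilizationMode = .off
}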
Any insights or shared experiences on this topic would be greatly appreciated.
Thank you!
Hi there,
I want to set the iPhone camera to "S mode", or shutter priority in camera terminology: a semi-automatic exposure mode where the shutter speed is fixed or set manually.
However, setting the shutter speed manually disables auto exposure.
So is there a way to keep auto exposure on while restricting the shutter speed?
Also, I would like to keep a low frame rate, e.g. 30 fps. Would I be able to set the shutter speed independently of the frame rate?
Here's a simplified sketch of the code for setting up the camera (the exact duration and ISO values are illustrative):
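import AVFoundation

// Sketch: setExposureModeCustom pins the shutter speed, but it pins ISO too;
// passing AVCaptureDevice.currentISO merely freezes ISO at its current value,
// so this is not true shutter priority.
let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
try device.lockForConfiguration()
let shutter = CMTime(value: 1, timescale: 120) // 1/120 s
device.setExposureModeCustom(duration: shutter,
                             iso: AVCaptureDevice.currentISO) { _ in }
device.unlockForConfiguration()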
Best,
I tested the accuracy of the depth map on iPhone 12, 13, 14, 15, and 16, and found that the depth-map variance on the models after the iPhone 12 is significantly greater than on the iPhone 12.
Enabling depth filtering causes the depth data to be affected by the previous frame, adding more unnecessary noise, especially when the phone is moving.
This is not friendly for high-precision reconstruction. I tried adding depth-map smoothing in post-processing to deal with the large depth deviations, but the performance is still poor.
Are there any depth-map smoothing solutions already announced by Apple?
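For context, this is the filtering switch I am referring to (a minimal sketch, assuming an AVCaptureDepthDataOutput pipeline):

import AVFoundation

// Sketch: temporal depth filtering is controlled per output; disabling it
// removes the frame-to-frame smoothing but exposes the raw per-frame noise.
let depthOutput = AVCaptureDepthDataOutput()
depthOutput.isFilteringEnabled = false // raw, unfiltered depth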
What options do I have if I don't want to use Blackmagic's Camera ProDock as the external Sync Hardware, but instead I want to create my own USB-C hardware accessory which would show up as an AVExternalSyncDevice on the iPhone 17 Pro?
Which protocol does my USB-C device have to implement to show up as an eligible clock device in AVExternalSyncDevice.DiscoverySession?
When changing a camera's exposure, AVFoundation provides a callback which offers the timestamp of the first frame captured with the new exposure duration: AVCaptureDevice.setExposureModeCustom(duration:, iso:, completionHandler:).
I want to get a similar callback when changing frame duration.
After setting AVCaptureDevice.activeVideoMinFrameDuration or AVCaptureDevice.activeVideoMaxFrameDuration to a new value, how can I compute the index or the timestamp of the first camera frame captured using the newly set frame duration?
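For now I'm using a timing heuristic in the sample buffer delegate; this is a sketch (names are illustrative), not a proper callback:

import AVFoundation

// Heuristic sketch: detect the first frame after a frame-duration change by
// watching the gap between successive presentation timestamps.
final class FrameDurationWatcher: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var lastPTS: CMTime?
    var targetDuration = CMTime(value: 1, timescale: 30) // the newly set duration

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        defer { lastPTS = pts }
        guard let last = lastPTS else { return }
        let delta = CMTimeSubtract(pts, last)
        // Tolerate a little jitter around the target duration.
        if abs(delta.seconds - targetDuration.seconds) < 0.001 {
            print("First frame at the new duration, pts = \(pts.seconds)")
        }
    }
}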
The camera/microphone usage light comes on whenever I switch apps. When I try to see which app is accessing the camera or microphone, the only thing it shows is "What it's accessing: Camera", and the app accessing the camera is the Camera app itself… shouldn't it show the process or app that is using the Camera app to access the camera?
I want to build an app for iOS using React Native, preferably Expo.
The app will be for recording user experiences with technology: the SLUDGE that people face while navigating through technology.
I want to have basic login and signup.
The main feature would be two recording modes.
The first records the screen and the front camera simultaneously.
The second records the back camera and the front camera simultaneously.
I can then patch the two outputs (the screen recording and the front-camera clip) together later in post-processing.
I want to know if this is possible, as I was told that React Native and Expo do not support it yet. If not, is there a library or another approach to make this app come alive? (See the sketch below for the native capability involved.)
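On the native side, my understanding is that simultaneous front-plus-back capture is gated by AVCaptureMultiCamSession, which a custom React Native module would have to bridge; a minimal feasibility check (sketch):

import AVFoundation

// Sketch: simultaneous front + back capture needs multi-cam support,
// which exists on recent iPhones but must be checked at runtime.
guard AVCaptureMultiCamSession.isMultiCamSupported else {
    fatalError("This device can't run two cameras simultaneously")
}
let session = AVCaptureMultiCamSession()
// ...add front and back AVCaptureDeviceInputs plus their outputs here...

Screen-plus-front-camera recording would likewise need a native bridge (ReplayKit for the screen, AVFoundation for the camera).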
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.
➡️ The problem:
My app plays continuous audio in both foreground and background states. If the user starts recording video using the iOS Camera app, the app’s audio — still playing in the background — gets captured in the video — obviously an unintended behavior.
Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins.
So far, I haven’t found a reliable way to detect when video recording starts if ‘Allow Audio Playback’ is ON.
➡️ What I’ve tried:
— AVAudioSession.interruptionNotification → doesn't fire (registration sketch below)
— devicesChangedEventStream → not triggered
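For reference, this is how I observe the interruption notification (a minimal sketch):

import AVFoundation

// Minimal sketch of my registration; the handler never fires while the
// Camera app records with "Allow Audio Playback" enabled.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
    print("Interruption", type == .began ? "began" : "ended")
}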
I don’t want to request mic permission (app doesn’t use mic). also, disabling the app from playing audio in the background isn’t an option as it is a crucial part of the user experience
➡️ What I need:
A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access — so I can stop audio and avoid unintentional overlap with the user’s recordings.
Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated.
Thanks.
Please can anyone advise where to find Slo-Mo video on iOS 26?
It was part of the options at the bottom on iOS 18 but now appears to have disappeared on iOS 26.
Many thanks
Hey,
Quick question. I noticed that Adobe's new app, Project Indigo, allows you to open the app using the Camera Control button. However, when your device is locked it just shows this screen:
Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app.
I know that this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to build the UI again. At least this way the user experience would be improved from what it is now, where users cannot even launch the app. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible.
Thanks, Alex
Access to Vision Pro cameras is required for a research project. The project is on mixed reality software development for healthcare applications in dentistry.
My back camera freezes after the first frame is displayed on the screen. The issue is present in all apps that use the back camera. The front camera has the same issue, except that it works in FaceTime and WhatsApp but not in the native Camera app.
AVCaptureSession's startRunning method is thread blocking and seems to be slow. What is this method doing behind the scenes?
For context: I'm working on Simulator Camera support and I have a 'fake' AVCaptureDevice that might be causing this. My hypothesis is that AVCaptureSession tries to connect to the device and waits for a notification to be posted back.
I'd love to find a way to let my fake device message AVCaptureSession that it's connected.
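For context, startRunning() is documented as a blocking call, which is why Apple's samples dispatch it onto a dedicated serial queue (sketch):

import AVFoundation

// startRunning() blocks its calling thread while the session connects to
// its inputs and outputs, so keep it off the main thread.
let session = AVCaptureSession()
let sessionQueue = DispatchQueue(label: "camera.session")

sessionQueue.async {
    session.startRunning() // blocks this queue, not the main thread
}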
So I've spent the last five years optimizing my video AI system so that it runs with less than 5% CPU while processing a 30 fps video feed on a MacBook Pro M2, and everything was great until Sonoma came out and I found myself consuming 40% CPU for the exact same workload.
So I fire up Instruments, and the "heaviest stack trace" (see screenshot) turns out to be Espresso doing some completely unasked-for and absolutely useless processing on my video frames. I turn off Reactions, but nothing helps: the CPU consumption stays at 40%.
"Reactions" is nothing but a useless toy to please some WWDC keynote fanboys, I don't want it anywhere near my app or my users, and I especially do not want to take the blame for it pissing away the user's CPU cycles and battery.
Now, how do I make it go away, forever?
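One avenue I haven't verified yet: reaction processing appears to be tied to the active capture format, so selecting a format that doesn't support reaction effects might sidestep it (untested sketch, macOS 14 API):

import AVFoundation

// Untested idea: pick an active format that doesn't support reaction effects,
// on the theory that the Espresso pass only runs for supporting formats.
// (A real app would also match resolution and frame rate requirements.)
if let device = AVCaptureDevice.default(for: .video),
   let plainFormat = device.formats.first(where: { !$0.reactionEffectsSupported }) {
    try? device.lockForConfiguration()
    device.activeFormat = plainFormat
    device.unlockForConfiguration()
}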
Best regards
Jacob
Will native UVC support come to the iPhone as well?
Using external cameras with the iPad is greatly beneficial, but for the iPhone, it could make it a production powerhouse!
So, have there been discussions about bringing UVC support to the iPhone as well? And if so, what were your conclusions?
Before you post "Camera doesn't work on the Simulator": that's no longer true. I've made a solution that makes the Simulator believe there's an actual hardware device connected, allowing users to stream the macOS camera to the iOS Simulator (for more info, see RocketSim's documentation: https://docs.rocketsim.app/features/hzQMSrSga7BGWvxdNVdwYs/simulator-camera-support/58tQ5jvevLNSnyUEA7VgAv)
Now, it works for VNDocumentCameraViewController, but when I try opening DataScannerViewController, I directly run into:
Failed to start scanning: The operation couldn’t be completed. (VisionKit.DataScannerViewController.ScanningUnavailable error 0.)
My question:
How does this view controller determine whether scanning is available?
Is there a certain capability the available AVCaptureDevices need to support, maybe?
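For context, these are the public gates I know of (sketch); I suspect isAvailable is where my virtual camera falls through:

import VisionKit

// isSupported reflects hardware/OS capability; isAvailable additionally
// reflects runtime state such as camera restrictions.
if DataScannerViewController.isSupported && DataScannerViewController.isAvailable {
    // safe to present the scanner
} else {
    // the branch the Simulator takes with my virtual camera today
}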
Any direction would be helpful for me to make this work for developers, helping them build apps faster!
I updated my iPhone 12 Pro Max to iOS 26 and now the back camera can't be used anymore.
Hi,
I’m building a PPG-based heart rate feature where the user places their finger over the rear telephoto camera. On iPhone 16 Pro Max, I'm explicitly selecting the telephoto lens like this:
videoDevice = AVCaptureDevice.default(.builtInTelephotoCamera, for: .video, position: .back)
And trying to lock it:
if #available(iOS 15.0, *),
   device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported {
    try? device.lockForConfiguration()
    device.setPrimaryConstituentDeviceSwitchingBehavior(.locked, restrictedSwitchingBehaviorConditions: [])
    device.unlockForConfiguration()
}
I also lock everything else to prevent dynamic changes:
try device.lockForConfiguration()
device.focusMode = .locked
device.exposureMode = .locked
device.whiteBalanceMode = .locked
device.videoZoomFactor = 1.0
device.automaticallyEnablesLowLightBoostWhenAvailable = false
device.automaticallyAdjustsVideoHDREnabled = false
device.unlockForConfiguration()
Despite this, the camera still switches to another lens, especially under different lighting, even though the user’s finger fully covers the lens.
Questions:
How can I completely prevent lens switching in this scenario?
Would using videoZoomFactor = 3.0 or 5.0 better enforce use of the telephoto lens?
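One alternative I'm considering (untested sketch): my understanding is that constituent-device switching can only be locked on a virtual device, so it may work better to drive the telephoto through the triple camera and zoom to its switch-over factor:

import AVFoundation

// Untested sketch: use the virtual triple camera, jump to the telephoto's
// switch-over zoom factor, then lock constituent switching so it stays there.
if let virtualDevice = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
    try? virtualDevice.lockForConfiguration()
    if let teleFactor = virtualDevice.virtualDeviceSwitchOverVideoZoomFactors.last {
        virtualDevice.videoZoomFactor = CGFloat(truncating: teleFactor)
    }
    virtualDevice.setPrimaryConstituentDeviceSwitchingBehavior(.locked,
        restrictedSwitchingBehaviorConditions: [])
    virtualDevice.unlockForConfiguration()
}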
Thanks!
Gal