Photos & Camera


Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.

Posts under Photos & Camera subtopic


WWDC25 Camera & Photos group lab summary (Part 2 of 3)
(Note: this is part 2 of a 3-part posting. See Part 1 or Part 3.)

At WWDC25 we launched a new type of Lab event for the developer community: Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos, which ran for one hour at 6 PM PST on Tuesday, June 10th, 2025.

Question 10: Can we directly integrate auto-capture triggers (e.g., when the image is steady or text is detected) using Vision and AVFoundation?
Yes. Apps can use an AVCaptureSession with a video data output (VDO) plus an AVCapturePhotoOutput, run Vision on the VDO buffers, and capture a photo when a certain scene or text is detected. Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops.

Question 11: What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports reading from and writing to camera devices, memory cards, and scanners, where supported. Check out the docs to see how to browse connected devices, folders, files, etc.

Question 12: Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable is okay, right? Thanks all for the great info!
Yes, this is totally fine. AppKit or UIKit views inside the appropriate SwiftUI representables should give equivalent performance.

Question 13: What's the "right" way to transition media in my photos app between HDR modes? When I'm in a one-up view, we use HDR, but in other contexts (like thumbnails) we don't want HDR. Is there a nice way to tone map?
There's a suite of new System Tone Mapper APIs in this year's OSes, spanning Core Image, ImageKit, Core Animation, and Core Graphics. For example, Core Image has the new CISystemToneMap filter, and Core Animation has layer.preferredDynamicRange = CADynamicRangeConstrainedHigh. Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange, which can go from high to constrained to standard. Tone mapping is provided by the system (CISystemToneMap being the controllable example). A sketch follows below.
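As an illustration of the per-context approach from Question 13, here is a minimal sketch using UIImageView's preferredImageDynamicRange from iOS 17. This is an assumption about how the idea maps to code, not the panel's own example:

import UIKit

// Thumbnail grid: clamp HDR content to standard range so the grid stays uniform.
let thumbnailView = UIImageView()
thumbnailView.preferredImageDynamicRange = .standard

// One-up view: allow the image's full dynamic range.
let detailView = UIImageView()
detailView.preferredImageDynamicRange = .high

// Middle ground: high dynamic range with constrained headroom.
detailView.preferredImageDynamicRange = .constrainedHigh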
Question 14: What is your recommendation for preprocessing and upscaling a depth map in order to render a realistic portrait mode image?
One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide.

Question 15: For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for a custom camera preview. For buffering for later processing, make sure you copy the VDO buffers so that frames aren't dropped from the output.

Question 16: Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don't know how to detect when the deferred captured photo is ready?
The CIFilter can be applied to the final image once it is ready; the photo will then have to be re-inserted into the photo library as an adjustment.

Question 17: Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Not exactly. Digital zoom crops and then upscales the image to the output dimensions, whereas cropping afterward yields a smaller output image.

Question 18: How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: put the most common controls up front, and make it easy for pros to drill down. Sensible defaults: choose defaults that work well for casual users, but allow those defaults to be modified by photography enthusiasts. A good philosophy is: keep the simple things easy, make the hard things possible.

Question 19: Recent iPhone models introduced a macro mode that automatically switches between lenses to account for differences in focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera or builtInDualWideCamera will automatically switch to macro when available.

Question 20: A couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that, making the "physical" camera effectively disappear so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user.

Question 21: Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes, they can: use CoreMLRequest and provide the model container. This has been supported for a while (iOS 18/macOS 15). Use smaller images for better performance. For more details, go to the Machine Learning & AI group lab on Thursday.

Question 22: What would you recommend for capture of the new immersive and spatial formats?
To capture spatial video, use AVCaptureMovieFileOutput's spatialVideoCaptureEnabled property. Not all device formats support spatial capture, so check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported. See the WWDC 2024 talk "Build compelling spatial photo and video experiences" for more details, and the sketch after this summary.

Question 23: You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes: regular SDR files as well as ISO HDR files. For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs. If you have any requests for improvements or new features related to JPEG-XL, please file a request using Feedback Assistant.

(Note: this is part 2 of a 3-part posting. See Part 1 or Part 3.)
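Circling back to Question 22, a minimal sketch of the spatial capture checks, assuming the iOS 18 Swift spellings isSpatialVideoCaptureSupported and isSpatialVideoCaptureEnabled for the properties named above:

import AVFoundation

let session = AVCaptureSession()
let movieOutput = AVCaptureMovieFileOutput()
// ... configure the session and add inputs/outputs first ...

if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back),
   device.activeFormat.isSpatialVideoCaptureSupported {
    // Opt in to spatial video; recordings then carry both stereo views.
    movieOutput.isSpatialVideoCaptureEnabled = true
}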
Replies: 0 · Boosts: 0 · Views: 140 · Jul ’25
iPhone 16 Camera Control and AVCaptureSlider – Is there a way to detect which slider is active?
I am following the Apple sample code and trying to add a manual focus lens position slider:

@available(iOS 18.0, *)
private func addCameraControls() {
    if !self.session.controls.isEmpty {
        for control in self.session.controls {
            self.session.removeControl(control)
        }
    }
    self.cameraControlFocusSlider = nil

    // Focus slider
    if self.videoDevice!.isLockingFocusWithCustomLensPositionSupported {
        self.cameraControlFocusSlider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        self.cameraControlFocusSlider!.setActionQueue(self.sessionQueue) { focusValue in
            // Do manual focus
        }
        if self.session.canAddControl(self.cameraControlFocusSlider!) {
            self.session.addControl(self.cameraControlFocusSlider!)
        }
    }
}

There are these AVCaptureSessionControlsDelegate methods:

final func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeActive")
}

final func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillEnterFullscreenAppearance")
}

final func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {
    print("sessionControlsWillExitFullscreenAppearance")
}

final func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {
    print("sessionControlsDidBecomeInactive")
}

When self.cameraControlFocusSlider is presented, I have to show the current value of the lens position. The lens position can change from autofocus and also from manual focus by the user via the app UI. Is there a way to see if self.cameraControlFocusSlider is active or being used? Please note that I will have more than one AVCaptureSlider in the final code.
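As far as I know, the AVCaptureSessionControlsDelegate callbacks are session-wide rather than per-control, so one hedged workaround is to record activity yourself inside each slider's action handler. A sketch, where lastFocusSliderEdit and focusSliderLikelyInUse are hypothetical names of my own:

import AVFoundation

final class CameraController: NSObject, AVCaptureSessionControlsDelegate {
    private var lastFocusSliderEdit: Date = .distantPast
    private var isSessionControlsActive = false

    // Stamp the time of the last user edit inside the slider's own handler.
    func makeFocusSlider(queue: DispatchQueue) -> AVCaptureSlider {
        let slider = AVCaptureSlider("Focus", symbolName: "dot.square", in: 0.0...1.0)
        slider.setActionQueue(queue) { [weak self] focusValue in
            self?.lastFocusSliderEdit = Date()
            // ... apply manual focus ...
        }
        return slider
    }

    // Treat the slider as "in use" if it fired recently while controls are active,
    // and suppress UI updates from autofocus during that window.
    var focusSliderLikelyInUse: Bool {
        isSessionControlsActive && Date().timeIntervalSince(lastFocusSliderEdit) < 1.0
    }

    func sessionControlsDidBecomeActive(_ session: AVCaptureSession) { isSessionControlsActive = true }
    func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) { isSessionControlsActive = false }
    func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {}
    func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {}
}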
Replies: 0 · Boosts: 0 · Views: 387 · Mar ’25
PHImageManager cannot resize images
I am using PHImageRequestOptions and PHImageManager to load images in my app. I use version = .original with resizeMode = .none, and version = .original with resizeMode = .exact. Both used to work well, but since iOS 18, version = .original with resizeMode = .exact doesn't work anymore. The images are loaded, but they are not shown (only the frames?). Does anyone know why? Thank you for reading.
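For reference, a minimal sketch of the combination described above; the target size is a placeholder of my own:

import Photos
import UIKit

func loadOriginal(asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.version = .original
    options.resizeMode = .exact          // the combination that reportedly broke in iOS 18
    options.isNetworkAccessAllowed = true

    PHImageManager.default().requestImage(
        for: asset,
        targetSize: CGSize(width: 1024, height: 1024),  // placeholder size
        contentMode: .aspectFit,
        options: options
    ) { image, info in
        completion(image)
    }
}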
Replies: 0 · Boosts: 0 · Views: 252 · Oct ’24
Setting the capture device color space to Apple Log doesn't work
I set the device format and color space to Apple Log and turn off HDR, so why is the movie output still in HDR format rather than ProRes Log? Full runnable demo here: https://github.com/SpaceGrey/ColorSpaceDemo

session.sessionPreset = .inputPriority

// get the back camera
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
backCamera = deviceDiscoverySession.devices.first!

try! backCamera.lockForConfiguration()
backCamera.automaticallyAdjustsVideoHDREnabled = false
backCamera.isVideoHDREnabled = false
let formats = backCamera.formats
let appleLogFormat = formats.first { format in
    format.supportedColorSpaces.contains(.appleLog)
}
print(appleLogFormat!.supportedColorSpaces.contains(.appleLog))
backCamera.activeFormat = appleLogFormat!
backCamera.activeColorSpace = .appleLog
print("colorspace is Apple Log \(backCamera.activeColorSpace == .appleLog)")
backCamera.unlockForConfiguration()

do {
    let input = try AVCaptureDeviceInput(device: backCamera)
    session.addInput(input)
} catch {
    print(error.localizedDescription)
}

// add output
output = AVCaptureMovieFileOutput()
session.addOutput(output)
let connection = output.connection(with: .video)!
print(output.outputSettings(for: connection))
/*
["AVVideoWidthKey": 1920, "AVVideoHeightKey": 1080,
 "AVVideoCodecKey": apch, <----- ProRes is enabled.
 "AVVideoCompressionPropertiesKey": {
     AverageBitRate = 220029696;
     ExpectedFrameRate = 30;
     PrepareEncodedSampleBuffersForPaddedWrites = 1;
     PrioritizeEncodingSpeedOverQuality = 0;
     RealTime = 1;
 }]
*/

previewSource = DefaultPreviewSource(session: session)
queue.async {
    self.session.startRunning()
}
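One thing worth checking, offered as a hypothesis rather than a confirmed fix: AVCaptureSession.automaticallyConfiguresCaptureDeviceForWideColor defaults to true, and when it is on, the session may override a manually set activeColorSpace as inputs and outputs are added. A sketch of the ordering to try:

// Hypothesis: stop the session from overriding activeColorSpace.
session.automaticallyConfiguresCaptureDeviceForWideColor = false
session.sessionPreset = .inputPriority

// Then add the input and output first, and set the Apple Log format
// and activeColorSpace afterwards inside lockForConfiguration(), so
// later session configuration changes cannot reset them.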
Replies: 1 · Boosts: 0 · Views: 386 · Mar ’25
Does PHAsset localIdentifier change after device transfer?
Hi, I'm working on a photo backup app. I track the PHAsset localIdentifier to determine which photos have been backed up and which haven't. Recently, I've noticed that two users seem to have experienced localIdentifier changes after transferring data to a new iPhone using Quick Start. Additionally, others on Stack Overflow have mentioned that the localIdentifier sometimes changes after updating the iOS version: https://stackoverflow.com/questions/40094728/phobject-localidentifier-reliability I'd like to confirm the reliability of the localIdentifier after an iOS version upgrade or device transfer. Can I continue using these locally stored localIdentifiers, or is there another recommended approach, such as using PHCloudIdentifier?
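If it helps, PHCloudIdentifier is designed for exactly this cross-device case. A minimal sketch of mapping in both directions on iOS 16+, with error handling elided:

import Photos

let library = PHPhotoLibrary.shared()

// Store cloud identifiers instead of (or alongside) local ones.
func cloudIdentifiers(for localIdentifiers: [String]) -> [String: PHCloudIdentifier] {
    let mappings = library.cloudIdentifierMappings(forLocalIdentifiers: localIdentifiers)
    var result: [String: PHCloudIdentifier] = [:]
    for (localID, mapping) in mappings {
        if case .success(let cloudID) = mapping { result[localID] = cloudID }
    }
    return result
}

// After a device transfer, resolve them back to fresh local identifiers.
func localIdentifiers(for cloudIdentifiers: [PHCloudIdentifier]) -> [String] {
    let mappings = library.localIdentifierMappings(for: cloudIdentifiers)
    return mappings.values.compactMap { try? $0.get() }
}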
Replies: 2 · Boosts: 0 · Views: 365 · Feb ’25
CGImageDestinationAddImageFromSource causes issues in iOS 18 / macOS 15
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC. We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:

guard let source = CGImageSourceCreateWithURL(inputURL, nil),
      let destination = CGImageDestinationCreateWithURL(outputURL, UTType.heic.identifier as CFString, 1, nil) else {
    throw <error>
}
let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)

When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:

createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.

It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow. I re-wrote the last part like this:

guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
      let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil) else {
    throw <error>
}
CGImageDestinationAddImage(destination, primaryImage, properties)

This doesn't cause any warnings. An issue that might be related has been reported here. I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
Replies: 0 · Boosts: 0 · Views: 875 · Nov ’24
Accessing AV External Storage
Is it possible to use AVExternalStorageDevice to access external storage from a connected camera or USB drive (via a USB-C or Lightning connector) on an iPad/iPhone? I have tested the following code on an iPhone 14 (iOS 18.1.1) and an iPad 10th gen (iPadOS 18.3.1), and both return false:

// returns false on iPhone 14, iPad 10th gen
print(AVExternalStorageDeviceDiscoverySession.isSupported)

The following returns nil when I try to access the external storage discovery session:

// returns nil on iOS devices
print(AVExternalStorageDeviceDiscoverySession.shared)

And the following returns false, without displaying a permission dialog:

AVExternalStorageDevice.requestAccess(completionHandler: { (granted: Bool) in
    // returns false with no permission dialog
    print(granted)
})

What types of iOS devices are supported by AVExternalStorageDeviceDiscoverySession? What situations has it been used for (e.g., connecting to a camera via the external storage protocol, accessing photos from an SD card with an adapter, accessing photos from a USB drive)? Is there any sample code for using the AVExternalStorage API?
Replies: 0 · Boosts: 0 · Views: 372 · Feb ’25
Prevent iOS from Switching Between Back Camera Lenses in getUserMedia (Safari/WebView, iOS 18)
I'm developing a hybrid app (WebView / Turbo Native) that uses getUserMedia to access the back camera for a PPG/heart rate measurement feature (the user places their finger on the camera). Problem: even when I specify constraints like:

{
  video: {
    deviceId: '...',
    facingMode: { exact: 'environment' },
    advanced: [{ zoom: 1.0 }]
  },
  audio: false
}

on iPhone 15 (iOS 18), iOS unexpectedly switches between the wide, ultra-wide, and telephoto lenses during the measurement. This breaks the heart rate detection and forces the user to move their finger in the middle of the measurement. Question: is there any way, via getUserMedia/WebRTC, to force iOS to use only the wide-angle lens and prevent automatic lens switching? I know that with AVFoundation (Swift) you can pick .builtInWideAngleCamera, but I'm hoping to avoid building a custom native layer and would prefer to stick with WebView/JavaScript if possible to save time and complexity. Any suggestions, workarounds, or updates from Apple would be greatly appreciated! Thanks a lot!
Replies: 1 · Boosts: 0 · Views: 355 · Mar ’25
Distortion-corrected images
Hi, I am currently developing a 3D reconstruction project, which requires images that are distortion-free (rectilinear) and have known intrinsics. The session I am developing on uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false (to be able to get the intrinsic matrix of the images), isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false (to get both images), and isCameraCalibrationDataDeliveryEnabled set to true (to actually get the calibration data). The distortion correction parameters such as lensDistortionLookupTable are used: the 42-coefficient mapping array, as described in the AVCameraCalibrationData header file, with a simple piecewise linear interpolation.

There are two questions I would like support on:

1. A way to store the calibration parameters in each image. My current approach writes the parameters into kCGImagePropertyExifDictionary -> "UserComment". Is there a better way to write calibration parameter data into the images? This feels a bit dirty, and there might be a neater approach.

2. For the ultra-wide camera's images, the lensDistortionLookupTable contains several zeros at the end of the array. For example (the last 10 elements are zero): "LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000" The problem is that when the complete array (including the zeros) is used to correct the image, the result is an image warped into a circle-like shape near its edges, which is completely wrong. In contrast, if the lensDistortionLookupTable is used without the trailing zeros and the new size is accommodated, the image looks better (although not as rectilinear as an image from the iPhone's Camera app), and definitely less distorted. Including zeros (full array): [image] Excluding zeros (array size changed): [image] Am I missing an important point in the usage of the lensDistortionLookupTable where this case (zeros at the end) is addressed? What is the criterion for shrinking/excluding elements of the array? Any advice is very much welcome.
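For what it's worth, a sketch of the piecewise linear lookup described above, following the approach outlined in the AVCameraCalibrationData header; this is an illustration of the interpolation only and does not answer the zero-tail question:

import CoreGraphics

// Interpolate a distortion magnification for a point, where the table spans
// radial distances from the distortion center (index 0) out to the farthest
// image corner (last index).
func distortionScale(for point: CGPoint,
                     center: CGPoint,
                     maxRadius: CGFloat,
                     lookupTable: [Float]) -> Float {
    let dx = point.x - center.x
    let dy = point.y - center.y
    let radius = min(((dx * dx) + (dy * dy)).squareRoot(), maxRadius)

    // Map the radius to a fractional index into the table.
    let position = Float(radius / maxRadius) * Float(lookupTable.count - 1)
    let lower = Int(position)
    let upper = min(lower + 1, lookupTable.count - 1)
    let fraction = position - Float(lower)

    // Piecewise linear interpolation between the neighboring entries.
    return lookupTable[lower] + fraction * (lookupTable[upper] - lookupTable[lower])
}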
Replies: 0 · Boosts: 0 · Views: 393 · Feb ’25
Vision Framework VNTrackObjectRequest: Minimum Valid Bounding Box Size Causing Internal Error (Code=9)
I'm developing a tennis ball tracking feature using Vision Framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest. Occasionally (but not always), I receive the following runtime error: Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size} From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that's considered valid by VNTrackObjectRequest. Could someone clarify: What is the minimum acceptable bounding box width and height (normalized) that Vision Framework's VNTrackObjectRequest expects? Is there any recommended practice or official guidance for bounding box size validation before creating a tracking request? This information would be extremely helpful to reliably avoid this internal error. Thank you!
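Since no official minimum is documented, one stopgap is to validate the bounding box defensively before creating the request; minBoxSide below is a hypothetical threshold of my own to tune empirically, not an official value:

import Vision

// Hypothetical minimum normalized side length; tune empirically.
let minBoxSide: CGFloat = 0.02

func makeTrackingRequest(for observation: VNDetectedObjectObservation) -> VNTrackObjectRequest? {
    let box = observation.boundingBox   // normalized [0, 1] coordinates
    guard box.width >= minBoxSide, box.height >= minBoxSide else {
        return nil   // too small to track reliably; skip this detection
    }
    return VNTrackObjectRequest(detectedObjectObservation: observation)
}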
Replies: 1 · Boosts: 0 · Views: 74 · Apr ’25
Different metadata values depending on how the image metadata is obtained (PHAsset vs PHPickerResult)
While customizing an image picker, we found that image metadata is not reflected correctly, and we are reporting it here. The situation is as follows. The time or time zone of an image is changed in the Photos app: an image with an actual capture date of 2024:11:08 08:27:44 has its time zone changed, so it reads 2024:11:07 17:27:44. Image data is then extracted from a PHAsset using PHImageManager, and the metadata is obtained from this image data. The time zone information exposed in the Exif tag information does not reflect the time or time zone changed in the Photos app.

let asset: PHAsset = ...
....
let options = PHImageRequestOptions()
options.isSynchronous = true
options.version = .current
options.deliveryMode = .highQualityFormat
options.resizeMode = .none
options.normalizedCropRect = .zero
options.isNetworkAccessAllowed = true
options.progressHandler = { progress, error, _, _ in }

PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { imageData, uti, orientation, info in
    let cgImageSource = CGImageSourceCreateWithData(imageData! as CFData, nil)
    let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource!, 0, nil) as? Dictionary<String, Any>
    let exif = properties!["{Exif}"]
    let dictionary = exif as? Dictionary<String, Any>
}

Metadata check

In this case the change is reflected in the creationDate of the PHAsset, so it can be somewhat compensated for by forcibly replacing the metadata. However, because PHAsset does not include time zone information, when the time zone is also changed it is impossible to calculate the correct time for that zone.

PHPicker: this issue is resolved when using the PHPickerResult provided by PHPicker.

extension PhotosPickerViewController: PHPickerViewControllerDelegate {
    public func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        .....
        for result in results {
            let identifier = UTType.image.identifier
            if result.itemProvider.hasItemConformingToTypeIdentifier(identifier) {
                result.itemProvider.loadDataRepresentation(forTypeIdentifier: identifier) { data, error in
                    guard let data = data,
                          let cgImageSource = CGImageSourceCreateWithData(data as CFData, nil),
                          let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource, 0, nil) as? Dictionary<String, Any>,
                          let exif = properties["{Exif}"],
                          let dictionary = exif as? Dictionary<String, Any> else { return }
                }
            }
        }
    }
}

Metadata check

Question: I wonder why this happens, and whether this is normal behavior. And if I use a custom picker instead of the system picker Apple provides, is there any way to compensate for this in that situation?
Replies: 0 · Boosts: 0 · Views: 562 · Nov ’24
DockKit custom tracking does not work on iOS 18.3
Hi all, I'm using the Apple sample code below to create an application using DockKit: "Controlling a DockKit accessory using your camera app" https://developer.apple.com/documentation/dockkit/controlling-a-dockkit-accessory-using-your-camera-app?changes=_8 I used Vision hand recognition and passed the observation data to dockAccessory.track, but Belkin and Insta360 devices never move on an iPhone 16 Pro Max with iOS 18.3. If I use other functions in the app, like face search (system tracking), those work fine. I used a Belkin mount and an Insta360 Flow 2 Pro to reproduce the problem. A friend also says that the custom tracking feature was working fine on the iOS 18.0 beta, but on the recent iOS 18.3 that feature does not work. If I could get the iOS 18.0 beta, we could test that feature, but I cannot revert my iOS from 18.3 to the 18.0 beta. Regards, TO
Replies: 0 · Boosts: 0 · Views: 67 · Mar ’25
Checking authorization status of AVCaptureDevice or CLLocationManager gives runtime warnings in iOS 18
I have the following code in my ObservableObject class, and recently Xcode started flagging purple runtime issues with it (probably in iOS 18):

Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.

Here is the code:

@Published var cameraAuthorization: AVAuthorizationStatus
@Published var micAuthorization: AVAuthorizationStatus
@Published var photoLibAuthorization: PHAuthorizationStatus
@Published var locationAuthorization: CLAuthorizationStatus
var locationManager: CLLocationManager

override init() {
    // Issue 1: Performing I/O on the main thread can cause slow launches.
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)

    // Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    locationAuthorization = locationManager.authorizationStatus

    super.init()

    // Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}

And also in the route change notification handler of AVAudioSession.routeChangeNotification:

// Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)

I wonder how checking authorization status can cause these issues. What is the fix here?
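These authorization getters are backed by interprocess communication with system daemons, which is why the runtime flags them on the main thread. One possible restructuring, sketched under the assumption that briefly stale initial values are acceptable, is to default the published properties and refresh them from a background task:

import AVFoundation
import Photos

@MainActor
final class PermissionsModel: ObservableObject {
    @Published var cameraAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var micAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var photoLibAuthorization: PHAuthorizationStatus = .notDetermined

    init() {
        // Defer the IPC-backed reads to a background task instead of doing them in init.
        Task.detached(priority: .utility) { [weak self] in
            let camera = AVCaptureDevice.authorizationStatus(for: .video)
            let mic = AVCaptureDevice.authorizationStatus(for: .audio)
            let photos = PHPhotoLibrary.authorizationStatus(for: .addOnly)
            await MainActor.run {
                self?.cameraAuthorization = camera
                self?.micAuthorization = mic
                self?.photoLibAuthorization = photos
            }
        }
    }
}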
Replies: 1 · Boosts: 0 · Views: 741 · Dec ’24
[iPadOS 18 beta 3] On iPad 7th gen, the Camera app cannot detect QR codes
Hi, after installing iPadOS 18 beta 3 on my iPad 7th gen, the default Camera app no longer detects QR codes. I tried updating to beta 7, but the issue remained. Third-party apps that use AVCaptureMetadataOutput in the AVFoundation framework to detect QR codes also no longer work. You can reproduce the issue by running the default Camera app or the AVFoundation sample code from the Apple developer site on an iPad 7th gen with the iPadOS 18 beta installed: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Has anyone else experienced this issue? I would like to know if this issue occurs on other iPad models as well. This is similar to the following issue that previously occurred with iPadOS 17.4: https://support.apple.com/en-lamr/118614 https://developer.apple.com/forums/thread/748092
Replies: 5 · Boosts: 0 · Views: 970 · Oct ’24
Non-LiDAR Photogrammetry
Hello! In iOS 17.5, photogrammetry sessions cannot be performed on iPhones without LiDAR, but I don't think there is much difference in GPU performance between models with and without LiDAR. For example, the iPhone 14 Pro and the iPhone 15 use the same A16 Bionic chip, so GPU performance should be the same. Despite this, photogrammetry can be performed on the iPhone 14 Pro but not on the iPhone 15. Why is this? In fact, we have confirmed that if you transfer images taken with a non-LiDAR iPhone 16 to an iPhone 16 Pro and run a photogrammetry session using those images, a 3D model can be generated. Will it be possible to perform photogrammetry on high-performance iPhones without LiDAR in the future?
Replies: 0 · Boosts: 0 · Views: 64 · Apr ’25
PHFetchOptions: Full List of Supported Predicate Keys
Hi, could anybody share the full list of supported predicate keys for PHFetchOptions? I'm aware of the list posted in the documentation: https://developer.apple.com/documentation/photos/phfetchoptions However, I have reason to believe that this is not an exhaustive list, and there also seem to be mistakes in this doc, i.e. isFavorite does not work but favorite does. Through some experimentation I also found that this works: NSPredicate(format: "adjustmentFormatIdentifier == 'com.pixelmatorteam.touch.x.photo.PhotosAdjustmentData.EmbeddedSlimSidecarFileInfo.compressed'") even though adjustmentFormatIdentifier is not listed as a supported key. Are there other secret keys that you are aware of? Specifically, I want to filter a fetch result for edited items, something like this: NSPredicate(format: "hasAdjustments == true") (I tried this; it doesn't work.) The native Photos app has such a filter, which leads me to believe that there probably is a key for this. If one of the framework developers reads this: could you please update the documentation page with this information? Finally, if there really aren't any more secret keys, is there a way to achieve this with adjustmentFormatIdentifier? I have tried a bunch of things already, like adjustmentFormatIdentifier != nil, but for some reason that gives me the exact opposite of what I want: all the photos without edits. 🫠 Any tips on the correct syntax here would be much appreciated.
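For reference, a minimal sketch using the two predicate forms the post confirms do work (favorite, and an adjustmentFormatIdentifier equality); whether an official hasAdjustments-style key exists remains the open question:

import Photos

let options = PHFetchOptions()

// Works, per the post (the documented `isFavorite` spelling does not):
options.predicate = NSPredicate(format: "favorite == true")
let favorites = PHAsset.fetchAssets(with: .image, options: options)

// Also works: matching one specific adjustment format identifier.
options.predicate = NSPredicate(
    format: "adjustmentFormatIdentifier == %@",
    "com.pixelmatorteam.touch.x.photo.PhotosAdjustmentData.EmbeddedSlimSidecarFileInfo.compressed"
)
let editedWithThatFormat = PHAsset.fetchAssets(with: .image, options: options)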
Replies: 1 · Boosts: 0 · Views: 105 · Jun ’25
iPhone Camera opening unexpectedly
My Camera app repeatedly opens even though I take no action to open it while using my iPhone. Today during a FaceTime call, the Camera app opened while the phone was unlocked without me touching anything. It didn't end the FaceTime call, but it paused the video for the person I was speaking with. I force-closed the Camera app, and then it happened again a few minutes later. This has happened while using Google Maps and other apps as well, while the phone is unlocked. It is also happening while the phone is locked, just sitting on a table; all of a sudden I look over and the screen is active, showing the camera view. Today this has happened at least 20 times. I need to know how to stop it. I am on iOS 18.1 and enrolled in the iOS 18 Public Beta. There are no pending software updates.
Replies: 2 · Boosts: 0 · Views: 1.1k · Nov ’24