For a while I had one photo widget (no special app, just the standard Apple one), set to shuffle through an album of pics of my bf. No problems at all. A few weeks later I added one to shuffle through an album of pics of my cat. That one worked fine, but it made the one of my bf stop working: it just showed a blank white widget, no error message or anything. So I removed the cat one, hoping the bf one would go back to working, and it didn't. Otherwise I only have the widgets for Find My and my bank, plus apps, on my home screen.
Photos & Camera
Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.
I was able to obtain the depth map image using AVCapturePhotoOutput from the delegate method
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
I convert the depth map to the kCVPixelFormatType_DepthFloat32 format and read the pixel values of the depth map using the code below:
func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    var convertedDepthMap: [[Float32]] = Array(
        repeating: Array(repeating: 0, count: width),
        count: height
    )

    // Lock for reading only; the unlock call must use the same flags.
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard let baseAddress = CVPixelBufferGetBaseAddress(depthMap) else {
        return convertedDepthMap
    }

    // Step rows by bytesPerRow rather than width: rows can be padded.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    for row in 0 ..< height {
        let rowPointer = (baseAddress + row * bytesPerRow)
            .assumingMemoryBound(to: Float32.self)
        for col in 0 ..< width {
            // Skip NaN/infinite entries, which DepthFloat32 buffers can contain.
            if rowPointer[col].isFinite {
                convertedDepthMap[row][col] = rowPointer[col]
            }
        }
    }
    return convertedDepthMap
}
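For reference, the conversion step mentioned above uses AVDepthData's converting(toDepthDataType:); a minimal sketch, assuming photo is the AVCapturePhoto from the delegate callback:
guard let depthData = photo.depthData else { return }

// Convert to 32-bit float depth if the buffer arrived as disparity or Float16.
let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
let depthValues = convertDepthData(depthMap: converted.depthDataMap)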
Is this the right way to access the float values of a depth map? And what unit are they in? Sometimes the depth values are around 0.7 when I hold the device close to the subject, about 15 to 30 cm away.
Hey, I have a complex CIFilter chain I'm trying to debug to improve processing time. Is there any documentation on what all the colours mean and on the naming, e.g. sRGB Linear_to_workspace?
Thanks,
Alex
When I get results from picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult])
and I load the image using the item provider's loadFileRepresentation (the item provider is the NSItemProvider provided by the PHPickerResult),
is the URL returned by this method guaranteed to have a file extension, i.e. "file://image.jpeg" rather than "file://image"?
I want to know if I can just check the extension to determine the file type.
(FYI, in case this makes a difference: I'm only interested in user screenshots and screen recordings. A sketch of an extension-free alternative follows below.)
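For what it's worth, a hedged sketch of checking the declared type via the item provider's registered types instead of relying on the URL's extension (result is assumed to be one of the PHPickerResult values):
import UniformTypeIdentifiers

let provider = result.itemProvider
if provider.hasItemConformingToTypeIdentifier(UTType.image.identifier) {
    provider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        // `url` is temporary; copy the file out before this closure returns.
        print("image:", url?.lastPathComponent ?? "nil", error ?? "no error")
    }
} else if provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) {
    provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
        print("movie:", url?.lastPathComponent ?? "nil", error ?? "no error")
    }
}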
Hi guys,
How can I achieve the following on macOS when a USB device (camera/mic/speaker) is connected?
When a third-party video conferencing app is not in a meeting, ensure the app defaults to the USB device (camera/mic/speaker).
When a third-party conferencing app is in a meeting, ensure the app automatically switches to the USB device (camera/mic/speaker).
I want to use an IOKit extension to hide or ignore the built-in camera to meet this requirement.
However, the extension can't be loaded due to invalid permissions on macOS 15.4.1 (build 24E263). I also tried running it on macOS 14.4.1, where it loads, but it doesn't load automatically after the laptop restarts because the KDK version doesn't match.
Could you please give me some suggestions? Is it possible to hide the built-in camera on M-series Macs? Is there any other way to achieve this? Thanks a lot.
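A hedged sketch for the audio half of this: setting the system-wide default input device via Core Audio, which conferencing apps that follow the system default would then pick up (deviceID is assumed to be the USB device's AudioDeviceID; apps that select a device explicitly are unaffected):
import CoreAudio

func setDefaultInput(_ deviceID: AudioDeviceID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var device = deviceID
    // Write the new default input device on the system audio object.
    return AudioObjectSetPropertyData(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        0,
        nil,
        UInt32(MemoryLayout<AudioDeviceID>.size),
        &device
    )
}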
Does this library exist in Xcode 16.4?
"import WorldCaptureKit" gives the error "No such module 'WorldCaptureKit'",
and I can't find any information about the library in the Apple documentation.
But AI keeps suggesting that I use it.
I'm developing an iOS app using DockKit to control a motorized stand. I've noticed that as the zoom factor of the AVCaptureDevice increases, the stand's movement becomes increasingly erratic up and down, almost like a pendulum motion. I'm not sure why this is happening or how to fix it.
Here's a simplified version of my tracking logic:
func trackObject(_ boundingBox: CGRect, _ dockAccessory: DockAccessory) async throws {
    guard let device = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: device) else {
        fatalError("Camera not available")
    }

    let currentZoomFactor = device.videoZoomFactor
    let dimensions = device.activeFormat.formatDescription.dimensions
    let referenceDimensions = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
    let intrinsics = calculateIntrinsics(for: device, currentZoom: Double(currentZoomFactor))

    let deviceOrientation = UIDevice.current.orientation
    let cameraOrientation: DockAccessory.CameraOrientation = {
        switch deviceOrientation {
        case .landscapeLeft: return .landscapeLeft
        case .landscapeRight: return .landscapeRight
        case .portrait: return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        default: return .unknown
        }
    }()

    let cameraInfo = DockAccessory.CameraInformation(
        captureDevice: input.device.deviceType,
        cameraPosition: input.device.position,
        orientation: cameraOrientation,
        cameraIntrinsics: useIntrinsics ? intrinsics : nil,
        referenceDimensions: referenceDimensions
    )

    let observation = DockAccessory.Observation(
        identifier: 0,
        type: .object,
        rect: boundingBox
    )
    let observations = [observation]
    try await dockAccessory.track(observations, cameraInformation: cameraInfo)
}
func calculateIntrinsics(for device: AVCaptureDevice, currentZoom: Double) -> matrix_float3x3 {
    let dimensions = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    let width = Float(dimensions.width)
    let height = Float(dimensions.height)

    // Rough estimate: assume a focal length of ~0.8x the frame diagonal (in pixels),
    // scaled linearly by the current zoom factor.
    let diagonalPixels = sqrt(width * width + height * height)
    let estimatedFocalLength = diagonalPixels * 0.8

    let fx = Float(estimatedFocalLength) * Float(currentZoom)
    let fy = fx
    let cx = width / 2.0
    let cy = height / 2.0

    return matrix_float3x3(
        SIMD3<Float>(fx, 0, cx),
        SIMD3<Float>(0, fy, cy),
        SIMD3<Float>(0, 0, 1)
    )
}
I'm calling this function regularly (10-30 times per second) with updated bounding box information. The erratic movement seems to worsen as the zoom factor increases.
Questions:
Why might increasing the zoom factor cause this erratic movement?
I'm currently calculating camera intrinsics based on the current zoom factor. Is this approach correct, or should I be doing something differently?
Are there any other factors I should consider when using DockKit with a variable zoom?
Could the frequency of calls to trackObject (10-30 times per second) be contributing to the erratic movement? If so, what would be an optimal frequency?
Any insights or suggestions would be greatly appreciated. Thanks!
I have noticed a problem when a PHAsset creation request is made with the resource type PHAssetResourceType.photoProxy.
let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .photoProxy, data: photoData, options: nil)
creationRequest.location = location
creationRequest.isFavorite = true
After successfully saving the resulting asset through PHPhotoLibrary.shared().performChanges, I could verify it in the Photos app.
I noticed that the created photo was initially marked as Favorite and that the location was added to the info, as expected. The title of the image also changed from "Today" to "".
The photo was then refreshed, and the location data was purged. The title, however, remained unchanged and still displays the .
This refresh was also observed in code: the PHPhotoLibraryChangeObserver protocol's func photoLibraryDidChange(_ changeInstance: PHChange) receives a change notification. The same asset has been changed, and there is no location information anymore. The isFavorite information persists correctly.
After debugging for a few hours, I discovered that changing the resource type to .photo fixes this issue: location data is not removed in the Photos app, and no refresh callback is seen in func photoLibraryDidChange(_ changeInstance: PHChange).
I initially used .photoProxy because in my AVCapturePhotoCaptureDelegate implementation I always get the callback in func photoOutput(_ output: AVCapturePhotoOutput, didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?, error: Error?), so that is where I capture the photo data, as photoData = deferredPhotoProxy?.fileDataRepresentation().
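For completeness, the save call mentioned above looks roughly like this (simplified):
PHPhotoLibrary.shared().performChanges({
    let creationRequest = PHAssetCreationRequest.forAsset()
    creationRequest.addResource(with: .photoProxy, data: photoData, options: nil)
    creationRequest.location = location
    creationRequest.isFavorite = true
}, completionHandler: { success, error in
    print("Saved:", success, error ?? "no error")
})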
In a photo editing extension, is it possible to display the photo in HDR? In this context you only have a placeholder UIImage and a PHContentEditingInput, which has a displaySizeImage and a fullSizeImageURL. The displaySizeImage has isHighDynamicRange set to false.
I tried stacking 30 RAW exposures of 1 second each, but the quality is far inferior to the 30-second long exposure in night mode.
When I want to edit an image in the Preview app, the only options are to rotate left or right in 90-degree steps; there is no option to rotate by an arbitrary angle. Please look into this and provide that option in a future update.
I'm trying to apply a Core Image filter to a UIImage. For that I want to get the CIImage representation of the UIImage, which I'm obtaining as shown below:
if let inputImage = self.orginalImageView.image {
    if let ciImage = CIImage(image: inputImage) {
        print(ciImage)
        print(self.orginalImageView.image?.ciImage)
    }
}
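(For context, a minimal sketch of the filtering step I'm aiming for, using sepiaTone purely as an example:)
import CoreImage
import CoreImage.CIFilterBuiltins

if let inputImage = self.orginalImageView.image,
   let ciImage = CIImage(image: inputImage) {
    let filter = CIFilter.sepiaTone()
    filter.inputImage = ciImage
    filter.intensity = 0.8
    if let output = filter.outputImage,
       let cgImage = CIContext().createCGImage(output, from: output.extent) {
        let filteredImage = UIImage(cgImage: cgImage)
        // use filteredImage...
    }
}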
This works. But one thing I noticed is that UIImage already has a ciImage property, and it is always nil. According to the documentation:
ciImage
The underlying Core Image data.
var ciImage: CIImage? { get }
Discussion
If the UIImage object was initialized using a CGImage, the value of the property is nil.
Is the UIImage here initialized from a CGImage, and is that why its ciImage property is nil?
Is value(forKey: "fileSize") considered accessing non-public API?
Has your app been rejected at the review stage because of this?
let resources = PHAssetResource.assetResources(for: asset)
if let resource = resources.first {
    if let fileSize = resource.value(forKey: "fileSize") as? Int {
        return fileSize
    }
}
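For comparison, a hedged alternative that avoids KVC entirely is to stream the resource's data and count the bytes (a sketch only, and slower for large assets):
func fileSize(of resource: PHAssetResource, completion: @escaping (Int) -> Void) {
    var total = 0
    PHAssetResourceManager.default().requestData(
        for: resource,
        options: nil,
        dataReceivedHandler: { data in total += data.count },  // accumulate chunk sizes
        completionHandler: { _ in completion(total) }
    )
}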
Hello everyone, I need some help with the following.
If you know anything about it, please comment.
Overview
We are planning to develop an app using the “Support external cameras in your iPadOS app” feature introduced in iPadOS 17.
Before implementing this feature, it is necessary for the iPad to recognize external cameras. However, among the iPad models compatible with iPadOS 17, we have found that some of the iPads owned by our development team can recognize external cameras, while others cannot.
If you have any reports regarding compatibility issues or information on how to resolve these problems, please share them with us.
Detailed explanation:
The results of our investigation are as follows.
External camera used: a 360-degree camera
Device | Firmware
RICOH Theta X | 2.61.0 (2024/12/26, latest)
RICOH Theta Z1 |
Tested iPads:
Device | Firmware | Status
12.9-inch iPad Pro (3rd generation) | iPadOS 17.5.1 | OK (recognized)
11-inch iPad Pro (M4) | iPadOS 18.2 | NG (not recognized)
Verification Method
Step 1: Power on the iPad and the external camera, ensuring both are ready for connection.
Step 2: Connect the iPad and the external camera using a USB-C cable.
Step 3: Launch FaceTime on the iPad and check the displayed camera feed.
If the external camera is recognized, the feed from the external camera will be displayed.
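A minimal sketch of how recognition can also be checked programmatically (this assumes iPadOS 17+, where the .external device type is available):
import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)
// Lists the external cameras the system currently recognizes.
print(discovery.devices.map(\.localizedName))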
I am experiencing a bug when using an AVCapturePhotoBracketSettings object to capture a bracketed photo sequence on iPhone 16 Pro.
Specifically, when I pass in an array of exposure values [-x, 0, +x] where x >= 3, the high-exposure photo capture returns a black image.
STEPS TO REPRODUCE
Run the sample app I have provided on an iPhone 16 Pro
Notice that bracketed images captured with the EV set to [-3,0,+3], [-4,0,+4], or [-5,0,+5] return a black image for the high-exposure photo.
Notice that on other iOS devices (like iPhone 13 Pro), the high-exposure photo is returned with high brightness, as expected.
I have also added two folders in the sample project that show screenshots of the bug: iPhone13Pro & iPhone16Pro
Sample Project:
https://www.icloud.com/iclouddrive/090O_68Z0Nh2UOxmPRwu56Tmw#Focused16ProBracketedCaptureBug
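For reference, a sketch of how such a bracket is typically created (photoOutput is assumed to be a configured AVCapturePhotoOutput, and self an AVCapturePhotoCaptureDelegate):
let biases: [Float] = [-3, 0, 3]
let bracketedSettings = biases.map {
    AVCaptureAutoExposureBracketedStillImageSettings.autoExposureSettings(exposureTargetBias: $0)
}
let settings = AVCapturePhotoBracketSettings(
    rawPixelFormatType: 0,          // no RAW capture
    processedFormat: nil,
    bracketedSettings: bracketedSettings
)
photoOutput.capturePhoto(with: settings, delegate: self)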
Hi Team,
The camera preview plugin stopped working after upgrading the phone to iOS 18.1.1. Is there any way to get the CameraPreview plugin working again? After tapping the camera icon, only a black screen is shown, while the same build works fine on versions prior to iOS 18.
Is there any way to resolve this issue?
Error: CameraPreview plugin upgraded to iOS 18
I want to apply an SCNTechnique pipeline to the camera feed. To achieve this, I want to bring the camera input into the SceneKit world.
The perfect API for this seems to be:
let captureDevice = …
scnScene.background.contents = captureDevice
This is demonstrated in "SceneKit: What's New" (WWDC17, at 44m19s) and is mentioned in the documentation for SCNMaterialProperty's contents.
Instead of showing the camera feed, it crashes with these messages:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCaptureVideoDataOutput setVideoSettings:] Unsupported pixel format type - use -availableVideoCVPixelFormatTypes'
*** First throw call stack:
(0x18993c7cc <REDACTED> 0x211e18488)
libc++abi: terminating due to uncaught exception of type NSException
Please advise.
STEPS TO REPRODUCE
Create a new Xcode project, starting from the SceneKit game template.
Add Info.plist entry for NSCameraUsageDescription.
Add a capture device property to GameViewController:
class GameViewController: UIViewController {
let captureDevice = AVCaptureDevice.default(for: .video)
Set the background contents:
scene.background.contents = captureDevice
Run the app on device.
PLATFORM AND VERSION
iOS
Development environment: Xcode 16.1, macOS 15.0.1. Run-time configuration: iOS 18.1
I'm building a camera app using SwiftUI and UIKit (with UIViewControllerRepresentable). My app can already capture photos, but I also want to implement an important feature: applying my own custom image filter to the live preview and to the image saved to the photo library (like Photographic Styles in the default Apple camera app).
My image filter needs to be fairly advanced, because I'm a photographer trying to achieve the same colours I get with my custom preset in Lightroom. I want to control basic image parameters (exposure, contrast, shadows, etc.), tone curves per channel (Red, Green, and Blue separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple, and Magenta), colour grading, and more.
Currently I'm struggling with the implementation. I tried to create a custom image filter using Metal (it works for saturation), but I'm not sure it's the best approach. I need help and recommendations on how developers implement something this complex in their apps (which technologies to use, etc.).
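For what it's worth, a hedged sketch of one possible starting point: chaining built-in Core Image filters for the basic controls before reaching for custom Metal kernels (all values here are arbitrary; per-channel tone curves and HSL would still need custom kernels, as far as I can tell):
import CoreImage
import CoreImage.CIFilterBuiltins

func applyLook(to input: CIImage) -> CIImage {
    // Basic exposure adjustment.
    let exposure = CIFilter.exposureAdjust()
    exposure.inputImage = input
    exposure.ev = 0.3

    // A single (luminance) tone curve; per-channel curves need a custom kernel.
    let tone = CIFilter.toneCurve()
    tone.inputImage = exposure.outputImage
    tone.point0 = CGPoint(x: 0.00, y: 0.02)
    tone.point1 = CGPoint(x: 0.25, y: 0.22)
    tone.point2 = CGPoint(x: 0.50, y: 0.50)
    tone.point3 = CGPoint(x: 0.75, y: 0.78)
    tone.point4 = CGPoint(x: 1.00, y: 0.98)

    // Global saturation/contrast.
    let color = CIFilter.colorControls()
    color.inputImage = tone.outputImage
    color.saturation = 1.05
    color.contrast = 1.02
    return color.outputImage ?? input
}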
Can I use "Support external cameras in your iPadOS app" together with Swift to read multiple camera feeds?
Thanks
Our app filters the photo library to a certain date range for ease of picking photos. However, to do this we currently have to require full permission to the photo library. We would like to use PHPickerViewController and have it filter the results by asset creation date, which would let us drop that requirement.
I see other filter options, but not this one. And if it isn't there, is it something being considered or on a roadmap?
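For reference, a sketch of the filter options that do exist today; there is no creation-date filter among them, as far as I can see:
import PhotosUI

var config = PHPickerConfiguration(photoLibrary: .shared())
// Existing PHPickerFilter options cover asset kinds, not date ranges.
config.filter = .any(of: [.images, .screenshots])
let picker = PHPickerViewController(configuration: config)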