Captured photos in wrong orientation
I'm building a custom camera screen that displays the camera image on a preview layer and then captures an image, using AVCaptureSession. When the picture is captured, I immediately load it into a UIImageView in order to display it to the user for approval. I've actually done this many times before, but this is the first time I've tried to do it in an app that supports interface rotation. If I hold the phone in Portrait mode and capture a picture, everything works as expected. When the user rotates the phone into Landscape orientation, I detect this and I replace the preview layer (AVCaptureVideoPreviewLayer) with a new one, specifying connection.videoRotationAngle in order to make the image appear in the right orientation. I'm a little surprised that this is necessary, and it's not a smooth transition, but that doesn't matter. What does matter is that when I capture the image, it is in the wrong orientation. I tried rotating it myself, but this doesn't seem to make any difference. What am I doing wrong?
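One thing worth checking, sketched below under assumptions: the photo output's own connection also needs its rotation angle set before each capture, since rotating only the preview layer's connection does not affect the captured image. In this sketch, photoOutput and previewLayer are hypothetical names for your existing AVCapturePhotoOutput and AVCaptureVideoPreviewLayer, self is assumed to conform to AVCapturePhotoCaptureDelegate, and videoRotationAngle requires iOS 17 or later.

    // Sketch only: mirror the preview's rotation onto the photo output connection
    // right before capturing, so the saved image matches what the preview shows.
    func capturePhoto() {
        if let photoConnection = photoOutput.connection(with: .video),
           let previewConnection = previewLayer.connection {
            let angle = previewConnection.videoRotationAngle
            if photoConnection.isVideoRotationAngleSupported(angle) {
                photoConnection.videoRotationAngle = angle
            }
        }
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }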
2 replies · 0 boosts · 86 views · last active 2d ago

Need minimum focus distance information for the cameras in each iPhone model
Our app involves using the camera to scan barcodes or QR codes, with a working distance of about 5 cm. However, we’ve noticed variations in the focus distance of camera lenses across different iPhone models. Currently, we mainly use two types of lenses: wide-angle and ultra-wide-angle.
• For iPhone 13 and earlier models, we use the wide-angle lens.
• For iPhone 13 Pro and later models, we use the ultra-wide-angle lens.
We are not certain if this setup is correct since we don’t have all iPhone models to test.
Some users have reported focus issues on their iPhone 15. We would like to ask if there’s a resource where we can find the minimum focus distance of the different cameras in each iPhone model, so that we can verify whether our current configuration is accurate. Alternatively, if such data is not readily available, could the Apple team advise which camera should be used on various iPhone models for scenarios with a working distance of approximately 5 cm? Thank you!
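One option, sketched below under the assumption that your deployment target allows it, is to stop hard-coding per-model lens choices and instead read AVCaptureDevice's minimumFocusDistance (reported in millimeters, available since iOS 15) at runtime:

    import AVFoundation

    // Sketch: list each back camera's minimum focus distance so the lens whose
    // minimum is comfortably below ~50 mm can be chosen at runtime.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera],
        mediaType: .video,
        position: .back
    )
    for device in discovery.devices {
        // minimumFocusDistance is -1 when the value is unknown.
        print(device.localizedName, device.minimumFocusDistance, "mm")
    }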
1 reply · 0 boosts · 78 views · last active 2d ago

shouldAutomaticallyForwardAppearanceMethods returns NO by default in UITabBarController, but documentation states YES
Hi all, I’ve been facing a behavior issue with shouldAutomaticallyForwardAppearanceMethods in UITabBarController. According to Apple’s documentation, this property should default to YES, which means that the appearance lifecycle methods (like viewWillAppear and viewDidAppear) should be automatically forwarded to child view controllers. However, in my current development environment, I’ve noticed that shouldAutomaticallyForwardAppearanceMethods returns NO by default in UITabBarController, and this is causing some issues with lifecycle management in my app. I even tested this behavior in several projects, both in Swift and Objective-C, and the result is consistent.

Here are some details about my setup:
• I’m using Xcode 16.0 with the iOS 16.4 Simulator.
• I’ve tested the behavior in both a new UIKit project and a simple SwiftUI project that uses a UITabBarController.
• Even with a clean new project, the value of shouldAutomaticallyForwardAppearanceMethods is NO by default.

This behavior contradicts the official documentation, which states that it should be YES by default. Could someone clarify if this is expected behavior in newer versions of iOS or if there is a known issue regarding this? Any help or clarification would be greatly appreciated! Thanks in advance!
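For what it's worth, a minimal sketch of a workaround, assuming the default really does come back as NO on your setup: shouldAutomaticallyForwardAppearanceMethods is an overridable property, so a UITabBarController subclass (ForwardingTabBarController here is a made-up name) can pin it to the documented value while the discrepancy is investigated.

    import UIKit

    // Sketch: a subclass that forces the documented behavior (automatic forwarding
    // of viewWillAppear/viewDidAppear to the selected child view controller).
    final class ForwardingTabBarController: UITabBarController {
        override var shouldAutomaticallyForwardAppearanceMethods: Bool { true }

        override func viewDidLoad() {
            super.viewDidLoad()
            // Logs what UIKit itself reports, for comparison with the docs.
            print("UIKit default:", super.shouldAutomaticallyForwardAppearanceMethods)
        }
    }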
1 reply · 0 boosts · 67 views · last active 12h ago

Selecting an appropriate AVCaptureDeviceFormat
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior so that video can be recorded at a range of different resolutions. There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step.

However, what I've observed is that there can be a range of available formats for a device at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I think I don't have a way to make use of that default logic? And it is undocumented?

Does this mean that the only way to select a format is to implement a comparison function that considers all the different values of all the different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator? If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property.

Are there any guarantees about what types of formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so, I could filter the formats down to only those with this setting without risking filtering out all of the supported formats. I suspect I may be missing something obvious. Any help would be greatly appreciated!
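A minimal sketch of the filtering approach described above, assuming a hypothetical helper named bestFormat and that '420v' (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) is the pixel format you want to prefer:

    import AVFoundation

    // Sketch: among the device's '420v' formats, pick the smallest one whose
    // dimensions are at least the requested size; returns nil if none qualify.
    func bestFormat(for device: AVCaptureDevice,
                    width: Int32, height: Int32) -> AVCaptureDevice.Format? {
        let candidates = device.formats.filter { format in
            let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let subtype = CMFormatDescriptionGetMediaSubType(format.formatDescription)
            return dims.width >= width && dims.height >= height
                && subtype == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange // '420v'
        }
        return candidates.min { lhs, rhs in
            let l = CMVideoFormatDescriptionGetDimensions(lhs.formatDescription)
            let r = CMVideoFormatDescriptionGetDimensions(rhs.formatDescription)
            return Int(l.width) * Int(l.height) < Int(r.width) * Int(r.height)
        }
    }

Applying the chosen format via activeFormat still needs to happen between lockForConfiguration() and unlockForConfiguration() on the device.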
2 replies · 0 boosts · 79 views · last active 1d ago

Creating a multiview video playback experience in visionOS. There is no back button on the player.
I followed "Creating a multiview video playback experience in visionOS" (https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/), but when I use this feature my video player has no back action. We also did not find the method "addChildViewControllerAndView(form)" provided by the system. Referencing "Adopting the system player interface in visionOS" (https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos) also did not work.

As soon as I add these lines of code, there is no back button, only full screen and zoom out:

    let playerController = AVPlayerViewController()
    // Enable the multiview experience along with the default recommended set.
    playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
8 replies · 0 boosts · 213 views · last active 1w ago

AVSpeechSynthesizer - just not working on 15.1.1
So create a Swift file and put this in it:

    import Foundation
    import AVFoundation

    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")

    if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
        utterance.voice = voice
        print("Using voice: \(voice.name), \(voice.language)")
    } else {
        print("Daniel voice not found on macOS.")
    }

    synthesizer.speak(utterance)

I get no speech output and this log output:

    Error reading languages in for local resources.
    Error reading languages in for local resources.
    Using voice: Daniel, en-GB
    Program ended with exit code: 0

Why? And what's with "Error reading languages in for local resources."?
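A possible contributing factor, sketched here as an assumption rather than a confirmed cause: speak(_:) is asynchronous, and a command-line program can reach the end of main and exit before any audio is produced (the "Program ended with exit code: 0" line is consistent with that). Keeping the process alive until the utterance finishes rules this out:

    import Foundation
    import AVFoundation

    // Sketch: hold the synthesizer in a delegate object and keep the run loop
    // alive until speech actually finishes.
    final class Speaker: NSObject, AVSpeechSynthesizerDelegate {
        let synthesizer = AVSpeechSynthesizer()

        func speak(_ text: String) {
            let utterance = AVSpeechUtterance(string: text)
            synthesizer.delegate = self
            synthesizer.speak(utterance)
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                               didFinish utterance: AVSpeechUtterance) {
            exit(0) // quit only after the utterance has been spoken
        }
    }

    let speaker = Speaker()
    speaker.speak("Hello, testing speech synthesis on macOS.")
    RunLoop.main.run() // prevents the process from exiting immediately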
0 replies · 0 boosts · 41 views · last active 11h ago

"Unable to Add for Review" "There are still screenshot uploads in progress."
Is anyone else experiencing this error after uploading new screenshots and attempting to submit a new release for review?

`Unable to Add for Review. The items below are required to start the review process: There are still screenshot uploads in progress.`

I have tried waiting several minutes after uploading the new screenshots before submission. I have also tried waiting several minutes after uploading before rearranging the images. And I have tried uploading the images one at a time. No luck any way I try it.

I also had an issue signing in this morning. It seemed like I was in a loop where it would just return me to the login screen after entering my username and password. I saw that others had reported this, too. Thanks
0 replies · 0 boosts · 46 views · last active 11h ago

Rejected Review - Mac App Sandbox entitlements required for Bluetooth
I submitted a Mac Catalyst app for TestFlight, and before it can be tested by external testers it requires an App Review. The iOS app passed review, but the Mac Catalyst app failed review. The rejection reason given was that App Sandbox needed the entitlement "com.apple.security.network.client" to be YES / true (not false). I do have "com.apple.security.device.bluetooth" set to YES / true.

The Developer docs for the "com.apple.security.network.client" entitlement say "Use this key to allow your sandboxed app to connect to a server process running on another machine, or on the same machine.", then go on to discuss TCP and UDP. https://developer.apple.com/documentation/security/app_sandbox

While technically a Bluetooth app connecting to another Bluetooth device puts the app in "client mode" and the device in "server mode", I think this network entitlement was intended for TCP / UDP, not Bluetooth. The "com.apple.security.device.bluetooth" entitlement is documented as "A Boolean value indicating whether your app may interact with Bluetooth devices." - this seems to cover all the necessary needs for Bluetooth: "your app may interact with Bluetooth devices".

Would someone at Apple familiar with the docs please clarify what entitlements are required for an app that only uses Bluetooth? If "com.apple.security.network.client" is required, then I believe the docs for that entitlement should also mention Bluetooth.
1 reply · 0 boosts · 96 views · last active 1d ago

AR anchor shared across multiple immersive scenes
Hello, I am currently working on an app that features multiple environments in which I combine Reality Composer Pro scenes with objects managed at runtime, as well as make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?

About my app: there are two main contexts/scenes in the app that the user progresses through. The first takes place in AR and is non-interactive and driven by a timeline animation. The second is in VR and allows the user to change materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well in order to provide passive haptics to the user. If the user doesn't have access to the physical object, we make use of plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
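Not an authoritative answer, but one pattern that may fit: rather than carrying the anchor entity itself across spaces, record the anchor's world transform in shared app state while the first (AR) space is open and re-apply it to a root entity in the second (VR) space. A rough sketch, assuming an ARKitSession with an ImageTrackingProvider is already configured and running, and where AnchorStore is a hypothetical piece of app state both spaces can reach:

    import ARKit
    import Observation
    import simd

    // Sketch: shared state holding the most recent anchor pose.
    @Observable
    final class AnchorStore {
        var originFromAnchor: simd_float4x4?
    }

    // Run from the AR immersive space; keeps the stored pose up to date.
    func observeImageAnchor(_ provider: ImageTrackingProvider, store: AnchorStore) async {
        for await update in provider.anchorUpdates {
            // The VR space reads this transform when it opens and applies it to a
            // root Entity instead of re-anchoring to the (possibly unseen) image.
            store.originFromAnchor = update.anchor.originFromAnchorTransform
        }
    }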
1 reply · 0 boosts · 79 views · last active 2d ago

How to get the actual distance of the depth map subject from the TrueDepth camera
I was able to obtain the depth map image using AVCapturePhotoOutput from the delegate method:

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)

I convert the depth map to the kCVPixelFormatType_DepthFloat32 format and get the pixel values of the depth map using the code below:

    func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        var convertedDepthMap: [[Float32]] = Array(
            repeating: Array(repeating: 0, count: width),
            count: height
        )
        CVPixelBufferLockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
        let floatBuffer = unsafeBitCast(
            CVPixelBufferGetBaseAddress(depthMap),
            to: UnsafeMutablePointer<Float32>.self
        )
        for row in 0 ..< height {
            for col in 0 ..< width {
                if floatBuffer[width * row + col].isFinite {
                    convertedDepthMap[row][col] = floatBuffer[width * row + col]
                }
            }
        }
        CVPixelBufferUnlockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
        return convertedDepthMap
    }

Is this the right way of accessing the depth float values from a depth map, and what is the unit? Sometimes the depth values are around 0.7 when I keep the device close to the subject, around 15 to 30 cm.
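One thing worth ruling out (a guess, not a confirmed diagnosis): AVDepthData can natively be disparity (1/meters) rather than depth (meters), and converting explicitly before reading the pixels removes that ambiguity; with kCVPixelFormatType_DepthFloat32 the values are then meters from the camera. A minimal sketch:

    import AVFoundation

    // Sketch: normalize whatever the native depth representation is to 32-bit
    // depth in meters before walking the pixel buffer.
    func depthDataInMeters(from photo: AVCapturePhoto) -> AVDepthData? {
        guard let depthData = photo.depthData else { return nil }
        return depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }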
1 reply · 0 boosts · 53 views · last active 22h ago

UIViewRepresentable & MVVM
I am trying to get my head around how to implement a MapKit view using UIViewRepresentable (I want the map to rotate to align with heading, which Map() can't handle yet to my knowledge). I am also playing with making my LocationManager an Actor and setting up a listener. But when combined with UIViewRepresentable this seems to create a rather convoluted data flow, since the @State var of the vm needs to then be passed and bound in the UIViewRepresentable. And the listener having this:

    for await location in await lm.$lastLocation.values

seems at least like a code smell. That double await just feels wrong. But I am also new to Swift, so perhaps what I have here actually is a good approach?

    struct MapScreen: View {
        @State private var vm = ViewModel()

        var body: some View {
            VStack {
                MapView(vm: $vm)
            }
            .task {
                vm.startWalk()
            }
        }
    }

    extension MapScreen {
        @Observable
        final class ViewModel {
            private var lm = LocationManager()
            private var listenerTask: Task<Void, Never>?

            var course: Double = 0.0
            var location: CLLocation?

            func startWalk() {
                Task {
                    await lm.startLocationUpdates()
                }
                listenerTask = Task {
                    for await location in await lm.$lastLocation.values {
                        await MainActor.run {
                            if let location {
                                withAnimation {
                                    self.location = location
                                    self.course = location.course
                                }
                            }
                        }
                    }
                }
                Logger.map.info("started Walk")
            }
        }

        struct MapView: UIViewRepresentable {
            @Binding var vm: ViewModel

            func makeCoordinator() -> Coordinator {
                Coordinator(parent: self)
            }

            func makeUIView(context: Context) -> MKMapView {
                let view = MKMapView()
                view.delegate = context.coordinator
                view.preferredConfiguration = MKHybridMapConfiguration()
                return view
            }

            func updateUIView(_ view: MKMapView, context: Context) {
                context.coordinator.parent = self
                if let coordinate = vm.location?.coordinate {
                    if view.centerCoordinate != coordinate {
                        view.centerCoordinate = coordinate
                    }
                }
            }
        }

        class Coordinator: NSObject, MKMapViewDelegate {
            var parent: MapView

            init(parent: MapView) {
                self.parent = parent
            }
        }
    }

    actor LocationManager {
        private let clManager = CLLocationManager()
        private(set) var isAuthorized: Bool = false
        private var backgroundActivity: CLBackgroundActivitySession?
        private var updateTask: Task<Void, Never>?
        @Published var lastLocation: CLLocation?

        func startLocationUpdates() {
            updateTask = Task {
                do {
                    backgroundActivity = CLBackgroundActivitySession()
                    let updates = CLLocationUpdate.liveUpdates()
                    for try await update in updates {
                        if let location = update.location {
                            lastLocation = location
                        }
                    }
                } catch {
                    Logger.location.error("\(error.localizedDescription)")
                }
            }
        }

        func stopLocationUpdates() {
            updateTask?.cancel()
            updateTask = nil
        }

        func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
            switch clManager.authorizationStatus {
            case .authorizedAlways, .authorizedWhenInUse:
                isAuthorized = true
                // clManager.requestLocation() // ??
            case .notDetermined:
                isAuthorized = false
                clManager.requestWhenInUseAuthorization()
            case .denied:
                isAuthorized = false
                Logger.location.error("Access Denied")
            case .restricted:
                Logger.location.error("Access Restricted")
            @unknown default:
                let statusString = clManager.authorizationStatus.rawValue
                Logger.location.warning("Unknown Access status not handled: \(statusString)")
            }
        }

        func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
            Logger.location.error("\(error.localizedDescription)")
        }
    }
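On the "double await" question, a sketch of one alternative (not necessarily the right design, and the names here are made up): have the actor hand out an AsyncStream of locations instead of exposing a @Published property, so the consumer awaits the actor once to get the stream and then just iterates it.

    import CoreLocation

    // Sketch: the actor yields locations into a stream; callers iterate the
    // stream without reaching into actor-isolated published state.
    actor LocationStreamer {
        private var continuation: AsyncStream<CLLocation>.Continuation?

        func locations() -> AsyncStream<CLLocation> {
            AsyncStream { continuation in
                self.continuation = continuation
            }
        }

        func startUpdates() async {
            do {
                for try await update in CLLocationUpdate.liveUpdates() {
                    if let location = update.location {
                        continuation?.yield(location)
                    }
                }
            } catch {
                continuation?.finish()
            }
        }
    }

The view model would then write `for await location in await streamer.locations()` - a single await for the actor hop, followed by a plain iteration over the stream.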
1 reply · 0 boosts · 67 views · last active 1d ago

PSSO 2.0: is previous password expected to unlock keychain?
Wondering if others have encountered this issue with PSSO 2.0. We are observing that if, after registration, a user changes their IDP password, they may be prompted for their previous password in order to unlock the Keychain. We are trying to determine if this is expected behavior or if there is a way to avoid it.

To reproduce this, the flow would be as follows:
• user registers with PSSO
• user logs out and logs back in with their IDP password
• user is authenticated (and not prompted for previous password)
• user logs out
• user changes their IDP password on another machine
• user logs in and is prompted to use their previous password to unlock the Keychain

Failure to provide the previous password nukes the Keychain, which is not an outcome we want. Any insight anyone has on this issue would be most welcome. Thanks
0 replies · 0 boosts · 50 views · last active 12h ago

App Store Connect version number
Hello, I'm publishing a new app version, but I don't know why the version number in App Store Connect is not the same as the version in the Info.plist. The version number was OK before, but now it's showing only a number that looks like the number of times I have uploaded the 1.1.0 build. In Info.plist the version number is 1.1.0.009, so it should be the same on the App Store Connect page. Have a look.
0 replies · 0 boosts · 56 views · last active 12h ago

EXIF MakerNote not read in Ventura
I have a custom app running on a Mac Studio with Ventura that grabs a snapshot image from a network camera. It then adds some extra information into the EXIF "MakerNote" field. However, the metadata cannot be read back out of the image when running Ventura; it can, however, be read out of the same image file on a Mac that is not running Ventura. It would appear Apple has removed support for reading MakerNote in Ventura but still supports writing MakerNote in Ventura. This code is about 7 years old, written in ObjC, and has worked with no issue until Ventura came along.

Calls used:

    CGImageDestinationAddImageFromSource(); // used to write the image to disk with the extra metadata - works on Ventura
    CGImageSourceCopyPropertiesAtIndex();   // used to read the metadata from an image - does not return "MakerNote" data

Is there a new way to read EXIF "MakerNote" data from image files that was introduced with Ventura?
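For comparing behavior across OS versions, a small sketch (in Swift rather than the original ObjC, and assuming the data was written under the standard EXIF MakerNote key) that dumps whatever ImageIO returns for that key:

    import Foundation
    import ImageIO

    // Sketch: read the image properties and pull out the EXIF MakerNote entry,
    // if ImageIO surfaces one at all on this OS version.
    func readMakerNote(at url: URL) -> Any? {
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
              let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any] else {
            return nil
        }
        return exif[kCGImagePropertyExifMakerNote]
    }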
2 replies · 1 boost · 605 views · last active Mar ’23

SwiftUI: Command SwiftCompile failed with a nonzero exit code
I have a SwiftUI app that I've been working on in Xcode 16.1. The project builds and runs in the simulators, on my Mac, and on my iPhone/iPad without any issues. I'm also able to build my unit test project and run the tests without any errors. The project has zero warnings in it.

When I go to the Edit Scheme options and change the Run scheme to be a Release build with the Debug Executable unchecked, I get a compiler error:

    Command SwiftCompile failed with a nonzero exit code

I've attempted this Release run with the following target devices in Xcode:
• My iPhone 15 Pro Max (iOS 18.2 Beta 3)
• MacBook Air (M1) (15.2 Beta)
• iPhone 16 Simulator (iOS 18.1)
• Any iOS Simulator Device (arm64, x86_64)

All of these targets have the same issue. Normally I would just debug the error from the logs, but when I look at the build output I can't see any information in the log to tell me what happened. It looks like the source files are sent into the Swift compiler and the compiler fails without bubbling up the issue. I've provided the full error log export as a Gist HERE due to its size. Is there anything in the log I'm missing? Is there a way for me to turn on more verbose logging during compilation of a Release build?

I created a brand new Multiplatform App in Xcode and added all of my source files to it. No project configuration settings were changed. I could build it successfully with the debug configuration. I then changed it to the Release configuration and experienced the same error. I can create another fresh project, use the same Release configuration with none of my source files in it, and get a successful build. It seems there is something wrong with my source files and the Release configuration, but the compiler doesn't indicate what. I'm lost at this point, as I can't figure out how to get a Release build and can't seem to find any indication as to why.
6 replies · 0 boosts · 113 views · last active 2d ago

Swift Charts: chartScrollTargetBehavior fails to snap to a day unit
I am working on a scrollable chart that displays days on the horizontal axis. As the user scrolls, I always want them to be able to snap to a specific day. I implemented the following steps described in this WWDC23 session to achieve this:
• I have set the chartScrollTargetBehavior to .valueAligned(matching: DateComponents(hour: 0))
• I have set the x value unit on the BarMark to Calendar.Component.day

I ended up with chart code that looks like this:

    Chart(dates, id: \.self) { date in
        BarMark(
            x: .value("Date", date, unit: Calendar.Component.day),
            y: .value("Number", 1)
        )
        .annotation {
            Text(date.formatted(.dateTime.day()))
                .font(.caption2)
        }
    }
    .chartXAxis {
        AxisMarks(format: .dateTime.day())
    }
    .chartScrollableAxes(.horizontal)
    .chartScrollTargetBehavior(.valueAligned(matching: DateComponents(hour: 0)))
    .chartXVisibleDomain(length: fifteenDays)
    .chartScrollPosition(x: $selection)

However, this fails to work reliably. There is often a situation where the chart scroll position lands on, for instance, Oct 20, 11:56 PM, but the chart snaps to Oct 21. I attempted to solve this problem by introducing an intermediate binding between a state value and the chart selection. This binding aims to normalize the selection to always be the first moment of any given date, but this hasn't been successful.

    private var selectionBinding: Binding<Date> {
        Binding {
            Calendar.current.startOfDay(for: selection)
        } set: { newValue in
            self.selection = Calendar.current.startOfDay(for: newValue)
        }
    }

It's also worth mentioning that this issue also exists in Apple's sample project on Swift Charts. How would you approach solving this? How can I make the chart scroll position blind to time values and only recognize whole days? Here's the minimal reproducible example project for your reference.
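Not a definitive fix, but one more variation to try, sketched under the assumption that selection is a Date @State value: keep the chart bound to the raw selection and normalize it after it changes, instead of inside a computed Binding. These modifiers attach to the Chart shown above.

    // Sketch: let the chart write the raw scrolled-to value, then snap the stored
    // value to the start of its day once it settles on a new value.
    .chartScrollPosition(x: $selection)
    .onChange(of: selection) { _, newValue in
        let day = Calendar.current.startOfDay(for: newValue)
        if selection != day {
            selection = day
        }
    }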
1 reply · 0 boosts · 85 views · last active 1w ago