Post not yet marked as solved
I'm using RealityKit to present an immersive view of 360° pictures. However, I'm seeing a problem where the window disappears when I enter immersive mode and reappears when I rotate my head. Interestingly, applying ".glassBackground()" to the window fixes the issue, but I'd prefer not to use it as the UI's backdrop. How can I deal with this?
Here is a link to a GIF:
https://firebasestorage.googleapis.com/v0/b/affirmation-604e2.appspot.com/o/Simulator%20Screen%20Recording%20-%20Apple%20Vision%20Pro%20-%202024-01-30%20at%2011.33.39.gif?alt=media&token=3fab9019-4902-4564-9312-30d49b15ea48
Post not yet marked as solved
Hey, I'm working on a UI that a designer created. But he added an object behind the glass, with an offset, pretty much like the cloud in this video:
https://dribbble.com/shots/23039991-Weather-Widget-Apple-Vision-Pro-visionOS
I tried a couple of methods, but I always ended up clipping my object.
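For reference, one of my attempts looked roughly like this (a minimal sketch; the cloud image, the offsets, and the use of offset(z:) and glassBackgroundEffect() are just how I tried to approximate the design, not a working solution):

import SwiftUI

struct WeatherWidgetView: View {
    var body: some View {
        ZStack {
            // Placeholder asset standing in for the designer's cloud artwork.
            Image("cloud")
                .offset(x: 40, y: -30) // the slight x/y offset from the design
                .offset(z: -60)        // intended to push the cloud behind the glass
            Text("Weather")
                .padding(40)
                .glassBackgroundEffect() // the glass panel itself
        }
    }
}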
So, here's the question:
Is there a way to place an object behind the glass panel, but with a slight offset on the x and y axes?
Post not yet marked as solved
I am attempting to create a simple compass for Apple Vision Pro. The method I am familiar with involves using:
locationManager.startUpdatingHeading()
locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading)
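For reference, the full pattern I'm used to on iOS looks roughly like this (a minimal sketch):

import CoreLocation

final class CompassManager: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    override init() {
        super.init()
        locationManager.delegate = self
        locationManager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        // Heading in degrees relative to magnetic north.
        print("Heading: \(newHeading.magneticHeading)")
    }
}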
However, this does not function on visionOS as 'CLHeading is unavailable in visionOS'.
Is there any way to develop this simple compass on visionOS?
Post not yet marked as solved
I am using the Vision framework to recognize text in my app. However, some diacritics are recognized incorrectly, for example Grudziński (the incorrect result is Grudzinski).
I have already set the recognition language to German (because my app needs to support German text) and tried using VNRecognizeTextRequest's customWords together with usesLanguageCorrection, but the result is still incorrect.
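Roughly what I have set up (a minimal sketch; the input image and the custom word list are placeholders):

import Vision

func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.recognitionLanguages = ["de-DE"]
    request.usesLanguageCorrection = true
    // Custom words that should survive language correction.
    request.customWords = ["Grudziński"]

    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])

    return (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
}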
Does Apple provide any APIs to solve this problem? The same issue also happens when I open the photo gallery on my phone, copy text from an image, and paste it somewhere else.
Post not yet marked as solved
I'm currently building an iOS app that requires the ability to detect a person's height from a live video stream. The new VNDetectHumanBodyPose3DRequest is exactly what I need, but the observations I'm getting back are very inconsistent and unreliable. By inconsistent, I mean the values never seem to settle and can fluctuate anywhere from 5'4" to 10'1" (I'm about 6'0"). By unreliable, I mean I have once seen a value that closely matches my height, but I rarely see values that are close enough (within an inch) to the ground truth.
In terms of my code, I'm not doing anything fancy. I'm first opening a LiDAR stream on my iPhone 14 Pro:
guard let videoDevice = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else { return }
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else { return }
guard captureSession.canAddInput(videoDeviceInput) else { return }
captureSession.addInput(videoDeviceInput)
I'm then creating an output synchronizer so I can get both image and depth data at the same time:
videoDataOutput = AVCaptureVideoDataOutput()
captureSession.addOutput(videoDataOutput)
depthDataOutput = AVCaptureDepthDataOutput()
depthDataOutput.isFilteringEnabled = true
captureSession.addOutput(depthDataOutput)
outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
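(Not shown above: the synchronizer's delegate registration, which in my setup is roughly the following; the queue name is a placeholder.)

let dataOutputQueue = DispatchQueue(label: "videoSyncQueue")
outputVideoSync.setDelegate(self, queue: dataOutputQueue)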
Finally, my delegate function that handles the synchronizer is roughly:
fileprivate func perform3DPoseRequest(cmSampleBuffer: CMSampleBuffer, depthData: AVDepthData) {
    let imageRequestHandler = VNImageRequestHandler(cmSampleBuffer: cmSampleBuffer, depthData: depthData, orientation: .up)
    let request = VNDetectHumanBodyPose3DRequest()
    do {
        // Perform the body pose request.
        try imageRequestHandler.perform([request])
        if let observation = request.results?.first {
            if (observation.heightEstimation == .measured) {
                print("Body height (ft) \(formatter.string(fromMeters: Double(observation.bodyHeight))) (m): \(observation.bodyHeight)")
                ...
I'd appreciate any help determining how to get accurate results from the observation's bodyHeight. Thanks!
Post not yet marked as solved
I'm going to the U.S. to buy a Vision Pro. Does anyone have any information about where it will be sold? Will it be sold in Hawaii by any chance? For now, I'm thinking about New York.
Post not yet marked as solved
I'm currently in South Korea. Because of my personal experiences with coverage that looks like a warranty but isn't, and with a ruthless Genius Bar, I feel compelled to wait and purchase an officially released Vision Pro. I'd like to hear other developers' thoughts on the release schedule. The product launched in the USA in February, but I'm curious about the months that follow for the secondary and tertiary launch countries. Naturally, we'll know a local launch is imminent when local staff are summoned to headquarters for training. Still, the need for localized services, local development, and a personal purchase weighs on my mind more and more.
Post not yet marked as solved
I seem to be having some trouble running the example app from the WWDC 2023 session on 3D Body Pose Detection (this one).
I'm getting an error about the request revision being unsupported. I searched the API documentation for any configuration that has changed or been introduced, but to no avail, and I couldn't find much about it online. Is this a known issue, or am I doing something wrong?
Error in question:
Unable to perform the request: Error Domain=com.apple.Vision Code=16 "VNDetectHumanBodyPose3DRequest does not support VNDetectHumanBodyPose3DRequestRevision1" UserInfo={NSLocalizedDescription=VNDetectHumanBodyPose3DRequest does not support VNDetectHumanBodyPose3DRequestRevision1}.
Code Snippet:
guard let assetURL = fileURL else {
    return
}
let request = VNDetectHumanBodyPose3DRequest()
self.fileURL = assetURL
let requestHandler = VNImageRequestHandler(url: assetURL)
do {
    try requestHandler.perform([request])
    if let returnedObservation = request.results?.first {
        Task { @MainActor in
            self.humanObservation = returnedObservation
        }
    }
} catch {
    print("Unable to perform the request: \(error).")
}
Thank you for any and all advice!
Post not yet marked as solved
Env
Intel Core i7
macOS: 14.0
Xcode 15 Beta 8
Simulator: visionOS 1.0 beta 3 (21N5233e)
Simulator: iOS 17.0.1, iOS 17.0 beta 8
Step
Create a new visionOS demo project in Xcode; it fails to build with:
[macosx] error: Failed to find newest available Simulator runtime
Command RealityAssetsCompile failed with a nonzero exit code
Post not yet marked as solved
I am trying to parse text from an image, split it into words, and store the words in a String array. Additionally, I want to store the bounding box of each recognized word.
My code works, but for some reason the bounding boxes of words that are separated by an apostrophe rather than a space come out wrong.
Here is the complete code of my VNRecognizeTextRequest handler:
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else {
        return
    }
    // split recognized text into words and store each word with corresponding observation
    let wordObservations = observations.flatMap { observation in
        observation.topCandidates(1).first?.string.unicodeScalars
            .split(whereSeparator: { CharacterSet.letters.inverted.contains($0) })
            .map { (observation, $0) } ?? []
    }
    // store recognized words as strings
    recognizedWords = wordObservations.map { (observation, word) in String(word) }
    // calculate bounding box for each word
    recognizedWordRects = wordObservations.map { (observation, word) in
        guard let candidate = observation.topCandidates(1).first else { return .zero }
        let stringRange = word.startIndex..<word.endIndex
        guard let rect = try? candidate.boundingBox(for: stringRange)?.boundingBox else { return .zero }
        let bottomLeftOriginRect = VNImageRectForNormalizedRect(rect, Int(captureRect.width), Int(captureRect.height))
        // adjust coordinate system to start in top left corner
        let topLeftOriginRect = CGRect(origin: CGPoint(x: bottomLeftOriginRect.minX,
                                                       y: captureRect.height - bottomLeftOriginRect.height - bottomLeftOriginRect.minY),
                                       size: bottomLeftOriginRect.size)
        print("BoundingBox for word '\(String(word))': \(topLeftOriginRect)")
        return topLeftOriginRect
    }
}
And here's an example of what's happening. When I process the following image:
the code above produces the following output:
BoundingBox for word 'In': (23.00069557577264, 5.718113962610181, 45.89460636656961, 32.78087073878238)
BoundingBox for word 'un': (71.19064286904202, 6.289275587192936, 189.16024359557852, 34.392966621800475)
BoundingBox for word 'intervista': (71.19064286904202, 6.289275587192936, 189.16024359557852, 34.392966621800475)
BoundingBox for word 'del': (262.64622870703477, 8.558512219726875, 54.733978711037985, 32.79967358237818)
Notice how the bounding boxes of the words 'un' and 'intervista' are exactly the same. This happens consistently for words that are separated by an apostrophe. Why is that?
Thank you for any help
Elias
Post not yet marked as solved
What is the accuracy and resolution of the angles measured using Vision?
Post not yet marked as solved
Hello,
I am currently developing an app that counts coins using computer vision to determine the total value in a picture. However, I notice there are no apps in the App Store that do anything similar, which makes me wonder whether there are any restrictions, from either Apple or governments, on publishing this type of app.
I would like to know if it will be possible to launch my app in the European Union once it is finished.
Thanks in advance,
Guus
Post not yet marked as solved
I'm developing a 3D scanner that works on iPad.
I'm using AVCapturePhoto and PhotogrammetrySession.
My photo capture delegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [ .auxiliaryDepth: true, .properties: photo.metadata ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [ .avDepthData: depthData ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
But the PhotogrammetrySession emits warning messages:
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!
The session creates a USDZ 3D model, but the scale is not correct.
I think a point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach one.
Post not yet marked as solved
Is there a way to determine finger joint/root circumference, finger length, fingertip-to-wrist-crease distance, hand breadth, and wrist breadth with Vision hand pose? Or is there an alternative method?
Any insight is appreciated.
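For reference, a minimal sketch of the Vision hand pose request I'm referring to; it yields normalized 2D joint positions from which lengths (e.g., fingertip to wrist) might be approximated, though circumference isn't directly available:

import Vision

func handJointPoints(in cgImage: CGImage) throws -> [VNHumanHandPoseObservation.JointName: CGPoint] {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])

    guard let observation = request.results?.first else { return [:] }
    // Normalized (0...1) image coordinates for all recognized joints.
    let points = try observation.recognizedPoints(.all)
    return points.filter { $0.value.confidence > 0.3 }
                 .mapValues { $0.location }
}

// Example: approximate index-finger length by summing joint-to-joint distances
// (indexTip -> indexDIP -> indexPIP -> indexMCP), still in normalized coordinates.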
In visionOS, how can I use Vision with machine learning to recognize hand gestures? After all, visionOS currently does not provide any image frame data.
Post not yet marked as solved
Hello!
I would like to develop a visionOS application that tracks a single object in the user's environment. Skimming through the documentation, I found that this feature is currently unsupported in ARKit (we can only recognize images), but it seems it should be doable by combining the CoreML and Vision frameworks. So I have a few questions:
Is it the best approach or is there a simpler solution?
What is the best way to train a CoreML model without access to the device? Will videos recorded on an iPhone 15 be enough?
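To clarify what I mean by combining the two frameworks, the pipeline I have in mind looks roughly like this (a sketch only; MyObjectDetector is a hypothetical model, and the pixel buffer source is left open since visionOS doesn't expose camera frames):

import CoreML
import Vision

func detectObjects(in pixelBuffer: CVPixelBuffer) throws -> [VNRecognizedObjectObservation] {
    // MyObjectDetector stands in for a model trained with Create ML.
    let coreMLModel = try MyObjectDetector(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel)
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    try handler.perform([request])

    return request.results as? [VNRecognizedObjectObservation] ?? []
}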
Thank you in advance for all the answers.
Post not yet marked as solved
I am trying to use the Vision framework on iOS but am getting the error below in the logs.
I was not able to find any resources in the Developer Forums.
Any help would be appreciated!
ABPKPersonIDTracker not supported on this device
Failed to initialize ABPK Person ID Tracker
public func runHumanBodyPose3DRequest() {
    let request = VNDetectHumanBodyPose3DRequest()
    let requestHandler = VNImageRequestHandler(url: filePath!)
    do {
        try requestHandler.perform([request])
        if let returnedObservation = request.results?.first {
            self.humanObservation = returnedObservation
            print(humanObservation)
        }
    } catch let error {
        print(error.localizedDescription)
    }
}
Post not yet marked as solved
Hi,
I am developing a fitness app that detects technique mistakes during workouts. Can we use the 3D data from VNDetectHumanBodyPose3DRequest with an ML model?
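What I have in mind is roughly the following (a sketch only; the joint order and feature layout are my own assumptions, and the downstream model that would consume the features is not shown):

import Vision
import simd

// Flatten the 3D joint positions of one observation into a feature vector
// that a custom ML model could consume.
func poseFeatures(from observation: VNHumanBodyPose3DObservation) throws -> [Float] {
    // A fixed joint order so the feature vector always has the same layout.
    let jointOrder: [VNHumanBodyPose3DObservation.JointName] = [
        .centerHead, .centerShoulder, .spine, .root,
        .leftShoulder, .leftElbow, .leftWrist,
        .rightShoulder, .rightElbow, .rightWrist,
        .leftHip, .leftKnee, .leftAnkle,
        .rightHip, .rightKnee, .rightAnkle
    ]
    let joints = try observation.recognizedPoints(.all)
    return jointOrder.flatMap { name -> [Float] in
        guard let point = joints[name] else { return [0, 0, 0] }
        // position is a 4x4 transform; its last column holds the translation in meters.
        let t = point.position.columns.3
        return [t.x, t.y, t.z]
    }
}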
Post not yet marked as solved
Hi there,
I'm not sure if I'm missing something, but I've tried passing a variety of CGImages into SCSensitivityAnalyzer, including ones that should be flagged as sensitive, and it always returns false. It doesn't throw an exception, and I have Sensitive Content Warning enabled in Settings (confirmed by checking the analysisPolicy at run time).
I've tried both the async and callback versions of analyzeImage.
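For reference, the async version I'm calling looks roughly like this (a minimal sketch; cgImage stands in for one of my test images):

import SensitiveContentAnalysis

func checkImage(_ cgImage: CGImage) async {
    let analyzer = SCSensitivityAnalyzer()
    // Confirm the feature is actually enabled before analyzing.
    guard analyzer.analysisPolicy != .disabled else {
        print("Sensitive Content Warning is disabled")
        return
    }
    do {
        let analysis = try await analyzer.analyzeImage(cgImage)
        print("isSensitive: \(analysis.isSensitive)") // always false in my tests
    } catch {
        print("Analysis failed: \(error)")
    }
}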
This is with Xcode 15 beta 5.
I'm primarily testing on iOS/iPadOS simulators. Is that a known issue?
cheers,
Mike
Post not yet marked as solved
// Configure accessibility for the cloud entity.
var accessibilityComponent = AccessibilityComponent()
accessibilityComponent.isAccessibilityElement = true
accessibilityComponent.traits = [.button, .playsSound]
accessibilityComponent.label = "Cloud"
accessibilityComponent.value = "Grumpy"
cloud.components[AccessibilityComponent.self] = accessibilityComponent
// ...
// Keep the accessibility value in sync with the entity's state.
var isHappy: Bool {
    didSet {
        cloudEntities[id].accessibilityValue = isHappy ? "Happy" : "Grumpy"
    }
}
}