Function introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/
When I use this sample, my video player has no back action.
We also could not find any system-provided method named "addChildViewControllerAndView(form)".
https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos
Referencing this document did not help either.
As soon as you add these lines of code:
let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
there is no back button, only the full-screen and zoom controls.
Posts under the VisionKit tag: scan documents with the camera on iPhone and iPad devices using VisionKit.
I'm seeking detailed information about the rotation matrix of the iPhone's front-facing (selfie) camera when using ARKit.
Specifically, I need to understand:
The exact rotation matrix applied to the front-facing camera's output in ARKit.
Whether this matrix is consistent across all iPhone models or if there are variations.
If there are any transformations applied to align the camera's coordinate system with the device's orientation, particularly in portrait mode.
How this rotation matrix relates to the transform property of ARFrame.camera (see the sketch below).
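For reference, a minimal sketch of how one might inspect the front camera's per-frame transform (assuming a running face-tracking session; the class and method names here are illustrative):
import ARKit

final class FrontCameraInspector: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        // Front (TrueDepth) camera sessions use face tracking.
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // 4x4 camera-to-world transform; the rotation is the upper-left 3x3.
        let transform = frame.camera.transform
        let rotation = simd_float3x3(
            simd_make_float3(transform.columns.0),
            simd_make_float3(transform.columns.1),
            simd_make_float3(transform.columns.2)
        )
        print("rotation matrix:\n\(rotation)")
        // Euler angles (roll, pitch, yaw) offer a quicker orientation check.
        print("euler angles: \(frame.camera.eulerAngles)")
    }
}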
I am using Apple’s Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:
import CoreImage
import Vision

func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        // perform(on:) returns an optional observation; nil means no horizon was detected.
        let result = try await request.perform(on: ciImage)
        print(result ?? "nil - no horizon detected")
    } catch {
        print(error)
    }
}
After calling the perform method, I get a nil result. To verify the request's correctness, I have confirmed the following:
The input CIImage is valid and contains a visible horizon.
No errors are being thrown.
The relevant frameworks are properly imported.
Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue.
Thank you for your support!
[The test image was attached to the original post.]
My iPhone 12 is updated to iOS 18 and my Mac is running macOS Sequoia. The first time I requested the Scan Documents feature from my Mac to the iPhone, it opened the camera and scanned successfully. However, after that, the Scan Documents feature crashed.
I attempted multiple times to reopen the Scan Documents feature on my iPhone, but it has stopped working entirely.
VNDocumentCameraViewController has no option to switch from the back camera to the front one.
Is there any way to use the front camera with VNDocumentCameraViewController?
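For context, here is a minimal sketch of the standard way to present the scanner (the class name ScanViewController is illustrative); as far as I know, the controller exposes no camera-selection API, so it always uses the rear camera:
import UIKit
import VisionKit

final class ScanViewController: UIViewController, VNDocumentCameraViewControllerDelegate {
    func presentScanner() {
        // VNDocumentCameraViewController offers no property to pick a camera.
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        controller.dismiss(animated: true)
    }
}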
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why is nothing arriving, and where am I going wrong in the code? Please guide me.
for await cameraFrame in cameraFrameUpdates { print("cameraFrame:: \(cameraFrame)") }
var body: some View {
    VStack {
        // Source image.
        image
            .resizable()
            .scaledToFit()
        // Processed camera frame, once available.
        if let finalImage = self.finalImage {
            finalImage
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                // Note: statements after preconditionFailure() never run, so log and bail instead.
                print("ARKitSession.run() failed: \(error)")
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                print("Failed to get an async sequence for the first format.")
                return
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print("======== PIXEL BUFFER ::: \(String(describing: self.pixelBuffer)) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions.
        }
    }
}
Is there a way to retrieve the position of the main window in visionOS? I'd like to implement a feature where users can drag the window and have a 3D model follow it.
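One direction that may help (a sketch under the assumption that an immersive space is open; transform(in:) returns nil otherwise): read the window's transform relative to the immersive space with GeometryReader3D, then drive the model from it:
import Spatial
import SwiftUI

struct WindowTransformReader: View {
    var body: some View {
        GeometryReader3D { proxy in
            Color.clear
                .onAppear {
                    // nil unless an immersive space is currently open.
                    let transform: AffineTransform3D? = proxy.transform(in: .immersiveSpace)
                    print("window transform: \(String(describing: transform))")
                }
        }
    }
}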
I have tested my application on iOS 15, 16, and 17. There, VisionKit reads values in the horizontal direction, but once I updated my device to the iOS 18.0 beta, values were read in the vertical direction.
The build was generated in Xcode 13.4.1.
Team, please help me understand why this happens and whether anything needs to change at the code level.
Essentially, I'm trying to find the most straightforward way to outline an image with varying contours. The intention is similar to the way iMessage lets you add an outline to a sticker. The goal in the example is simply the input image on top of the outline.
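One possible approach (a sketch, not a confirmed recipe: it assumes subject lifting via VNGenerateForegroundInstanceMaskRequest plus Core Image morphology; the function name outlinedImage is illustrative):
import CoreImage
import CoreImage.CIFilterBuiltins
import Vision

func outlinedImage(from inputImage: CIImage) throws -> CIImage? {
    // Lift the subject mask.
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(ciImage: inputImage)
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    let maskBuffer = try observation.generateScaledMaskForImage(
        forInstances: observation.allInstances, from: handler)
    let mask = CIImage(cvPixelBuffer: maskBuffer)

    // Grow the mask outward to form the outline band.
    let dilate = CIFilter.morphologyMaximum()
    dilate.inputImage = mask
    dilate.radius = 10
    guard let dilated = dilate.outputImage else { return nil }

    // White outline where the dilated mask is set, subject composited on top.
    let white = CIImage(color: .white).cropped(to: inputImage.extent)
    let outline = white.applyingFilter("CIBlendWithMask", parameters: [
        kCIInputBackgroundImageKey: CIImage.empty(),
        kCIInputMaskImageKey: dilated,
    ])
    return inputImage.applyingFilter("CIBlendWithMask", parameters: [
        kCIInputBackgroundImageKey: outline,
        kCIInputMaskImageKey: mask,
    ])
}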
I tried the Vision framework on Vision Pro, but it does not work, specifically on visionOS 2.0.
When I perform requests, they do not work, and the error below is caught.
The same code works on visionOS 1.2 and iOS 18.0 beta.
I also tried the new beta API, but it does not work and throws the same error,
e.g. GenerateForegroundInstanceMaskRequest.
Do you have any idea? Is any permission required to use the Vision framework on visionOS 2.0?
This is my try list.
With visionOS 2.0 beta 4:
GenerateForegroundInstanceMaskRequest (does not work, error 1)
VNGenerateForegroundInstanceMaskRequest (does not work, error 1)
VNRecognizeTextRequest (does not work, error 2)
With visionOS 1.2:
VNRecognizeTextRequest (works)
With iOS 18 beta:
GenerateForegroundInstanceMaskRequest (works)
My development environment:
Env 1
Vision Pro: visionOS 2.0 beta 4
Xcode: 16.0 beta 4, 16.0 beta 2
macOS: 14.5 (23F79)
Env 2
Vision Pro: visionOS 1.2
Xcode: 15.4
macOS: 14.5 (23F79)
Error1
Error Domain=com.apple.Vision Code=9 "Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)" UserInfo={NSLocalizedDescription=Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)}
Error2
Error Domain=com.apple.Vision Code=11 "VNRecognizeTextRequest produced an internal error" UserInfo={NSLocalizedDescription=VNRecognizeTextRequest produced an internal error, NSUnderlyingError=0x3001f6850 {Error Domain=CRImageReaderErrorDomain Code=-5 "Unknown error" UserInfo={NSLocalizedDescription=Unknown error}}}
The app crashes on iOS 16.4 when there is usage of the ImageAnalysisInteraction API from VisionKit. The app crashes before it even starts.
Here is output:
dyld[3240]: Symbol not found: _$s9VisionKit24ImageAnalysisInteractionC7subject2atAC7SubjectVSgSo7CGPointV_tYaFTu
Referenced from: <BAD7A699-FB4E-3D0E-8CD4-45CC9FC3D5E5> /Users/sereza/Library/Developer/CoreSimulator/Devices/B64EAF39-0DD9-49EC-A3F7-69675C94B8BE/data/Containers/Bundle/Application/F4E30E86-ED4D-4748-AB99-434208D55483/VisionKitChecker.app/VisionKitChecker
Expected in: <F05E3A17-D74A-3EE2-BC8D-DDCC23E48707> /Library/Developer/CoreSimulator/Volumes/iOS_20E247/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 16.4.simruntime/Contents/Resources/RuntimeRoot/System/Library/Frameworks/VisionKit.framework/VisionKit
Here is enough code to produce this crash. Please note that this code never gets called. It is enough that it exists in the project:
import VisionKit

@MainActor
final class LiftHelper: ObservableObject {
    func doSomething() async throws {
        let interaction = ImageAnalysisInteraction()
        let _ = try await interaction.image(for: [])
    }
}
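A possible workaround (an assumption based on how availability annotations affect linking, not a confirmed fix): constraining the declaration to iOS 17 should make the missing async symbol weak-linked, so the binary can still load on iOS 16.4:
import VisionKit

// Hypothetical fix: the availability annotation keeps dyld from requiring
// the iOS 17-only ImageAnalysisInteraction.image(for:) symbol at launch.
@available(iOS 17.0, *)
@MainActor
final class LiftHelper: ObservableObject {
    func doSomething() async throws {
        let interaction = ImageAnalysisInteraction()
        let _ = try await interaction.image(for: [])
    }
}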
I would like to know the global path of the Vision Pro file system. For instance, if I put a file called example.pdf inside "On My Apple Vision Pro", what would the global path for that file be: "On My Apple Vision Pro/user_name/example.pdf", "/example.pdf", "/username/example.pdf", or something else? I tried to search for it, but I found no official source about it. Thanks in advance!
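For what it's worth, a minimal sketch to print the sandboxed container path an app actually sees; the UUID-based container layout in the comment is how iOS structures it, and visionOS presumably follows the same scheme (an assumption):
import Foundation

// Print this app's Documents directory inside its sandbox container.
let documents = FileManager.default.urls(for: .documentDirectory,
                                         in: .userDomainMask)[0]
print(documents.path)
// Expected shape: /var/mobile/Containers/Data/Application/<UUID>/Documents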
Does anyone know which control is used to automatically recognize objects in photos and perform a cutout by right-clicking the mouse?
The system Photos app has a long-press feature that recognizes an object in a photo. What is this feature called in Apple development, and which control should I use to get this feature?
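For reference, this long-press subject lift is exposed through VisionKit's ImageAnalysisInteraction; a minimal sketch (assuming a UIImageView and iOS 17; names like SubjectLiftEnabler and enableSubjectLift are illustrative):
import UIKit
import VisionKit

@MainActor
final class SubjectLiftEnabler {
    private let analyzer = ImageAnalyzer()
    private let interaction = ImageAnalysisInteraction()

    func enableSubjectLift(on imageView: UIImageView, image: UIImage) async throws {
        imageView.addInteraction(interaction)
        // .imageSubject enables press-and-hold cutout, like the Photos app.
        interaction.preferredInteractionTypes = [.imageSubject]
        let configuration = ImageAnalyzer.Configuration([.visualLookUp])
        interaction.analysis = try await analyzer.analyze(image, configuration: configuration)
    }
}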
I am trying the new VNCalculateImageAestheticsScoresRequest API.
The code compiles and runs, but it delivers the same result for every image.
Xcode 16 beta 2, Simulator.
Did I miss anything?
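For comparison, a minimal usage sketch of the Swift-native variant of this request (assuming the iOS 18 Vision API; the function name aestheticsScore is illustrative):
import Foundation
import Vision

func aestheticsScore(for url: URL) async throws -> Float {
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: url)
    // overallScore ranges from -1 (low quality) to 1 (high quality).
    return observation.overallScore
}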
What is the best way to create a 2D video that demonstrates an immersive video app? So far I've mirrored the AVP to my desktop Mac and screen-captured the resulting view. Rather shaky at times. With visionOS 2.0 beta (2), is there a better way?
Thanks, David
I'm looking for a solution to take a picture, or point the camera at a piece of clothing, and match that image against the images the user has stored in my app.
I'm storing the data in a Core Data database as a Binary Data object. Since the user also takes the pictures they store in the database, I don't think I can use pre-trained Core ML models.
I would like the matching to be done on device, if possible, instead of going to an external service. Such a service would probably describe the item based on what the AI sees, but then I cannot match the item with the stored images in the app.
Does anyone know if this is possible with frameworks such as Vision or VisionKit?
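One on-device option that may fit (a sketch assuming Vision's feature-print API; the helper names featurePrint and distance are illustrative): compute a feature print for each stored image and for the new photo, then rank stored images by distance, where a lower distance means more similar images:
import UIKit
import Vision

func featurePrint(for image: UIImage) throws -> VNFeaturePrintObservation? {
    guard let cgImage = image.cgImage else { return nil }
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
    return request.results?.first
}

func distance(between a: VNFeaturePrintObservation,
              and b: VNFeaturePrintObservation) throws -> Float {
    var distance: Float = 0
    try a.computeDistance(&distance, to: b)
    return distance
}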
Hello,
I want to capture video from the Vision Pro camera in a visionOS app. I am referring to this Apple video (https://developer.apple.com/videos/play/wwdc2024/10139/) and its code. The steps are as follows:
import ARKit
Add the com.apple.developer.arkit.main-camera-access.allow = true entitlement.
Use the code below:
func loadCameraFeed() async {
    // Main camera feed access example.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    let cameraFrameProvider = CameraFrameProvider()
    let arKitSession = ARKitSession()
    var pixelBuffer: CVPixelBuffer?
    await arKitSession.queryAuthorization(for: [.cameraAccess])
    do {
        try await arKitSession.run([cameraFrameProvider])
    } catch {
        return
    }
    guard let cameraFrameUpdates =
        cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
        return
    }
    print(cameraFrameUpdates)
    for await cameraFrame in cameraFrameUpdates {
        print(cameraFrame)
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        pixelBuffer = mainCameraSample.pixelBuffer
    }
}
I want to convert the pixelBuffer into a video stream and show it in a frame, as on iOS.
Please guide me on how to achieve the next step; I am stuck after this code.
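One way to display the feed (a sketch, assuming SwiftUI; frameImage is a hypothetical published image property): convert each CVPixelBuffer to a CGImage with Core Image and hand it to SwiftUI, which effectively renders the stream as video:
import CoreImage
import SwiftUI

let ciContext = CIContext()

// Convert one camera frame into a CGImage for display.
func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return ciContext.createCGImage(ciImage, from: ciImage.extent)
}

// Inside the for-await loop above, something like:
// if let cgImage = makeCGImage(from: mainCameraSample.pixelBuffer) {
//     await MainActor.run { self.frameImage = Image(decorative: cgImage, scale: 1) }
// }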