I have noticed that iOS 14, macOS 11, and tvOS 14 include a new VNVideoProcessor class for processing video files. I have tried to leverage this in my code to perform a VNTrackObjectRequest, with no success. Specifically, each observation reports an invalid time range, and the confidence and detected bounding box never change.
I am setting up my code like so:
let videoProcessor = VNVideoProcessor(url: videoURL)
let asset = AVAsset(url: videoURL)

let completion: VNRequestCompletionHandler = { request, error in
    guard let observations = request.results as? [VNObservation] else { return }
    if let observation = observations.first as? VNDetectedObjectObservation {
        print("OBSERVATION:", observation)
    }
}

let inputObservation = VNDetectedObjectObservation(boundingBox: rect.boundingBox)
let request: VNTrackingRequest = VNTrackObjectRequest(detectedObjectObservation: inputObservation, completionHandler: completion)
request.trackingLevel = .accurate

do {
    try videoProcessor.add(request, withProcessingOptions: [:])
    try videoProcessor.analyze(with: CMTimeRange(start: .zero, duration: asset.duration))
} catch {
    print(error)
}
A sample output I receive in the console during observation is:
OBSERVATION: <VNDetectedObjectObservation: 0x2827ee200> 032AB694-62E2-4674-B725-18EA2804A93F requestRevision=2 confidence=1.000000 timeRange={{0/90000 = 0.000}, {INVALID}} boundingBox=[0.333333, 0.138599, 0.162479, 0.207899]
I note that the observation reports an invalid time range, and that the confidence is always reported as 1.000000 while the bounding box coordinates never change. I'm unsure whether this has to do with my empty VNVideoProcessingOption dictionary or something else I am doing wrong.
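For what it's worth, the released iOS 14 / macOS 11 SDKs appear to have renamed the beta methods used above: add(_:withProcessingOptions:) became addRequest(_:processingOptions:), and analyze(with:) became analyze(_:). A minimal sketch against the finalized API, assuming the same videoURL, request, and asset from the snippet above, might look like:

```swift
import AVFoundation
import Vision

// Hedged sketch: same tracking setup as above, but using the finalized
// VNVideoProcessor API from the release SDKs rather than the beta names.
// Assumes videoURL, request, and asset are defined as in the snippet above.
do {
    let videoProcessor = VNVideoProcessor(url: videoURL)
    try videoProcessor.addRequest(request,
                                  processingOptions: VNVideoProcessor.RequestProcessingOptions())
    try videoProcessor.analyze(CMTimeRange(start: .zero, duration: asset.duration))
} catch {
    print("Video processing failed:", error)
}
```

If the beta methods were silently failing to schedule the request, the unchanged bounding box and invalid time range would be consistent with the tracker never actually running past the seed observation.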
Is any documentation available for supporting the Afterburner card in third-party applications? The Afterburner documentation indicates that support is available to third-party developers, but I cannot find anything describing how to take advantage of this hardware within my own video processing application. Thanks!
With the latest few releases of the Reality Composer beta (on iOS), it is now possible to create a scene using a 3D object as an "anchor." I have created a scene using the object anchor choice, scanned my object in 3D within Reality Composer, and can successfully test this experience by viewing my Reality Composer project in AR, choosing "Play," and seeing my 3D item appear when the object is detected. For posterity, my scanned anchor is a bottle, and my 3D item is a metallic sphere.

When attempting to bring this experience to Xcode, I am unsure how to use the object as the anchor. I am loading the scene like so:

let bottle = try! Bottle.loadScene()
arView.scene.anchors.append(bottle)

In this case, Bottle is my .rcproject and my scene is named "scene." When I build and run the project, my 3D item (the metallic sphere) appears immediately on screen, rather than remaining hidden until the "object anchor" (the bottle) is detected.

Using the Scanning and Detecting 3D Objects documentation as a guide, do I need to manually set up the ARWorldTracking reference objects, like so?

let configuration = ARWorldTrackingConfiguration()
guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "gallery", bundle: nil) else {
    fatalError("Missing expected asset catalog resources.")
}
configuration.detectionObjects = referenceObjects
sceneView.session.run(configuration)

And if so, how do I get access to the .arobject scanned from Reality Composer? Thanks!
Wondering if anyone could shed some light on a question. I am being tasked with building a feature in my app that would allow a user to "stream" their iPhone camera to another nearby iOS device (either wireless or hard-wired). Both devices would be running a custom app I develop, but I have the following requirements:

- Functionality must exist whether or not internet connectivity is available. This removes the option to do any sort of RTMP or HLS livestream to a server.
- The "preview" device must be relatively responsive and low-latency, though some loss of quality would be acceptable, as this is solely for preview purposes.
- A hard-wired solution (such as connecting an iPad Pro with USB-C to an iPhone XS Max with Lightning) would be feasible and preferred, if possible.

I've attempted to build this functionality using the Multipeer Connectivity framework. While I've been successful in compressing my sample buffers to a small size and transmitting them between the devices, interference and connectivity issues can significantly degrade the experience, resulting in huge latency. I am using a semaphore to ensure I receive and display frames in order, but the latency is too much of an issue to consider this a solution. Are there any suggested frameworks to investigate that might yield results for this scenario?
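For context, the sending side of the Multipeer Connectivity approach described above can be sketched roughly as follows; sendFrame and the JPEG compression step are illustrative stand-ins for my actual pipeline, not its exact code:

```swift
import MultipeerConnectivity
import UIKit

// Hedged sketch of sending compressed camera frames over an MCSession.
// The function name and compression quality are illustrative assumptions.
func sendFrame(_ image: UIImage, over session: MCSession) {
    guard let data = image.jpegData(compressionQuality: 0.3),
          !session.connectedPeers.isEmpty else { return }
    do {
        // .unreliable favors latency over delivery guarantees, which suits
        // a live preview where occasional dropped frames are acceptable.
        try session.send(data, toPeers: session.connectedPeers, with: .unreliable)
    } catch {
        print("Failed to send frame:", error)
    }
}
```

Even with .unreliable send mode, radio interference still dominates the latency I'm seeing, which is why I'm asking about alternative (ideally wired) transports.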
While I'm new to ARKit and face tracking on iPhone X/iPad Pro, I have been enjoying using ARSCNFaceGeometry to apply a static texture to the tracked face. Using the "wireframeTexture.png" that is included in the ARKitFaceExample project, I've been able to use this texture as a starting point to draw things like tattoos and make-up over my users' faces.

I am wondering whether there are any resources on how to affix more than just a face application using ARSCNFaceGeometry. For example, I may want to use the texture to build not just a "mask" over the user's face, but hair that extends above the user's forehead. The face would be the tracked area, but I've not been able to figure out how to build content that extends above the face or to its sides (I've tried enlarging the canvas of wireframeTexture.png beyond its original 2048x2048 size, but this just forces the texture to scale, rather than extend). Thanks!
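For reference, the textured-face setup described above, plus one hedged approach to extending beyond the mask (attaching separate geometry as a child of the face node rather than enlarging the texture), might be sketched like this; the "hair" placeholder geometry, its position offset, and the asset name are assumptions:

```swift
import ARKit
import SceneKit

// Hedged sketch (ARSCNViewDelegate): apply a static texture to the tracked
// face, and attach separate geometry as a child node so content can extend
// above the face. The sphere stand-in and offsets are assumptions.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard anchor is ARFaceAnchor,
          let device = renderer.device,
          let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }

    faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "wireframeTexture")
    let faceNode = SCNNode(geometry: faceGeometry)

    // Child nodes track the face anchor, so geometry can sit above the
    // forehead without stretching the 2048x2048 face texture.
    let hairNode = SCNNode(geometry: SCNSphere(radius: 0.09))
    hairNode.position = SCNVector3(0, 0.11, -0.02) // roughly above the forehead
    faceNode.addChildNode(hairNode)

    return faceNode
}
```

The idea is that the face texture stays on the mask mesh, while anything outside the face's UV canvas is modeled as separate anchored geometry instead of painted texture.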