Post not yet marked as solved
I tried to add the com.apple.developer.kernel.increased-memory-limit entitlement, but I don't know which capability to add.
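As far as I know, this entitlement has no matching entry in Xcode's Signing & Capabilities list; one workaround is to add the key directly to the app's .entitlements file. A minimal sketch (the filename is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.developer.kernel.increased-memory-limit</key>
    <true/>
</dict>
</plist>
```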
Post not yet marked as solved
In the WWDC21 session video, it was said that only iPhones with a Neural Engine support the captureTextFromCamera function.
When I tried it, it does work on iPhone, but not on a 5th-generation iPad Pro.
Is iPadOS support planned during the beta?
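For what it's worth, support can be checked at runtime rather than by device model; a hedged sketch, assuming iOS 15 and any responder that accepts text input:

```swift
import UIKit

// Hedged sketch: ask the responder whether it can perform the Live Text
// camera-capture action (iOS 15+). On devices without a Neural Engine,
// this action is expected to be unavailable, so the check returns false.
func cameraTextInputAvailable(for responder: UIResponder) -> Bool {
    guard #available(iOS 15.0, *) else { return false }
    return responder.canPerformAction(
        #selector(UIResponder.captureTextFromCamera(_:)), withSender: nil)
}
```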
I previously built an app with ARKit 3.5.
With [configuration.sceneReconstruction = .mesh],
I turn all the mesh anchors into 3D models.
Can I filter these mesh anchors by confidence,
and add color data from the camera feed?
Or, starting from the MetalKit demo code, how could I convert the point cloud into a 3D model?
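On the confidence part: as far as I can tell, ARMeshAnchor geometry itself does not expose per-vertex confidence, but the scene-depth API does, so filtering can happen at the depth-map stage before points are fused into a mesh. A hedged sketch, assuming the configuration's frameSemantics includes .sceneDepth (the function name is mine):

```swift
import ARKit

// Hedged sketch: walk the per-pixel confidence map that accompanies
// ARFrame.sceneDepth and count only the high-confidence depth pixels.
// The same loop structure could be used to discard low-confidence points
// before unprojecting them into a point cloud.
func highConfidenceDepthCount(in frame: ARFrame) -> Int {
    guard let depth = frame.sceneDepth,
          let confidenceMap = depth.confidenceMap else { return 0 }
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }
    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap)?
        .assumingMemoryBound(to: UInt8.self) else { return 0 }
    var count = 0
    for y in 0..<height {
        for x in 0..<width {
            // Each byte holds an ARConfidenceLevel raw value (low/medium/high).
            if base[y * bytesPerRow + x] >= UInt8(ARConfidenceLevel.high.rawValue) {
                count += 1
            }
        }
    }
    return count
}
```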
Post not yet marked as solved
I tried to use a hand pose request with ARKit.
I do get a result as a VNRecognizedPointsObservation.
But when I try to get more detailed information,
like: let thumbPoint = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
[Segmentation fault: 11] keeps coming up.
Is this a bug, or am I making a mistake?
Post not yet marked as solved
Please give me an answer...
In my ARKit project, I have this code:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .up, options: [:])
    do {
        // Use a plain `try` inside do/catch; `try?` here silently swallows
        // errors and leaves the catch block unreachable.
        try handler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first as? VNRecognizedPointsObservation else { return }
        // recognizedPoints throws, so propagate with `try` instead of `try!`.
        let thumb = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
    } catch {
        print("Hand pose request failed: \(error)")
    }
}
At the last part, with let thumb..., segmentation fault 11 comes up.
I saw a tweet with a video of this running, and I've tried this for a week in every way I could think of, but I couldn't figure it out.
Please, please give me an answer 😭
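For what it's worth, with the released (non-beta) iOS 14 Vision API the request's results are VNHumanHandPoseObservation, and the per-finger accessor takes a joints group name rather than a landmark-region key. A hedged sketch (the function name and confidence threshold are mine):

```swift
import Vision

// Hedged sketch: with the release Vision API, hand pose results are
// VNHumanHandPoseObservation, and the thumb joints are fetched with the
// .thumb joints group. Calling the throwing accessor with `try` inside
// do/catch avoids crashing when the call fails.
func thumbTipLocation(from request: VNDetectHumanHandPoseRequest) -> CGPoint? {
    guard let observation = request.results?.first as? VNHumanHandPoseObservation else {
        return nil
    }
    do {
        let thumbPoints = try observation.recognizedPoints(.thumb)
        guard let tip = thumbPoints[.thumbTip], tip.confidence > 0.3 else { return nil }
        // Vision points are normalized with the origin at the bottom-left.
        return CGPoint(x: tip.location.x, y: 1 - tip.location.y)
    } catch {
        return nil
    }
}
```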
Post not yet marked as solved
As answered in a previous question:
[If you want to color the mesh based on the camera feed, you could do so manually, for example by unprojecting the pixels of the camera image into 3D space and coloring the corresponding mesh face with the pixel's color.]
Could you give some hint on how to solve this?
I'm not really familiar with this concept.
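One way to approach the same idea in reverse: instead of pushing camera pixels out into 3D, project each mesh-face center (a world-space point) into the camera image and sample the pixel there. A hedged sketch using ARCamera's projectPoint (the function name and the `capturedImageSize` parameter are mine):

```swift
import ARKit

// Hedged sketch: map a world-space point (e.g. the center of a mesh face)
// to a pixel coordinate in the captured camera image. With that coordinate
// you can read the color out of frame.capturedImage and assign it to the
// corresponding mesh face.
func imageCoordinate(of worldPoint: SIMD3<Float>,
                     in frame: ARFrame,
                     capturedImageSize: CGSize) -> CGPoint? {
    let projected = frame.camera.projectPoint(worldPoint,
                                              orientation: .landscapeRight,
                                              viewportSize: capturedImageSize)
    // Discard points that project outside the image bounds.
    guard projected.x >= 0, projected.x < capturedImageSize.width,
          projected.y >= 0, projected.y < capturedImageSize.height else { return nil }
    return projected
}
```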
Post not yet marked as solved
In your video and demo code, I see that hand landmarks are detected with a VNImageRequestHandler in the captureOutput function.
With what method could I draw the hand skeleton as shown in the video?
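One common approach is to connect pairs of joints with line segments on an overlay layer. A hedged sketch, assuming you already have the recognized points for one hand; the `bones` list here only covers the thumb chain for brevity:

```swift
import UIKit
import Vision

// Hedged sketch: draw line segments between recognized hand joints on an
// overlay CAShapeLayer. `points` maps joint names to normalized Vision
// points (origin bottom-left); `bones` is an illustrative list of joint
// pairs to connect.
func drawSkeleton(points: [VNHumanHandPoseObservation.JointName: VNRecognizedPoint],
                  on layer: CAShapeLayer,
                  in viewSize: CGSize) {
    let bones: [(VNHumanHandPoseObservation.JointName, VNHumanHandPoseObservation.JointName)] = [
        (.wrist, .thumbCMC), (.thumbCMC, .thumbMP), (.thumbMP, .thumbIP), (.thumbIP, .thumbTip)
    ]
    let path = UIBezierPath()
    for (a, b) in bones {
        guard let pa = points[a], let pb = points[b],
              pa.confidence > 0.3, pb.confidence > 0.3 else { continue }
        // Flip y: Vision's origin is bottom-left, UIKit's is top-left.
        path.move(to: CGPoint(x: pa.location.x * viewSize.width,
                              y: (1 - pa.location.y) * viewSize.height))
        path.addLine(to: CGPoint(x: pb.location.x * viewSize.width,
                                 y: (1 - pb.location.y) * viewSize.height))
    }
    layer.path = path.cgPath
}
```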
Post not yet marked as solved
In an ARKit project, in the function
func session(_ session: ARSession, didUpdate frame: ARFrame)
I try to get
guard let observation = handPoseRequest.results?.first as? VNRecognizedPointsObservation else { return }
and then
let thumb = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
and segmentation fault 11 pops up.
Is this a bug, or did I make a mistake?
Post not yet marked as solved
I've trained my GAN PyTorch model, converted it to ONNX, and then to an mlmodel, but the mlmodel's input/output type is MLMultiArray. How can I change this into Image? Which stage (PyTorch, ONNX, or mlmodel) should I retouch?
Post not yet marked as solved
I've converted a pix2pix model whose input/output is Image (256x256). But the model is normalized to the range [-1, 1], and the output image shows nothing but black. Does anyone know how to convert this image to [0, 255]?
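The arithmetic itself is just an affine rescale, which could either be baked into the model at conversion time or applied on-device when reading the output. A minimal on-device sketch for a single value (the function name is mine):

```swift
// Minimal sketch: map a pix2pix output value in [-1, 1] to an 8-bit pixel
// value in [0, 255], clamping anything that falls slightly outside the range.
func denormalize(_ value: Float) -> UInt8 {
    let scaled = (value + 1) * 127.5          // [-1, 1] -> [0, 255]
    return UInt8(min(255, max(0, scaled.rounded())))
}
```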