[tags:machine learning,vision]

100 results found

Post not yet marked as solved
0 Replies
415 Views
Hello, I want to use Vision for text recognition. However, as soon as I fire my VNRecognizeTextRequest I get the following error on my device:

Resource initialization failed: could not open file /System/Library/LinguisticData/RequiredAssets_en.bundle/AssetData/en.lm/variants.dat

I have no clue where this error is coming from or how to fix it, since it seems some files are missing. I tested this on two different phones running iOS 13.5, and both gave me the same result. Below is my code:

let textRecognitionRequest = VNRecognizeTextRequest { (request, error) in
    guard let observations = request.results as? [VNRecognizedTextObservation] else {
        print("The observations are of an unexpected type.")
        return
    }
    let maximumCandidates = 1
    for observation in observations {
        guard let candidate = observation.topCandidates(maximumCandidates).first else { continue }
        tmp += candidate.string + "\n"
    }
    print("hello 2")
}
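For reference, a minimal sketch of how such a request is typically executed with a VNImageRequestHandler. This is not from the post above; the function name and the cgImage input are assumptions for illustration.

import Vision

// Hypothetical helper: runs text recognition on a CGImage and prints the result.
func recognizeText(in cgImage: CGImage) {
    var recognizedText = ""
    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            guard let candidate = observation.topCandidates(1).first else { continue }
            recognizedText += candidate.string + "\n"
        }
    }
    // Optional tuning; .accurate uses the on-device language-model assets.
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])   // the completion handler runs synchronously here
        print(recognizedText)
    } catch {
        print("Text recognition failed: \(error)")
    }
}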
Post not yet marked as solved
1 Replies
713 Views
Hi,
1) I checked the page below for ML models, but I didn't find a Core ML model for HandPose at this link: https://developer.apple.com/machine-learning/models/ Can you provide a link for the Core ML model?
2) And can we use this with AR? I want to create an app where we can attach a 3D OBJ to fingers.
Thanks, Abhishek
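For what it's worth, hand pose detection is exposed through Vision's VNDetectHumanHandPoseRequest rather than as a downloadable Core ML model, and it can be fed ARKit camera frames. A minimal sketch; the function name, the fixed .right orientation, and the way the frame is obtained are assumptions for illustration.

import ARKit
import Vision

// Hypothetical helper: runs Vision's hand-pose request on an ARKit camera frame.
func detectHandPose(in frame: ARFrame) {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right,   // adjust for the device/UI orientation
                                        options: [:])
    do {
        try handler.perform([request])
        guard let observation = request.results?.first else { return }
        // Normalized image-space locations of the thumb joints; these could be
        // projected into the AR scene to place 3D content on the fingers.
        let thumbPoints = try observation.recognizedPoints(.thumb)
        print(thumbPoints)
    } catch {
        print("Hand pose detection failed: \(error)")
    }
}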
Post marked as Apple Recommended
Vision can detect poses such as push-ups and sit-ups, although I encourage you to experiment with your own dataset, as many factors can affect the quality of detection. Check out Build an Action Classifier with Create ML - https://developer.apple.com/videos/play/wwdc2020/10043/ for good practices on capturing training data, and Detect Body and Hand Pose with Vision - https://developer.apple.com/videos/play/wwdc2020/10653/ for details on using Vision to detect poses.
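As a rough illustration of the Vision half of that pipeline (the pixelBuffer input and function name are assumptions): body-pose keypoints are extracted per frame, and a Create ML action classifier would then consume a window of those keypoints to label the action.

import Vision

// Sketch: extract body-pose joints from one camera frame.
func detectBodyPose(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    do {
        try handler.perform([request])
        guard let observation = request.results?.first else { return }
        // Normalized (0...1) locations of all detected joints for this frame.
        let joints = try observation.recognizedPoints(.all)
        print(joints.count, "joints detected")
    } catch {
        print("Body pose detection failed: \(error)")
    }
}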
Post not yet marked as solved
3 Replies
Did you get any further on this? I am interested in using optical flow to interpolate between two images. I got the basics to work and got a VNPixelBufferObservation back. As I understand it, it is supposed to contain two float values for each pixel, describing how that pixel moves. But while I can print out the values, they don't make much sense to me. Did you get it to work? Sten
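For context, a sketch of how the flow buffer can be read. This is not from the thread; the frame variables are assumptions, and the exact direction convention should be checked against the documentation.

import Vision

// Generate optical flow between two frames and sample the displacement of the
// center pixel. By default the observation's pixelBuffer is a two-component
// 32-bit float buffer: one (dx, dy) pair per pixel, measured in pixels.
func opticalFlow(from previousFrame: CVPixelBuffer, to currentFrame: CVPixelBuffer) {
    let request = VNGenerateOpticalFlowRequest(targetedCVPixelBuffer: currentFrame, options: [:])
    let handler = VNImageRequestHandler(cvPixelBuffer: previousFrame, options: [:])
    do {
        try handler.perform([request])
        guard let observation = request.results?.first as? VNPixelBufferObservation else { return }
        let flow = observation.pixelBuffer

        CVPixelBufferLockBaseAddress(flow, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(flow, .readOnly) }
        let width = CVPixelBufferGetWidth(flow)
        let height = CVPixelBufferGetHeight(flow)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(flow)
        guard let base = CVPixelBufferGetBaseAddress(flow) else { return }

        // Row pointer for the middle row, then read the (dx, dy) pair of the middle pixel.
        let row = base.advanced(by: (height / 2) * bytesPerRow)
                      .assumingMemoryBound(to: Float32.self)
        let dx = row[(width / 2) * 2]
        let dy = row[(width / 2) * 2 + 1]
        print("center pixel displacement:", dx, dy)
    } catch {
        print("Optical flow failed: \(error)")
    }
}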
Post not yet marked as solved
5 Replies
A little more info: the simulator shows "The VNCoreMLTransform request failed" during the detectBoard() routine in SetupViewController.swift, with an NSUnderlyingError in domain com.apple.CoreML, code = 0. Any clues what to do about it? This is with the demo project unchanged, other than the bundle ID and team being assigned.
Post not yet marked as solved
5 Replies
I have the same issue on a real device (iPhone 11 Pro on iOS 14 beta 4) with the following error:

Error Domain=com.apple.vis Code=12 "processing with VNANERuntimeProcessingDevice is not supported" UserInfo={NSLocalizedDescription=processing with VNANERuntimeProcessingDevice is not supported}

Environment:
Big Sur 11.0 beta (20A5343i)
Xcode 12.0 beta (12A8179i)
iPhone 11 Pro on iOS 14 beta 4
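A hedged workaround sketch, not confirmed by this thread: if the failure is specific to the Neural Engine path, forcing the request onto the CPU may sidestep it. The request parameter stands for whichever Vision request is failing.

import Vision

// Assumption: running the failing request CPU-only avoids the VNANERuntimeProcessingDevice path.
func performOnCPU(_ request: VNRequest, on pixelBuffer: CVPixelBuffer) throws {
    request.usesCPUOnly = true   // available on iOS 14; deprecated in later SDKs
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}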
Post marked as solved
5 Replies
967 Views
Trying to run the demo project for wwdc20-10099 in the simulator using the supplied sample.mov, the app shows the "Locating board" overlay the entire time instead of finding the board before the bean bags start to be tossed. Is this due to the environment? Has anybody gotten the demo video to work?

Environment:
2018 Mac mini, 3.0 GHz, 6-core, 8 GB memory
Big Sur 11.0 beta (20A4299v)
Xcode 12.0 beta (12A6159)
Default iOS simulator in Xcode 12 (an iPhone SE on iOS 14)
Post not yet marked as solved
1 Replies
I'd say we're not the only ones... fingers crossed.
Post not yet marked as solved
1 Replies
909 Views
I tried to use a hand pose request with ARKit. It does return a result as a VNRecognizedPointsObservation. But when I try to get the detailed information, like:

let thumbPoint = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)

[Segmentation fault: 11] keeps coming up. Is this a bug, or am I making some mistake?
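For comparison, a sketch using the typed accessor that the shipping iOS 14 API exposes on VNHumanHandPoseObservation. The function name and confidence threshold are assumptions; the observation is assumed to come from a VNDetectHumanHandPoseRequest.

import CoreGraphics
import Vision

// Read the thumb-tip location from a hand-pose observation.
func thumbTipLocation(from observation: VNHumanHandPoseObservation) -> CGPoint? {
    do {
        let thumbPoints = try observation.recognizedPoints(.thumb)
        guard let tip = thumbPoints[.thumbTip], tip.confidence > 0.3 else { return nil }
        // Vision returns normalized coordinates with the origin at the lower left;
        // flip y when converting to UIKit coordinates.
        return CGPoint(x: tip.location.x, y: 1 - tip.location.y)
    } catch {
        print("Failed to read thumb points: \(error)")
        return nil
    }
}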
Post not yet marked as solved
1 Replies
816 Views
The demo "Classifying Images with Vision and Core ML" is crashing: the demo app Classifying Images with Vision and Core ML - https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml has become outdated, requiring conversion to the newest Swift version (with the converter not working correctly). Furthermore, the demo app crashes, raising this exception (right at the AppDelegate declaration):

Thread 1: Your application has presented a UIAlertController () of style UIAlertControllerStyleActionSheet from Vision_ML_Example.ImageClassificationViewController (). The modalPresentationStyle of a UIAlertController with this style is UIModalPresentationPopover. You must provide location information for this popover through the alert controller's popoverPresentationController. You must provide either a sourceView and sourceRect or a barButtonItem. If this information is not known when you present the alert controller, you may provide it in the UIPopoverPresentationControllerDelegate met…
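The exception itself points at the usual fix: on iPad an action sheet is presented as a popover and needs an anchor before it is shown. A sketch; the function, action titles, and anchor view are assumptions, not the sample's actual identifiers.

import UIKit

// Configure the popover anchor before presenting an action sheet so UIKit
// knows where to draw it on iPad.
func presentPhotoSourcePicker(from viewController: UIViewController, anchorView: UIView) {
    let alert = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
    alert.addAction(UIAlertAction(title: "Camera", style: .default, handler: nil))
    alert.addAction(UIAlertAction(title: "Photo Library", style: .default, handler: nil))
    alert.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))

    if let popover = alert.popoverPresentationController {
        popover.sourceView = anchorView
        popover.sourceRect = anchorView.bounds
    }
    viewController.present(alert, animated: true)
}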
Post marked as solved
2 Replies
I have already found the answer: the method is called automatically, once for every new video frame received via the delegate.
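To illustrate the per-frame delegate pattern being described (the class name and the particular Vision request are assumptions):

import AVFoundation
import Vision

// captureOutput(_:didOutput:from:) is invoked by AVFoundation for every camera
// frame, and the Vision request is performed inside it.
final class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let request = VNDetectHumanHandPoseRequest()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
        // request.results now holds the observations for this frame.
    }
}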
Post not yet marked as solved
1 Replies
Here are all the sounds supported in the beta version of Xcode. list of sounds