Post not yet marked as solved
0   libobjc.A.dylib        objc_retain + 8
1   ContextKitExtraction   +[CKContextContentProviderUIScene _bestVisibleStringForView:usingExecutor:] + 1320
2   ContextKitExtraction   +[CKContextContentProviderUIScene _donateContentsOfWindow:usingExecutor:withOptions:] + 608
3   ContextKitExtraction   __78+[CKContextContentProviderUIScene extractFromScene:usingExecutor:withOptions:]_block_invoke + 72
4   ContextKitExtraction   __64-[CKContextExecutor addWorkItemToQueue:withWorkItem:andContext:]_block_invoke + 76
5   libdispatch.dylib      _dispatch_call_block_and_release + 32
6   libdispatch.dylib      _dispatch_client_callout + 20
7   libdispatch.dylib      _dispatch_main_queue_drain + 1020
8   libdispatch.dylib      _dispatch_main_queue_callback_4CF + 44
9   CoreFoundation         CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE + 16
10  CoreFoundation         __CFRunLoopRun + 2532
11  CoreFoundation         CFRunLoopRunSpecific + 600
12  GraphicsServices       GSEventRunModal + 164
13  UIKitCore              -[UIApplication _run] + 1100
14  UIKitCore              UIApplicationMain + 364
15  PDFReaderPro Free      main + 31 (main.m:31)
Post not yet marked as solved
Hello, is there a possibility to use the action classifier in Create ML to create a fitness app that can recognize the action and give correction feedback to the user by using the recognized keypoints?
Maybe take 3 keypoints as an angle and give feedback? How can I access those joints in Xcode?
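One way to sketch the three-keypoints-as-an-angle idea: given three recognized joints (say shoulder, elbow, and wrist), the elbow angle is the angle between the two limb vectors meeting at the elbow. A minimal sketch in plain Swift, assuming the joint positions have already been extracted as (x, y) coordinates (the `Joint` type and joint names here are illustrative, not part of any Apple API):

```swift
import Foundation

/// A 2D joint position, e.g. taken from a body-pose observation.
struct Joint {
    let x: Double
    let y: Double
}

/// Angle (in degrees) at `vertex`, formed by the segments vertex->a and vertex->b.
func angle(at vertex: Joint, from a: Joint, to b: Joint) -> Double {
    let v1 = (x: a.x - vertex.x, y: a.y - vertex.y)
    let v2 = (x: b.x - vertex.x, y: b.y - vertex.y)
    let dot = v1.x * v2.x + v1.y * v2.y
    let mag = sqrt(v1.x * v1.x + v1.y * v1.y) * sqrt(v2.x * v2.x + v2.y * v2.y)
    return acos(dot / mag) * 180.0 / .pi
}

// Example: shoulder directly above the elbow, wrist out to the side
// gives a right angle at the elbow.
let shoulder = Joint(x: 0, y: 1)
let elbow    = Joint(x: 0, y: 0)
let wrist    = Joint(x: 1, y: 0)
print(Int(angle(at: elbow, from: shoulder, to: wrist).rounded()))  // 90
```

Feedback could then be as simple as comparing the measured angle against a target range for the exercise.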
Hi, I have been following the WWDC21 "dynamic training on iOS" session. I have been able to get the training working, with the iterations etc. printed to the console as training progresses.
However, I am unable to retrieve the checkpoints or the result/model once training has completed (or while it is in progress); nothing in the callback fires.
If I try to create a model from the sessionDirectory, it returns nil (even though training has clearly completed).
Can someone please help or provide pointers on how to access the results/checkpoints so that I can make an MLModel and use it?
var subscriptions = [AnyCancellable]()

let job = try! MLStyleTransfer.train(trainingData: datasource,
                                     parameters: trainingParameters,
                                     sessionParameters: sessionParameters)

job.result.sink { result in
    print("result ", result)
} receiveValue: { model in
    try? model.write(to: sessionDirectory)
    let compiledURL = try? MLModel.compileModel(at: sessionDirectory)
    let mlModel = try? MLModel(contentsOf: compiledURL!)
}
.store(in: &subscriptions)

This also does not work:

job.checkpoints.sink { checkpoint in
    // Process checkpoint
    let model = MLStyleTransfer(trainingData: checkpoint)
}
.store(in: &subscriptions)
This is the printout in the console:
Using CPU to create model
+--------------+--------------+--------------+--------------+--------------+
| Iteration | Total Loss | Style Loss | Content Loss | Elapsed Time |
+--------------+--------------+--------------+--------------+--------------+
| 1 | 64.9218 | 54.9499 | 9.97187 | 3.92s |
2022-02-20 15:14:37.056251+0000 DynamicStyle[81737:9175431] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
| 2 | 61.7283 | 24.6832 | 8.30343 | 9.87s |
| 3 | 59.5098 | 27.7834 | 11.7603 | 16.19s |
| 4 | 56.2737 | 16.163 | 10.985 | 22.35s |
| 5 | 53.0747 | 12.2062 | 12.0783 | 28.08s |
+--------------+--------------+--------------+--------------+--------------+
Any help on how to retrieve the models would be appreciated.
Thanks
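One thing worth checking (an assumption about the cause, not a confirmed fix): write(to:) and MLModel.compileModel(at:) operate on a file URL pointing at a concrete .mlmodel file, whereas the code above passes the session directory itself. A minimal sketch of building such a file URL, with the directory path and file name chosen arbitrarily here:

```swift
import Foundation

// Hypothetical session directory, standing in for the one in the code above.
let sessionDirectory = URL(fileURLWithPath: "/tmp/StyleTransferSession",
                           isDirectory: true)

// Write the trained model to a concrete .mlmodel file inside the session
// directory, then compile *that file* rather than the directory itself:
let modelURL = sessionDirectory.appendingPathComponent("StyleTransfer.mlmodel")
print(modelURL.path)  // /tmp/StyleTransferSession/StyleTransfer.mlmodel

// In the real sink callback this would become (sketch only):
// try model.write(to: modelURL)
// let compiledURL = try MLModel.compileModel(at: modelURL)
// let mlModel = try MLModel(contentsOf: compiledURL)
```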
Post not yet marked as solved
I followed Apple's guidance in their articles Creating an Action Classifier Model, Gathering Training Videos for an Action Classifier, and Building an Action Classifier Data Source. With this Core ML model file now imported in Xcode, how do I use it to classify video frames?
For each video frame I call
do {
let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
try requestHandler.perform([self.detectHumanBodyPoseRequest])
} catch {
print("Unable to perform the request: \(error.localizedDescription).")
}
But it's unclear to me how to use the results of the VNDetectHumanBodyPoseRequest, which come back as the optional array [VNHumanBodyPoseObservation]?. How would I feed the results into my custom classifier, which has an automatically generated model class, TennisActionClassifier.swift? The classifier makes predictions on the frame's body poses, labeling the actions as either playing a rally/point or not playing.
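A hedged sketch of the usual pattern, not the confirmed generated API: an action classifier consumes a fixed-length window of poses rather than a single frame, so each observation's keypoints (obtainable in Vision via the observation's keypointsMultiArray()) are appended to a buffer, and a prediction is only made once the window is full. The buffering itself can be shown in plain Swift; the window length and the [Double] element type are illustrative stand-ins for the real per-frame MLMultiArray:

```swift
import Foundation

/// Collects per-frame pose arrays into fixed-length windows for a classifier.
struct PoseWindow {
    let windowSize: Int
    private var frames: [[Double]] = []

    init(windowSize: Int) { self.windowSize = windowSize }

    /// Append one frame's keypoints; returns a full window when one is ready.
    mutating func append(_ keypoints: [Double]) -> [[Double]]? {
        frames.append(keypoints)
        guard frames.count == windowSize else { return nil }
        let window = frames
        frames.removeAll()
        return window
    }
}

// Example with a 3-frame window (real classifiers typically use 30-60 frames).
var window = PoseWindow(windowSize: 3)
print(window.append([0.1, 0.2]) == nil)       // true: window not yet full
print(window.append([0.3, 0.4]) == nil)       // true
print(window.append([0.5, 0.6])?.count ?? 0)  // 3: a full window is ready
```

Once a window is full, the per-frame arrays would be merged into one input (e.g. with MLMultiArray's concatenating initializer) and passed to the generated classifier's prediction method.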
Post not yet marked as solved
In a section of my app I would like to recommend restaurants to users based on certain parameters. Some parameters have a higher weighting than others. In this WWDC video a very similar app is made.
Here, if a user likes a dish, a value of 1.0 is assigned to the specific keywords that apply to the dish, and a value of -1.0 to all other keywords that don't apply.
For my app, if the user has ordered, I apply a value of 1.0 to the applicable keywords. But if a user has only expressed interest (without ordering), can I apply a value of 0.5 (and -0.5) instead? Would the model adapt to this?
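A sketch of that weighting idea (the enum, weights, and function are illustrative assumptions, not part of any recommender API): map each interaction type to a score, then emit the positive score for matching keywords and its negation for the rest, mirroring the 1.0/-1.0 scheme from the session:

```swift
import Foundation

enum Interaction {
    case ordered        // strong signal
    case interested     // weaker signal

    var weight: Double {
        switch self {
        case .ordered:    return 1.0
        case .interested: return 0.5
        }
    }
}

/// Score every keyword: +weight if it applies to the item, -weight otherwise.
func keywordScores(allKeywords: [String],
                   itemKeywords: Set<String>,
                   interaction: Interaction) -> [String: Double] {
    var scores: [String: Double] = [:]
    for keyword in allKeywords {
        scores[keyword] = itemKeywords.contains(keyword)
            ? interaction.weight
            : -interaction.weight
    }
    return scores
}

let scores = keywordScores(allKeywords: ["spicy", "vegan", "italian"],
                           itemKeywords: ["spicy"],
                           interaction: .interested)
print(scores["spicy"]!, scores["vegan"]!)  // 0.5 -0.5
```

Whether a graded scale like this actually improves the trained model over a binary one is something to validate empirically on held-out data.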
Post not yet marked as solved
Hi,
Is it possible to get the code for the demo app used in this presentation, for the dynamic style transfer example, please?
Thanks
Post not yet marked as solved
I have a problem when viewing my app on an iPad: it distorts the image, that is, the sizing is wrong. The app looks good on the iPhone. Thank you for your support.
Post not yet marked as solved
Hi Team,
We are seeing a strange issue on an iPad device with iOS v14.6 where our backend API is not getting invoked. We have verified the same API-calling code on multiple iPhone devices and simulators with iOS versions ranging from v13.0 to v14.6, and on iPad simulators with iOS versions ranging from v14.0 to v14.5. It seems to work fine on all of them except the real iPad device on v14.6. Any help here is really appreciated.
Post not yet marked as solved
Fatal Exception: NSGenericException
*** Collection <__NSArrayM: 0x28313ce10> was mutated while being enumerated.
-[CAMPriorityNotificationCenter _removeObserver:fromObserversByName:]
Fatal Exception: NSGenericException
0 CoreFoundation 0x1878e8298 __exceptionPreprocess
1 libobjc.A.dylib 0x19b642480 objc_exception_throw
2 CoreFoundation 0x1878e7c5c -[__NSSingleObjectEnumerator initWithObject:collection:]
3 CameraUI 0x1affb7e7c -[CAMPriorityNotificationCenter _removeObserver:fromObserversByName:]
4 CameraUI 0x1affb81a8 -[CAMPriorityNotificationCenter removeObserver:]
5 CameraUI 0x1b0066510 -[CAMCaptureEngine dealloc]
6 libsystem_blocks.dylib 0x1cfcbc784 _Block_release
7 libdispatch.dylib 0x18750a290 _dispatch_source_handler_dispose
8 libdispatch.dylib 0x18750935c _dispatch_source_invoke$VARIANT$armv81
9 libdispatch.dylib 0x1874fc210 _dispatch_lane_serial_drain$VARIANT$armv81
10 libdispatch.dylib 0x1874fce2c _dispatch_lane_invoke$VARIANT$armv81
11 libdispatch.dylib 0x18750666c _dispatch_workloop_worker_thread
12 libsystem_pthread.dylib 0x1cfd355bc _pthread_wqthread
13 libsystem_pthread.dylib 0x1cfd3886c start_wqthread
Crashed: com.google.firebase.crashlytics.ios.exception
SIGABRT ABORT 0x00000001b36927b0
FIRCLSProcessRecordAllThreads
Post not yet marked as solved
The Arabic fonts on iOS 15 are very bad :(
Many Arabs did not like them.
I hope the developers bring back the iOS 14 Arabic fonts.