Posts

Post not yet marked as solved
1 Reply
530 Views
Multi-selection of photos usually preserves either the order in which the user picked them or the chronological order of the photos. With PHPicker I have not yet found an enum or similar option to define the order of photos while iterating through the results. The PhotoKit limited-library picker, by its nature, does not seem to allow it: we have to assume a "random" order in photoLibraryDidChange and always add a second selection step to define the order.
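For reference, a minimal sketch of iterating PHPicker results in the delegate callback. Whether the `results` array reflects the user's tap order is exactly the open question here, so the index is treated as an assumption, not a guarantee:

```swift
import UIKit
import PhotosUI

extension ViewController: PHPickerViewControllerDelegate {
    func picker(_ picker: PHPickerViewController,
                didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)
        for (index, result) in results.enumerated() {
            guard result.itemProvider.canLoadObject(ofClass: UIImage.self) else { continue }
            result.itemProvider.loadObject(ofClass: UIImage.self) { object, _ in
                guard let image = object as? UIImage else { return }
                // `index` is the position in `results`; it may or may not
                // match the order in which the user selected the photos.
                print("Loaded image \(index): \(image.size)")
            }
        }
    }
}
```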
Posted by _mc.
Post not yet marked as solved
3 Replies
629 Views
Let's take the example of an app that has photo organization as an important component of its functionality, and let's also imagine that photo metadata is crucial for the app. In this situation it makes sense to request permission to access the camera roll. PHPicker and the PhotoKit limited-library picker (if that is the correct name) have quite appealing filters that can be very useful for the user. If the user grants access to the camera roll, life is beautiful and we can use PHPicker + PhotoKit to obtain PHAssets. If the user opts for limited access mode, however, we can end up with crazy flows. In that case we have to call the limited-library picker to request access to the photos (plus metadata) the user wants to select. Imagine now that in one session the user wants to do something around people: we can call presentLimitedLibraryPicker in a place that makes sense for the user. In a second session the user wants to do something with photos of a specific trip, so we call presentLimitedLibraryPicker again to request access to the desired photos. Some of those photos may already be selected; say 2 of the 30 previously selected photos belong to that trip. To know the new selection, the one currently in the user's mind, we have to handle photoLibraryDidChange. The problem is that those 2 photos did not change, which means we'll have to build a second custom "picker" for the "true" selection. For the user this becomes a selection of a selection... Are we missing something, or will we be forced to do something like this?
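A minimal sketch of the flow in question, assuming limited-access mode. Note how photoLibraryDidChange only surfaces the newly authorized assets, which is the gap described above:

```swift
import UIKit
import Photos
import PhotosUI

final class LimitedLibraryFlow: NSObject, PHPhotoLibraryChangeObserver {
    private var fetchResult: PHFetchResult<PHAsset>?

    func start(from viewController: UIViewController) {
        PHPhotoLibrary.shared().register(self)
        fetchResult = PHAsset.fetchAssets(with: nil)
        // Re-present the system sheet so the user can extend the
        // limited selection for this session.
        PHPhotoLibrary.shared().presentLimitedLibraryPicker(from: viewController)
    }

    func photoLibraryDidChange(_ changeInstance: PHChange) {
        guard let fetchResult = fetchResult,
              let changes = changeInstance.changeDetails(for: fetchResult) else { return }
        // Only newly authorized assets appear as insertions; assets that
        // were already in the limited selection produce no change, so the
        // "true" per-session selection cannot be recovered from here alone.
        print("Newly authorized assets: \(changes.insertedObjects.count)")
        self.fetchResult = changes.fetchResultAfterChanges
    }
}
```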
Posted by _mc.
Post not yet marked as solved
4 Replies
308 Views
Does it give access to the albums created automatically for People? I'm not able to see them in the simulator, although the search area suggests they are available. And what about the Memories available in the Photos app (Months tab)? If the user wants to use a trip to Paris as the entry point for the selection, is that possible?
Posted by _mc.
Post not yet marked as solved
1 Reply
624 Views
validateComputeFunctionArguments:817: failed assertion `Compute Function(mainMetalEntryPoint): Non-writeable texture format MTLPixelFormatBGRA8Unorm_sRGB is being bound at index 0 to a shader argument with write access enabled.'

CIImage *ciImage = [[CIImage alloc] initWithImage:img];
// Note: casting UIImageOrientation to CGImagePropertyOrientation is not a
// value-for-value mapping; an explicit conversion is safer.
VNImageRequestHandler *handler =
    [[VNImageRequestHandler alloc] initWithCIImage:ciImage
                                       orientation:(CGImagePropertyOrientation)img.imageOrientation
                                           options:@{}];
VNDetectFaceRectanglesRequest *request =
    [[VNDetectFaceRectanglesRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        // ...
    }];
NSError *error;
// This call is throwing the assertion above:
[handler performRequests:@[request] error:&error];

This works fine in iOS 12.
Posted by _mc.
Post not yet marked as solved
4 Replies
1.9k Views
We have a classification model trained with Create ML that is throwing NSInvalidArgumentException: [__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0] in MLClassifier predictionFromFeatures. This happens within the stack of a call to the auto-generated class's prediction method, model.prediction(image: cvPxbuffer). Any idea what can cause this? A Google search is not helping on this one.
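One possible cause, offered as an assumption rather than a diagnosis: crashes like this can happen when the pixel buffer passed to prediction is nil or does not match the model's expected image input. A hypothetical sketch of rendering a UIImage into a 32BGRA buffer of a fixed size (299 x 299 is an assumed model input, not from the original post):

```swift
import UIKit
import CoreVideo

func makePixelBuffer(from image: UIImage,
                     width: Int = 299, height: Int = 299) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA,
                              attrs as CFDictionary, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Draw the image into the buffer's memory at the model's input size.
    guard let cgImage = image.cgImage,
          let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue |
                                              CGBitmapInfo.byteOrder32Little.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}
```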
Posted by _mc.
Post not yet marked as solved
3 Replies
1k Views
I have a model with a custom layer backed by a Metal shader. I have the appropriate "@available(iOS 11.2, *)" on the class and do not run it on devices with an OS below 11.2.

Model: Input - Image (Color ...); Output - MultiArray (Double ...)

Facts:
1 - If I set the deployment target below 11.2, the model returns NaN in all entries of the output. On the other hand, if I target 11.2 and run on 12.1, it works as expected.
2 - Targeting below 11.2 has an even weirder behaviour. After the app is uninstalled and reinstalled, it sometimes returns the correct set of values exactly once, and after that all predictions are NaN. You can kill the app and reopen it and the model will still return NaN in all output entries.
3 - If I remove the "encode" method from the MLCustomLayer, or force it to run on the CPU, it runs just fine.

This looks like a bug. Manuel
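For clarity, a minimal sketch of the CPU fallback mentioned in fact 3, using MLPredictionOptions to keep the custom layer's Metal encode path out of the picture:

```swift
import CoreML

// Force the whole prediction (including the MLCustomLayer) onto the CPU,
// bypassing the encode(to:) Metal path that produces the NaNs.
func predictOnCPU(model: MLModel, input: MLFeatureProvider) throws -> MLFeatureProvider {
    let options = MLPredictionOptions()
    options.usesCPUOnly = true
    return try model.prediction(from: input, options: options)
}
```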
Posted by _mc.
Post not yet marked as solved
0 Replies
472 Views
Hi guys, with the new reality brought by the A12 (Neural Engine), is it possible to execute models in the background if they do not use the GPU? If so, how do we know whether the GPU will be used or not? I think models with custom layers are forced, or partially forced, onto the GPU. If that is true and the Neural Engine is allowed in the background, a possible path is to check whether an A12 is present and the model has no custom layers before forcing it to the CPU. Is there an official link about this? I have not found one yet. Cheers
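A sketch of the control that is available, assuming iOS 12's MLModelConfiguration: you can restrict which compute units a model may use, which at least guarantees the GPU is never touched (whether the system then permits background execution is the unanswered part of the question):

```swift
import CoreML

@available(iOS 12.0, *)
func loadModelAvoidingGPU(at url: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    // .cpuOnly guarantees no GPU work; .all lets Core ML choose
    // among CPU, GPU, and (on A12) the Neural Engine.
    config.computeUnits = .cpuOnly
    return try MLModel(contentsOf: url, configuration: config)
}
```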
Posted by _mc.
Post marked as solved
2 Replies
1.5k Views
Hi guys, just to share a concern I personally realised an hour ago through the following article: "www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html". I have to say I was surprised by how easy it is to force classification models to make mistakes. In iOS 12 there is the option to do classification with most of the model available on the operating system, which makes the task of crafting fake images that produce wrong classifications easier. Even if Apple takes this into account in the training process to minimize this type of attack, we should all be aware of it when designing apps based on deep learning in general and CNNs in particular. Cheers, Manuel
Posted by _mc.
Post not yet marked as solved
1 Reply
878 Views
CIDetector *detector = ...;
[detector featuresInImage:ciImage];

Although the face information that is supposed to be returned by this method seems to be working, we see the following error in the console:

LandmarkDetector error -20:out of bounds in int vision::mod::LandmarkAttributes::computeBlinkFunction(const vImage_Buffer &, const Geometry2D_rect2D &, const std::vector<Geometry2D_point2D> &, vImage_Buffer &, vImage_Buffer &, std::vector<float> &, std::vector<float> &) @ /BuildRoot/Library/Caches/com.apple.xbs/Sources/Vision/Vision-2.0.49/LandmarkDetector/LandmarkDetector_Attributes.mm:535

It seems that CIDetector is now implemented on top of the Vision framework. I'm not sure whether this was already the case in iOS 11, but something does not seem right with the landmarks.
Posted by _mc.
Post not yet marked as solved
2 Replies
619 Views
First of all I have to thank you guys for your clever solution for sharing bytes between models. Although it is not new, it is super clever in the mobile ecosystem! I'm confused about compatibility with iOS 11, though: a model that uses knowledge transfer requires a compatible model for the initial layers, and to execute it on iOS 11 that capability has to be implemented already. Some clarification on this would be appreciated so we can define a compatibility strategy. Cheers
Posted by _mc.
Post not yet marked as solved
4 Replies
1.4k Views
Hi, we are executing an MLModel prediction on the GPU on a background thread. While the model is executing, something seems to happen on the main thread, or at least something happens that affects the main thread for a short moment. This hits the UI with some annoying small unresponsive moments. Any suggestion on how to overcome or mitigate this?
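For context, a minimal sketch of the setup described above. Note that dispatching off the main queue alone does not prevent GPU work from contending with UI rendering; lowering the queue's QoS (or forcing CPU-only prediction) are the obvious knobs to try:

```swift
import CoreML
import Dispatch

// A low-QoS serial queue so prediction work yields to UI-critical tasks.
let predictionQueue = DispatchQueue(label: "prediction", qos: .utility)

func predictAsync(model: MLModel, input: MLFeatureProvider,
                  completion: @escaping (MLFeatureProvider?) -> Void) {
    predictionQueue.async {
        let output = try? model.prediction(from: input)
        DispatchQueue.main.async { completion(output) }
    }
}
```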
Posted by _mc.
Post not yet marked as solved
8 Replies
1.5k Views
We’ve been trying several publicly available style transfer models on iOS, both with Metal Performance Shaders CNN and with mlmodelzoo Core ML models, and we are not happy with their performance yet. For MPSCNN we use 512 x 512 output images, and for mlmodelzoo's models we have to use 480 x 640. On an iPhone 6 the best MPSCNN run gave us roughly 0.8 s / image, and mlmodelzoo's gave us nearly 5 s / image. Note that this does not mean Core ML is slower; the model used was the key. Looking at some well-known apps out there, we saw that Apple Clips has two filters that really run in real time! **** real time! Our best results seem close to the Facebook app (which also has a few style transfer filters). Any idea what models Apple Clips uses?
Posted by _mc.