Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post | Replies | Boosts | Views | Activity

Guidance Implementing IndexedEntity and CSSearchableItemAttributeSet
I am working to add Spotlight indexing for my app entities as discussed in WWDC24's session "What's New in App Intents". That video covers the IndexedEntity protocol and its integration with Spotlight via CSSearchableItemAttributeSet. What I'm seeing, though, does not match the video. In the video, the presenter takes a progressive approach to getting this data into Spotlight, starting with the basics and then expanding depending on how much the developer wants to do. What I'm seeing is that if you conform to IndexedEntity, your entities appear in Spotlight using the name derived from public var displayRepresentation: DisplayRepresentation. So that works: the name appears. BUT the next part of the video shows how to expand your implementation with more metadata for Spotlight via CSSearchableItemAttributeSet. The issue is that once that's implemented, the items disappear from Spotlight, almost as if that implementation overrides the base implementation in a way that no longer functions. My expectation is that an item with custom attributes would use them in Spotlight as appropriate, not disappear from search, i.e. what's shown in the video should work. I've got a sample project here: https://hanchor.s3.amazonaws.com/misc/IndexingTest.zip
To reproduce with the sample:
1. Build and run. Indexing is set up in the init() method, so it will just run.
2. Go to Spotlight and search for 'Huntersblau', a string included in the content set. At this point you should see a result - good!
3. Stop the app, go back, and uncomment the var attributeSet: CSSearchableItemAttributeSet implementation in IndexingTestApp.swift. This provides custom attributes to Spotlight.
4. Repeat steps 1 and 2 - the item no longer appears in the search results. When CSSearchableItemAttributeSet is implemented, the item drops out of Spotlight.
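For reference, a minimal sketch of the pattern the session describes (the entity, query, and attribute choices below are illustrative, not taken from the linked sample project):

import AppIntents
import CoreSpotlight
import UniformTypeIdentifiers
import Foundation

struct NoteEntity: AppEntity, IndexedEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Note"
    static var defaultQuery = NoteQuery()

    var id: UUID
    var title: String
    var body: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(title)")
    }

    // Optional richer metadata for Spotlight; per the report above, adding this
    // is the step after which the items stop showing up in search.
    var attributeSet: CSSearchableItemAttributeSet {
        let attributes = CSSearchableItemAttributeSet(contentType: .text)
        attributes.title = title
        attributes.contentDescription = body
        return attributes
    }
}

struct NoteQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [NoteEntity] { [] }
}

// Indexing itself happens elsewhere, e.g. try await CSSearchableIndex.default().indexAppEntities(notes)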
3
0
1.2k
Dec ’24
Image Playground Supported Devices
I'm trying to determine the best practice for handling the case where Image Playground is supported but not installed, or simply not supported at all. If ImagePlaygroundViewController.isAvailable is true, I will just display a button to start an Image Playground session. If it is false, does that mean Image Playground is supported but not installed? If it's supported but not installed, then instead of a button to launch it I want to display something like "Enable Apple Intelligence in Settings" or, better yet, a button that opens the Intelligence settings. Is that possible? But if the device doesn't support it at all, I of course don't want to instruct the user to enable it. How can I determine whether a device cannot install Image Playground? I read that Apple Intelligence requires an iPhone 15 Pro, iPhone 15 Pro Max, or any iPhone 16 model, with no mention of the M1 iPad Pro, yet Image Playground runs on my M1 iPad Pro. What are the hardware requirements for Image Playground?
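A minimal sketch of the availability check in SwiftUI, assuming the supportsImagePlayground environment value and the UIKit isAvailable flag are the only public signals; as far as I know they do not distinguish "unsupported hardware" from "Apple Intelligence not enabled/downloaded":

import SwiftUI
import ImagePlayground

struct CreateImageButton: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var showingPlayground = false

    var body: some View {
        if supportsImagePlayground {
            Button("Create Image") { showingPlayground = true }
                .imagePlaygroundSheet(isPresented: $showingPlayground) { url in
                    // Handle the URL of the generated image here.
                }
        } else {
            // False can mean unsupported hardware *or* Apple Intelligence not enabled;
            // I don't know of a public API that tells the two cases apart.
            Text("Image Playground isn't available on this device.")
        }
    }
}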
2
1
1.5k
Dec ’24
Image Playground App Rejection Requirement
My app was rejected because of the error below, but I cannot find any documentation on a key related to Image Playground. My app's minimum is already set to iOS 18.2.
Rejection Message:
The UIRequiredDeviceCapabilities key in the Info.plist is set in such a way that the app will not install on iPhone running iOS 18.1.1
Next Steps:
To resolve this issue, check the UIRequiredDeviceCapabilities key to verify that it contains only the attributes required for the app features or the attributes that must not be present on the device. Attributes specified by a dictionary should be set to true if they are required and false if they must not be present on the device.
Resources:
Learn more about the UIRequiredDeviceCapabilities key.
2
0
605
Dec ’24
Requirements for Image Playground?
I cannot find the hardware requirements for Image Playground documented anywhere. I'm also not sure if they are identical to devices that support Apple Intelligence. On the App Store, the only requirement listed for Image Playground is iOS 18.2. Not knowing the requirements is an issue because I need to be able to clearly state the requirements for the feature in my app description. Also, I'm sure my mother's current iPad is too old, but I'm not sure what models support it if I were to buy her a new one.
2
0
1.6k
Dec ’24
How to export a PyTorch model to a Core ML model for use in an iOS app
Hi, as shown in the course, I created the PyTorch model sample and want to export/convert this model to a Core ML iOS model using coremltools. The input is a 224x224 image and the output is an image classification (3 different classes). I am using coremltools with this code:

import coremltools as ct
modelml = ct.convert(
    scripted_model,
    inputs=[ct.ImageType(shape=(1,3,224,244))]
)

I have working iOS app code that runs with another model, which was created using Microsoft Azure Vision. The PyTorch-exported model is loaded and a prediction is performed, but I am getting this error:

Foundation.MonoTouchException: Objective-C exception thrown. Name: NSInvalidArgumentException Reason: -[VNCoreMLFeatureValueObservation identifier]: unrecognized selector sent to instance 0x2805dd3b0

When I check the exported model in Xcode and compare it with another model that works with the sample iOS app code (created and exported from Microsoft Azure), I can see that the input (for image classification using the device camera) looks OK and is identical, but the output is totally different (see screenshots). The working model has two outputs: loss => Dictionary (String => Double) and classLabel => String. My model exported with coremltools has just one output: MultiArray (Float32) (name var_1620; I think this is the last feature-layer output of the EfficientNetB2). How do I change my model or my coremltools export to get the correct output for the prediction? I read the coremltools documentation (https://coremltools.readme.io/docs/pytorch-conversion) and tried some GitHub samples, but I never get the correct output. How do I export the PyTorch model so that the output is correct and the prediction will work? Best, Marco
2
1
1.5k
Dec ’24
How to use a Decimal as @Property of AppEntity
I’m trying to use a Decimal as a @Property in my AppEntity, but the following code produces a compiler error. I’m using Xcode 16.1. The documentation notes the following: "You can use the @Parameter property wrapper with common Swift and Foundation types: Primitives such as Bool, Int, Double, String, Duration, Date, Decimal, Measurement, and URL. Collections such as Array and Set. Make sure the collection’s elements are of a type that’s compatible with IntentParameter." Everything works fine for other primitives such as Bool, String, and Int. How do I use Decimal, though?

Code:

struct MyEntity: AppEntity {
    var id: UUID

    @Property(title: "Amount")
    var amount: Decimal

    // …
}

Compiler Error (at the line of the @Property definition):

Generic class 'EntityProperty' requires that 'Decimal' conform to '_IntentValue'
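One hedged workaround sketch, assuming Decimal really isn't accepted by the property wrapper in this Xcode version: keep Decimal as the storage type and expose a bridged Double to the intents system. The entity, query, and property names below are illustrative, and converting to Double does lose Decimal's exact precision.

import AppIntents
import Foundation

struct MyEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "My Entity"
    static var defaultQuery = MyEntityQuery()

    var id: UUID

    // Keep the precise value for your own use...
    var amount: Decimal

    // ...and expose a bridged value to App Intents.
    @Property(title: "Amount")
    var amountValue: Double

    init(id: UUID, amount: Decimal) {
        self.id = id
        self.amount = amount
        self.amountValue = (amount as NSDecimalNumber).doubleValue
    }

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(amountValue)")
    }
}

struct MyEntityQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [MyEntity] { [] }
}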
1
0
641
Dec ’24
How to use Core ML outside of Xcode as a library?
I'm working on a cross-platform AI app. It is a CMake project. The inference part should be built as a separate library on Windows and macOS; on macOS it should be built with Objective-C and Core ML. Here are my steps, roughly:

1. Create an Xcode project for the Core ML inference and build it as a static library. Models are compiled to ".mlmodelc", and the code is compiled to a binary ".a" lib.
2. Create a CMake project for the app, and use the ".a" lib built by Xcode.
3. Run the app.

I initialize the Core ML model like this (just for demonstration):

#include "det.h" // the model header generated by Xcode
auto url = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@/%@", dir, @"det.mlmodelc"]];
auto model = [[det alloc] initWithContentsOfURL:url error:&error]; // no error

The URL is valid, and the initialization doesn't report any error. However, when I try to do inference with code like this:

auto cvPixelBuffer = createCVPixelBuffer(960, 960); // util function
auto preds = [model predictionFromImage:cvPixelBuffer error:NULL];

the output preds is null and I get these errors:

2024-12-10 14:52:37.678201+0800 望言OCR[50204:5615023] [e5rt] E5RT encountered unknown exception.
2024-12-10 14:52:37.678237+0800 望言OCR[50204:5615023] [coreml] E5RT: E5RT encountered an unknown exception. (11)
2024-12-10 14:52:37.870739+0800 望言OCR[50204:5615023] H11ANEDevice::H11ANEDeviceOpen kH11ANEUserClientCommand_DeviceOpen call failed result=0xe00002e2
2024-12-10 14:52:37.870758+0800 望言OCR[50204:5615023] Device Open failed - status=0xe00002e2
2024-12-10 14:52:37.870760+0800 望言OCR[50204:5615023] (Single-ANE System) Critical Error: Could not open the only H11ANE device
2024-12-10 14:52:37.870769+0800 望言OCR[50204:5615023] H11ANEDeviceOpen failed: 0x17
2024-12-10 14:52:37.870845+0800 望言OCR[50204:5615023] H11ANEDevice::H11ANEDeviceOpen kH11ANEUserClientCommand_DeviceOpen call failed result=0xe00002e2
2024-12-10 14:52:37.870848+0800 望言OCR[50204:5615023] Device Open failed - status=0xe00002e2
2024-12-10 14:52:37.870849+0800 望言OCR[50204:5615023] (Single-ANE System) Critical Error: Could not open the only H11ANE device
2024-12-10 14:52:37.870853+0800 望言OCR[50204:5615023] H11ANEDeviceOpen failed: 0x17
2024-12-10 14:52:37.870857+0800 望言OCR[50204:5615023] [common] start: ANEDeviceOpen() failed : ret=23 :

It seems that Core ML failed to find the ANE device. Is there anything that needs to be done before using a Core ML model as a library in a CMake or other non-Xcode project? By the way, the code above works in an Xcode native app with Core ML (I tested this before), so I guess I'm missing some environment initialization in my non-Xcode project?
1
0
642
Dec ’24
Converting FastAI Cat vs Dog Model into Core ML
FB:FB16079804 Hello, I've turned FastAI's Cat vs Dog model into a model that distinguishes lemons from limes, and it all works fine in a notebook. I am now looking to convert this model to Core ML for my iOS app using TorchScript and Apple's official guidelines for coremltools. The model converts, but I cannot see the Preview tab in Xcode. Have any of you tried this kind of conversion to Core ML? I guess my input types don't match coremltools' expectations for the preview, but I am stuck. Here is my code:

import torch
import coremltools as ct
from fastai.vision.all import *
import json
from torchvision import transforms

# Load your Fastai model (replace with your actual path)
learn = load_learner('lemonmodel.pkl')

# Example input image (you can use any image from your dataset)
input_image = PILImage.create('example.jpg')

# Preprocess the image (assuming you used these transforms during training)
to_tensor = transforms.ToTensor()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
input_tensor = to_tensor(input_image)
input_tensor = normalize(input_tensor)  # Apply normalization

# Add a batch dimension
input_tensor = input_tensor.unsqueeze(0)

# Ensure float32 type
input_tensor = input_tensor.float()

# Trace the model
trace = torch.jit.trace(learn.model, input_tensor)

# Define the Core ML input type (considering your model's input shape)
_input = ct.ImageType(
    name="input_1",
    shape=input_tensor.shape,
    bias=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    scale=1./(255*0.226)
)

# Convert the model to Core ML format
mlmodel = ct.convert(
    trace,
    inputs=[_input],
    minimum_deployment_target=ct.target.iOS14  # Optional, set deployment target
)

# Set model type as 'imageClassifier' for the Preview tab
mlmodel.type = 'imageClassifier'

# Correct structure for preview parameters (assuming two classes: 'lemon' and 'lime')
labels_json = {
    "imageClassifier": {
        "labels": ["lemon", "lime"],
        "input": {
            "shape": list(input_tensor.shape),  # Provide the actual input shape
            "mean": [0.485, 0.456, 0.406],      # Match normalization mean
            "std": [0.229, 0.224, 0.225]        # Match normalization std
        },
        "output": {
            "shape": [1, 2]  # Output shape for your model (2 classes)
        }
    }
}

# Setting up the metadata with correct 'preview' params
mlmodel.user_defined_metadata['com.apple.coreml.model.preview.params'] = json.dumps(labels_json)

# Save the model as .mlmodel
mlmodel.save("LemonClassifierGemini.mlmodel")

My model is:

Input batch shape: torch.Size([32, 3, 192, 192])
Labels batch shape: torch.Size([32])
Validation Loss: None, Validation Metric: None
Predictions shape: torch.Size([63, 2])
Targets shape: torch.Size([63])

Code for the model:

searches = 'lemon','lime'
path = Path('lemon_or_not')

for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    time.sleep(5)
    resize_images(path/o, max_size=400, dest=path/o)

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path, bs=32)

dls.show_batch(max_n=6)

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)

is_lemon,_,probs = learn.predict(PILImage.create('lemon.jpg'))
print(f"This is a: {is_lemon}.")
print(f"Probability it's a lemon: {probs[0]:.4f}")
# This is a: lemon.
# Probability it's a lemon: 1.0000

learn.export('lemonmodel.pkl')

I am stuck on why it doesn't show the Preview tab.
3
0
732
Dec ’24
MLModel crashes when it is released on some iOS systems
We use MLModel in our app with two file formats: mlmodel and mlpackage. We find that when a model is released, models using the mlmodel format have a certain probability of crashing, and these crashes are concentrated (over 85%) on iOS 16.x. Here is the crash stack:

Exception Type: SIGTRAP
Exception Codes: TRAP_BRKPT at 0x1b48e855c
Crashed Thread: 5

Thread 5 Crashed:
0 libdispatch.dylib 0x00000001b48e855c _dispatch_semaphore_dispose.cold.1 + 40
1 libdispatch.dylib 0x00000001b48b2b28 _dispatch_semaphore_signal_slow
2 libdispatch.dylib 0x00000001b48b0e58 _dispatch_dispose + 208
3 AppleNeuralEngine 0x00000001ef07b51c -[_ANEProgramForEvaluation .cxx_destruct] + 32
4 libobjc.A.dylib 0x00000001a67ed4a4 object_cxxDestructFromClass(objc_object*, objc_class*) + 116
5 libobjc.A.dylib 0x00000001a67f221c objc_destructInstance + 80
6 libobjc.A.dylib 0x00000001a67fb9d0 _objc_rootDealloc + 80
7 AppleNeuralEngine 0x00000001ef079e04 -[_ANEProgramForEvaluation dealloc] + 72
8 AppleNeuralEngine 0x00000001ef07ca70 -[_ANEModel .cxx_destruct] + 44
9 libobjc.A.dylib 0x00000001a67ed4a4 object_cxxDestructFromClass(objc_object*, objc_class*) + 116
10 libobjc.A.dylib 0x00000001a67f221c objc_destructInstance + 80
11 libobjc.A.dylib 0x00000001a67fb9d0 _objc_rootDealloc + 80
12 AppleNeuralEngine 0x00000001ef07bd7c -[_ANEModel dealloc] + 136
13 CoreFoundation 0x00000001ad4563cc cow_cleanup + 168
14 CoreFoundation 0x00000001ad49044c -[__NSDictionaryM dealloc] + 148
15 Espresso 0x00000001bb19c7a4 Espresso::ANERuntimeEngine::compiler::reset() + 1340
16 Espresso 0x00000001bb19cac8 Espresso::ANERuntimeEngine::compiler::~compiler() + 108
17 Espresso 0x00000001bacd69e4 std::__1::__shared_weak_count::__release_shared() + 84
18 Espresso 0x00000001ba944d00 std::__1::__hash_table<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::__unordered_map_hasher<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::hash<Espresso::platform>, std::__1::equal_to<Espresso::platform>, true>, std::__1::__unordered_map_equal<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::equal_to<Espresso::platform>, std::__1::hash<Espresso::platform>, true>, std::__1::allocator<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>>>::__deallocate_node(std::__1::__hash_node_base<std::__1::__hash_node<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, void*>*>*) + 40
19 Espresso 0x00000001ba8ea640 std::__1::__hash_table<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::__unordered_map_hasher<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::hash<Espresso::platform>, std::__1::equal_to<Espresso::platform>, true>, std::__1::__unordered_map_equal<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::equal_to<Espresso::platform>, std::__1::hash<Espresso::platform>, true>, std::__1::allocator<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>>>::~__hash_table() + 28
20 Espresso 0x00000001ba8e5750 Espresso::net::~net() + 396
21 Espresso 0x00000001bacd69e4 std::__1::__shared_weak_count::__release_shared() + 84
22 Espresso 0x00000001bad750e4 std::__1::__vector_base<std::__1::shared_ptr<Espresso::net>, std::__1::allocator<std::__1::shared_ptr<Espresso::net>>>::clear() + 52
23 Espresso 0x00000001ba902448 std::__1::__vector_base<std::__1::shared_ptr<Espresso::net>, std::__1::allocator<std::__1::shared_ptr<Espresso::net>>>::~__vector_base() + 36
24 Espresso 0x00000001ba8ed99c std::__1::unique_ptr<EspressoLight::espresso_plan::priv_t, std::__1::default_delete<EspressoLight::espresso_plan::priv_t>>::reset(EspressoLight::espresso_plan::priv_t*) + 188
25 Espresso 0x00000001ba95b7fc EspressoLight::espresso_plan::~espresso_plan() + 72
26 Espresso 0x00000001ba902078 EspressoLight::espresso_plan::~espresso_plan() + 16
27 Espresso 0x00000001ba8e690c espresso_plan_destroy + 372
28 CoreML 0x00000001c48c45cc -[MLNeuralNetworkEngine _deallocContextAndPlan] + 40
29 CoreML 0x00000001c48c43bc -[MLNeuralNetworkEngine dealloc] + 40
30 libobjc.A.dylib 0x00000001a67ed4a4 object_cxxDestructFromClass(objc_object*, objc_class*) + 116
31 libobjc.A.dylib 0x00000001a67f221c objc_destructInstance + 80
32 libobjc.A.dylib 0x00000001a67fb9d0 _objc_rootDealloc + 80
~~~~ Our code that releases the MLModel object ~~~~

Moreover, we use a synchronization mechanism to ensure that the release of the MLModel and the data processing of the model (by calling [model predictionFromFeatures]) do not occur simultaneously. What could be the possible causes of the problem, and how can we prevent it from happening? Any advice would be appreciated.
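For what it's worth, a minimal Swift sketch of the kind of serialization described above, confining the model's whole lifecycle to one actor so that release can never overlap a running prediction. This only illustrates that pattern under stated assumptions; it is not a confirmed fix for the ANE teardown crash.

import CoreML

actor ModelBox {
    private var model: MLModel?

    init(url: URL) throws {
        // Load inside the actor so all later access stays on the same serial context.
        self.model = try MLModel(contentsOf: url)
    }

    func predict(_ features: MLFeatureProvider) throws -> MLFeatureProvider? {
        // Runs on the actor, so it cannot overlap with release().
        try model?.prediction(from: features)
    }

    func release() {
        // The actor holds the only reference, so dropping it here means
        // deallocation also happens on the actor, not concurrently with predict().
        model = nil
    }
}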
1
0
591
Dec ’24
CoreML takes forever to load when using neural engine
I am using the DepthAnything V2 model provided by Apple on the developer website. On my iPhone 15 Pro, if I choose .all or .cpuAndNeuralEngine, loading the model gets stuck.

let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU // loads normally when not using the Neural Engine
let model = try await DepthModel.load(configuration: config)

with the following error:

E5RT encountered an STL exception. msg = MILCompilerForANE error: failed to compile ANE model using ANEF. Error=无法与帮助程序通信。.
E5RT: MILCompilerForANE error: failed to compile ANE model using ANEF. Error=无法与帮助程序通信。 (11)

(The Chinese error text roughly translates to "unable to communicate with the helper process".)
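A small sketch of the compute-units fallback I would try while diagnosing this, assuming DepthModel is the class Xcode generated for the poster's model. Note the caveat in the comments: a fallback only helps when the ANE path actually throws; it cannot rescue a load that hangs indefinitely.

import CoreML

struct ModelLoadError: Error {}

func loadDepthModel() async throws -> DepthModel {
    // Try the Neural Engine first, then fall back to CPU+GPU if loading throws.
    for units in [MLComputeUnits.cpuAndNeuralEngine, .cpuAndGPU] {
        let config = MLModelConfiguration()
        config.computeUnits = units
        do {
            return try await DepthModel.load(configuration: config)
        } catch {
            print("Loading with computeUnits \(units.rawValue) failed: \(error)")
        }
    }
    throw ModelLoadError() // all compute-unit options failed
}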
2
1
681
Dec ’24
Loading multifunction models on iOS causes a crash
I used the multifunction models feature introduced in iOS 18 to merge three VAE Encoder models with different resolutions into a single model. However, loading this merged model on iOS causes a crash with the error EXC_BAD_ACCESS (code=1, address=0x0). In contrast, merging VAE Decoder models using the same method does not result in crashes. Additionally, merging only two VAE Decoder models with different resolutions also leads to a crash when loaded on iOS. As for the Stable Diffusion Unet model, merging two or even three models does not cause any crashes, and it successfully generates images as expected. I use the following code to load the model:

let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine
config.functionName = "test"
try MLModel(contentsOf: url, configuration: config)
4
0
675
Dec ’24
Does Apple Intelligence Extensions Have an API?
Hi everyone, in the "Apple Intelligence & Siri" settings there's a section titled "Extensions" that specifically mentions ChatGPT. This got me curious: does Apple provide an API or SDK for developers to create custom integrations or use Apple Intelligence Extensions? Or is this currently limited to the Apple/OpenAI partnership? I appreciate any insights or links to relevant documentation. Here's a screenshot of what I mean: https://imgur.com/a/4MuQkIJ
1
2
841
Dec ’24
macOS 15.x crashes in MetalPerformanceShadersGraph
In our app we use Core ML, but ever since macOS 15.x was released we have been getting a large number of crashes like this:

Incident Identifier: 424041c3-884b-4e50-bb5a-429a83c3e1c8
CrashReporter Key: B914246B-1291-4D44-984D-EDF84B52310E
Hardware Model: Mac14,12
Process: <REMOVED> [1509]
Path: /Applications/<REMOVED>
Identifier: com.<REMOVED>
Version: <REMOVED>
Code Type: arm64
Parent Process: launchd [1]
Date/Time: 2024-11-13T13:23:06.999Z
Launch Time: 2024-11-13T13:22:19Z
OS Version: Mac OS X 15.1.0 (24B83)
Report Version: 104
Exception Type: SIGABRT
Exception Codes: #0 at 0x189042600
Crashed Thread: 36

Thread 36 Crashed:
0 libsystem_kernel.dylib 0x0000000189042600 __pthread_kill + 8
1 libsystem_c.dylib 0x0000000188f87908 abort + 124
2 libsystem_c.dylib 0x0000000188f86c1c __assert_rtn + 280
3 Metal 0x0000000193fdd870 MTLReportFailure.cold.1 + 44
4 Metal 0x0000000193fb9198 MTLReportFailure + 444
5 MetalPerformanceShadersGraph 0x0000000222f78c80 -[MPSGraphExecutable initWithMPSGraphPackageAtURL:compilationDescriptor:] + 296
6 Espresso 0x00000001a290ae3c E5RT::SharedResourceFactory::GetMPSGraphExecutable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, NSDictionary*) + 932
. . .
43 CoreML 0x0000000192d263bc -[MLModelAsset modelWithConfiguration:error:] + 120
44 CoreML 0x0000000192da96d0 +[MLModel modelWithContentsOfURL:configuration:error:] + 176
45 <REMOVED> 0x000000010497b758 -[<REMOVED> <REMOVED>] (<REMOVED>)

No similar crashes on macOS 12-14!

MetalPerformanceShadersGraph.log

Any clue what is causing this? Thanks! :)
2
1
856
Dec ’24
Making onscreen content available to Siri not requesting my Transferable
Howdy, I'm following along with this sample: https://developer.apple.com/documentation/appintents/making-onscreen-content-available-to-siri-and-apple-intelligence I've got everything up and building. I can confirm that the userActivity modifier is associating my App Intent via EntityIdentifier, but my custom Transferable representation (text) is never being called, and when Siri does the ChatGPT handoff it just offers to send a screenshot, which is what it does when it has no custom representation. What could I be doing wrong? Where should I be looking?
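For comparison, a minimal sketch of the kind of plain-text Transferable conformance the sample describes; the entity, query, and property names are illustrative and this is not necessarily identical to the sample project's own code:

import AppIntents
import CoreTransferable
import Foundation

struct DocumentEntity: AppEntity, Transferable {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Document"
    static var defaultQuery = DocumentQuery()

    var id: UUID
    var title: String
    var text: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(title)")
    }

    // Plain-text representation that Siri can request instead of a screenshot.
    static var transferRepresentation: some TransferRepresentation {
        ProxyRepresentation(exporting: { (entity: DocumentEntity) in entity.text })
    }
}

struct DocumentQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [DocumentEntity] { [] }
}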
3
0
869
Dec ’24
In iOS 18 beta, the SoundAnalysis framework reports an error when the iPhone is locked
I use SoundAnalysis to analyze background sounds and have enabled background permissions. It worked well on previous iOS versions, but in the new iOS 18 beta a warning appears and sound analysis stops. Warning list:

Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted); code=7 status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
CoreML prediction failed with Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline, NSUnderlyingError=0x30330e910 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 1 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 1 in pipeline, NSUnderlyingError=0x303307840 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}}}
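If the failure really is GPU/ANE work being disallowed in the background, one hedged experiment is to force a custom classifier onto the CPU. This sketch assumes a custom Core ML sound-classification model whose generated class I'm calling MySoundClassifier (a hypothetical name); the built-in system classifier may not expose this knob at all.

import CoreML
import SoundAnalysis

func makeBackgroundFriendlyRequest() throws -> SNClassifySoundRequest {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly // avoid GPU/ANE work that may be blocked in the background
    let model = try MySoundClassifier(configuration: config).model
    return try SNClassifySoundRequest(mlModel: model)
}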
16
8
2.4k
Dec ’24