Post not yet marked as solved · 54 Views
Xcode Version: 12.4 (used Xcode 12.5 to create the encryption key, as 12.4 is bugged and will not let you)
I am bundling an ML model into my app and have encrypted it via the Xcode 12 encryption method shown here. The model loads and works fine without encryption. After adding the model encryption key and setting it as a compiler flag, encryption/decryption works fine in a demo app, but the same load method in my main app fails with this error message:
Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Unable to load model at /path_to_model/my_model.mlmodelc/ with error: failed to invoke mremap_encrypted with result = -1, error = 12"
I have tried every variation of the load method available in my Swift-generated model file; all produce the same error above.
Post not yet marked as solved · 117 Views
Hi.
I'm attempting to add the YOLOv3 MLModel from apple.com to my project. I downloaded the .mlmodel file and added it to my project, but whenever I try to make an MLModel object from it, it comes back with the following error:
Error Domain=com.apple.CoreML Code=3 "Error reading protobuf spec. validator error: Model specification version field missing or corrupt." UserInfo={NSLocalizedDescription=Error reading protobuf spec. validator error: Model specification version field missing or corrupt.}
This happens just by calling try MLModel.compileModel(at: url), where url is determined by Bundle.main.url(forResource: "YOLOv3", withExtension: "mlmodelc").
I don't understand why this is happening.
Post marked as solved · 122 Views
Hi, I'm new to ARKit development. Can anyone suggest ways to perform automatic wall or corner detection with ARKit?
Post not yet marked as solved · 91 Views
How can I use hand pose classification?
Post marked as solved · 314 Views
Hello everybody,
I am trying to run inference on a Core ML model I created using Create ML. I am following the sample code provided by Apple on the Core ML documentation page, and every time I try to classify an image I get this error: "Could not create Espresso context".
Has this ever happened to anyone? How did you solve it?
Here is my code:
import Foundation
import UIKit
import Vision

final class ButterflyClassification {
    var classificationResult: Result?

    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: MLModelConfiguration()).model)
            return VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassification(for: request, error: error)
            })
        } catch {
            fatalError("Failed to load model.")
        }
    }()

    func processClassification(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to classify image.")
                return
            }
            let classifications = results as! [VNClassificationObservation]
            if classifications.isEmpty {
                print("No classification was provided.")
            } else {
                let firstClassification = classifications[0]
                self.classificationResult = Result(speciesName: firstClassification.identifier,
                                                   confidence: Double(firstClassification.confidence))
            }
        }
    }

    func classifyButterfly(image: UIImage) -> Result? {
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create ciImage")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
        // Note: this returns before the async classification completes.
        return classificationResult
    }
}
Thank you for your help!
Post not yet marked as solved · 60 Views
Hello everybody,
I am new to Machine Learning but I want to get started with developing CoreML models to try them out in a few apps of my own.
What is the best way to build a dataset from Apple Watch data to build an activity model?
Do I build an iPhone app that works with the Apple Watch in order to get the data I need, or is there a more direct way to do it through Xcode, maybe?
Thank you for your help.
Best regards,
Tomás
Post not yet marked as solved · 49 Views
Does anyone know of a dataset of Japanese characters (kanji, hiragana, and katakana) for creating an ML model for OCR? Or does anyone know how I can do OCR on Japanese text using Apple's VisionKit framework?
Post not yet marked as solved · 73 Views
Hi, I am trying to convert a Transformer model (from Hugging Face) to a Core ML model, and the only piece I have left is to figure out the shape of the input.
Here is the code:
from transformers import BertModel, BertTokenizer, BertConfig
import torch
import torchvision
import coremltools as ct

enc = BertTokenizer.from_pretrained("bert-base-uncased")

# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# Initializing the model with the torchscript flag.
# Flag set to True even though it is not necessary as this model does not have an LM Head.
config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768,
                    num_hidden_layers=12, num_attention_heads=12,
                    intermediate_size=3072, torchscript=True)

# Instantiating the model
model = BertModel(config)

# The model needs to be in evaluation mode
model.eval()

# If you are instantiating the model with `from_pretrained` you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)

# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")

## Get a pytorch model and save it as a *.pt file
#model = torchvision.models.mobilenet_v2()
#model.eval()
#example_input = torch.rand(1, 3, 224, 224)
#traced_model = torch.jit.trace(model, example_input)
#traced_model.save("torchvision_mobilenet_v2.pt")

#from torchsummary import summary
#summary(traced_model, )  # I think torchsummary can tell me this but I'm not sure.

# Convert the saved PyTorch model to Core ML
breakpoint()
mlmodel = ct.convert("traced_bert.pt", inputs=[???])  # Error is here: I don't know how to determine the shape of the input.
(gist of above is at https://gist.github.com/zitterbewegung/589a869f59ae54b32c964c08c2f7bb80)
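In case it helps, here is a sketch of one way to pin down the inputs argument (an assumption, not verified against this exact trace): the converter wants one tensor type per traced input, and the shapes can be read straight off the dummy tensors used for tracing. For the example sentence above, the tokenizer produces 14 tokens (matching the 14-element segments_ids), so both inputs are shape (1, 14). The names input_ids and token_type_ids below are my own labels, not anything the converter requires.

```python
# Sketch: derive the converter's input shapes from the tracing tensors.
# The dummy tensors are built as torch.tensor([ids]), so each has shape
# (batch, sequence_length).
tokenized_text = ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]',
                  'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']

def traced_input_shape(tokens, batch_size=1):
    # Mirrors tokens_tensor.shape / segments_tensors.shape in the script above.
    return (batch_size, len(tokens))

shape = traced_input_shape(tokenized_text)
print(shape)  # (1, 14)

# With that shape in hand, the conversion call would look roughly like this
# (assumes coremltools 4+; token ids are integers, hence int32):
#
# import numpy as np
# mlmodel = ct.convert(
#     "traced_bert.pt",
#     inputs=[
#         ct.TensorType(name="input_ids", shape=shape, dtype=np.int32),
#         ct.TensorType(name="token_type_ids", shape=shape, dtype=np.int32),
#     ],
# )
```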
Post not yet marked as solved · 73 Views
Newbie question. I've been trying to convert this PyTorch model into a Core ML model. I've followed the guide here but couldn't make it work. I tried both tracing and scripting, but hit errors which hint that there might be an operation not supported in TorchScript:
Error on torch.jit.trace: RuntimeError: PyTorch convert function for op 'pythonop' not implemented
Error on torch.jit.script: RuntimeError: Python builtin <built-in method apply of FunctionMeta object at 0x7fa37e2ad600> is currently not supported in Torchscript
I suspect that it just might not be possible to convert an arbitrary PyTorch model into a Core ML one. Is this the case? Can I somehow overcome the errors without diving deep into PyTorch operations and layers?
My Python script, just in case (the model is loaded locally):
import torch
import torch.nn as nn
import coremltools as ct
from efficientnet_pytorch import EfficientNet

# Simply loading the model
# model = torch.load('food308_efnetb2_91.31.pth', map_location=torch.device('cpu'))
# ends up with RuntimeError("Could not get name of python class object")

# Load the model
model = EfficientNet.from_pretrained('efficientnet-b2')
num_ftrs = model._fc.in_features
model._fc = nn.Linear(num_ftrs, 308)
prev_state = torch.load('food308_efnetb2_91.31.pth', map_location=torch.device('cpu'))
model.load_state_dict(prev_state)
model.eval()

# Model tracing
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)
mlmodel = ct.convert(
    traced_model,
    # Note: the shape here should match the (1, 3, 224, 224) tracing input above.
    inputs=[ct.TensorType(name="input", shape=(1, 3, 224, 224))],
)

# Model scripting
scripted_model = torch.jit.script(model)
mlmodel2 = ct.convert(
    scripted_model,
    inputs=[ct.TensorType(name="input", shape=(1, 3, 224, 224))],
)
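Regarding the 'pythonop' error above: tracing fails on custom autograd Functions, and efficientnet_pytorch's default activation is a memory-efficient Swish implemented exactly that way. If my reading of that library is right (worth verifying against your installed version), calling model.set_swish(memory_efficient=False) before torch.jit.trace swaps in the plain closed form, which traces fine. The plain form is just:

```python
import math

def swish(x: float) -> float:
    # Plain Swish activation: x * sigmoid(x). The memory-efficient variant in
    # efficientnet_pytorch computes the same function inside a custom autograd
    # op, which is what torch.jit.trace reports as an unsupported 'pythonop'.
    return x * (1.0 / (1.0 + math.exp(-x)))

print(round(swish(1.0), 6))  # 0.731059
```

With that swap in place, tracing (rather than scripting) is generally the path coremltools expects, so conversion is often possible once a model's custom ops are replaced with traceable equivalents.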
Post not yet marked as solved · 71 Views
Is there a Machine Learning API that can take handwriting (either as a bitmap or as a list of points) and convert it to text?
I know Scribble can be used to allow handwriting input into text fields, but in this API it is Scribble which controls the rendering of the handwriting. Is there an API where my app can render the handwriting and get information about the text content?
In the Keynote demo, Craig was able to get text content from a photo of a whiteboard. Are there APIs which would allow an app developer to create something similar?
Post marked as solved · 125 Views
Hi, I'm trying to save the x, y, width, and height values from an object detection model in an array.
rectangle = CGRect(x: boundingBox.minX*image.size.width, y: (1-boundingBox.minY-boundingBox.height)*image.size.height, width: boundingBox.width*image.size.width, height: boundingBox.height*image.size.height)
var XPoitions: [Double] = Array()
XPoitions.append(rectangle.origin.x)
The error says "No exact matches in call to instance method 'append'", and I'm not sure how to fix it...
Post not yet marked as solved · 91 Views
Hi,
I've been looking for a way to get my Core ML object detection models into a macOS application for days now.
PS: I am a complete beginner, and the only thing I have managed to do is get the models into an iOS app. But that just looks like crap when you run it on macOS.
Does anyone have an idea where I could look? Or does it just not work?
Post not yet marked as solved · 146 Views
A .mlmodel file that was created using the Turi Create library (https://github.com/apple/turicreate) in Python does not have a preview tab, but a .mlmodel file created using the Create ML app does.
How do I enable the preview tab for a StyleTransfer model built with the Turi Create library? 🤔
See the following for details...
https://gist.github.com/dkambam/4c50d90cf59f860c51456f36419dfc6c
The following link seems to hint that adding metadata might help
https://coremltools.readme.io/docs/introductory-quickstart#set-the-model-metadata
But I'm not sure what metadata needs to be set and how to set it. 😅
Post not yet marked as solved · 209 Views
I'm updating a machine learning iOS app. When I archive and validate the app with a *.mlmodel file, the validation fails with the following error message:
"App Store Connect Operation Error"
"Unable to process app at this time due to a general error"
I found that if I un-check the *.mlmodel file from the target membership, it gets successfully validated.
Is there anything I missed? Or is there any workaround?
Xcode version 12.5
Create ML version 2.0
Post marked as solved · 118 Views
I have a .mlmodel file and I need to get the version of this file (from the metadata) programmatically.
I know we can do it for a compiled .mlmodelc file using an MLModel instance, but what about .mlmodel?
Thanks!