When I access the Core ML Model Deployment dashboard with my developer account, the page gives a bad request error saying "Your request was invalid".
Also, when I try to create a Model Collection, it gives an error saying "One of the fields was invalid".
Hi,
I have a custom object detection CoreML model and I notice something strange when using the model with the Vision framework.
I have tried two different approaches to processing an image and running inference with the Core ML model.
The first uses Core ML "raw": initialising the model, preparing the input image, and calling the model's .prediction() function to get the model's output.
The second uses Vision to wrap the Core ML model in a VNCoreMLModel, creating a VNCoreMLRequest and using a VNImageRequestHandler to actually perform the inference. The results of the VNCoreMLRequest are of type VNRecognizedObjectObservation.
The issue I face is the difference between the outputs of the two methods. The first method returns the raw output of the Core ML model: confidence and coordinates. The confidence is an array whose size equals the number of classes in my model (3 in my case). The second method returns a boundingBox, confidence and labels per observation, but here the confidence is only the confidence of the most likely class (so its size is 1). On top of that, the confidence I get from the second approach is quite different from the confidence I get with the first approach.
I can use either approach in my application. However, I really want to find out what is going on and understand where this difference comes from.
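Simplified, the two call paths look roughly like this (MyDetector and its prediction(image:) signature are stand-ins for my generated model class, so treat this as a sketch rather than the exact code):

import CoreML
import Vision

// 1) "Raw" Core ML: returns the model's confidence array (one entry per class)
//    and the coordinates array.
func rawPrediction(model: MyDetector, pixelBuffer: CVPixelBuffer) throws {
    let output = try model.prediction(image: pixelBuffer)
    print(output.confidence.shape, output.coordinates.shape)
}

// 2) Vision wrapper: returns VNRecognizedObjectObservation values, each with a
//    boundingBox, labels, and a single top-level confidence.
func visionPrediction(mlModel: MLModel, pixelBuffer: CVPixelBuffer) throws {
    let vnModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: vnModel)
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    for case let observation as VNRecognizedObjectObservation in request.results ?? [] {
        print(observation.boundingBox, observation.confidence,
              observation.labels.first?.identifier ?? "")
    }
}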
Thanks!
I have tried many times. When I change the file or re-create it, it shows a 404 error:
{
"code": 400,
"message": "InvalidArgumentError: Unable to unzip MLArchive",
"reason": "There was a problem with your request.",
"detailedMessage": "InvalidArgumentError: Unable to unzip MLArchive",
"requestUuid": "699afb97-8328-4a83-b186-851f797942aa"
}
I have tried multiple playgrounds and consistently get the same error in any playground I create. There is a tabular data playground that does work, but I can't see anything it does that I'm not doing.
Here is the code that fails with:
Error: cannot find 'MLDataTable' in scope
/* code start */
import CoreML
import Foundation
import TabularData
let jsonFile = Bundle.main.url(forResource: "sentiment_analysis", withExtension: "json")!
let tempTable = try DataTable
let dataTable = try MLDataTable(contentsOf: jsonFile)
print(dataTable)
/* code end */
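For reference, MLDataTable is declared in the CreateML framework rather than CoreML, so a minimal sketch of what I would expect to compile (assuming CreateML is available in a macOS playground) is:

/* code start */
import CreateML
import Foundation

let jsonFile = Bundle.main.url(forResource: "sentiment_analysis", withExtension: "json")!
let dataTable = try MLDataTable(contentsOf: jsonFile)
print(dataTable)
/* code end */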
I have a dataset with 3 columns: "item_id", "user_id", "rating". I created a Core ML MLRecommender model from this dataset.
I want to use this model to get the top 10 predictions for a new user (not in the original dataset) but who has rated a subset of the items in the dataset.
I don't see any API in the Apple docs to do this. Both recommendation APIs only seem to accept an existing user_id and return recommendations for that user.
The WWDC tutorial talks about a prediction API that achieves this, but I don't see it in the Apple API documentation, and the code below from the WWDC tutorial cannot be used as-is because it doesn't show how to create the HikingRouteRecommenderInput class it passes into the prediction API.
let hikes: [String: Double] = ["Granite Peak": 5, "Wildflower Meadows": 4]
let input = HikingRouteRecommenderInput(items: hikes, k: 5)
// Get results as sequence of recommended items
let results = try model.prediction(input: input)
Any pointers on how to get predictions for a new user would be greatly appreciated.
Hello.
I am manually constructing some models with the CoreML protobuf format. When the model has flexible input shapes, I am seeing unexpected output shapes in some cases after running prediction(from:).
The model is a single matrix multiplication, A*B (one innerProduct layer), and the dynamic dimension is the first dimension of the only input A (B is constant).
What I observe is that sometimes there are additional leading ones in the output shape.
Some test program output showing the shapes:
running model: dynamic_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 1, 1, 1, 4]
running model: dynamic_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: dynamic_input_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]
running model: dynamic_input_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: static_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]
I've put the model generation and test code below. Am I specifying the dynamic input/output shapes correctly when creating the .mlmodel? Is the output shape given by CoreML expected, and if so, why are there leading ones? Would appreciate any input.
Python script to generate .mlmodel files. coremltools version is 6.3.0.
from coremltools.proto.Model_pb2 import Model
from coremltools.proto.FeatureTypes_pb2 import ArrayFeatureType
from coremltools.proto.NeuralNetwork_pb2 import EXACT_ARRAY_MAPPING
def build_model(with_dynamic_input_shape: bool, with_dynamic_output_shape: bool):
    model = Model()
    model.specificationVersion = 4

    input = model.description.input.add()
    input.name = "A"
    input.type.multiArrayType.shape[:] = [1, 2]
    input.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32
    if with_dynamic_input_shape:
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 2
        range.upperBound = 2

    output = model.description.output.add()
    output.name = "Y"
    output.type.multiArrayType.shape[:] = [1, 4]
    output.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32
    if with_dynamic_output_shape:
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 4
        range.upperBound = 4

    layer = model.neuralNetwork.layers.add()
    layer.name = "MatMul"
    layer.input[:] = ["A"]
    layer.output[:] = ["Y"]
    layer.innerProduct.inputChannels = 2
    layer.innerProduct.outputChannels = 4
    layer.innerProduct.weights.floatValue[:] = [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0]

    model.neuralNetwork.arrayInputShapeMapping = EXACT_ARRAY_MAPPING
    return model


if __name__ == "__main__":
    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=True)
    with open("dynamic_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=False)
    with open("dynamic_input_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=False, with_dynamic_output_shape=False)
    with open("static_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))
Swift program to run the models and print the output shape.
import Foundation
import CoreML
func makeFloatShapedArray(shape: [Int]) -> MLShapedArray<Float> {
    let size = shape.reduce(1, *)
    let values = (0 ..< size).map { Float($0) }
    return MLShapedArray(scalars: values, shape: shape)
}

func runModel(model_path: URL, m: Int) throws {
    print("running model: \(model_path.lastPathComponent)")
    let compiled_model_path = try MLModel.compileModel(at: model_path)
    let model = try MLModel(contentsOf: compiled_model_path)
    let a = MLMultiArray(makeFloatShapedArray(shape: [m, 2]))
    print("A shape: \(a.shape)")
    let inputs = try MLDictionaryFeatureProvider(dictionary: ["A": a])
    let outputs = try model.prediction(from: inputs)
    let y = outputs.featureValue(for: "Y")!.multiArrayValue!
    print("Y shape: \(y.shape)")
}

func modelUrl(_ model_file: String) -> URL {
    return URL(filePath: "/path/to/models/\(model_file)")
}
try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("static_shape.mlmodel"), m: 1)
I need to convert a super-resolution model to an mlmodel, but the model's input shape is in the format [batch_size, height, width, 3]. I convert it with the following code:
import tensorflow as tf
import tensorflow_hub as hub
import coremltools as ct

model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")
tf.saved_model.save(model, "esrgan_saved_model")

input_type = ct.ImageType(shape=(1, 192, 192, 3), color_layout=ct.colorlayout.RGB)
output_type = ct.ImageType(color_layout=ct.colorlayout.RGB)

mlmodel = ct.convert(
    './esrgan_saved_model',
    inputs=[input_type],
    outputs=[output_type],
    source="tensorflow")
mlmodel.save('esrgan.mlmodel')
I get this error:
Shape of the RGB/BGR image output, must be of kind (1, 3, H, W), i.e., first two dimensions must be (1, 3)
ImageType only seems to support inputs and outputs in the [batch_size, 3, height, width] layout. What should I do to convert a model that uses the [batch_size, height, width, 3] layout to an mlmodel?
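One workaround I'm considering (a sketch only, not verified; the wrapper class and the fixed 192x192 signature are my own assumptions) is to wrap the saved model so its output is transposed to channel-first before conversion, since the error only complains about the image output needing to be (1, 3, H, W):

import tensorflow as tf

# Wrap the NHWC model so the output comes out as NCHW.
class ChannelFirstWrapper(tf.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[tf.TensorSpec(shape=(1, 192, 192, 3), dtype=tf.float32)])
    def __call__(self, x):
        y = self.model(x)                          # NHWC output from ESRGAN
        return tf.transpose(y, perm=[0, 3, 1, 2])  # -> (1, 3, H, W)

wrapped = ChannelFirstWrapper(model)
tf.saved_model.save(
    wrapped, "esrgan_nchw_saved_model",
    signatures=wrapped.__call__.get_concrete_function())

The idea is to then run ct.convert() on "esrgan_nchw_saved_model" with the same ImageType inputs/outputs as above.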
Hi, does anyone have a good link for ML inference times on the M2 chip? Something like what's published for the Coral edge chip would be a good format: the model, how much data was used in training, model size, and accuracy.
It's just hard to find the info, or I'm looking in the wrong place. It would be good to have something like a cheat sheet of public optimised models, their use cases, and their model parameters, so you can pick a perfect fit for your problem.
Thanks for your time.
Hey all, we are currently training a Hand Pose model with the current release of CreateML, and during the feature extraction phase, we get the following error:
Assertion failed: (/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/TuriCore/turicreate_oss/src/core/storage/DataTable_data/DataColumn_v2_block_manager.cpp:105): seg->blocks.size()>column_id [0 > 0]
We have tried searching for this online and mitigating the issue, but we are getting nowhere. Has anyone else experienced this issue?
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:
1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak; memory consumption increases rapidly with each new frame.
Concerning 2): there are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow hold references to themselves and are not released properly. Here is a stack trace of how the buffers are created:
0 _malloc_zone_malloc_instrumented_or_legacy
1 _CFRuntimeCreateInstance
2 CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3 CVPixelBuffer::alloc(_CFAllocator const*)
4 CVPixelBufferCreate
5 +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6 MLE5OutputPixelBufferFeatureValueByCopyingTensor
7 -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8 -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9 __36-[MLE5OutputPortBinder featureValue]_block_invoke
10 _dispatch_client_callout
11 _dispatch_lane_barrier_sync_invoke_and_complete
12 -[MLE5OutputPortBinder featureValue]
13 -[MLE5OutputPort featureValue]
14 -[MLE5ExecutionStreamOperation outputFeatures]
15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16 -[MLE5Engine _predictionFromFeatures:options:error:]
17 -[MLE5Engine predictionFromFeatures:options:error:]
18 -[MLDelegateModel predictionFromFeatures:options:error:]
19 StyleModel.prediction(input:options:)
When manually disabling the use of the MLE5Engine, the models run as expected.
Is this an issue caused by our model, or is it a bug in Core ML?
Hey, I'm a web developer building a macOS app for the first time. I need a vector database where the data is stored on the user's machine.
I'm familiar with libraries like FAISS, but I'm aware it has no Swift bindings and, from a brief look, appears fairly annoying to get working in a macOS app. I'm wondering if Apple has a similar library available in its SDKs? I don't need much: just something to store the vectors, run a cosine similarity search over them, and maybe attach some metadata to each vector embedding.
If not, is bridging libraries like this a common thing to do when developing iOS/macOS apps?
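For scale, this is roughly all I need: a brute-force cosine similarity search over stored embeddings. Here is a minimal sketch using Accelerate; the Embedding struct and the in-memory array stand in for whatever storage layer ends up being used:

import Accelerate
import Foundation

struct Embedding {
    let id: String
    let vector: [Float]
    let metadata: [String: String]
}

// Cosine similarity of two equal-length vectors via vDSP.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    let dot = vDSP.dot(a, b)
    let magnitude = sqrt(vDSP.sumOfSquares(a)) * sqrt(vDSP.sumOfSquares(b))
    return magnitude > 0 ? dot / magnitude : 0
}

// Brute-force top-k search; fine for a few thousand embeddings on-device.
func topMatches(for query: [Float], in store: [Embedding], k: Int = 10) -> [Embedding] {
    return store
        .map { (item: $0, score: cosineSimilarity(query, $0.vector)) }
        .sorted { $0.score > $1.score }
        .prefix(k)
        .map { $0.item }
}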
Hi,
As MLModelCollection is deprecated, I have created mlarchive files from my mlpackage files, which I can upload to cloud storage and download to the device at runtime.
But how do I use the mlarchive to create an instance of MLModel?
If this is not possible, then please guide me on what form I can upload an mlpackage to cloud storage in and then consume it in the app at runtime. I don't want to bundle the mlpackage inside the app, as it increases the app size, which is not acceptable to us.
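For context, the flow I would expect to work (a sketch only, assuming the .mlpackage is zipped for upload and unzipped on the device after download; the URLs and helper name are placeholders) is to compile the downloaded package on device and load the compiled model:

import CoreML

func loadDownloadedModel(packageURL: URL) throws -> MLModel {
    // compileModel(at:) accepts .mlmodel and .mlpackage files and produces a
    // compiled .mlmodelc directory in a temporary location.
    let compiledURL = try MLModel.compileModel(at: packageURL)

    // Move the compiled model somewhere permanent so it survives relaunches.
    let permanentURL = try FileManager.default
        .url(for: .applicationSupportDirectory, in: .userDomainMask,
             appropriateFor: nil, create: true)
        .appendingPathComponent(compiledURL.lastPathComponent)
    if !FileManager.default.fileExists(atPath: permanentURL.path) {
        try FileManager.default.copyItem(at: compiledURL, to: permanentURL)
    }

    return try MLModel(contentsOf: permanentURL)
}

This sidesteps mlarchive entirely, which may or may not be acceptable; whether mlarchive itself can be consumed directly by MLModel is exactly what I'm unsure about.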
I'm trying to get the WWDC 2020 Sports Analysis code running. It's the project named BuildingAFeatureRichAppForSportsAnalysis. It seems that the boardDetectionRequest now fails when running the code in the simulator. The main error that I get is:
Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x6000024991d0 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline}}}.
The problem is that I can't tell why the VNImageRequestHandler is failing when trying to detect the board. It doesn't say that it got a bad image, and it doesn't say that it didn't detect a board. I'm running the code against the sample movie provided, and I believe this used to work. The other error I see, on initialization in Common.warmUpVisionPipeline when loading the model, is: 2023-09-07 12:58:59.239614-0500 ActionAndVision[3499:34083] [coreml] Failed to get the home directory when checking model path. From what I can tell in the debugger, though, the board detection model did load.
Thanks.
annotation.js file
[
{
"filename": "image1.jpg",
"annotations": ["terminal airport", "two people"]
},
{
"filename": "image2.jpg",
"annotations": ["airport", "two people"]
},
{
"filename": "image3.jpg",
"annotations": ["airport", "one person"]
},
{
"filename": "image4.jpg",
"annotations": ["airport", "two people", "more people"]
},
{
"filename": "image5.jpg",
"annotations": ["airport", "one person"]
}
]
Hi.
The A17 Pro Neural Engine has 35 TOPS of computational power.
But many third-party benchmarks and articles suggest that it has only a little more power than the A16 Bionic.
Some references:
Geekbench ML
Core ML performance benchmark, 2023 edition
How do we use the maximum power of the A17 Pro Neural Engine?
For example, I guess that the A17 Pro may have two logical ANE devices rather than one, so we may need to instantiate two Core ML models simultaneously to take advantage of both.
Please let me know of any technical hints.
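For reference, the only public knob I know of for steering work onto the ANE is MLModelConfiguration.computeUnits; the two-logical-device idea above is pure speculation on my part. A minimal sketch (MyModel and input are placeholders for a generated model class and its input):

import CoreML

let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine   // prefer the ANE, with CPU fallback
// config.computeUnits = .all               // or let Core ML pick CPU/GPU/ANE

let model = try MyModel(configuration: config)
let output = try model.prediction(input: input)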
I'm trying to convert a PyTorch forward Transformer model to Core ML but am running into several issues, like these errors:
"For mlprogram, inputs with infinite upper_bound is not allowed. Please set upper_bound to a positive value in 'RangeDim()' for the 'inputs' param in ct.convert()."
and this NotImplementedError raised by the converter:
"inplace_ops pass doesn't yet support append op inside conditional"
Are there any more samples besides https://developer.apple.com/videos/play/tech-talks/10154?
In the sample in that video an ImageType is used as the input, but in my model text is the input (and the output).
I also get warned that converting a TorchScript model is experimental, but the video says a TorchScript (traced) model is required for conversion (though I know the video is a few years old).
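Going back to the first error: my understanding is that an ML program conversion needs a finite upper_bound on every RangeDim. A minimal sketch of what I mean (the shapes, names, and traced model here are placeholders for my setup, not a verified recipe):

import numpy as np
import torch
import coremltools as ct

# Give every flexible dimension a finite upper bound.
seq_len = ct.RangeDim(lower_bound=1, upper_bound=128)
src = ct.TensorType(name="src", shape=(1, seq_len, 512), dtype=np.float32)

# Trace the PyTorch module before converting (example_src is a sample tensor).
traced = torch.jit.trace(model.eval(), example_src)

mlmodel = ct.convert(
    traced,
    inputs=[src],
    convert_to="mlprogram",
)

This doesn't address the inplace_ops/append error, which seems to need a change in the model itself.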
Hi there.
We use a Core ML model for image processing, and because loading the Core ML model takes a long time (~10 sec), we preload the model at app start.
But on some devices, loading the Core ML model fails with an error.
We download the Core ML model from a server and then load the model from local storage.
The loading code looks like this, fairly typical:
MLModel.load(contentsOf: compliedUrl, configuration: config)
Once this error happens, it keeps failing until we restart the device.
(+) In this article I saw that it is related to some limitation of the decrypt session: https://developer.apple.com/forums/thread/707622
But it also happens in in-house TestFlight builds that are used by fewer than 5 people.
Can anyone tell me why this happens?
Hello! I have converted a single grid_sample operation from PyTorch to an mlpackage using coremltools and opened it in Xcode for benchmarking. There is only one op, which is called resample, and I ran it on my Mac (M1 Pro). I found that it only runs on the CPU, so the latency doesn't meet my requirements.
Can resample be supported on the GPU, or can I implement it with Metal myself?
I converted a decoder model to Core ML in the following way:
input_1 = ct.TensorType(name="input_1", shape=ct.Shape((1, ct.RangeDim(lower_bound=1, upper_bound=50), 512)), dtype=np.float32)
input_2 = ct.TensorType(name="input_2", shape=ct.Shape((1, ct.RangeDim(lower_bound=1, upper_bound=50), 512)), dtype=np.float32)
decoder_iOS2 = ct.convert(decoder_layer,
inputs=[input_1, input_2]
)
But if I load the model in Xcode, it gives me two errors.
Error 1:
MLE5Engine is not currently supported for models with range shape inputs that try to utilize the Neural Engine.
Q1: Since a flexible input shape is in the nature of a decoder, I can ignore this error message, right? Or is this something that simply can't be fixed?
Error 2:
doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/CB2207C5-B549-4868-AEB5-FFA7A3E24397/Photo2ASCII.app/Deocder_iOS_test2.mlmodelc/model.mil : sourceURL= (null) : key={"isegment":0,"inputs":{"input_1":{"shape":[512,1,1,1,1]},"input_2":{"shape":[512,1,1,1,1]}},"outputs":{"Identity":{"shape":[512,1,1,1,1]}}} : identifierSource=0 : cacheURLIdentifier=A93CE297F87F752D426002C8D1CE79094E614BEA1C0E96113228C8D3F06831FA_F055BF0F9A381C4C6DC99CE8FCF5C98E7E8B83EA5BF7CFD0EDC15EF776B29413 : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6885927629810 : intermediateBufferHandle=6885928772758 : queueDepth=127 } : state=3 : programHandle=6885927629810 : intermediateBufferHandle=6885928772758 : queueDepth=127 : attr={
ANEFModelDescription = {
ANEFModelInput16KAlignmentArray = (
);
ANEFModelOutput16KAlignmentArray = (
);
ANEFModelProcedures = (
{
ANEFModelInputSymbolIndexArray = (
0,
1
);
ANEFModelOutputSymbolIndexArray = (
0
);
ANEFModelProcedureID = 0;
}
);
kANEFModelInputSymbolsArrayKey = (
"input_1",
"input_2"
);
kANEFModelOutputSymbolsArrayKey = (
"Identity@output"
);
kANEFModelProcedureNameToIDMapKey = {
net = 0;
};
};
NetworkStatusList = (
{
LiveInputList = (
{
BatchStride = 1024;
Batches = 1;
Channels = 1;
Depth = 1;
DepthStride = 1024;
Height = 1;
Interleave = 1;
Name = "input_1";
PlaneCount = 1;
PlaneStride = 1024;
RowStride = 1024;
Symbol = "input_1";
Type = Float16;
Width = 512;
},
{
BatchStride = 1024;
Batches = 1;
Channels = 1;
Depth = 1;
DepthStride = 1024;
Height = 1;
Interleave = 1;
Name = "input_2";
PlaneCount = 1;
PlaneStride = 1024;
RowStride = 1024;
Symbol = "input_2";
Type = Float16;
Width = 512;
}
);
LiveOutputList = (
{
BatchStride = 1024;
Batches = 1;
Channels = 1;
Depth = 1;
DepthStride = 1024;
Height = 1;
Interleave = 1;
Name = "Identity@output";
PlaneCount = 1;
PlaneStride = 1024;
RowStride = 1024;
Symbol = "Identity@output";
Type = Float16;
Width = 512;
}
);
Name = net;
}
);
} : perfStatsMask=0} was not loaded by the client.
Q2: Can I ignore this error message if I'm going to use the CPU/GPU when running the model?
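For the CPU/GPU case, my understanding (a sketch only; modelURL is a placeholder for the compiled model location) is that restricting the compute units makes that intent explicit and should sidestep the ANE path those messages come from:

import CoreML

let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU   // keep the decoder off the Neural Engine

let model = try MLModel(contentsOf: modelURL, configuration: config)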
I have converted a UIImage to an MLShapedArray, which by default is in NCHW format. I need to permute it to NCWH to prepare it for an ML model. What is the recommended way to achieve this?
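For reference, a naive fallback (a plain element-by-element copy, assuming a rank-4 Float array) would be something like the sketch below; what I'm after is whether there is a more idiomatic or faster way:

import CoreML

// Swap the last two axes: NCHW -> NCWH.
func permuteNCHWtoNCWH(_ input: MLShapedArray<Float>) -> MLShapedArray<Float> {
    let shape = input.shape                 // [n, c, h, w]
    let (n, c, h, w) = (shape[0], shape[1], shape[2], shape[3])
    let src = input.scalars                 // row-major copy of the elements
    var dst = [Float](repeating: 0, count: src.count)
    for ni in 0..<n {
        for ci in 0..<c {
            for hi in 0..<h {
                for wi in 0..<w {
                    let srcIndex = ((ni * c + ci) * h + hi) * w + wi
                    let dstIndex = ((ni * c + ci) * w + wi) * h + hi
                    dst[dstIndex] = src[srcIndex]
                }
            }
        }
    }
    return MLShapedArray<Float>(scalars: dst, shape: [n, c, w, h])
}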