Post not yet marked as solved
I have a file of a few thousand sentences that I'd like to convert into something NLEmbedding can use. (From searching around, it looks like none of the sample code for "Make Apps Smarter with Natural Language" from WWDC 2020 was ever released.) Can anyone put me on the right track?
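One approach, not the unreleased sample but a sketch of what I believe that session described: vectorize each sentence with the built-in NLEmbedding.sentenceEmbedding, package the vectors with Create ML's MLWordEmbedding (which accepts arbitrary strings as keys), and load the compiled result back through NLEmbedding. File names and paths here are hypothetical, and this requires macOS for the Create ML step.

```swift
import Foundation
import NaturalLanguage
import CreateML   // macOS only

// Hypothetical input file: one sentence per line
let sentences = try String(contentsOfFile: "sentences.txt")
    .split(separator: "\n").map(String.init)

// Vectorize each sentence with the built-in sentence embedding
guard let sentenceEmbedding = NLEmbedding.sentenceEmbedding(for: .english) else {
    fatalError("sentence embedding unavailable")
}
var vectors = [String: [Double]]()
for sentence in sentences {
    if let vector = sentenceEmbedding.vector(for: sentence) {
        vectors[sentence] = vector
    }
}

// Package the vectors as a custom embedding and write it to disk
let embedding = try MLWordEmbedding(dictionary: vectors)
let modelURL = URL(fileURLWithPath: "Sentences.mlmodel")
try embedding.write(to: modelURL)

// Compile the model, then query it through NLEmbedding
let compiledURL = try MLModel.compileModel(at: modelURL)
let custom = try NLEmbedding(contentsOf: compiledURL)
custom.enumerateNeighbors(for: "some query sentence", maximumCount: 5) { neighbor, distance in
    print(neighbor, distance)
    return true
}
```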
Post not yet marked as solved
I can't find all of the code for this video session, and I have some questions about the details. I would be grateful for your help, thank you very much!
email: wu.shaopeng@aol.
Please forgive me, the address also ends in (com)
Post not yet marked as solved
I am trying to deploy our first Core ML model through the dashboard at https://ml.developer.apple.com/. Immediately after I go into the site, I get a message that reads "Your request is invalid.".
I click "+" to add a model collection, I enter a collection ID, a description and a single model ID and then tap on "Create". I get a message that reads "One of the fields was invalid.".
I have tried changing the IDs and the description, but there is no way to make it work.
I have been trying this for hours. Could you please guide me on how to make it work?
Hi everyone,
I'm not completely new to Swift anymore, but my skills are still limited.
I'm running into a difficult issue with this tutorial from the Apple docs.
https://developer.apple.com/documentation/createml/creating_an_image_classifier_model/#overview
I have already successfully created several mlmodel files as described in the tutorial. However, when I get to the step of integrating one into Xcode, I find that I don't get any predictions from my model.
I followed all steps in this tutorial, and downloaded the example code.
The tutorial says to "just change this model line, with your model and it works", but it doesn't.
By "doesn't work" I mean that I don't get any predictions back when I use this example.
I can build the application and test it in the iPhone simulator, but the only output I get is "no predictions, please check console log".
I dug into the code and found that this error message comes from
MainViewController.swift (99:103)
private func imagePredictionHandler(_ predictions: [ImagePredictor.Prediction]?) {
    guard let predictions = predictions else {
        updatePredictionLabel("No predictions. (Check console log.)")
        return
    }
As I understand the code, it shows this message when no predictions come back from the mlmodel.
If I use an mlmodel provided by Apple (such as MobileNetV2), the example code works every time and gives predictions back.
That's why I'm pretty sure the issue is on my side, but I can't figure it out.
The mlmodel is trained on images from the Fruits-360 dataset, plus some images of charts that I added myself. To balance the classes, I took 70 pictures of each class. If I try this model in the Create ML preview, I can see that it is able to predict my validation pictures. But when I integrate the model in Xcode, it can't give me those predictions for the exact same images.
Does anyone know how to resolve this issue?
I'm using the latest Xcode version.
Thanks in advance
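For what it's worth, here is a minimal version of the Vision classification path that makes failures visible instead of silently returning nil. MyFruitClassifier and the input image are placeholders for your own model and test image; printing the error and the raw observations often shows whether the model fails to load or simply returns low-confidence results.

```swift
import Vision
import CoreML

func classify(_ image: CGImage) throws {
    // Substitute your own generated model class here (hypothetical name)
    let mlModel = try MyFruitClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: vnModel) { request, error in
        if let error = error {
            print("Vision error:", error)   // surfaces model-loading problems
            return
        }
        let observations = request.results as? [VNClassificationObservation] ?? []
        // Print the top guesses rather than discarding them
        for observation in observations.prefix(3) {
            print(observation.identifier, observation.confidence)
        }
    }
    request.imageCropAndScaleOption = .centerCrop

    try VNImageRequestHandler(cgImage: image, orientation: .up).perform([request])
}
```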
Post not yet marked as solved
Is it possible to use Core ML on video files? My goal is to take a recorded video and scan it for text. When that text is found, mark the time and then extract clips from the video based on where the text was found, basically making highlights. I am kind of lost as to where to start.
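One possible starting point, sketched under assumptions (file name, search term, and one-frame-per-second sampling are all placeholders): pull frames out of the video with AVAssetImageGenerator, run Vision's VNRecognizeTextRequest on each frame, and collect the timestamps where the text appears. Those timestamps can then drive AVAssetExportSession's timeRange to cut the highlight clips.

```swift
import AVFoundation
import Vision

func findTextTimes(in url: URL, matching term: String) throws -> [CMTime] {
    let asset = AVAsset(url: url)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    var matchTimes: [CMTime] = []
    let step = CMTime(value: 1, timescale: 1)   // sample one frame per second
    var t = CMTime.zero

    while t < asset.duration {
        let frame = try generator.copyCGImage(at: t, actualTime: nil)
        let request = VNRecognizeTextRequest()
        try VNImageRequestHandler(cgImage: frame).perform([request])
        let strings = (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
        if strings.contains(where: { $0.localizedCaseInsensitiveContains(term) }) {
            matchTimes.append(t)
        }
        t = CMTimeAdd(t, step)
    }
    // matchTimes can then drive AVAssetExportSession.timeRange to export clips.
    return matchTimes
}
```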
Post not yet marked as solved
I have set up a project using DeepLabV3 for image semantic segmentation using the pre-trained DeepLabV3 model from Apple.
This works consistently well for delineating (creating a black/white mask) the main object in a given image, but only as long as that main object is a person.
When I try other random objects (coffee cups, tractor, stapler), with very clear outline/contrast, I get very poor results, if not a completely black mask.
Is the pre-trained DeepLabV3 model just for people?
If I need to do something (re-training, config, etc), what is it?
Thanks,
Neal
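As far as I know (an assumption worth verifying), the DeepLabV3 model on Apple's model gallery is trained on the PASCAL VOC classes: background plus 20 categories such as person, car, bus, and dog. Coffee cups and staplers are not among them, so getting masks for other objects would require re-training a segmentation model on your own labeled masks. A quick way to check what the model actually sees is to dump the distinct class indices in its output; the DeepLabV3 class and semanticPredictions property names follow the generated model interface from the gallery download, and the input is assumed to be an already-prepared pixel buffer of the model's expected size.

```swift
import CoreML
import CoreVideo

// Prints the distinct per-pixel class indices the model predicts.
// `DeepLabV3` and `semanticPredictions` are the generated names from
// Apple's model gallery download (assumption: verify in your project).
func dumpClasses(for buffer: CVPixelBuffer) throws {
    let model = try DeepLabV3(configuration: MLModelConfiguration())
    let output = try model.prediction(image: buffer)
    let seg = output.semanticPredictions   // MLMultiArray of class indices
    var classes = Set<Int>()
    for i in 0..<seg.count {
        classes.insert(seg[i].intValue)
    }
    print("Class indices present:", classes.sorted())
}
```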
Post not yet marked as solved
Hi there,
I am trying to create a CoreML Custom layer that runs on the GPU, using Objective-C for CoreML setup and Metal for GPU programming.
I have created the Core ML model with the custom layer and can successfully execute it on the GPU. In my encoding method I wish to create an MTLBuffer from an input MTLTexture, but I can't seem to do so, or to get access to the memory address of the MTLTexture's storage.
When defining a custom layer in Core ML to run on the GPU, the following method needs to be implemented, with this prototype:
- (BOOL)encodeToCommandBuffer:(id<MTLCommandBuffer>)commandBuffer inputs:(NSArray<id<MTLTexture>> *)inputs outputs:(NSArray<id<MTLTexture>> *)outputs error:(NSError *__autoreleasing _Nullable *)error {
    // GPU setup, moving data, encoding, execution and so on here
}
Here, the inputs are passed as an NSArray of MTLTextures, and we pass those textures on to the Metal shader for computation. My problem is that I want to pass the shader an MTLBuffer that points to the input data, say inputs[0], but I am having trouble copying the input MTLTexture into an MTLBuffer.
I have tried using an MTLBlitCommandEncoder to copy the data from the MTLTexture to an MTLBuffer like so:
id<MTLBuffer> test_buffer = [command_PSO.device newBufferWithLength:8 options:MTLResourceStorageModeShared];
id<MTLBlitCommandEncoder> blitCommandEncoder = [commandBuffer blitCommandEncoder];
[blitCommandEncoder copyFromTexture:inputs[0]
                        sourceSlice:0
                        sourceLevel:0
                       sourceOrigin:MTLOriginMake(0, 0, 0)
                         sourceSize:MTLSizeMake(1, 1, 1)
                           toBuffer:test_buffer
                  destinationOffset:0
             destinationBytesPerRow:8
           destinationBytesPerImage:8];
[blitCommandEncoder endEncoding];
The above example should copy a single pixel from the MTLTexture inputs[0] to the MTLBuffer test_buffer, but that is not what happens.
MTLTexture's getBytes also doesn't work, since the inputs have MTLResourceStorageModePrivate set.
When I inspect the input MTLTexture I notice that its buffer attribute is <null>, and I'm wondering if this is the issue: the texture was not created from a buffer, so perhaps it doesn't expose its memory easily. But surely we should be able to get at the memory somewhere?
For reference, here is the input MTLTexture description:
<CaptureMTLTexture: 0x282469500> -> <AGXA14FamilyTexture: 0x133d9bb00>
label = <none>
textureType = MTLTextureType2DArray
pixelFormat = MTLPixelFormatRGBA16Float
width = 8
height = 1
depth = 1
arrayLength = 1
mipmapLevelCount = 1
sampleCount = 1
cpuCacheMode = MTLCPUCacheModeDefaultCache
storageMode = MTLStorageModePrivate
hazardTrackingMode = MTLHazardTrackingModeTracked
resourceOptions = MTLResourceCPUCacheModeDefaultCache MTLResourceStorageModePrivate MTLResourceHazardTrackingModeTracked
usage = MTLTextureUsageShaderRead MTLTextureUsageShaderWrite
shareable = 0
framebufferOnly = 0
purgeableState = MTLPurgeableStateNonVolatile
swizzle = [MTLTextureSwizzleRed, MTLTextureSwizzleGreen, MTLTextureSwizzleBlue, MTLTextureSwizzleAlpha]
isCompressed = 0
parentTexture = <null>
parentRelativeLevel = 0
parentRelativeSlice = 0
buffer = <null>
bufferOffset = 0
bufferBytesPerRow = 0
iosurface = 0x0
iosurfacePlane = 0
allowGPUOptimizedContents = YES
label = <none>
Post not yet marked as solved
hello,
When I use Xcode to generate a model encryption key, an error is reported: "Failed to Generate Encryption Key. Sign in with your Apple ID in the Apple ID pane in System Preferences and retry." But I am already logged in with my Apple ID in System Preferences, and the error still occurs. I reinstalled Xcode and logged in to my Apple ID again, but the error persists.
Xcode Version 12.4
macOS Catalina 10.15.7
thanks
Post not yet marked as solved
Following the guide found here, I've been able to preview image classification in Create ML and Xcode. However, when I swap out the MobileNet model for my own and try running it as an app, images are not classified accurately.
When I check the same images using my model in its Xcode preview tab, the guesses are accurate.
I've tried changing this line to the different available options, but it doesn't seem to help:
imageClassificationRequest.imageCropAndScaleOption = .centerCrop
Does anyone know why a model would work well in preview but not while running in the app? Thanks in advance.
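One difference between the preview and an app that has bitten people before: the preview respects the image file's EXIF orientation, while VNImageRequestHandler treats the pixels as .up unless told otherwise. A sketch of passing the orientation through; the helper function is my own addition, not part of the sample project.

```swift
import Vision
import UIKit

// Maps a UIImage orientation onto the EXIF-style orientation Vision expects.
func cgOrientation(_ o: UIImage.Orientation) -> CGImagePropertyOrientation {
    switch o {
    case .up: return .up
    case .down: return .down
    case .left: return .left
    case .right: return .right
    case .upMirrored: return .upMirrored
    case .downMirrored: return .downMirrored
    case .leftMirrored: return .leftMirrored
    case .rightMirrored: return .rightMirrored
    @unknown default: return .up
    }
}

// Usage, with `image` being the UIImage you classify:
// let handler = VNImageRequestHandler(cgImage: image.cgImage!,
//                                     orientation: cgOrientation(image.imageOrientation))
```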
Post not yet marked as solved
I converted a PyTorch model to a Core ML model and want to use it on an iOS device. I wrote the Swift code, and it works normally in the simulator but not on a real device. My Core ML model information and test environment are as follows.
Model Type: Neural Network
Size: 60.8 MB
Document Type: Core ML Model
Availability: iOS 14.0+
torch == 1.8.1
coremltools == 5.2.0
iOS version tested: 15.0
Device: iPhone 12 Pro
And the error message is:
2022-04-19 10:36:24.404508+0900 test[2488:1369794] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid blob shape": generic_elementwise_kernel: cannot broadcast:
(1, 256, 128, 130)
(1, 256, 128, 132)
status=-7
2022-04-19 10:36:24.404716+0900 test[2488:1369794] [coreml] Error in adding network -7.
2022-04-19 10:36:24.405000+0900 test[2488:1369794] [coreml] MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
2022-04-19 10:36:24.405029+0900 test[2488:1369794] [coreml] MLModelAsset: modelWithError: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
test/ContentView.swift:16: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
2022-04-19 10:36:24.405408+0900 test[2488:1369794] test/ContentView.swift:16: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
(lldb)
I would appreciate it if you could share a solution or any related references.
Thank you for reading.
Post not yet marked as solved
I am trying to transfer this Xcode sample project into a Playground app project.
However, when I move all the Swift files and the Core ML file 'ExerciseClassifier.mlmodel' from the folder into the Playground app project, I get the error "Cannot find type 'ExerciseClassifier' in scope".
What can I do to remove the error and get a properly working Playground project?
Post not yet marked as solved
How can I gather data from a watch for an activity classifier model? Are there any tools to help facilitate this?
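There is no single built-in collection tool that I know of; a common approach is a small watchOS app that logs Core Motion samples to CSV and ships them to the phone for training. A minimal sketch, with file writing and WCSession transfer omitted; the 50 Hz rate is the value Apple's activity-classifier examples commonly use, so verify it against your own model's prediction window.

```swift
import CoreMotion

// Sketch of a watchOS data logger for an activity classifier.
let motionManager = CMMotionManager()
var csvRows = ["timestamp,rotX,rotY,rotZ,accX,accY,accZ"]

if motionManager.isDeviceMotionAvailable {
    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0   // assumed 50 Hz sample rate
    motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
        guard let m = motion else { return }
        // Append one CSV row per sample
        csvRows.append("\(m.timestamp),\(m.rotationRate.x),\(m.rotationRate.y),\(m.rotationRate.z)," +
                       "\(m.userAcceleration.x),\(m.userAcceleration.y),\(m.userAcceleration.z)")
    }
}
// Later: motionManager.stopDeviceMotionUpdates(), write
// csvRows.joined(separator: "\n") to a file, and transfer it to the
// phone (e.g. via WCSession) for import into Create ML.
```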
Post not yet marked as solved
The release version of my app will not collect user data. I would like to know whether I can ask beta users on TestFlight for permission to collect ML models trained on their data, after presenting the necessary documentation such as a privacy policy.
I also don't want users to be able to use the beta app if they don't agree to this.
Are these two things possible?
Thanks
Post not yet marked as solved
I created a style transfer model using Create ML and cannot save the generated styled image to the temporary directory. I'm unsure whether it has to do with the way I create the pixel buffer (below):
import CoreML
import CoreImage
import CoreVideo

let model = style1()

// Input size expected by the model
let modelInputSize = CGSize(width: 512, height: 512)

// Create a CVPixelBuffer for the model input
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
CVPixelBufferCreate(kCFAllocatorDefault,
                    Int(modelInputSize.width),
                    Int(modelInputSize.height),
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &pixelBuffer)

// Render the source image into the pixel buffer
let context = CIContext()
let argPathUrl = "file:///pathhere"
let modelImageUrl = URL(string: argPathUrl)!
guard let inputImage = CIImage(contentsOf: modelImageUrl) else { return }
context.render(inputImage, to: pixelBuffer!)

// Run the prediction
guard let output = try? model.prediction(image: pixelBuffer!) else { return }
let predImage = CIImage(cvPixelBuffer: output.stylizedImage)

// Write the stylized image to the temporary directory
let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("testcgi.png")
try! context.writePNGRepresentation(of: predImage,
                                    to: outputURL,
                                    format: CIFormat.RGBA16,
                                    colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
                                    options: [:])
Post not yet marked as solved
Hi, while I was able to successfully retrieve MLModelCollection with a list of model identifiers from Apple's CoreML Deployment Dashboard, loading encrypted models from a collection results in the following error:
NSUnderlyingError=0x281ffb810 {Error Domain=com.apple.CoreML Code=3 "failed to invoke mremap_encrypted with result = -1, error = 12" UserInfo={NSLocalizedDescription=failed to invoke mremap_encrypted with result = -1, error = 12}}}
I use the same MLModel.load(contentsOf:configuration:completionHandler:) method with model URLs from the MLModelCollection, which works just fine for non-encrypted models.
Is there any workaround for this issue?
Post not yet marked as solved
Is there a Machine Learning API that can take handwriting (either as a bitmap or as a list of points) and convert it to text?
I know Scribble can be used to allow handwriting input into text fields, but in this API it is Scribble which controls the rendering of the handwriting. Is there an API where my app can render the handwriting and get information about the text content?
In the Keynote demo Craig was able to get text content from a photo of a whiteboard. Are there APIs which would allow an app developer to create something similar?
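I'm not aware of a public API that accepts raw stroke points directly; the usual workaround is to rasterize the strokes yourself (for example with PencilKit's PKDrawing.image(from:scale:)) and run Vision's text recognizer on the bitmap. A sketch:

```swift
import Vision
import UIKit

// Runs Vision text recognition on a rendered handwriting bitmap and
// hands back the recognized strings. The rendering step is assumed to
// have happened already (e.g. via PKDrawing.image(from:scale:)).
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: cgImage).perform([request])
}
```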
Post not yet marked as solved
Hi everyone, I'm trying to use the Create ML hand action classifier to detect some simple actions. I'm having trouble because the model only detects one hand at a time in the scene (even in the model's preview, without any code), and I need both hands. Is this a bug, or am I doing something wrong?
Thank you in advance
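In case it helps while waiting for an answer: when you build your own capture pipeline (rather than relying on the preview), Vision's hand-pose request can return up to two hands via maximumHandCount, and you can run the classifier on each hand's keypoint window separately. A sketch, with frame input and the sliding-window bookkeeping omitted:

```swift
import Vision

func detectHands(in frame: CGImage) throws {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2   // ask Vision for both hands

    try VNImageRequestHandler(cgImage: frame).perform([request])

    for hand in request.results ?? [] {
        // One observation per detected hand; feed each hand's keypoints
        // into its own window for the action classifier.
        let keypoints = try hand.keypointsMultiArray()
        print("hand keypoints shape:", keypoints.shape)
    }
}
```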
Post not yet marked as solved
Hi,
I have made several ML models previously with more than one feature column, and they always worked great. But now, in a new project, I have a set of data (multiple CSV files) with a single feature column. Create ML loads the data well, but when I try to train the model I keep getting this error message:
"Feature column Y is empty on row 0 of input data table"
Y is indeed the name of the column, but I have no row 0. I tried converting the CSV files to JSON; that didn't work, and I keep getting the same error. I would appreciate any help 🙂
Best Regards,
Joel Balagué
Post not yet marked as solved
I am using a Core ML model from https://github.com/PeterL1n/RobustVideoMatting.
I have an M1 MacBook 13" with 16 GB and an M1 Max MacBook Pro 16" with 64 GB.
With computeUnits set to .all (the default), the M1 Max is much slower than the M1: one prediction takes 0.202 s versus 0.155 s.
With .cpuOnly, the M1 Max is slightly faster: 0.129 s versus 0.146 s.
With .cpuAndGPU, the M1 Max is much faster than the M1: 0.057 s versus 0.086 s.
Also, when I use .all or the default, the M1 Max prints error messages like these:
H11ANEDevice::H11ANEDeviceOpen IOServiceOpen failed result= 0xe00002e2
H11ANEDevice::H11ANEDeviceOpen kH11ANEUserClientCommand_DeviceOpen call failed result=0xe00002bc
Error opening LB - status=0xe00002bc.. Skipping LB and retrying
But the M1 13" doesn't produce any errors.
So I want to know: is this a bug in Core ML or in the M1 Max?
My code looks like this:
let config = MLModelConfiguration()
config.computeUnits = .all
let model = try rvm_mobilenetv3_1920x1080_s0_25_int8_ANE(configuration: config)
let image1 = NSImage(named: "test1")?.cgImage(forProposedRect: nil, context: nil, hints: nil)
let input = try? rvm_mobilenetv3_1920x1080_s0_25_int8_ANEInput(srcWith:image1!, r1i: MLMultiArray(), r2i: MLMultiArray(), r3i: MLMultiArray(), r4i: MLMultiArray())
_ = try? model.prediction(input: input!)