Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity

Why does the answer for "apple" trigger guardrailViolation?
I have been using "apple" to test Foundation Models. I thought this was local, but today the answer changed: halfway through the explanation, a guardrailViolation error was suddenly triggered! And since yesterday, all references to "Apple II" and "Apple III" refer me to consult apple.com! Does Foundation Models connect to the Internet for answers? Using beta 3.
3 replies · 0 boosts · 146 views · Jul ’25
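Whatever is tripping the guardrail, the violation surfaces as a catchable error from the session call, so the app can degrade gracefully instead of failing mid-stream. A minimal sketch, assuming the error is the guardrailViolation case of LanguageModelSession.GenerationError as the post's wording suggests; treat the exact type and case names as assumptions rather than confirmed API.

import FoundationModels

func ask(_ session: LanguageModelSession, _ prompt: String) async -> String {
    do {
        // respond(to:) returns the model's reply as a single response
        let response = try await session.respond(to: prompt)
        return response.content
    } catch let error as LanguageModelSession.GenerationError {
        // Assumed case spelling, matching the error name the poster reports
        if case .guardrailViolation = error {
            return "The model declined this prompt (guardrail violation)."
        }
        return "Generation failed: \(error)"
    } catch {
        return "Unexpected error: \(error)"
    }
}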
Broken compatibility in tensorflow-metal with tensorflow 2.18
Issue type: Bug
TensorFlow metal version: 1.1.1
TensorFlow version: 2.18
OS platform and distribution: macOS 15.2
Python version: 3.11.11
GPU model and memory: Apple M2 Max GPU, 38 cores

Standalone code to reproduce the issue:

import tensorflow as tf

if __name__ == '__main__':
    gpus = tf.config.experimental.list_physical_devices('GPU')
    print(gpus)

Current behavior: the Apple silicon GPU with tensorflow-metal==1.1.0 and Python 3.11 works fine with tensorboard==2.17.0. This is the normal output:

/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Process finished with exit code 0

But if I upgrade TensorFlow to 2.18 I get this error:

/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
Traceback (most recent call last):
  File "/Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py", line 1, in <module>
    import tensorflow as tf
  File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/__init__.py", line 437, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN3tsl8internal10LogMessageC1EPKcii
  Referenced from: <D2EF42E3-3A7F-39DD-9982-FB6BCDC2853C> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <2814A58E-D752-317B-8040-131217E2F9AA> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so

Process finished with exit code 1
3 replies · 3 boosts · 1.5k views · Feb ’25
Selecting an output language with Foundation Models
When using Foundation Models, is it possible to ask the model to produce output in a specific language, apart from giving an instruction like "Provide answers in ." ? (I tried that and it kind of worked, but it seems fragile.) I haven't noticed an API to do so and have a use-case where the output should be in a user-selectable language that is not the current system language.
3 replies · 1 boost · 460 views · Jul ’25
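There is no dedicated output-language parameter that I am aware of; the usual pattern is to put the target language in the session instructions rather than in each prompt, which tends to be more stable than an inline request. A minimal sketch, assuming a user-selectable language name string coming from your own settings UI; whether the model honors every language is not guaranteed.

import FoundationModels

// Builds a session whose instructions pin the output language.
// `languageName` is a hypothetical value from your own settings, e.g. "French".
func makeSession(outputLanguage languageName: String) -> LanguageModelSession {
    LanguageModelSession(instructions: """
        You are a helpful assistant embedded in an app.
        Always answer in \(languageName), regardless of the language of the question.
        """)
}

// Usage: the prompt can be in any language; the instructions steer the reply.
// let session = makeSession(outputLanguage: "French")
// let reply = try await session.respond(to: "Summarize today's schedule.")
// print(reply.content)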
Converting FastAI Cat vs Dog Model into Core ML
FB: FB16079804

Hello, I've turned FastAI's Cat vs Dog model into a model that distinguishes lemons from limes, and it all works fine in a notebook. I am now looking to convert this model to Core ML for my iOS app using TorchScript and Apple's official guidelines for coremltools. The model converts, but I cannot see the Preview tab in Xcode. Have any of you tried converting to Core ML this way? I guess my input types are not matching coremltools' expectations for the preview, but I am stuck. Here is my code:

import torch
import coremltools as ct
from fastai.vision.all import *
import json
from torchvision import transforms

# Load your Fastai model (replace with your actual path)
learn = load_learner('lemonmodel.pkl')

# Example input image (you can use any image from your dataset)
input_image = PILImage.create('example.jpg')

# Preprocess the image (assuming you used these transforms during training)
to_tensor = transforms.ToTensor()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
input_tensor = to_tensor(input_image)
input_tensor = normalize(input_tensor)  # Apply normalization

# Add a batch dimension
input_tensor = input_tensor.unsqueeze(0)

# Ensure float32 type
input_tensor = input_tensor.float()

# Trace the model
trace = torch.jit.trace(learn.model, input_tensor)

# Define the Core ML input type (considering your model's input shape)
_input = ct.ImageType(
    name="input_1",
    shape=input_tensor.shape,
    bias=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    scale=1./(255*0.226)
)

# Convert the model to Core ML format
mlmodel = ct.convert(
    trace,
    inputs=[_input],
    minimum_deployment_target=ct.target.iOS14  # Optional, set deployment target
)

# Set model type as 'imageClassifier' for the Preview tab
mlmodel.type = 'imageClassifier'

# Correct structure for preview parameters (assuming two classes: 'lemon' and 'lime')
labels_json = {
    "imageClassifier": {
        "labels": ["lemon", "lime"],
        "input": {
            "shape": list(input_tensor.shape),  # Provide the actual input shape
            "mean": [0.485, 0.456, 0.406],      # Match normalization mean
            "std": [0.229, 0.224, 0.225]        # Match normalization std
        },
        "output": {
            "shape": [1, 2]  # Output shape for your model (2 classes)
        }
    }
}

# Setting up the metadata with correct 'preview' params
mlmodel.user_defined_metadata['com.apple.coreml.model.preview.params'] = json.dumps(labels_json)

# Save the model as .mlmodel
mlmodel.save("LemonClassifierGemini.mlmodel")

My model is:

Input batch shape: torch.Size([32, 3, 192, 192])
Labels batch shape: torch.Size([32])
Validation Loss: None, Validation Metric: None
Predictions shape: torch.Size([63, 2])
Targets shape: torch.Size([63])

Code for the model:

searches = 'lemon','lime'
path = Path('lemon_or_not')

for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    time.sleep(5)
    resize_images(path/o, max_size=400, dest=path/o)

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path, bs=32)

dls.show_batch(max_n=6)

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)

is_lemon,_,probs = learn.predict(PILImage.create('lemon.jpg'))
print(f"This is a: {is_lemon}.")
print(f"Probability it's a lemon: {probs[0]:.4f}")

This is a: lemon.
Probability it's a lemon: 1.0000

learn.export('lemonmodel.pkl')

I am stuck on why it doesn't show the Preview tab.
3 replies · 0 boosts · 712 views · Dec ’24
Run Time Issues with Swift/Core ML
Hello! I have a Swift program that tracks the location of a ball (through the back camera). It seems to be working fine, but the only issue is the run time, particularly my concatenate, normalize, and argmax functions, which are meant to be a 1-to-1 copy of the PyTorch argmax function and the following Python lines:

imgs = np.concatenate((img, img_prev, img_preprev), axis=2)
imgs = imgs.astype(np.float32)/255.0
imgs = np.rollaxis(imgs, 2, 0)
inp = np.expand_dims(imgs, axis=0)  # used to pass into model

However, I need my program to run in real time, and in an ideal world I want it to run well under real time. Below is a rundown of the run times that result from my code:

Starting model inference
Setup took: 0.0 seconds
Resize took: 0.03741896152496338 seconds
Concatenation took: 0.3359949588775635 seconds
Normalization took: 0.9906361103057861 seconds
Model prediction took: 0.3425499200820923 seconds
Argmax took: 28.17007803916931 seconds
Postprocess took: 0.054128050804138184 seconds
Model inference took 29.934185028076172 seconds

Here are the concatenateBuffers, normalizeBuffer, and argmax functions that I use:

func concatenateBuffers(_ buffers: [CVPixelBuffer?]) -> CVPixelBuffer? {
    guard buffers.count == 3, let first = buffers[0] else { return nil }
    let width = CVPixelBufferGetWidth(first)
    let height = CVPixelBufferGetHeight(first)
    let targetChannels = 9
    var concatenated: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attrs, &concatenated)
    guard let output = concatenated else { return nil }
    CVPixelBufferLockBaseAddress(output, [])
    defer { CVPixelBufferUnlockBaseAddress(output, []) }
    guard let outputData = CVPixelBufferGetBaseAddress(output) else { return nil }
    let outputPtr = UnsafeMutablePointer<UInt8>(OpaquePointer(outputData))

    // Lock all input buffers at once
    buffers.forEach { buffer in
        guard let buffer = buffer else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
    }
    defer { buffers.forEach { CVPixelBufferUnlockBaseAddress($0!, .readOnly) } }

    // Process each input buffer
    for (frameIdx, buffer) in buffers.enumerated() {
        guard let buffer = buffer, let inputData = CVPixelBufferGetBaseAddress(buffer) else { continue }
        let inputPtr = UnsafePointer<UInt8>(OpaquePointer(inputData))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        let totalPixels = width * height

        // Process all pixels in one go for this frame
        for i in 0..<totalPixels {
            let y = i / width
            let x = i % width
            let inputOffset = y * bytesPerRow + x * 4
            let outputOffset = i * targetChannels + frameIdx * 3
            // BGR order to match numpy
            outputPtr[outputOffset] = inputPtr[inputOffset + 2]      // B
            outputPtr[outputOffset + 1] = inputPtr[inputOffset + 1]  // G
            outputPtr[outputOffset + 2] = inputPtr[inputOffset]      // R
        }
    }
    return output
}

func normalizeBuffer(_ buffer: CVPixelBuffer?) -> MLMultiArray? {
    guard let input = buffer else { return nil }
    let width = CVPixelBufferGetWidth(input)
    let height = CVPixelBufferGetHeight(input)
    let channels = 9
    CVPixelBufferLockBaseAddress(input, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(input, .readOnly) }
    guard let inputData = CVPixelBufferGetBaseAddress(input) else { return nil }
    let shape = [1, NSNumber(value: channels), NSNumber(value: height), NSNumber(value: width)]
    guard let output = try? MLMultiArray(shape: shape, dataType: .float32) else { return nil }
    let inputPtr = inputData.assumingMemoryBound(to: UInt8.self)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(input)
    let ptr = UnsafeMutablePointer<Float>(OpaquePointer(output.dataPointer))
    let totalSize = width * height
    for c in 0..<channels {
        for idx in 0..<totalSize {
            let h = idx / width
            let w = idx % width
            let inputIdx = h * bytesPerRow + w * channels + c
            ptr[c * totalSize + idx] = Float(inputPtr[inputIdx]) / 255.0
        }
    }
    return output
}

func argmax(_ array: MLMultiArray) -> MLMultiArray? {
    let shape = array.shape.map { $0.intValue }
    guard shape.count == 3, shape[0] == 1, shape[1] == 256, shape[2] == 230400 else { return nil }
    guard let output = try? MLMultiArray(shape: [1, NSNumber(value: 230400)], dataType: .int32) else { return nil }
    let ptr = UnsafePointer<Float>(OpaquePointer(array.dataPointer))
    let outputPtr = UnsafeMutablePointer<Int32>(OpaquePointer(output.dataPointer))
    let channelSize = 230400
    for pos in 0..<230400 {
        var maxValue = -Float.infinity
        var maxIndex: Int32 = 0
        for channel in 0..<256 {
            let value = ptr[channel * channelSize + pos]
            if value > maxValue {
                maxValue = value
                maxIndex = Int32(channel)
            }
        }
        outputPtr[pos] = maxIndex
    }
    return output
}

Are there any glaring areas of inefficiency that can be reduced to allow for under-real-time processing while following the same logic as the Python code exactly? Would using Obj-C speed things up for some reason? Are there any tools I can use so I don't have to write these functions myself? Additionally, in the class's init function, I tried to check the compute units being used, since I feel 0.34 seconds for a single model prediction is also far too long, but no print statements are showing for some reason:

init() {
    guard let loadedModel = try? BallTrackerModel() else {
        fatalError("Could not load model")
    }
    let config = MLModelConfiguration()
    config.computeUnits = .all
    guard let configuredModel = try? BallTrackerModel(configuration: config) else {
        fatalError("Could not configure model")
    }
    self.model = configuredModel
    print("model loaded with compute units \(config.computeUnits.rawValue)")
}

Thanks!
3 replies · 0 boosts · 672 views · Feb ’25
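One concrete inefficiency in the argmax above is the memory access pattern: the inner loop jumps across the 256 × 230400 buffer one channel plane at a time for every position, which defeats the cache. Iterating channel-major with a running maximum per position touches memory contiguously and usually cuts the time dramatically (and make sure the timings come from a Release build, not Debug). A minimal sketch of that rewrite; the fixed 256 × 230400 shape is taken from the post, and it returns a plain array rather than an MLMultiArray.

import CoreML

// Channel-major argmax over a [1, 256, 230400] float32 MLMultiArray.
// Each channel plane is read contiguously, so the CPU prefetcher can keep up.
func fastArgmax(_ array: MLMultiArray) -> [Int32]? {
    let shape = array.shape.map { $0.intValue }
    guard shape == [1, 256, 230_400] else { return nil }
    let channels = 256
    let positions = 230_400
    let ptr = array.dataPointer.assumingMemoryBound(to: Float.self)

    var bestValue = [Float](repeating: -.infinity, count: positions)
    var bestIndex = [Int32](repeating: 0, count: positions)

    for c in 0..<channels {
        let plane = ptr + c * positions          // contiguous slice for this channel
        for pos in 0..<positions where plane[pos] > bestValue[pos] {
            bestValue[pos] = plane[pos]
            bestIndex[pos] = Int32(c)
        }
    }
    return bestIndex
}

Another option worth exploring is folding the argmax into the Core ML model itself at conversion time, so the heavy reduction never runs in Swift at all.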
Feasibility of Real-Time Object Detection in Live Video with Core ML on M1 Pro and A-Series Chips
Hello, I am exploring real-time object detection, and its replacement/overlay with another shape, on live video streams for an iOS app using the Core ML and Vision frameworks. My target is to achieve high-speed, real-time detection without noticeable latency, similar to what's possible with page-fault handling and associative caching in an OS, but applied to video processing. Given that this requires consistent, real-time model inference, I'm curious about how well the Neural Engine or GPU can handle such tasks on A-series chips in iPhones versus M-series chips (specifically the M1 Pro and possibly the M4) in MacBooks. Here are a few specific points I'd like insight on:

Hardware suitability: How feasible is it to perform real-time object detection with Core ML on the Neural Engine (i.e., can it maintain low latency)? Would the M-series chips (e.g., M1 Pro or newer) offer a tangible benefit for this type of task compared to the A-series in mobile devices? Which A- and M-series chips would be the minimum feasible recommendation for such a task?

Performance expectations: For continuous, live video object detection, what would be the expected frame rate or latency using an optimized Core ML model? Has anyone benchmarked such applications, and is the M-series required to achieve smooth, real-time processing?

Differences across Apple hardware: How does performance scale between the A-series Neural Engine and the M-series GPU and Neural Engine? Is the M-series vastly superior for real-time Core ML tasks like object detection on live video feeds?

If anyone has attempted live object detection on these chips, any insights on real-time performance, limitations, or optimizations would be highly appreciated. Please refer: Apple APIs. Thank you in advance for your help!
3 replies · 0 boosts · 765 views · Dec ’24
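On the framework side, the usual pattern for per-frame detection is to wrap the Core ML model in a VNCoreMLModel once and run a VNCoreMLRequest against each camera frame's pixel buffer; Core ML schedules the work on the Neural Engine, GPU, or CPU as the model and device allow. A minimal sketch assuming a generic object-detection model class named DetectorModel; the class name is hypothetical.

import Vision
import CoreML

final class FrameDetector {
    private let request: VNCoreMLRequest

    init() throws {
        let config = MLModelConfiguration()
        config.computeUnits = .all  // let Core ML pick ANE/GPU/CPU
        let coreMLModel = try DetectorModel(configuration: config).model  // hypothetical generated class
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        request = VNCoreMLRequest(model: visionModel)
        request.imageCropAndScaleOption = .scaleFill
    }

    // Call from the AVCaptureVideoDataOutput delegate for each frame.
    func detect(in pixelBuffer: CVPixelBuffer) throws -> [VNRecognizedObjectObservation] {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        try handler.perform([request])
        return request.results as? [VNRecognizedObjectObservation] ?? []
    }
}

Reusing one request object per frame avoids reloading the model, and enabling alwaysDiscardsLateVideoFrames on the capture output keeps latency bounded when inference falls behind the camera.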
Crash inside of Vision predictWithCVPixelBuffer - Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
Hello, we have been encountering a persistent crash in our application, which is deployed exclusively on iPad devices. The crash occurs in the following code block:

let requestHandler = ImageRequestHandler(paddedImage)
var request = CoreMLRequest(model: model)
request.cropAndScaleAction = .scaleToFit
let results = try await requestHandler.perform(request)

The client using this code is wrapped inside an actor, following Swift concurrency principles. The issue has been consistently reproduced across multiple iPadOS versions, including:

iPadOS 18.4.0
iPadOS 18.4.1
iPadOS 18.5.0

This is the crash log:

Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
0 libobjc.A.dylib 0x7b98 objc_retain + 16
1 libobjc.A.dylib 0x7b98 objc_retain_x0 + 16
2 libobjc.A.dylib 0xbf18 objc_getProperty + 100
3 Vision 0x326300 -[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148
4 Vision 0x3273b0 -[VNCoreMLTransformer processRegionOfInterest:croppedPixelBuffer:options:qosClass:warningRecorder:error:progressHandler:] + 748
5 Vision 0x2ccdcc __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_5 + 132
6 Vision 0x14600 VNExecuteBlock + 80
7 Vision 0x14580 __76+[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:]_block_invoke + 56
8 libdispatch.dylib 0x6c98 _dispatch_block_sync_invoke + 240
9 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
10 libdispatch.dylib 0x11728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 libdispatch.dylib 0x7fac _dispatch_sync_block_with_privdata + 452
12 Vision 0x14110 -[VNControlledCapacityTasksQueue dispatchSyncByPreservingQueueCapacity:] + 60
13 Vision 0x13ffc +[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:] + 324
14 Vision 0x2ccc80 __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_4 + 336
15 Vision 0x14600 VNExecuteBlock + 80
16 Vision 0x2cc98c __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_3 + 256
17 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x6ab0 _dispatch_block_invoke_direct + 284
19 Vision 0x2cc454 -[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 632
20 Vision 0x2cd14c __111-[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke + 124
21 Vision 0x14600 VNExecuteBlock + 80
22 Vision 0x2ccfbc -[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 340
23 Vision 0x125410 __swift_memcpy112_8 + 4852
24 libswift_Concurrency.dylib 0x5c134 swift::runJobInEstablishedExecutorContext(swift::Job*) + 292
25 libswift_Concurrency.dylib 0x5d5c8 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 156
26 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
27 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
28 libsystem_pthread.dylib 0x9d0 _pthread_wqthread + 232
29 libsystem_pthread.dylib 0xaac start_wqthread + 8

We found an issue similar to ours (https://developer.apple.com/forums/thread/770771), but the crash logs are quite different, and we believe this warrants further investigation to better understand the root cause and potential mitigation strategies.
Please let us know if any additional information would help diagnose this issue.
3 replies · 0 boosts · 294 views · Jul ’25
Getting Foundation Models running in the Simulator
I have a Mac (M4 MacBook Pro) running the Tahoe 26.0 beta, and I am running the Xcode beta. I can run code that uses the LLM in a #Preview { }. But when I try to run the same code in the simulator, I get the 'device not ready' error, and I see the following in the Settings app. Is there anything I can do to get the simulator past this point and allow me to test Apple's LLM on it?
3 replies · 0 boosts · 320 views · Jul ’25
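One thing worth checking before assuming the simulator itself is broken is the model's reported availability, which distinguishes "device not eligible" from "Apple Intelligence not enabled" and "model not ready". A minimal sketch, assuming the SystemLanguageModel availability API as presented around WWDC25; treat the exact property and case names as assumptions.

import FoundationModels

func describeModelAvailability() -> String {
    let availability = SystemLanguageModel.default.availability
    switch availability {
    case .available:
        return "On-device model is ready."
    case .unavailable(let reason):
        // Reasons include the device not being eligible, Apple Intelligence
        // being turned off in Settings, or the model assets still downloading.
        return "On-device model unavailable: \(reason)"
    @unknown default:
        return "Unrecognized availability state: \(availability)"
    }
}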
Random crash from AVFAudio library
Hi everyone! I'm getting random crashes when I'm using the Speech Recognizer functionality in my app. This is an old bug (reported for 8 years on the Apple forums) and I would really appreciate it if anyone from Apple is able to find a fix for these crashes. Can anyone also please help me understand what I could do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (if there is any other native library available, or a CocoaPods library)? Here is my code, and also the crash log for it.

Code:

func startRecording() {
    startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
    if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
        commentTextView.textColor = .white
    } else {
        commentTextView.textColor = .black
    }
    commentTextView.isUserInteractionEnabled = false
    recordingLabel.text = Constants.recording
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.record)
        try audioSession.setMode(AVAudioSession.Mode.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError(Constants.error)
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        if result != nil {
            self.commentTextView.text = result?.bestTranscription.formattedString
            isFinal = (result?.isFinal)!
        }
        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.startStopRecordBtn.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        // CRASH HERE
        self?.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
}

Here is the crash log:

Thanks very much for reading this!
3 replies · 0 boosts · 1.1k views · Sep ’24
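Two things commonly produce random crashes around the tap in setups like the one above: installing a second tap on the same bus (for example when startRecording fires twice) and installing a tap while the input format is invalid (zero sample rate or channel count after an interruption or route change). A defensive sketch, assuming the same AVAudioEngine and SFSpeechAudioBufferRecognitionRequest setup as the post; this is a hardening pattern, not a confirmed root cause for this particular log.

import AVFAudio
import Speech

func installRecognitionTap(on audioEngine: AVAudioEngine,
                           feeding request: SFSpeechAudioBufferRecognitionRequest) {
    let inputNode = audioEngine.inputNode

    // Remove any tap left over from a previous recording before adding a new one;
    // installing a second tap on bus 0 is a frequent source of AVFAudio preconditions firing.
    inputNode.removeTap(onBus: 0)

    let recordingFormat = inputNode.outputFormat(forBus: 0)

    // A zero sample rate or channel count means the input is not usable right now
    // (e.g. during an interruption); installing a tap with that format also crashes.
    guard recordingFormat.sampleRate > 0, recordingFormat.channelCount > 0 else { return }

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        request.append(buffer)
    }
}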
The CoreML MultiArray Float16 input is not supported for running on the NPU, and this issue only occurs on the iPhone 11.
Xcode Version: Version 15.2 (15C500b)
com.github.apple.coremltools.source: torch==1.12.1
com.github.apple.coremltools.version: 7.2
Compute: Mixed (Float16, Int32)
Storage: Float16

The input to the mlpackage is MultiArray (Float16, 1 × 1 × 544 × 960).
The flexibility is: 1 × 1 × 544 × 960 | 1 × 1 × 384 × 640 | 1 × 1 × 736 × 1280 | 1 × 1 × 1088 × 1920

I tested this on iPhone XR, iPhone 11, iPhone 12, iPhone 13, and iPhone 14. On all devices except the iPhone 11, the model runs correctly on the NPU. However, on the iPhone 11, the model runs on the CPU instead.

Here is the coremltools conversion code I used:

mlmodel = ct.convert(
    trace,
    inputs=[ct.TensorType(shape=input_shape, name="input", dtype=np.float16)],
    outputs=[ct.TensorType(name="output", dtype=np.float16, shape=output_shape)],
    convert_to='mlprogram',
    minimum_deployment_target=ct.target.iOS16
)
3 replies · 0 boosts · 875 views · Sep ’24
Getting ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
I am working on the neural network classifier provided on coremltools.readme.io in the updatable neural network section (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset). I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; right now I am using coremltools version 6.2. I converted this model to an mlmodel with .convert only, and it converted successfully. But I face an error in the make_updatable function saying the loss layer input must be a softmax layer output. Even in the coremltools package API reference I found that it is because the layer is softmaxND when it should be softmax. The problem is that when I convert the model from a Keras Sequential model to a Core ML model, the layer name and type change, and softmax becomes softmaxND. Has anyone faced this issue?

If I execute builder.inspect_layers(last=4) I get this output:

[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
  Updatable: False
  Input blobs: ['sequential/dense_1/MatMul']
  Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
  Updatable: False
  Input blobs: ['sequential/dense/Relu']
  Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation)
  Updatable: False
  Input blobs: ['sequential/dense/MatMul']
  Output blobs: ['sequential/dense/Relu']

In the make_updatable function, when I execute

builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')

I get this error:

ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
2 replies · 0 boosts · 1.4k views · Nov ’24
Foundation Model - Change LLM
Almost everywhere else you see Apple Intelligence, you get to select whether it's on device, private cloud compute, or ChatGPT. Is there a way to do that via code in the Foundation Model? I searched through the docs and couldn't find anything, but maybe I missed it.
2 replies · 0 boosts · 121 views · Jul ’25
A Summary of the WWDC25 Group Lab - Apple Intelligence
At WWDC25 we launched a new type of Lab event for the developer community: Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Apple Intelligence.

Can I integrate writing tools in my own text editor?
UITextView, NSTextView, and SwiftUI TextEditor automatically get Writing Tools on devices that support Apple Intelligence. For custom text editors, check out Enhancing your custom text engine with Writing Tools.

Given that Foundation Models are on-device, how will Apple update the models over time? And how should we test our app against the model updates?
Model updates are in sync with OS updates. As for testing with updated models, watch our WWDC session about prompt engineering and safety, and read the Human Interface Guidelines to understand best practices in prompting the on-device model.

What is the context size of a session in the Foundation Models framework? How do I handle the error if a session runs out of context?
Currently the context size is about 4,000 tokens. If it's exceeded, developers can catch the .exceededContextWindowSize error at runtime. As discussed in one of our WWDC25 sessions, when the context window is exceeded, one approach is to trim and summarize a transcript, and then start a new session.

Can I do image generation using the Foundation Models framework, or is only text generation supported?
Foundation Models do not generate images, but you can use the Foundation Models framework to generate prompts for ImageCreator in the Image Playground framework. Developers can also take advantage of Tools in the Foundation Models framework, if appropriate for their app.

My app currently uses a third-party server-based LLM. Can I use the Foundation Models framework as well in the same app? Any guidance here?
The Foundation Models framework is optimized for a subset of tasks like summarization, extraction, classification, and tagging. It's also on-device, private, and free. But at 3 billion parameters it isn't designed for advanced reasoning or world knowledge, so for some tasks you may still want to use a larger server-based model.

Should I use the AFM for my language translation features given it does text translation, or is the Translation API still the preferred approach?
The Translation API is still preferred. Foundation Models is great for tasks like text summarization and data generation. It's not recommended for general world knowledge or translation tasks.

Will the TranslationSession class introduced in iOS 18 get any new improvements in performance or reliability with the new live translation abilities in iOS/macOS/iPadOS 26? Essentially, will we get access to live translation in a similar way, and if so, how?
There's new API in LiveCommunicationKit to take advantage of live translation in your communication apps. The Translate framework is using the same models as used by Live Communication and can be combined with the new SpeechAnalyzer API to translate your own audio.

How do I set a default value for an App Intent parameter that is otherwise required?
You can implement a default value as part of your parameter declaration by using the @Parameter(defaultValue:) form of the property wrapper.

How long can an App Intent run?
On macOS there is no limit to how long app intents can run. On iOS, there is a limit of 30 seconds. This time limit is paused when waiting for user interaction.

How do I vary the options for a specific parameter of an App Intent, not just based on the type?
Implement a DynamicOptionsProvider on that parameter. You can add suggestedEntities() to suggest options.

What if there is not a schema available for what my app is doing?
If an app intent schema matching your app's functionality isn't available, take a look to see if there's a SiriKit domain that meets your needs, such as for media playback and messaging apps. If your app's functionality doesn't match any of the available schemas, you can define a custom app intent and integrate it with Siri by making it an App Shortcut. Please file enhancement requests via Feedback Assistant for new app intent schemas that would benefit your app.

Are you adding any new app intent domains this year?
In addition to all the app intent domains we announced last year, this year at WWDC25 we announced that Visual Intelligence will be added to iOS 26 and macOS Tahoe.

When my App Intent doesn't show up as an action in Shortcuts, where do I start in figuring out what went wrong?
App Intents are statically extracted. You can check the ExtractMetadata info in Xcode's build log.

What do I need to do to make sure my App Intents work well with Spotlight+?
Check out our WWDC25 sessions on App Intents, including Explore new advances in App Intents and Develop for Shortcuts and Spotlight with App Intents. Mostly, make sure that your intent can run from the parameter summary alone, with no required parameters lacking default values that are not already in the parameter summary.

Does Spotlight+ on macOS support App Shortcuts?
Not directly, but it shows the App Intents your App Shortcuts are sitting on top of.

I'm wondering if the on-device Foundation Models framework API can be integrated into an app to act strictly as an in-universe AI assistant for that app, responding only within the boundaries of the app's fictional context. Is such controlled, context-limited interaction supported?
The FM API runs inside the process of your app only and does not automatically integrate with any remaining part of the system (unless you choose to implement your own tool and utilize tool calling). You can provide any instructions and prompts you want to the model.

If a country does not support Apple Intelligence yet, can the Foundation Models framework work?
The FM API works on Apple Intelligence-enabled devices in supported regions and won't work in regions where Apple Intelligence is not yet supported.
2 replies · 0 boosts · 237 views · Jul ’25
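To make the context-window answer above concrete, here is a minimal sketch of catching the overflow and retrying in a fresh session seeded with a condensed summary. The error and initializer names follow what the Group Lab answer describes, but treat the exact spellings as assumptions, and the summary string is a hypothetical placeholder you would produce yourself.

import FoundationModels

// Respond once; if the session's context window is exhausted, start a fresh
// session carrying a condensed summary and retry the same prompt.
func respondWithRecovery(to prompt: String,
                         in session: LanguageModelSession) async throws -> (reply: String, session: LanguageModelSession) {
    do {
        return (try await session.respond(to: prompt).content, session)
    } catch let error as LanguageModelSession.GenerationError {
        guard case .exceededContextWindowSize = error else { throw error }
        // Hypothetical summary of the earlier transcript; build it however fits your app.
        let condensed = "Summary of the conversation so far: ..."
        let fresh = LanguageModelSession(instructions: "Context from earlier: \(condensed)")
        return (try await fresh.respond(to: prompt).content, fresh)
    }
}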
The Vision request does not work in the simulator: error "Could not create inference context"
When I use VNGenerateForegroundInstanceMaskRequest to generate the mask in the simulator from SwiftUI, there is an error: "Could not create inference context". I then added code to force Vision to run on the CPU:

let request = VNGenerateForegroundInstanceMaskRequest()
let handler = VNImageRequestHandler(ciImage: inputImage)

#if targetEnvironment(simulator)
if #available(iOS 18.0, *) {
    let allDevices = MLComputeDevice.allComputeDevices
    for device in allDevices {
        if device.description.contains("MLCPUComputeDevice") {
            request.setComputeDevice(.some(device), for: .main)
            break
        }
    }
} else {
    // Fallback on earlier versions
    request.usesCPUOnly = true
}
#endif

do {
    try handler.perform([request])
    if let result = request.results?.first {
        let mask = try result.generateScaledMaskForImage(forInstances: result.allInstances, from: handler)
        return CIImage(cvPixelBuffer: mask)
    }
} catch {
    print(error)
}

Even though I force the simulator to run the code on the CPU, it still fails with the same error: "Could not create inference context".
2 replies · 1 boost · 1k views · Sep ’24
Can't apply compression techniques on my CoreML Object Detection model.
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

# load full precision model
model_fp32 = ct.models.MLModel(modelPath)

model_fp16 = quantization_utils.quantize_weights(model_fp32, nbits=16)
model_fp16.save("reduced-model.mlmodel")

I'm testing it with the model from one of Apple's sample code projects (GameBoardDetector), and it works fine: it reduces the model size by half. But there are several problems with my model (trained in the Create ML app using Full Network):

Quantizing to float 16 does not work (a new file gets created, but its size is reduced by only about 0.1 MB).
Quantizing to fewer than 16 bits causes errors, and no file gets created.

Here are the additional metadata and precisions of the models.

Working model's additional metadata and precision:

Mine's additional metadata and precision:
2 replies · 0 boosts · 553 views · Jan ’25
Correct JSON format for CoreMotion data for ActivityClassification purposes
I'm developing an activity classifier that I'd like to train on CoreMotion data in JSON format. I am getting the error:

Unable to parse /Users/DewG/Downloads/Testing/Step1/Testing.json. It does not appear to be in JSON record format. A SequenceType of dictionaries is expected.

I've verified that the file I am using is valid JSON via various JSON validators, so I expect I'm just holding it wrong. Is there an example of a JSON file with CoreMotion data that I can model after?
2 replies · 0 boosts · 86 views · Jul ’25
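The "SequenceType of dictionaries" wording suggests the tool wants the top level of the file to be a JSON array of per-sample objects (one dictionary per sensor reading, all sharing the same keys), rather than a single dictionary keyed by axis. A minimal Swift sketch that writes CoreMotion samples in that shape; the key names (x, y, z, label) are hypothetical and just need to match the feature columns you configure on the Create ML side.

import Foundation
import CoreMotion

// One record per CMDeviceMotion sample; every record carries the same keys.
struct MotionRecord: Codable {
    let x: Double
    let y: Double
    let z: Double
    let label: String
}

func writeTrainingJSON(samples: [CMDeviceMotion], label: String, to url: URL) throws {
    let records = samples.map {
        MotionRecord(x: $0.userAcceleration.x,
                     y: $0.userAcceleration.y,
                     z: $0.userAcceleration.z,
                     label: label)
    }
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted]
    // Encoding the array produces the expected top-level form:
    // [ {"x": 0.01, "y": -0.2, "z": 0.98, "label": "walking"}, ... ]
    try encoder.encode(records).write(to: url)
}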