Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

A Summary of the WWDC25 Group Lab - Machine Learning and AI Frameworks
A Summary of the WWDC25 Group Lab - Machine Learning and AI Frameworks
At WWDC25 we launched a new type of Lab event for the developer community: Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Machine Learning and AI Frameworks.

What are you most excited about in the Foundation Models framework?
The Foundation Models framework provides access to an on-device Large Language Model (LLM), enabling entirely on-device processing for intelligent features. This allows you to build features such as personalized search suggestions and dynamic NPC generation in games. The combination of guided generation and streaming capabilities is particularly exciting for creating delightful animations and features with reliable output. The seamless integration with SwiftUI and the new design material Liquid Glass is also a major advantage.

When should I still bring my own LLM via Core ML?
It's generally recommended to first explore Apple's built-in system models and APIs, including the Foundation Models framework, as they are highly optimized for Apple devices and cover a wide range of use cases. However, Core ML is still valuable if you need more control or choice over the specific model being deployed, such as customizing existing system models or augmenting prompts. Core ML provides the tools to get these models on-device, but you are responsible for model distribution and updates.

Should I migrate PyTorch code to MLX?
MLX is an open-source, general-purpose machine learning framework designed for Apple Silicon from the ground up. It offers a familiar API, similar to PyTorch, and supports C, C++, Python, and Swift. MLX emphasizes unified memory, a key feature of Apple Silicon hardware, which can improve performance. It's recommended to try MLX and see if its programming model and features better suit your application's needs. MLX shines when working with state-of-the-art, larger models.

Can I test Foundation Models in the Xcode simulator or on device?
Yes, you can use the Xcode simulator to test Foundation Models use cases; however, your Mac must be running macOS Tahoe. You can test on a physical iPhone running iOS 26 by connecting it to your Mac and running Playgrounds or live previews directly on the device.

Which on-device models will be supported? Any open source models?
The Foundation Models framework currently supports Apple's first-party models only. This allows for platform-wide optimizations, improving battery life and reducing latency. While Core ML can be used to integrate open-source models, it's generally recommended to first explore the built-in system models and APIs provided by Apple, including those in the Vision, Natural Language, and Speech frameworks, as they are highly optimized for Apple devices. For frontier models, MLX can run very large models.

How often will the Foundation Model be updated? How do we test for stability when the model is updated?
The Foundation Model will be updated in sync with operating system updates. You can test your app against new model versions during the beta period by downloading the beta OS and running your app. It is highly recommended to create an "eval set" of golden prompts and responses to evaluate the performance of your features as the model changes or as you tweak your prompts. Report any unsatisfactory or satisfactory cases using Feedback Assistant.
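To make the guided generation idea concrete, here is a minimal sketch, assuming the LanguageModelSession and @Generable API surface shown at WWDC25; the SearchSuggestions type and its prompt are invented for illustration:

import FoundationModels

// Minimal sketch of guided generation with the on-device model.
// SearchSuggestions and the prompt below are illustrative examples,
// not code from the lab.
@Generable
struct SearchSuggestions {
    @Guide(description: "Four short suggested search terms")
    var suggestions: [String]
}

func fetchSuggestions() async throws -> [String] {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest search terms for planning a hiking trip in the Alps.",
        generating: SearchSuggestions.self
    )
    return response.content.suggestions
}

Per the WWDC sessions, streaming works along the same lines via the session's streaming API, delivering partially generated snapshots that can be bound to SwiftUI views.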
Which on-device model/API can I use to extract text data from images such as nutrition labels, ingredient lists, cashier receipts, etc.?
The Vision framework offers RecognizeDocumentRequest, which is specifically designed for these use cases. It not only recognizes text in images but also provides the structure of the document, such as rows in a receipt or the layout of a nutrition label. It can also identify data like phone numbers, addresses, and prices.

What is the context window for the model? What are max tokens in and max tokens out?
The context window for the Foundation Model is 4,096 tokens. The split between input and output tokens is flexible: for example, if you input 4,000 tokens, you'll have 96 tokens remaining for the output. The API takes in text, converting it to tokens under the hood. When estimating token count, a good rule of thumb is 3-4 characters per token for languages like English, and 1 character per token for languages like Japanese or Chinese. Handle potential errors gracefully by asking for shorter prompts or starting a new session if the token limit is exceeded.

Is there a rate limit for the Foundation Models API that depends on power or thermal conditions on the iPhone?
Yes, there are rate limits, particularly when your app is in the background. A budget is allocated for background app usage, and exceeding it will result in rate-limiting errors. In the foreground, there is no rate limit unless the device is under heavy load (e.g., camera open, game mode). The system dynamically balances performance, battery life, and thermal conditions, which can affect the token throughput. Use appropriate quality-of-service settings for your tasks (e.g., background priority for background work) to help the system manage resources effectively.

Do the foundation models support languages other than English?
Yes, the on-device Foundation Model is multilingual and supports all languages supported by Apple Intelligence. To get the model to output in a specific language, prompt it with instructions indicating the user's preferred language using the locale API (e.g., "The user's preferred language is en-US"). A recommended practice is to put the instructions in English but the user prompt in the desired output language.

Are larger server-based models available through Foundation Models?
No, the Foundation Models API currently only provides access to the on-device Large Language Model at the core of Apple Intelligence. It does not support server-side models. On-device models are preferred for privacy and performance reasons.

Is it possible to run Retrieval-Augmented Generation (RAG) using the Foundation Models framework?
Yes, it is possible to run RAG on-device, but the Foundation Models framework does not include a built-in embedding model. You'll need to use a separate database to store vectors and implement nearest-neighbor or cosine-distance searches. The Natural Language framework offers simple word and sentence embeddings that can be used. Consider using a combination of Foundation Models and Core ML, with Core ML for your embedding model.
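As a quick illustration of the token rule of thumb above, here is a small sketch; the 4,096-token window and the characters-per-token ratios come from the answer, while the function name and the threshold are invented for the example:

// Rough token-budget check based on the rule of thumb above:
// ~3-4 characters per token for English, ~1 for Japanese or Chinese.
// The 4,096-token context window is shared between input and output.
func estimatedTokenCount(for text: String, charactersPerToken: Double = 3.5) -> Int {
    Int((Double(text.count) / charactersPerToken).rounded(.up))
}

let contextWindow = 4_096
let prompt = "Summarize the following article: ..."
let remainingForOutput = contextWindow - estimatedTokenCount(for: prompt)
if remainingForOutput < 200 {
    // Ask for a shorter prompt or start a new session, as the answer suggests.
    print("Prompt likely too long; only ~\(remainingForOutput) tokens left for output.")
}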
1 reply · 0 boosts · 754 views · Jun ’25
Random crash from AVFAudio library
Hi everyone! I'm getting random crashes when I'm using the Speech Recognizer functionality in my app. This is an old bug (reported for 8 years on the Apple Forums) and I would really appreciate it if anyone from Apple could find a fix for these crashes. Can anyone also help me understand what I could do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (if there is any other native library or a CocoaPod library available)? Here is my code and also the crash log for it.

Code:

func startRecording() {
    startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
    if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
        commentTextView.textColor = .white
    } else {
        commentTextView.textColor = .black
    }
    commentTextView.isUserInteractionEnabled = false
    recordingLabel.text = Constants.recording
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.record)
        try audioSession.setMode(AVAudioSession.Mode.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError(Constants.error)
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        if result != nil {
            self.commentTextView.text = result?.bestTranscription.formattedString
            isFinal = (result?.isFinal)!
        }
        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.startStopRecordBtn.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        // CRASH HERE
        self?.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
}

Here is the crash log:

Thanks very much for reading this!
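One common cause of crashes at the marked line is installTap being called with an invalid audio format, or on a bus that already has a tap. Here is a defensive sketch (not a confirmed fix for this report; the function and its parameters are stand-ins for the properties used above):

import AVFoundation
import Speech

// Defensive sketch: validate the input format and clear any existing tap
// before installing a new one, since AVAudioEngine raises an Objective-C
// exception for an invalid format or a duplicate tap on a bus.
func installRecognitionTap(on audioEngine: AVAudioEngine,
                           request: SFSpeechAudioBufferRecognitionRequest) -> Bool {
    let inputNode = audioEngine.inputNode
    inputNode.removeTap(onBus: 0) // avoid "tap already installed" crashes
    let format = inputNode.outputFormat(forBus: 0)
    guard format.sampleRate > 0, format.channelCount > 0 else {
        // The session can briefly report an invalid (zero) format, e.g. during
        // an audio route change; bail out instead of crashing in installTap.
        return false
    }
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    return true
}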
3 replies · 0 boosts · 1.1k views · Sep ’24
Install jax on macOS 15.1 Beta (24B5046f)
Following the instructions to install jax (https://developer.apple.com/metal/jax/), I still encountered this error:

RuntimeError: This version of jaxlib was built using AVX instructions, which your CPU and/or operating system do not support. This error is frequently encountered on macOS when running an x86 Python installation on ARM hardware. In this case, try installing an ARM build of Python. Otherwise, you may be able work around this issue by building jaxlib from source.

How can I fix it?
1 reply · 0 boosts · 1.6k views · Sep ’24
Apple AI / Data Protection & Processing
Where does the processing power to enact certain AI capabilities come from? Is it hosted on the originating device, or does the device send the contents of the originating information to Apple assets to process and return the product to the end user? For example, if I ask AI to summarize an email, will it send the contents of the email to an Apple AI asset to process it and return the summary to the originating device?
0 replies · 0 boosts · 572 views · Sep ’24
Tensorflow-metal: Problems with Keras 3.0
The following code, taken from keras.io, produces the error InternalError: Exception encountered when calling GPT2Tokenizer.call(). ... 2 root error(s) found. (0) INTERNAL: stream cannot wait for itself. This is on macOS, on a MacBook with an M2 Max. Setting the optimizer to "Adam" does not help.

import keras_nlp  # version 0.15

causal_lm = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")
causal_lm.compile(sampler="greedy")
# the next call produces the error
causal_lm.generate(["Keras is a"])
1 reply · 0 boosts · 937 views · Sep ’24
The CoreML MultiArray Float16 input is not supported for running on the NPU, and this issue only occurs on the iPhone 11.
Xcode Version: 15.2 (15C500b)
com.github.apple.coremltools.source: torch==1.12.1
com.github.apple.coremltools.version: 7.2
Compute: Mixed (Float16, Int32)
Storage: Float16

The input to the mlpackage is a MultiArray (Float16, 1 × 1 × 544 × 960). The flexibility is: 1 × 1 × 544 × 960 | 1 × 1 × 384 × 640 | 1 × 1 × 736 × 1280 | 1 × 1 × 1088 × 1920. I tested this on iPhone XR, iPhone 11, iPhone 12, iPhone 13, and iPhone 14. On all devices except the iPhone 11, the model runs correctly on the NPU. However, on the iPhone 11, the model runs on the CPU instead. Here is the CoreMLTools conversion code I used:

mlmodel = ct.convert(
    trace,
    inputs=[ct.TensorType(shape=input_shape, name="input", dtype=np.float16)],
    outputs=[ct.TensorType(name="output", dtype=np.float16, shape=output_shape)],
    convert_to='mlprogram',
    minimum_deployment_target=ct.target.iOS16,
)
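One way to investigate where the model actually runs is to load it with an explicit compute-unit preference and compare behavior across settings; a sketch (modelURL is a placeholder for the compiled model's URL):

import CoreML

// Sketch: load the model with an explicit compute-unit preference.
// modelURL is a placeholder for the compiled .mlmodelc / .mlpackage URL.
func loadModel(at modelURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    // .cpuAndNeuralEngine excludes the GPU, so comparing prediction times
    // against a .cpuOnly load gives a rough signal of whether the Neural
    // Engine is actually being used on a given device.
    config.computeUnits = .cpuAndNeuralEngine
    return try MLModel(contentsOf: modelURL, configuration: config)
}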
3 replies · 0 boosts · 875 views · Sep ’24
iOS 18: VNRecognizeTextRequestRevision2 requested but VNRecognizeTextRequestRevision3 used
VNRecognizeTextRequestRevision2 does not recognize upside-down English text, while VNRecognizeTextRequestRevision3 can recognize English text even when it is upside down. Up to iOS 17, I could select VNRecognizeTextRequestRevision2 or VNRecognizeTextRequestRevision3 in my code (my minimum build target is iOS 16) when I needed upside-down text detection. But on iOS 18, even if I set VNRecognizeTextRequestRevision2 in my code, the result seems to be based on VNRecognizeTextRequestRevision3, because upside-down text is detected. I know VNRecognizeTextRequestRevision2 was deprecated in iOS 18. How can I tell whether an observation result is upside down or not? Is there any solution with VNRecognizeTextRequestRevision3?
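For reference, here is a diagnostic sketch that pins the revision explicitly and logs what the OS claims to support; whether iOS 18 actually honors a deprecated revision is exactly the open question above, so treat this as a way to observe the behavior, not a fix:

import Vision

// Sketch: pin the request revision and log the supported revisions.
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        // boundingBox is in normalized image coordinates.
        print(observation.topCandidates(1).first?.string ?? "", observation.boundingBox)
    }
}
print("Supported revisions:", VNRecognizeTextRequest.supportedRevisions)
request.revision = VNRecognizeTextRequestRevision2
request.recognitionLevel = .accurate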
1 reply · 0 boosts · 566 views · Sep ’24
Image Playground API
Does the new Image Playground API allow programmatically generating images? Can the app generate and use them without the API's UI, or would that require using another generative image model?
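At least in the initial iOS 18.1 SDKs, the documented path runs through a system sheet rather than headless generation; a sketch assuming the ImagePlaygroundViewController API (the concept string is invented):

import ImagePlayground
import UIKit

// Sketch of the UI-driven path: the system sheet handles generation and
// returns a file URL via the delegate's
// imagePlaygroundViewController(_:didCreateImageAt:) callback.
func presentPlayground(from presenter: UIViewController,
                       delegate: any ImagePlaygroundViewController.Delegate) {
    guard ImagePlaygroundViewController.isAvailable else { return } // device/OS support check
    let playground = ImagePlaygroundViewController()
    playground.delegate = delegate
    playground.concepts = [.text("a watercolor fox in a forest")]
    presenter.present(playground, animated: true)
}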
3 replies · 12 boosts · 4.5k views · Sep ’24
CoreML, Invalid indexing on GPU
I believe I am encountering a bug in the MPS backend of CoreML: an invalid conversion of a slice_by_index + gather operation results in indexing the wrong values during GPU execution. The following Python program, using the coremltools library, illustrates the issue (imports added for completeness):

import tempfile

import numpy as np
import torch
import coremltools as ct
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.mil import types

dB = 20480
shapeI = (2, dB)
shapeB = (dB, 22)

@mb.program(input_specs=[mb.TensorSpec(shape=shapeI, dtype=types.int32),
                         mb.TensorSpec(shape=shapeB)])
def prog(i, b):
    lslice = mb.slice_by_index(x=i, begin=[0, 0], end=[1, dB], end_mask=[False, True],
                               squeeze_mask=[True, False], name='slice_left')
    rslice = mb.slice_by_index(x=i, begin=[1, 0], end=[2, dB], end_mask=[False, True],
                               squeeze_mask=[True, False], name='slice_right')
    ldata = mb.gather(x=b, indices=lslice)
    rdata = mb.gather(x=b, indices=rslice)
    # actual bug in optimization of gather+slice
    x = mb.add(x=ldata, y=rdata)
    # dummy ops to make a bigger graph to run on GPU
    x = mb.mul(x=x, y=2.)
    x = mb.mul(x=x, y=.5)
    x = mb.mul(x=x, y=2.)
    x = mb.mul(x=x, y=.5)
    x = mb.mul(x=x, y=2.)
    x = mb.mul(x=x, y=.5)
    x = mb.mul(x=x, y=2.)
    x = mb.mul(x=x, y=.5)
    x = mb.mul(x=x, y=2.)
    x = mb.mul(x=x, y=.5)
    x = mb.mul(x=x, y=2.)
    x = mb.mul(x=x, y=.5)
    x = mb.mul(x=x, y=2.)
    x = mb.mul(x=x, y=.5)
    x = mb.mul(x=x, y=1., name='result')
    return x

input_types = [
    ct.TensorType(name="i", shape=shapeI, dtype=np.int32),
    ct.TensorType(name="b", shape=shapeB, dtype=np.float32),
]

with tempfile.TemporaryDirectory() as tmpdirname:
    model_cpu = ct.convert(prog, inputs=input_types,
                           compute_precision=ct.precision.FLOAT32,
                           compute_units=ct.ComputeUnit.CPU_ONLY,
                           package_dir=tmpdirname + 'model_cpu.mlpackage')
    model_gpu = ct.convert(prog, inputs=input_types,
                           compute_precision=ct.precision.FLOAT32,
                           compute_units=ct.ComputeUnit.CPU_AND_GPU,
                           package_dir=tmpdirname + 'model_gpu.mlpackage')

    inputs = {
        "i": torch.randint(0, shapeB[0], shapeI, dtype=torch.int32),
        "b": torch.rand(shapeB, dtype=torch.float32),
    }
    cpu_output = model_cpu.predict(inputs)
    gpu_output = model_gpu.predict(inputs)

    # equivalent to prog
    expected = inputs["b"][inputs["i"][0]] + inputs["b"][inputs["i"][1]]
    # what actually happens on GPU
    actual = inputs["b"][inputs["i"][0]] + inputs["b"][inputs["i"][0]]

    print(f"diff expected vs cpu: {np.sum(np.absolute(expected - cpu_output['result']))}")
    print(f"diff expected vs gpu: {np.sum(np.absolute(expected - gpu_output['result']))}")
    print(f"diff actual vs gpu: {np.sum(np.absolute(actual - gpu_output['result']))}")

The issue seems to occur in the slice_right + gather operations when executed on GPU: the wrong items in input "i" are selected. The program outputs:

diff expected vs cpu: 0.0
diff expected vs gpu: 150104.015625
diff actual vs gpu: 0.0

This behavior was tested on a 14-inch MacBook Pro (2023, M2 Pro) running macOS 14.7, using coremltools 8.0b2 with Python 3.9.19.
3 replies · 0 boosts · 651 views · Sep ’24
Can Writing Tools be accessed In UITableView contextMenu?
I’m currently developing an app that features a main view with a UITableView. When users select a row, they are navigated to a detail view that contains a UITextField, which already supports Writing Tools. My question is: when a user long-presses a UITableView cell, is it possible to add a Writing Tools option to the context menu, allowing users to interact with Writing Tools more conveniently (for example, to summarize the detail text)?
0 replies · 0 boosts · 454 views · Sep ’24
Apple Music EQ settings
Was just wondering, and not sure if anyone else has thought about this, but different sound output devices have different sound characteristics. Could something be added to the Bluetooth settings that, on seeing a music device connected, would automatically set the EQ differently (per user requirement)? In other words, each music device would have a specific EQ stored for it, recognized via Bluetooth.
1 reply · 0 boosts · 535 views · Sep ’24
The Vision request does not work in simulator with Error "Could not create inference context"
When I use VNGenerateForegroundInstanceMaskRequest to generate the mask in the simulator from SwiftUI, I get the error "Could not create inference context". I then added code to force Vision to run on the CPU:

let request = VNGenerateForegroundInstanceMaskRequest()
let handler = VNImageRequestHandler(ciImage: inputImage)
#if targetEnvironment(simulator)
if #available(iOS 18.0, *) {
    let allDevices = MLComputeDevice.allComputeDevices
    for device in allDevices {
        if device.description.contains("MLCPUComputeDevice") {
            request.setComputeDevice(.some(device), for: .main)
            break
        }
    }
} else {
    // Fallback on earlier versions
    request.usesCPUOnly = true
}
#endif
do {
    try handler.perform([request])
    if let result = request.results?.first {
        let mask = try result.generateScaledMaskForImage(forInstances: result.allInstances, from: handler)
        return CIImage(cvPixelBuffer: mask)
    }
} catch {
    print(error)
}

Even though I force the simulator to run the code on the CPU, it still produces the error "Could not create inference context".
2 replies · 1 boost · 1k views · Sep ’24
Issue with OCR on Swift iOS App: Roboflow API Bounding Boxes Missing After Response
Hi everyone, I'm working on an iOS app built in Swift using Xcode, where I'm integrating Roboflow's object detection API to extract items from grocery receipts. My goal is to identify key information (like items, total, tax, etc.) from images of these receipts. I'm successfully sending images to the Roboflow API and receiving predictions with bounding box data, but when I attempt to extract text from the detected regions (bounding boxes), the text extraction fails: no text is being recognized. The issue seems to be that the bounding boxes are either not properly handled or something is going wrong in the way I process the API response. Here's a brief breakdown of what I'm doing:

The image is captured, converted to base64, and sent to the Roboflow API.
The API response comes back with bounding boxes for the detected elements (items, date, subtotal, etc.).
The problem occurs when I try to extract the text from the image using the bounding box data: the bounding boxes are being found, but no text is returned.

I suspect the issue might be happening because the app's segue to the results view controller is triggered before the OCR extraction completes, or there might be a problem in my code handling the bounding box response.

Response data:

{
  "inference_id": "77134cce-91b5-4600-a59b-fab74350ca06",
  "time": 0.09240847699993537,
  "image": { "width": 370, "height": 502 },
  "predictions": [
    {
      "x": 163.5, "y": 250.5, "width": 313.0, "height": 127.0,
      "confidence": 0.9357666373252869, "class": "Item", "class_id": 1,
      "detection_id": "753341d5-07b6-42a1-8926-ecbc61128243"
    },
    {
      "x": 52.5, "y": 417.5, "width": 89.0, "height": 23.0,
      "confidence": 0.8819760680198669, "class": "Date", "class_id": 0,
      "detection_id": "b4681149-d538-47b1-8700-d9528bf1daa0"
    },
    ...
  ]
}

And the log showing bounding boxes:

Prediction: ["width": 313, "y": 250.5, "x": 163.5, "detection_id": 753341d5-07b6-42a1-8926-ecbc61128243, "class": Item, "height": 127, "confidence": 0.9357666373252869, "class_id": 1]
No bounding box found in prediction.

I've double-checked the bounding box coordinates, and everything seems fine. Does anyone have experience with using OCR alongside object detection APIs in Swift? Any help on how to ensure the bounding boxes are properly processed and used for OCR would be greatly appreciated! Also, would it help to delay the segue to the results view controller until OCR is complete? Thank you!
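A sketch of the crop-then-OCR step with Vision, assuming the Roboflow x/y values are the box center in pixels (as the response above suggests); the function and its names are illustrative. Deferring the segue until the completion handler fires would indeed avoid showing the results screen before OCR finishes.

import Vision
import UIKit

// Sketch: crop a detected region out of the receipt image and run Vision OCR
// on just that region. Assumes Roboflow's x/y are the box center in pixels.
func recognizeText(in image: UIImage, x: CGFloat, y: CGFloat,
                   width: CGFloat, height: CGFloat,
                   completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }
    // Convert the center-based box to a top-left-origin CGRect.
    let box = CGRect(x: x - width / 2, y: y - height / 2, width: width, height: height)
    guard let cropped = cgImage.cropping(to: box) else { completion([]); return }
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate
    // Perform off the main thread; trigger the segue from the completion
    // handler so it is not raced by the OCR work.
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cropped).perform([request])
    }
}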
0 replies · 0 boosts · 582 views · Sep ’24
[NewbQs] Is this possible with AppIntentDomains?
As a user, when viewing a photo or image, I want to be able to tell Siri, “add this to …”, similar to the example from the WWDC presentation where a photo is added to a note in the Notes app. Is this possible with app domains as they are documented? I see domains like open-file and open-photo, but I don't know whether those are appropriate for this kind of functionality.
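For comparison, here is a minimal sketch of a plain App Intent that accepts an image file; all names are hypothetical, and whether Siri maps a freeform "add this to ..." utterance onto it is exactly the assistant-schema question raised above:

import AppIntents
import UniformTypeIdentifiers

// Hypothetical sketch: a plain App Intent that accepts an image file.
// Whether Siri routes "add this to <app>" to it depends on the assistant
// schema integration, which is the open question in this post.
struct AddImageToCollectionIntent: AppIntent {
    static var title: LocalizedStringResource = "Add Image to Collection"

    @Parameter(title: "Image", supportedContentTypes: [.image])
    var image: IntentFile

    func perform() async throws -> some IntentResult {
        // Store image.data in the app's model here.
        return .result()
    }
}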
1 reply · 0 boosts · 567 views · Sep ’24
Genmoji developer support
Trying to experiment with Genmoji per the WWDC documentation and samples, but I don't seem to get the Genmoji keyboard. I see this error in my log:

Received port for identifier response: <(null)> with error: Error Domain=RBSServiceErrorDomain Code=1 "Client not entitled" UserInfo={RBSEntitlement=com.apple.runningboard.process-state, NSLocalizedFailureReason=Client not entitled, RBSPermanent=false}
elapsedCPUTimeForFrontBoard couldn't generate a task port

Is anything presently supported for developers? All I have done here is a simple app with a UITextView and code for:

textView.supportsAdaptiveImageGlyph = true

Any thoughts?
0 replies · 0 boosts · 606 views · Sep ’24
Training data "isn't in the correct format"
Hi folks, I'm trying to import data to train a model and getting the above error. I'm using the latest Xcode, have double checked the formatting in the annotations file, and used jpgrepair to remove any corruption from the data files. Next step is to try a different dataset, but is this a particular known error? (Or am I doing something obviously wrong?) 2019 Intel Mac, Xcode 15.4, macOS Sonoma 14.1.1 Thanks
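In case it helps others hitting the same message: for an object detection dataset, Create ML expects an annotations.json roughly like the sketch below (filenames and labels invented; coordinates are the box center in pixels). A mismatch in these keys is a common cause of the "isn't in the correct format" error; other task types use different formats.

[
  {
    "image": "photo_001.jpg",
    "annotations": [
      {
        "label": "dog",
        "coordinates": { "x": 160, "y": 120, "width": 80, "height": 40 }
      }
    ]
  }
]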
1 reply · 0 boosts · 494 views · Oct ’24