Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics: Posts under the Machine Learning & AI topic
Each entry below lists the post title and excerpt, followed by its replies, boosts, views, and most recent activity.

Using RAG on local documents with the Foundation Model
I am watching a few WWDC sessions on the Foundation Models framework and its usage, and it looks pretty cool. I was wondering whether it is possible to perform RAG on the user's documents on the device, and eventually on iCloud. Let's say I have a lot of Pages documents about me, and I want the Foundation model to access the information in those documents so it can answer questions about me that can be retrieved from them. How can this be done? Thanks [see the sketch after this entry]
4
2
329
Jun ’25
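For anyone exploring this: below is a minimal sketch of one way on-device RAG could look, assuming the FoundationModels framework's LanguageModelSession for generation and NaturalLanguage sentence embeddings for retrieval. The chunking and similarity helpers are illustrative assumptions, not an Apple-provided RAG API, and nothing here covers pulling documents from iCloud.

import FoundationModels
import NaturalLanguage

// Hypothetical retrieval step: score pre-chunked document text against the
// question using NLEmbedding sentence vectors (any retriever would do).
func topChunks(for question: String, in chunks: [String], limit: Int = 3) -> [String] {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english),
          let queryVector = embedding.vector(for: question) else {
        return Array(chunks.prefix(limit))
    }
    func cosine(_ a: [Double], _ b: [Double]) -> Double {
        let dot = zip(a, b).map(*).reduce(0, +)
        let norm = (a.map { $0 * $0 }.reduce(0, +) * b.map { $0 * $0 }.reduce(0, +)).squareRoot()
        return norm > 0 ? dot / norm : 0
    }
    return chunks
        .compactMap { chunk in embedding.vector(for: chunk).map { (chunk, cosine(queryVector, $0)) } }
        .sorted { $0.1 > $1.1 }
        .prefix(limit)
        .map { $0.0 }
}

// Stuff the retrieved chunks into the prompt for the on-device model.
func answer(question: String, documentChunks: [String]) async throws -> String {
    let context = topChunks(for: question, in: documentChunks).joined(separator: "\n---\n")
    let session = LanguageModelSession()
    let prompt = """
        Answer the question using only the context below.
        Context:
        \(context)
        Question: \(question)
        """
    let response = try await session.respond(to: prompt)
    return response.content
}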
Do Apple Intelligence Extensions Have an API?
Hi everyone, in the "Apple Intelligence & Siri" settings there's a section titled "Extensions" that specifically mentions ChatGPT. This got me curious: does Apple provide an API or SDK for developers to create custom integrations or use Apple Intelligence Extensions, or is this currently limited to the Apple/OpenAI partnership? I appreciate any insights or links to relevant documentation. Here's a screenshot of what I mean: https://imgur.com/a/4MuQkIJ
1
2
824
Dec ’24
Playground (early access)
Is it just me, or is early access to Image Playground not available? I've been waiting a little over 24 hours and still have no access. (No rush for the team if something's wrong.) They might be busy rolling out the first few Apple Intelligence features in the iOS 18.1 public release.
4
2
1.8k
Oct ’24
Ball detection at about 6-10 ft away is not working
I'm using YOLOv5-11, and while it performs great at detecting balls, say, 5-10 ft away at 1920 resolution (and even at 640), it is really taking a toll on my app's performance. When I use Create ML, it outputs everything at 415x, which is probably why it doesn't detect objects from far away. What can I do to preserve some energy? My model was trained with about 1K pictures, roughly 200 each for test and validation, from both close up and far away. [see the sketch after this entry]
0
2
66
Apr ’25
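One knob worth checking for the resolution mismatch described above is VNCoreMLRequest's imageCropAndScaleOption, which controls how Vision fits a large camera frame into the model's fixed input size. The sketch below is only illustrative: BallDetector is a placeholder name for the Create ML export, and whether .centerCrop actually helps distant objects here is an assumption to verify.

import Vision
import CoreML

// "BallDetector" is a placeholder for the auto-generated Create ML model class.
func detectBalls(in frame: CGImage) throws -> [VNRecognizedObjectObservation] {
    let model = try VNCoreMLModel(for: BallDetector(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model)
    // Vision resizes every frame to the model's fixed input square before inference.
    // .centerCrop keeps more pixels per distant object than squashing the whole
    // 1920-pixel-wide frame; .scaleFill would feed the full (downscaled) frame instead.
    request.imageCropAndScaleOption = .centerCrop
    try VNImageRequestHandler(cgImage: frame, options: [:]).perform([request])
    return request.results as? [VNRecognizedObjectObservation] ?? []
}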
Making a model with MLLinearRegressor works on Sonoma, but after upgrading to 15.3.1 it no longer does "anything"
I was generating models using this code:

import Foundation
import CreateML
import TabularData
import CoreML
....

func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
    var returnmodels = [MLLinearRegressor]()
    var result = 0.0
    for i in 0...numberofmodels {
        let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
        do {
            let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)
            returnmodels.append(tm)
        } catch let error as NSError {
            print("Error: \(error.localizedDescription)")
        }
    }
    return returnmodels
}

This worked absolutely fine on Sonoma, but upon upgrading the OS to 15.3.1 it does absolutely nothing. I get no error messages, I get nothing; the code just pauses. If I look at CPU usage, as soon as it hits the line

let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)

the CPU usage drops to 0%. What am I doing wrong? Is there a flag I need to set somewhere in Xcode? This is on an M1 MacBook Pro. Any help would be greatly appreciated.
2
1
428
Mar ’25
The Vision request does not work in the simulator: "Could not create inference context"
When I use VNGenerateForegroundInstanceMaskRequest to generate the mask in the simulator from SwiftUI, I get the error "Could not create inference context". I then added code to force the Vision request onto the CPU:

let request = VNGenerateForegroundInstanceMaskRequest()
let handler = VNImageRequestHandler(ciImage: inputImage)
#if targetEnvironment(simulator)
if #available(iOS 18.0, *) {
    let allDevices = MLComputeDevice.allComputeDevices
    for device in allDevices {
        if device.description.contains("MLCPUComputeDevice") {
            request.setComputeDevice(.some(device), for: .main)
            break
        }
    }
} else {
    // Fallback on earlier versions
    request.usesCPUOnly = true
}
#endif
do {
    try handler.perform([request])
    if let result = request.results?.first {
        let mask = try result.generateScaledMaskForImage(forInstances: result.allInstances, from: handler)
        return CIImage(cvPixelBuffer: mask)
    }
} catch {
    print(error)
}

Even when I force the simulator to run the request on the CPU, it still fails with the same error: "Could not create inference context".
2
1
1k
Sep ’24
[18.2b2] How do I test an OpenIntent?
So, I've declared an AppIntent that indicates my app can "Open files" that conform to UTType.Image. I've got an @AssistantEntity(schema: .files.file) and an @AssistantIntent(schema: .files.openFile) declared. I navigate to the Files app, Quick Look an image, and open type-to-Siri. I tell Siri "open this in " and all it does is act like "open ". No breakpoint is hit in my intent's perform method. Am I doing something wrong? How can I test these cross-app behaviors? Are they... not actually possible? Does an "OpenIntent" only work on my app's own URLs and not on file URLs from other apps?
2
1
471
Nov ’24
Issues with using ClassifyImageRequest() on an Xcode simulator
Hello, I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:

VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge? Thanks [see the sketch after this entry]
5
1
730
Feb ’25
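For reference, here is a minimal sketch of the same Vision API call with a simulator guard, so the rest of an app keeps running during development. The empty-result fallback and the 0.5 confidence cutoff are assumptions for illustration, not documented behavior, and this says nothing about how challenge submissions are evaluated.

import Vision

func classify(_ image: CGImage) async throws -> [String] {
    #if targetEnvironment(simulator)
    // ClassifyImageRequest fails in the simulator ("Failed to create espresso
    // context"), so return a stub result there; this fallback is an assumption,
    // not documented behavior.
    return []
    #else
    let request = ClassifyImageRequest()
    let observations = try await request.perform(on: image)
    return observations
        .filter { $0.confidence > 0.5 }
        .map(\.identifier)
    #endif
}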
Apple Intelligence stuck on "preparing" for 6 days.
On the October 28 release day of Apple Intelligence I opted in. My iPhone and iPad immediately went to "waitlist" and within 2 to 3 hours were ready to initialize Apple Intelligence. My MacBook Pro 14" with an M3 Pro processor and 18 GB of RAM has been stuck on "preparing" since release day (6 days now). I've tried numerous workarounds that I found on forums, as well as talking to Apple support, who basically had me repeat the workarounds I had found on forums. I've tried changing the region to one that does not have Apple Intelligence and then back to the US, I've changed the Siri language to an unsupported one and back to a supported one, I've tried disabling background/startup apps, and I've disabled and re-enabled Siri. Oh, and I've restarted a bunch and left the Mac alone for hours at a time. I've noticed that my selected Siri voice seems to not download. Finally, after several chats and calls with Apple support, I was told that it's beta software, they can't help me, and I should try the developer forums... so here I am. Any advice?
9
1
2.9k
Feb ’25
The YOLO11 object detection model I exported to Core ML stopped working in the macOS 15.2 beta.
After updating to the macOS 15.2 beta, the YOLO11 object detection model exported to Core ML outputs incorrect, abnormal bounding boxes. It also doesn't work in iOS apps built on a 15.2 Mac. The same model worked fine on macOS 14.1. When I train a YOLO11 custom model in Python, export it to Core ML, and test it in the preview tab of the mlpackage on macOS 15.2 with Xcode 16.0, I get the result described above.
6
1
1.3k
Feb ’25
ImageCreator fails with GenerationError Code=11 on Apple Intelligence-enabled device
When I ran the following code on a physical iPhone device that supports Apple Intelligence, I encountered the following error log. What does this internal error code mean?

Image generation failed with NSError in a different domain: Error Domain=ImagePlaygroundInternal.ImageGeneration.GenerationError Code=11 “(null)”, returning a generic error instead

let imageCreator = try await ImageCreator()
let style = imageCreator.availableStyles.first ?? .animation
let stream = imageCreator.images(for: [.text("cat")], style: style, limit: 1)
for try await result in stream {
    // error: ImagePlayground.ImageCreator.Error.creationFailed
    _ = result.cgImage
}
0
1
215
Jul ’25
Missing parameter prompt for search Assistant Intents
Issue
When triggering an App Intent that uses assistant schemas from Apple Intelligence (voice or text), the app opens without prompting for search criteria.

How to repeat
This can be reproduced with the example provided by Apple here: https://developer.apple.com/documentation/appintents/making-your-app-s-functionality-available-to-siri
1. Download the sample code
2. Build and run on Xcode 16.1 beta 3
3. Target iPhone 15 Pro Max on iOS 18.1 beta 7
4. Trigger Apple Intelligence
5. Enter the prompt: "Search AssistantSchemasExample"

Expected behaviour
Apple Intelligence should prompt the user for the criteria and provide it to the app so that the experience is seamless for the end user. Otherwise, Assistant Intents are nothing more than deep links to search screens.

Notes
The example uses the @AssistantIntent(schema: .photos.search) intent, and I've found the issue is also present in other search intents:
@AssistantIntent(schema: .system.search)
@AssistantIntent(schema: .browser.search)

Questions
Has anyone managed to get the prompt to appear? Will this only function on iOS 18.2?
3
1
533
Oct ’24
Is it possible to read Japanese tategaki with the Vision framework?
We are building an app that reads text. It reads English and normal (horizontally aligned) Japanese text successfully, but in some cases we need to read Japanese tategaki (vertically aligned text), and there the same code gives no output. Do we need to change any configuration to read Japanese tategaki, and is it actually possible to read Japanese tategaki using the Vision framework? [see the sketch after this entry]

lazy var detectTextRequest = VNRecognizeTextRequest { request, error in
    self.resStr = "\n"
    self.words = [:]
    // Get the OCR result
    guard let res = request.results as? [VNRecognizedTextObservation] else { return }
    // Separate the words by spaces
    let text = res.compactMap({ $0.topCandidates(1).first?.string }).joined(separator: " ")
    var n = 0
    self.wordArr = [[]]
    self.xs = 1
    self.ys = 1
    var hs = 0.0 // To compare the heights of the words
    // To get the original axis (topmost word's axis), only once
    for r in res {
        var word = r.topCandidates(1).first?.string
        self.words[word ?? ""] = [r.topLeft.x, r.topLeft.y]
        if self.cartLabelType == 1 {
            if (word?.components(separatedBy: CharacterSet(charactersIn: "//")).count ?? 0) > 2 {
                self.xs = r.topLeft.x
                self.ys = r.topLeft.y
            }
        }
    }
}
2
1
576
Jan ’25
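For what it's worth, here is a minimal sketch of pinning VNRecognizeTextRequest to Japanese with the accurate recognition level, in case automatic language detection is part of the problem. Whether Vision can recognize tategaki at all is not something this confirms; the explicit language setting is just an assumption worth testing.

import Vision

func recognizeJapaneseText(in image: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    // Explicitly request Japanese and the accurate recognition path; whether
    // this enables vertical text is unverified, but it rules out language
    // auto-detection as a cause of empty results.
    request.recognitionLanguages = ["ja-JP"]
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}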