How reliable is the model for interpreting something like a cholesterol test result and suggesting, for example, whether it is worth going to see a doctor?
I would like to use a Tool to attach simple blood test data to the session so the model can analyse it and make a simple suggestion about whether a doctor's visit is necessary. Is that feasible?
P.S.: Local model.
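To make it concrete, this is the kind of Tool I have in mind. It's a rough sketch: the names, values, and the exact call signature are my assumptions (the Tool requirement has shifted across betas), not tested code.

```swift
import FoundationModels

// Sketch of a tool that surfaces a few blood-test values to the model.
// Names and values are placeholders; a real app would read the attached test data.
struct BloodTestTool: Tool {
    let name = "latestBloodTest"
    let description = "Returns the user's most recent blood test values."

    @Generable
    struct Arguments {
        @Guide(description: "The marker to look up, e.g. totalCholesterol or ldl")
        var marker: String
    }

    func call(arguments: Arguments) async throws -> String {
        // Placeholder data standing in for the user's attached results.
        let results = ["totalCholesterol": "212 mg/dL", "ldl": "135 mg/dL"]
        return results[arguments.marker] ?? "No value recorded for \(arguments.marker)."
    }
}

// The session would then be created with the tool attached, roughly:
// let session = LanguageModelSession(
//     tools: [BloodTestTool()],
//     instructions: "Consult the blood test tool before giving any suggestion."
// )
```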
Is it possible to expose a custom VirtIO device to a Linux guest running inside a VM (likely QEMU backed by Hypervisor.framework)? The guest would see this device as something like /dev/npu0 and would use a kernel driver plus a userspace library to submit inference requests.
On the macOS host, these requests would be executed using CoreML, MPSGraph, or BNNS. The results would be passed back to the guest via IPC.
Does macOS allow this kind of "fake" NPU/GPU device?
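On the host side, the execution step I imagine would look something like the sketch below. It's purely illustrative: the request type and the way inputs arrive over the virtio transport are placeholders, not a real implementation.

```swift
import CoreML

// Hypothetical shape of a request decoded from the guest's virtio queue.
struct InferenceRequest {
    let inputs: [String: MLMultiArray]
}

// Run one request against a loaded Core ML model and collect the outputs
// so they can be serialized back to the guest over IPC.
func handle(_ request: InferenceRequest, with model: MLModel) throws -> [String: MLMultiArray] {
    let provider = try MLDictionaryFeatureProvider(dictionary: request.inputs)
    let output = try model.prediction(from: provider)

    var results: [String: MLMultiArray] = [:]
    for name in output.featureNames {
        if let value = output.featureValue(for: name)?.multiArrayValue {
            results[name] = value
        }
    }
    return results
}
```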
Hello Apple Team,
Thank you for the recent Group Lab and for your continued work on advancing Xcode and developer tools.
I’d like to submit a feature request:
Are there any plans to introduce support for Agentic AI Mode (MCP protocol) in future versions of iOS or Xcode?
As developer tools evolve toward more intelligent and context-aware environments, the integration of agentic AI capabilities could significantly enhance productivity and unlock new creative workflows.
Looking forward to your consideration, and thank you again for the excellent session.
Best regards
I'm a bit new to LLMs and to Foundation Models. My understanding is that there is a token limit of around 4K.
I want to process the contents of files which may be quite large. I first tried going the Tool route, but that didn't work out, so I then tried manually chunking the text to keep things under the limit.
It mostly works, except that every now and then it still exceeds the limit. This happens even when the chunks are less than 100 characters. The instructions themselves are about 500 characters, so each prompt is well under 1,000 characters all told, which, in my limited understanding, should not come anywhere near 4K tokens.
Any ideas on what is going on here?
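For reference, my chunking approach is roughly the simplified sketch below; the chunk size, instructions, and the fresh-session-per-chunk choice are placeholders for what I'm actually doing.

```swift
import FoundationModels

// Split the text into small chunks and prompt the model for each one.
func summarizeInChunks(_ text: String) async throws -> [String] {
    let chunkSize = 100
    var results: [String] = []
    var remaining = Substring(text)

    while !remaining.isEmpty {
        let chunk = String(remaining.prefix(chunkSize))
        remaining = remaining.dropFirst(chunk.count)

        // A new session per chunk, so only the instructions and this chunk are in play.
        let session = LanguageModelSession(instructions: "Summarize the text you are given.")
        let response = try await session.respond(to: chunk)
        results.append(response.content)
    }
    return results
}
```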
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hi Apple product owners.
I am missing a unified concept for how mail categories and spam filtering are meant to work together in the Mail app on Mac.
I need a recommendation on how to use categories in combination with the spam filter to get the most out of them.
I was looking at the intended use cases for the two feature areas in order to figure out how to organise my mail with as much automation as possible before I start creating smart mailboxes on top.
Where would you recommend I get this information? I don't want to guess or read a lot of forum posts that are themselves based on guesses.
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
I have a Mac (M4 MacBook Pro) running the Tahoe 26.0 beta, and I am running the Xcode beta.
I can run code that uses the LLM in a #Preview { }.
But when I try to run the same code in the simulator, I get the 'device not ready' error and I see the following in the Settings app.
Is there anything I can do to get the simulator past this point so I can test against Apple's LLM on it?
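For context, here's a minimal availability check (my own sketch of the API) that I'd expect to surface the reason the simulator reports:

```swift
import FoundationModels

// Print why the on-device model is or isn't ready in the current environment.
func checkModelAvailability() {
    let availability = SystemLanguageModel.default.availability
    if case .unavailable(let reason) = availability {
        // Whatever reason the framework reports (e.g. Apple Intelligence off, model not downloaded).
        print("Model unavailable: \(reason)")
    } else {
        print("Model is available")
    }
}
```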
Hi Apple team,
When using AppShortcutsProvider, I hit the hard limit:
Each app may have at most 10 App Shortcuts.
This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration.
Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities?
AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful.
Thanks!
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Tags: Shortcuts, App Intents, Apple Intelligence
I've tried creating a LoRA adapter using the example dataset and scripts from adapter_training_toolkit_v26_0_0 (the latest available) on macOS 26 beta 6.
import SwiftUI
import FoundationModels
import Playgrounds

#Playground {
    // The absolute path to your adapter.
    let localURL = URL(filePath: "/Users/syl/Downloads/adapter_training_toolkit_v26_0_0/train/test-lora.fmadapter")

    // Initialize the adapter by using the local URL.
    let adapter = try SystemLanguageModel.Adapter(fileURL: localURL)

    // An instance of the system language model using your adapter.
    let customAdapterModel = SystemLanguageModel(adapter: adapter)

    // Create a session and prompt the model.
    let session = LanguageModelSession(model: customAdapterModel)
    let response = try await session.respond(to: "hello")
}
I get an "Adapter assets are invalid" error.
I've added the entitlements.
Is adapter_training_toolkit_v26_0_0 up to date?
Topic: Machine Learning & AI
SubTopic: Foundation Models
Was just wondering why the Foundation Models documentation is no longer available. Thanks!
https://developer.apple.com/documentation/FoundationModels
Topic: Machine Learning & AI
SubTopic: Foundation Models
I'm developing a macOS application using the FoundationModels framework
(LanguageModelSession) and encountering issues with the content sanitizer
blocking legitimate text input.
**Issue Description:**
The content sanitizer is flagging text strings that contain certain
substrings, even when they represent legitimate technical content. For
example:
F_SEEL_SEX1S.wav (sE Electronics SEX1S microphone model)
Technical product identifiers
Serial numbers and version codes
**Broader Concern:**
The content sanitizer appears to be applying restrictions that seem
inappropriate for user-owned content. Even if a filename were something
like "human sex.wav", users should have the right to process their own
legitimate files on their own devices without content filtering
interference.
**Error Messages:**
SensitiveContentSettings: Sanitizer model found unsafe content in value
FoundationModels.LanguageModelSession.GenerationError error 2
**Questions:**
1. Is there a way to disable content sanitization for processing user-owned content?
2. What's the recommended approach for applications that need to handle arbitrary user text?
3. Are there APIs to process personal content without filtering restrictions?
**Environment:**
macOS 26.0
FoundationModels framework
LanguageModelSession
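For completeness, this is roughly how I'm triggering the error. It's a simplified sketch: the instructions and error handling are for illustration only.

```swift
import FoundationModels

// Pass a filename through a session and surface any generation error.
func describeFile(named filename: String) async {
    let session = LanguageModelSession(
        instructions: "Summarize what the audio file named in the prompt might contain."
    )
    do {
        let response = try await session.respond(to: "Filename: \(filename)")
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        // This is where "error 2" (the sanitizer rejection) shows up for me.
        print("Generation failed: \(error)")
    } catch {
        print("Unexpected error: \(error)")
    }
}
```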
Any guidance would be appreciated.
From tensorflow-metal example:
Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
I know that Apple silicon uses UMA, and that memory copies are typical of CUDA, but wouldn't the GPU memory still be faster overall?
I have an iMac Pro with a Radeon Pro Vega 64 16 GB GPU and an Intel iMac with a Radeon Pro 5700 8 GB GPU.
But using tensorflow-metal is still WAY faster than using the CPUs. Thanks for that. I am surprised the 5700 is twice as fast as the Vega though.
Hello,
My app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app.
When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:
On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.
This exactly matches the hardware requirements for Apple Intelligence.
However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated, and my app is treated as a game.
My questions are:
Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
If not, is there another way to restrict installation to devices meeting Apple Intelligence requirements?
Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?
Thanks in advance for your help.
Hi everyone! 👋
I'm working on a C++ project using TensorFlow Lite and was wondering if anyone has a prebuilt TensorFlow Lite C++ library (libtensorflowlite) for macOS (Apple Silicon M1/M2) that they’d be willing to share.
I’m looking specifically for the TensorFlow Lite C++ API — something that lets me use tflite::Interpreter, tflite::FlatBufferModel, etc. Building it from source using Bazel on macOS has been quite challenging and time-consuming, so a ready-to-use .dylib or .a build along with the required headers would be incredibly helpful.
TensorFlow Lite version: v2.18.0 preferred
Target: macOS arm64 (Apple Silicon)
What I need:
libtensorflowlite.dylib or .a
Corresponding headers (ideally organized in a clean include/ folder)
If you have one available or know where I can find a reliable prebuilt version, I’d be super grateful. Thanks in advance! 🙏
I am trying to create a slightly different version of the content tagging code in the documentation:
https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel/usecase/contenttagging
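Here is roughly what I'm running; the Generable type, guide text, and prompt are my own placeholders rather than the exact documentation sample:

```swift
import FoundationModels
import Playgrounds

// Placeholder output type for the tagging result.
@Generable
struct ContentTags {
    @Guide(description: "Most important topics in the input")
    var topics: [String]
}

#Playground {
    // Use the content tagging use case of the system model.
    let model = SystemLanguageModel(useCase: .contentTagging)
    let session = LanguageModelSession(model: model)

    let response = try await session.respond(
        to: "Coffee with Sarah to plan the hiking trip",
        generating: ContentTags.self
    )
    print(response.content)
}
```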
In the playground I am getting an "Inference Provider crashed with 2:5" error.
I have no idea what that means or how to address the error. Any assistance would be appreciated.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I generate an array of random floats using the code shown below. However, I would like to do this with Double instead of Float. Are there any BNNS random number generators for double values, something like BNNSRandomFillUniformDouble? If not, is there a way I can convert BNNSNDArrayDescriptor from float to double?
import Accelerate
let n = 100_000_000
let result = Array<Float>(unsafeUninitializedCapacity: n) { buffer, initCount in
    var descriptor = BNNSNDArrayDescriptor(data: buffer, shape: .vector(n))!
    let randomGenerator = BNNSCreateRandomGenerator(BNNSRandomGeneratorMethodAES_CTR, nil)
    BNNSRandomFillUniformFloat(randomGenerator, &descriptor, 0, 1)
    initCount = n
}
I've been struggling to add Siri support to an application I have developed. I keep getting this error when I run it on macOS:
Failed to refresh AppShortcut parameters with error: Error Domain=Foundation._GenericObjCError Code=0 "(null)"
So I found AppIntentsSampleApp, downloaded and built it, and I get a similar, but larger, error:
Failed to refresh AppShortcut parameters with error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND originator doesn't have entitlement com.apple.runningboard.launchprocess)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND j
And it goes on and on.
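For reference, the shortcut provider in my own app is essentially the minimal setup below (the intent and phrases are placeholders, not my actual code):

```swift
import AppIntents

// A trivial intent just to have something for the shortcut to run.
struct OpenNotesIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Notes"

    func perform() async throws -> some IntentResult {
        .result()
    }
}

// The App Shortcuts provider that Siri/Shortcuts should pick up automatically.
struct MyShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OpenNotesIntent(),
            phrases: ["Open notes in \(.applicationName)"],
            shortTitle: "Open Notes",
            systemImageName: "note.text"
        )
    }
}
```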
What am I missing? I'm using Xcode 16. I don't see an option to add a Siri framework. I have tried adding both the Intents and App Intents frameworks, which does not seem to make a difference.
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
I keep getting the error “An unsupported language or locale was used.”
Is there any documentation that specifies the accepted languages or locales for Foundation Models?
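For anyone else looking, this is the kind of check I had in mind (a sketch; I'm assuming a `supportedLanguages` property exists and haven't verified the exact name):

```swift
import FoundationModels
import Playgrounds

#Playground {
    // Print the languages the system model reports as supported,
    // then the current locale's language for comparison.
    let supported = SystemLanguageModel.default.supportedLanguages
    for language in supported {
        print("Supported:", language.maximalIdentifier)
    }
    print("Current:", Locale.current.language.maximalIdentifier)
}
```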
Topic: Machine Learning & AI
SubTopic: Foundation Models
While building an app with large language model inferencing on device, I got gibberish output. After carefully examining every detail, I found it's caused by the fused scaledDotProductAttention operation. I switched back to the discrete operations and problem solved. To reproduce the bug, please check https://github.com/zhoudan111/MPSGraph_SDPA_bug
Topic: Machine Learning & AI
SubTopic: General
Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage
@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
Hey guys 👋
I’ve been thinking about a feature idea for iOS that could totally change the way we interact with apps like Twitter/X.
Imagine if we could define our own recommendation algorithm, and have an AI on the iPhone that replaces the suggested tweets in the feed with ones that match our personal interests — based on public tweets, and without hacking anything.
Kinda like a personalized "AI skin" over the app that curates content you actually care about. Feels like this would make content way more relevant and less algorithmically manipulative.
Would love to know what you all think — and if Apple could pull this off 🔥
Topic: Machine Learning & AI
SubTopic: General