Apple Intelligence

Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence tag

97 Posts

Post

Replies

Boosts

Views

Activity

Siri 2.0 (suggestions and future updates)
Hey dear developers! This post is meant as a place for future Siri updates and improvements, and also for wishes, so that everyone in this forum can share their opinions and ideas. Please stay friendly and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri. One wish of many: Siri's language recognition capabilities are significantly enhanced. Instead of setting the language manually, Siri automatically recognizes the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri recognizes it and responds accordingly. Whether you speak English, German, or Japanese, Siri responds in the language you choose.
1
1
950
Oct ’25
I Need some clarifications about FoundationModels
Hello I’m experimenting with Apple’s on‑device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I’d like to confirm a few points: • Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open‑source model? • Is this on‑device model available on iOS (and iPadOS), or is it limited to macOS? • When I write code in Xcode, is code completion powered by this same local model? If so, why isn’t the same model available in the left‑hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)? • Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply? Thanks
3
0
665
Oct ’25
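
On the points above that are publicly documented: the FoundationModels model is Apple's own on-device foundation model, it is available on iOS 26, iPadOS 26, and macOS 26 on Apple Intelligence-eligible devices, and it only sees data your app passes into the prompt or exposes through tools. A minimal sketch of the basic entry points (the instructions and prompt here are illustrative, not a full answer to the post):

import FoundationModels

// Check model availability, then send a one-off prompt (iOS 26 / macOS 26).
func askOnDeviceModel(_ prompt: String) async throws -> String? {
    guard case .available = SystemLanguageModel.default.availability else {
        // Unavailable when Apple Intelligence is off, the device is ineligible,
        // or model assets are still downloading.
        return nil
    }
    let session = LanguageModelSession(instructions: "Answer briefly.")
    let response = try await session.respond(to: prompt)
    return response.content
}
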
Apple on-device AI models
Hello, I am studying the macOS 26 Apple Intelligence features. I have created a basic Swift program with Xcode that sends prompts to FoundationModels.LanguageModelSession. It works fine, but this model is not trained for programming or code completion. Xcode has an AI code completion feature called the "Predictive Code Completion" model. So are there multiple on-device models on macOS 26? Are there others? Is there a way for me to send prompts to this Predictive Code Completion model from my program? Thanks
1
0
323
Oct ’25
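
As far as the public framework goes, FoundationModels exposes the general system language model (plus specialized use cases and custom adapters); the Predictive Code Completion model that Xcode downloads is a separate asset with, to my knowledge, no public prompt API. A hedged sketch of steering the general model toward a code-related task with instructions instead:

import FoundationModels

// Steer the general-purpose on-device model with instructions;
// it is not a code-completion model, but it can answer code-related questions.
func explainSnippet(_ code: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "You are a concise assistant. Explain what the given Swift snippet does."
    )
    let response = try await session.respond(to: code)
    return response.content
}
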
Xcode 26.0 beta 7 - AI - Create files? Client (Xcode) timing out?
I didn't check this specifically on earlier betas, but AI is allowed to create new .swift files, yes? I'm only seeing proposals to create files (as opposed to code, which it can change automatically), and the CREATE FILE button to the right of any proposed file creation does nothing. Next I'll mention running local models, but file creation does not seem to happen for either ChatGPT or any local model. Also, I'm experimenting with LM Studio, and it reports that my client is timing out. So I guess Xcode is not waiting long enough for a response? The local models are slow, yes, but is there a setting for the AI timeout value? I told the LLM "Every 1 minute send me an update so I know you are still working so my client does not time out", which seems to have no visible effect; it hasn't timed out yet, but I don't see that message appear.
3
1
346
Oct ’25
How can I give Foundation Models access to my documents
I would like to write a macOS application that uses on-device AI (FoundationModels). I don’t understand how to, practically, give it access to my documents, photos, or contacts and be able to ask it a question like: “Find the document that talks about this topic.” Do I need to manually retrieve the data and provide it in the form of a prompt? Or is FoundationModels capable of accessing it on its own? Thanks
1
0
610
Oct ’25
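
FoundationModels does not reach into your files, photos, or contacts on its own: your app reads the data (with the usual permission prompts where they apply) and passes the relevant text into the prompt, or exposes it through a Tool. A minimal sketch, assuming a plain-text file URL your app can already read; for larger collections you would need your own chunking or search step, since the on-device context window is limited:

import Foundation
import FoundationModels

// Read a document the app already has access to and ask the model about it.
func ask(about url: URL, question: String) async throws -> String {
    let documentText = try String(contentsOf: url, encoding: .utf8)
    let session = LanguageModelSession(
        instructions: "Answer questions using only the provided document."
    )
    let response = try await session.respond(
        to: "Document:\n\(documentText)\n\nQuestion: \(question)"
    )
    return response.content
}
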
Apple Intelligence language
I found what might be a bug with enabling Apple Intelligence when switching languages. When my iPhone's language is set to Catalan, Apple Intelligence is disabled because it is not available for that language. Switching to Spanish doesn't activate it, and it still shows the same unavailability message, this time saying it is not available in Spanish (which is not true). However, it is enabled after the phone is rebooted. From this point the bug becomes even weirder: with the iPhone language set to Spanish and Apple Intelligence on, I switch the language to Catalan, and the feature remains enabled. After I ask a query in Catalan, it surprisingly understands it and works, but then it gets disabled. Apart from that, as user feedback, I would love to be able to activate Apple Intelligence in an available language other than my device's language. That's how I always used Siri (iPhone in Catalan, Siri in Spanish). Thanks!
2
1
1.2k
Sep ’25
Error trying to connect to LM Studio running on another Mac
I have Xcode 26 on one Mac, but LM Studio running on another. LM Studio does not use an API key, so I have provided "dummy-api-key", but whenever I try to connect, it gives the error "Models could not be fetched with the provided account details". I have the IP and port (1234) of the LM Studio machine, and the server in LM Studio is running. Has anyone got this to work? And if so, what details did you enter?
1
0
255
Sep ’25
Can't turn off inline auto predictions in WKWebView on Mac
I have a WKWebView that contains a JS text editor, built on top of a contenteditable div. Currently, inline predictions on Mac significantly break the text editor's functionality every time there's a prediction. There's no way for me to fix this on the JS side unless I can know when a prediction is shown. I've tried disabling inline predictions and Writing Tools with the web view configuration, but it doesn't work:
let config = WKWebViewConfiguration()
config.writingToolsBehavior = .none
config.allowsInlinePredictions = false
let webView = WKWebView(frame: .zero, configuration: config)
I've also tried disabling all spellcheck and autocorrect features in the HTML, but that doesn't work either:
<div contenteditable="true" spellcheck="false" autocomplete="off" autocorrect="off" autocapitalize="off"></div>
Is there anything I can do to turn it off? Or is it possible to know when the WebView is predicting text?
1
0
304
Sep ’25
Foundational Model - Image as Input? Timeline
Hi all, I am interested in unlocking unique applications with the new foundational models. I have a few questions regarding the availability of the following features: Image Input: The update in June 2025 mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates) - however I can't seem to find any information about having images as the input/prompt for the foundational models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input into a multimodal on-device LLM (VLM) instead for features like "Which player is holding the ball in the image", etc (image understanding) Cloud Foundational Model - when will this be available? Thanks! Clement :)
1
0
594
Sep ’25
Model w/ Guardrails Disabled Still Frequently Refuses to Summarize Text
Foundation Models are driving me up the wall. My use case: A news app - I want to summarize news articles. Sounds like a perfect use for the added-in-beta-5 "no guardrails" mode for text-to-text transformations... ... and it's true, I don't get guardrails exceptions anymore but now, the model itself frequently refuses to summarize stuff which in a way is even worse as I have to parse the output text to figure out if it failed instead of getting an exception. I mostly worked that out with my system instructions but still, the refusing to summarize makes it really tough to use. I instructed the model to tell me why it failed if that happens. Examples of various refusals for news articles from major sources: "The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26." "The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines." "the article contains unsafe content." "The article is biased and opinionated and focuses on the author's opinion." (this is despite the instructions specifically asking for a neutral summary - I am asking it to not use bias in the output but it still refuses) I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine so right now I have to do two passes on much of the content since I never know which one will work. Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place, some of these stories are a bummer) is near worthless.
8
0
1k
Sep ’25
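
Since the post notes that the Generable path sometimes succeeds where the plain-text mode refuses, here is a minimal sketch of that guided-generation variant; the schema and instructions are illustrative, not the poster's actual ones:

import FoundationModels

@Generable
struct ArticleSummary {
    @Guide(description: "A neutral, three-sentence summary of the article.")
    var text: String
}

// Guided generation: the response is decoded into the ArticleSummary schema.
func summarize(article: String) async throws -> ArticleSummary {
    let session = LanguageModelSession(
        instructions: "Summarize news articles neutrally. Do not add opinions."
    )
    let response = try await session.respond(to: article, generating: ArticleSummary.self)
    return response.content
}
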
How to test for VisualIntelligence available on device?
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip which will only show this Tip on devices where Visual Intelligence is supported (ex. not iPhone 14 Pro Max). What is the best way for me to determine availability to set this TipKit rule? Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
0
0
738
Sep ’25
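
I'm not aware of a single public API that reports Visual Intelligence support directly, so the availability check below is a hypothetical helper you would implement yourself (for example from OS version and device checks); the TipKit side of gating a tip on a parameter looks roughly like this:

import TipKit

struct VisualIntelligenceTip: Tip {
    // Set once at launch from your own availability check (hypothetical helper, not an Apple API).
    @Parameter
    static var isSupported: Bool = false

    var title: Text { Text("Try Visual Intelligence") }
    var message: Text? { Text("Search what you see, right from this app.") }

    var rules: [Rule] {
        // Only show the tip on devices where the feature is supported.
        #Rule(Self.$isSupported) { $0 == true }
    }
}

// At startup, for example:
// VisualIntelligenceTip.isSupported = deviceSupportsVisualIntelligence() // your own check
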
Apple Intelligence in Xcode with models hosted in Azure AI Foundry
I have managed to configure Apple Intelligence in Xcode to access my deployed models in Azure AI Foundry. However, when I try to chat with the selected model, I get the following error: Message from Azure OpenAI: Failed request due to: bad request. Additional context: {"error": {"code": "BadRequest", "message": "Response API v1 is in GA, please use remove api-version or use api-version=latest."}} Is there a way to remove v1 from the endpoint used, or to define an explicit api-version? Or do I have to change my deployment in Azure AI Foundry?
1
0
136
Aug ’25
TAMM toolkit v0.2.0 targets a base model older than the one in macOS 26 beta 4
Problem: We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."
TAMM v0.2.0 contains export/constants.py with:
BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Findings:
Adapter export process: in export_fmadapter.py
def write_metadata(...):
    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # Hardcoded value
The compatibility check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model.
- If they don't match: compatibleAdapterNotFound error.
- The error doesn't reveal the expected signature.
Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it a SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with an updated signature?
1
0
658
Aug ’25
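
Whether BASE_SIGNATURE is simply a SHA-1 over the base model file is one of the open questions above; a quick, purely speculative way to test that hypothesis locally (the signature may well be computed over something else entirely):

import Foundation
import CryptoKit

// Hash a candidate file and compare it against the hardcoded BASE_SIGNATURE.
// This is only a hypothesis check; Apple has not documented how the signature is derived.
func sha1Hex(of fileURL: URL) throws -> String {
    let data = try Data(contentsOf: fileURL)
    return Insecure.SHA1.hash(data: data)
        .map { String(format: "%02x", $0) }
        .joined()
}

// let hash = try sha1Hex(of: URL(fileURLWithPath: "/path/to/base-model.pt"))
// print(hash == "9799725ff8e851184037110b422d891ad3b92ec1")
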
FoundationModels tool calling not working (iOS 26, beta 6)
I have a fairly basic prompt I've created that parses a list of locations out of a string. I've then created a tool, which for these locations, finds their latitude/longitude on a map and populates that in the response. However, I cannot get the language model session to see/use my tool. I have code like this passing the tool to my prompt:

class Parser {
    func populate(locations: String, latitude: Double, longitude: Double) async {
        let findLatLonTool = FindLatLonTool(latitude: latitude, longitude: longitude)
        let session = LanguageModelSession(tools: [findLatLonTool]) {
            """
            A prompt that populates a model with a list of locations.
            """
            """
            Use the findLatLon tool to populate the latitude and longitude for the name of each location.
            """
        }
        let stream = session.streamResponse(to: "Parse these locations: \(locations)", generating: ParsedLocations.self)
        let locationsModel = LocationsModels()
        do {
            for try await partialParsedLocations in stream {
                locationsModel.parsedLocations = partialParsedLocations.content
            }
        } catch {
            print("Error parsing")
        }
    }
}

And then the tool that looks something like this:

import Foundation
import FoundationModels
import MapKit

struct FindLatLonTool: Tool {
    typealias Output = GeneratedContent

    let name = "findLatLon"
    let description = "Find the latitude / longitude of a location for a place name."

    let latitude: Double
    let longitude: Double

    @Generable
    struct Arguments {
        @Guide(description: "This is the location name to look up.")
        let locationName: String
    }

    func call(arguments: Arguments) async throws -> GeneratedContent {
        let request = MKLocalSearch.Request()
        request.naturalLanguageQuery = arguments.locationName
        request.region = MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: latitude, longitude: longitude),
            latitudinalMeters: 1_000_000,
            longitudinalMeters: 1_000_000
        )
        let search = MKLocalSearch(request: request)
        let coordinate = try await search.start().mapItems.first?.location.coordinate
        if let coordinate = coordinate {
            return GeneratedContent(
                LatLonModel(latitude: coordinate.latitude, longitude: coordinate.longitude)
            )
        }
        return GeneratedContent("Location was not found - no latitude / longitude is available.")
    }
}

But trying a bunch of different prompts has not triggered the tool - instead, what appear to be totally random locations are filled in my resulting model and at no point does a breakpoint hit my tool code. Has anybody successfully gotten a tool to be called?
2
1
584
Aug ’25
Use apple private cloud model instead of local model
Hello, I have created this basic Swift program:
let session = LanguageModelSession(
    model: .default,
    instructions: "bla bla bla.")
I want to understand what I can put in the model parameter (instead of .default). How can I choose between the on-device local model (.default, I suppose) and the Apple private cloud model (or any other)? Thanks
3
0
473
Oct ’25
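
As far as the public framework goes, the model parameter takes a SystemLanguageModel; I'm not aware of any public Private Cloud Compute option in FoundationModels, only the on-device model, its specialized use cases, and custom adapters. A rough sketch of the variants:

import FoundationModels

// The default on-device model (what .default refers to).
let generalSession = LanguageModelSession(model: .default)

// The same on-device model specialized for a built-in use case, e.g. content tagging.
let taggingSession = LanguageModelSession(model: SystemLanguageModel(useCase: .contentTagging))

// Custom adapters trained with Apple's adapter toolkit are the other documented variant;
// server-side / Private Cloud Compute models are not exposed through this API as far as I know.
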
Xcode Version 26.0.1 (17A400) Model assets are unavailable
Hello, I was trying to test out Foundation Models, however it says "Model assets are unavailable". I got my MacBook M1 back in China when I was living there. Is this due to a region lock?
3
1
1.4k
Oct ’25
Beta 26.0 stuck on Apple intelligence screen after update
After updating to the newest beta on my iPhone 16, I'm stuck on this screen after accepting the terms: "No Internet Connection. Setup and activation of Apple Intelligence is unavailable when your device is offline. Connect to the internet and try again." I tried several different networks and 5G, but no luck.
2
1
526
Oct ’25
Image Playground not available on simulator
I am using the iPhone 17 Pro simulator that was included with Xcode 26.0.1. My Mac is running macOS 26. When I started the simulator for the first time I got the "Ready for Apple Intelligence" notification but when I access Image Playground in my app it says it is not available on this iPhone. Any solution to get it working on the simulator?
1
0
502
Sep ’25
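
For gating the feature in-app, the supported check is the SwiftUI environment value shown below; in my experience the simulator can report Image Playground as unsupported even when the host Mac runs Apple Intelligence, so a physical, eligible device is the safer test target. A minimal sketch:

import SwiftUI
import ImagePlayground

struct CreateImageButton: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var isPresented = false

    var body: some View {
        Button("Create Image") { isPresented = true }
            // Disable the entry point when Image Playground is unsupported (e.g. in the simulator).
            .disabled(!supportsImagePlayground)
            .imagePlaygroundSheet(isPresented: $isPresented) { url in
                print("Generated image at \(url)")
            }
    }
}
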
Xcode 26 RC: Intelligence not showing in settings
Installed Xcode 26 RC. Other components: the Predictive Code Completion Model is installed. Go to Settings, and Intelligence should be under Apple Accounts; however, it is not. How do I get Intelligence turned on?
1
0
358
Sep ’25
iOS 26 beta 7: no more Apple Intelligence toggle?
After installing the iOS 26 beta 7 developer build on my iPhone 16e, I can no longer see the toggle to activate Apple Intelligence on my phone under Settings > Apple Intelligence. Could you please explain how to activate it? Many thanks.
0
0
184
Aug ’25