I’ve had the beta installed and been signed up since October 23rd, and I’m still sitting on “Early Access Requested.” It’s an iPhone 16 Pro Max. My son has my old 15 Pro Max; he downloaded the 18.2 beta four days ago, requested access, and already has it. So is it a lottery system, or do only certain phone models get access at a given time? Anyone else have the same issue?
Thanks in advance,
Jeremy
Apple Intelligence
Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.
On 25 Oct I updated and requested early access for the Image Playground feature. It has been stuck on the same screen for 10 days. Now the new 18.2 beta 2 is available, and I still don't have access to the feature from one version back.
So, I've declared an AppIntent that indicates my app can "Open files" that conform to UTType.Image.
I've got a @AssistantEntity(schema: .files.file) and a
@AssistantIntent(schema: .files.openFile) declared.
So I navigate to the Files app, Quick Look an image, and open Type to Siri.
I tell Siri "open this in " and all it does is act like "open ". No breakpoint is hit in my intent's perform method.
Am I doing something wrong? How can I test these cross-app behaviors?
Are they... not actually possible? Does an "OpenIntent" only work on my app's own URLs and not on file URLs from other apps?
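For reference, here is roughly what I have declared, reduced to plain App Intents types. This is only a sketch with placeholder names (ImageFileEntity, OpenImageFileIntent); the @AssistantEntity/@AssistantIntent macros add schema validation on top of shapes like these:

import AppIntents

// Placeholder entity standing in for the .files.file assistant schema.
struct ImageFileEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Image File"
    static var defaultQuery = ImageFileQuery()

    var id: String
    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(id)")
    }
}

struct ImageFileQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [ImageFileEntity] {
        identifiers.map { ImageFileEntity(id: $0) }
    }
}

// Placeholder intent standing in for the .files.openFile assistant schema.
struct OpenImageFileIntent: OpenIntent {
    static var title: LocalizedStringResource = "Open Image File"

    @Parameter(title: "File")
    var target: ImageFileEntity

    func perform() async throws -> some IntentResult {
        // If Siri routed another app's file here, this breakpoint would fire.
        .result()
    }
}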
Apple Intelligence is not working at all. It was working OK under 18.1, but nothing under 18.2??
I noticed that ChatGPT is listed as an "Extension" in the Apple Intelligence settings on the iOS 18.2 beta. Does this mean developers will be able to create their own extensions? Or will this be limited to larger companies incorporating their own models into Apple Intelligence?
Hey, has anyone figured out how the “Persons” list in Genmoji/Playground actually works?
I’ve had a strange experience so far. When I first got access during Beta 2, the list randomly included about 10–15 people, even though my photo library contains many more recognizable faces. To try fixing this, I started naming faces in the Photos app, hoping they’d be added to the Genmoji/Playground list, but nothing changed.
Then, after updating to Beta 3, it added just 2–3 of the people I had named. Encouraged, I spent about an hour naming all the faces in my library. But a few hours later, the list unexpectedly removed around 10 people, leaving me with fewer than I had initially.
I’ve also read that leaving the phone locked and plugged into power should help sort people in the library, but that hasn’t worked for me yet.
Anyone else experienced this or found a way to make it work? Thanks!
It's been 4-5 days and my Image Playground has been showing "Downloading Support for Image Playground."
I'm trying to determine the best practice for handling the cases where Image Playground is supported but not installed, or simply not supported.
If ImagePlaygroundViewController.isAvailable is true, I will just display a button to start an Image Playground session. If it is false, does that mean ImagePlayground is supported but not installed?
If it's supported and not installed, instead of a button to launch it, I want to display something like "Enable Apple Intelligence in Settings" or, better yet, a button that opens the Intelligence settings. Is that possible?
But if it is on a system that doesn't support it, of course, I don't want to instruct the user to enable it. How can I determine if a device cannot install Image Playground?
I read that Apple Intelligence requires the iPhone 15 Pro, iPhone 15 Pro Max, or any iPhone 16 model, with no mention of the M1 iPad Pro, yet Image Playground runs on my M1 iPad Pro. What are the hardware requirements for Image Playground?
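For context, here is the branching I have in mind. It's only a sketch: configurePlaygroundButton is a placeholder name, and the fallback opens the app's own Settings page via UIApplication.openSettingsURLString, since I'm not aware of a documented public deep link to the Apple Intelligence pane:

import UIKit
import ImagePlayground

func configurePlaygroundButton(_ button: UIButton, presenter: UIViewController) {
    if ImagePlaygroundViewController.isAvailable {
        // Supported and installed: launch an Image Playground session.
        button.setTitle("Create with Image Playground", for: .normal)
        button.addAction(UIAction { _ in
            presenter.present(ImagePlaygroundViewController(), animated: true)
        }, for: .touchUpInside)
    } else {
        // Not available: point the user at Settings. This opens the app's
        // own Settings page, not the Apple Intelligence pane directly.
        button.setTitle("Enable Apple Intelligence in Settings", for: .normal)
        button.addAction(UIAction { _ in
            if let url = URL(string: UIApplication.openSettingsURLString) {
                UIApplication.shared.open(url)
            }
        }, for: .touchUpInside)
    }
}

In SwiftUI there is also the supportsImagePlayground environment value (iOS 18.1+), but I haven't found a check that distinguishes "unsupported hardware" from "supported but not yet installed."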
Hey, I have an M1 MacBook Pro and I don't understand why, but since macOS 15.2 the Apple Intelligence download has remained stuck at 100%, with the same message telling me to stay plugged in and connected to a network.
Hi everyone,
I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.”
I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking?
Any help or insights would be greatly appreciated!
Thanks in advance!
iOS 18.2 includes a new feature called Visual Intelligence. If I hold down the Camera Control on my iPhone, I can take a photo of an object and use Google to look up items similar to what I've photographed.
Is there a way to programmatically open this interface within my app? If so, can I see which result the user selects?
Hey dear developers!
This post is meant to collect future Siri updates and improvements, but also wishes, so that everyone in this forum can share their opinions and ideas. Please stay friendly and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri.
My wish, one of many:
Wish Update: Siri's language recognition capabilities have been significantly enhanced. Instead of manually setting the language, Siri can now automatically recognize the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri will automatically recognize it and respond accordingly. Whether you speak English, German, or Japanese, Siri will respond in the language you choose.
Hi,
I've been modifying the Camera sample app found here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ... In the preview-image processing, I call into the Vision APIs to detect a person or object, then use the segmentation mask to extract the person and composite them onto a different background with some other filters. I'm using Core Image to filter the CIImages, then converting and displaying them as a SwiftUI Image. When running on my iPhone, it works fine. When running on my iPhone with the debugger, it crashes within a few seconds. Attached is a screenshot. At the top is an EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>:
This was working fine a couple of days ago; I'm not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
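For reference, the per-frame processing is essentially this kind of pipeline (a simplified sketch; compositePerson is a placeholder name, and the real code runs on every preview frame):

import Vision
import CoreImage
import CoreImage.CIFilterBuiltins

func compositePerson(from frame: CVPixelBuffer, over background: CIImage) throws -> CIImage? {
    // Ask Vision for a person-segmentation mask for this frame.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    try VNImageRequestHandler(cvPixelBuffer: frame, options: [:]).perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    let person = CIImage(cvPixelBuffer: frame)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    // The mask comes back at the model's resolution; scale it to the frame.
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: person.extent.width / mask.extent.width,
        y: person.extent.height / mask.extent.height))

    // Composite the masked person over the new background with Core Image.
    let blend = CIFilter.blendWithMask()
    blend.inputImage = person
    blend.backgroundImage = background
    blend.maskImage = mask
    return blend.outputImage
}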
Attempted to download the Adapter Toolkit linked from https://developer.apple.com/apple-intelligence/foundation-models-adapter/. It failed on all attempts with a "403 Forbidden" error. I had accepted the agreement on the first attempt. How do we get access, please?
If users turn off Apple Intelligence, what happens to apps that leverage the Foundation Models framework?
Would a popup automatically be shown telling the user to enable Apple Intelligence if they have the toggle turned off? Just curious what that experience looks like for both us as developers and for users.
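From what I can tell there is no automatic system popup; the app is expected to check availability itself. A sketch of the check I'd expect, assuming the SystemLanguageModel.default.availability API and its unavailability reasons:

import FoundationModels

func checkModelAvailability() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("Model ready; proceed with a LanguageModelSession.")
    case .unavailable(.appleIntelligenceNotEnabled):
        // The toggle is off: the app shows its own prompt to enable it.
        print("Ask the user to enable Apple Intelligence in Settings.")
    case .unavailable(.deviceNotEligible):
        print("Hide the feature; this hardware can't run the model.")
    case .unavailable(.modelNotReady):
        print("Model assets are still downloading; try again later.")
    case .unavailable(let reason):
        print("Unavailable for another reason: \(reason)")
    }
}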
When I ran the following code on a physical iPhone device that supports Apple Intelligence, I encountered the following error log.
What does this internal error code mean?
Image generation failed with NSError in a different domain: Error Domain=ImagePlaygroundInternal.ImageGeneration.GenerationError Code=11 “(null)”, returning a generic error instead
import ImagePlayground

// Request a single generated image for the concept "cat" in the first
// available style (falling back to .animation).
let imageCreator = try await ImageCreator()
let style = imageCreator.availableStyles.first ?? .animation
let stream = imageCreator.images(for: [.text("cat")], style: style, limit: 1)
for try await result in stream { // throws ImagePlayground.ImageCreator.Error.creationFailed
    _ = result.cgImage
}
I'm using an iPhone 15 Pro Max running developer beta 18.2, released today. I've already been an Apple Intelligence user and have now been able to link it with my paid ChatGPT account.
However, I'm searching for the Image features everyone seems to be posting about and cannot find them anywhere.
I'm apparently supposed to sign up for beta access to the Image features through some new natively released Apple app that was supposedly included in this build, which I cannot find.
What gives??!!
I have updated to iOS 18.2. Image Playground is stuck at "access requested" and it's been a day and a half. When will this be available on my phone? I am using a 16 Pro Max.
Just wanted to reach out to see if this is the norm. I see several posts saying people are still waiting for the early access Playground app. What's going on? It's been almost 48 hours and I've received nothing. If this is the norm, then so be it, but even when I had to wait for Apple Intelligence early access, that took only a few hours. Hopefully this will be resolved quickly. I mean, what's the point of being developer beta testers if we can't test the beta?
I just watched the October 30 MacBook Pro announcement, where they talked about on-device local LLMs for the M4s.
What developer training resources are available where we can learn how to use custom LLM models and build our Swift apps to use both Apple Intelligence and other LLMs on device?
Is the guidance to follow the MLX GitHub repos, or were those experimental and there is now an approved workflow and tooling?
https://www.youtube.com/watch?v=G0cmfY7qdmY