iOS 18.2 includes a new feature called Visual Intelligence. If I hold down the Camera Control on my iPhone, I can take a photo of an object and use Google to look up items similar to what I've photographed.
Is there a way to programmatically open this interface within my app? If so, can I see which result the user selects?
Apple Intelligence
Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.
Hi everyone,
I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.”
I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking?
Any help or insights would be greatly appreciated!
Thanks in advance!
There was a beta version. After the update, it worked just like regular Siri. This message has been there for two days now, but nothing is loading.
Hey, I have an M1 MacBook Pro, and I don't understand why, but since macOS 15.2 the Apple Intelligence download has remained stuck at 100%, with the same message telling me to stay plugged in and connected to a network.
I've implemented the imagePlaygroundSheet modifier in my app. It eventually all works but I've consistently noticed that the first time I present it, the sheet is totally blank. I then have to pull down to dismiss it (it doesn't even have a cancel button) and present it a second time and it loads content.
Just me? This is on 18.2 final, iPhone 16 Pro Max.
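For context, the presentation is essentially this minimal setup (simplified sketch; the view name, state property, and concept string are placeholders, not my actual code):

import SwiftUI
import ImagePlayground

struct StickerMakerView: View {
    @State private var showPlayground = false
    @State private var generatedImageURL: URL?

    var body: some View {
        Button("Create Image") {
            showPlayground = true
        }
        // Presents the system Image Playground sheet for a text concept.
        .imagePlaygroundSheet(isPresented: $showPlayground, concept: "a friendly robot") { url in
            // The system hands back a file URL for the generated image.
            generatedImageURL = url
        }
    }
}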
Howdy,
I'm following along with this sample:
https://developer.apple.com/documentation/appintents/making-onscreen-content-available-to-siri-and-apple-intelligence
I've got everything up and building. I can confirm that the userActivity modifier is associating my App Intent via EntityIdentifier but my custom Transferable representation (text) is never being called and when Siri is doing the ChatGPT handoff, it's just offering to send a screenshot which is what it does when it has no custom representation.
What could I be doing wrong? Where should I be looking?
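For reference, the custom text representation is basically this shape (simplified sketch; BookSummary and its properties are placeholder names, not my real entity):

import CoreTransferable
import UniformTypeIdentifiers

struct BookSummary: Transferable {
    var title: String
    var text: String

    static var transferRepresentation: some TransferRepresentation {
        // Plain-text export that Siri should hand off instead of a screenshot.
        DataRepresentation(exportedContentType: .plainText) { value in
            Data("\(value.title)\n\(value.text)".utf8)
        }
    }
}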
I have an issue with the AI Writing Tools: certain applications, such as LinkedIn, work as expected, while others, like Instagram and WhatsApp, lack the Writing Tools option.
Hi everyone,
On the "Apple Intelligence & Siri" settings there's a section titled "Extensions" that specifically mentions ChatGPT.
This got me curious—does Apple provide an API or SDK for developers to create custom integrations or use Apple Intelligence Extensions? Or is this currently limited to the Apple/OpenAI partnership?
I appreciate any insights or links to relevant documentation.
Here's a screenshot of what I mean: https://imgur.com/a/4MuQkIJ
Can anyone tell me whether developers located in the European Union can access the Apple Intelligence services?
Thank you
I want to get a depth map when the camera zooms in, zooms out, or switches to the telephoto lens.
I have already obtained a depth map using ARKit, which fuses the colored RGB image from the wide-angle camera with the depth readings from the LiDAR scanner.
Now I want to switch the camera to telephoto and get a new depth map.
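For reference, this is roughly how I get the current fused depth map today (simplified sketch using ARKit's scene-depth frame semantics; the class name is a placeholder):

import ARKit

final class DepthCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        // Ask ARKit for LiDAR-fused depth alongside the camera image.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // depthMap is a CVPixelBuffer aligned with frame.capturedImage (wide camera).
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        _ = depthMap // ... process the depth map here ...
    }
}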
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents the image playground in a fixed size. Especially on macOS, this modal is rather small and doesn't use the available space efficiently.
Is there a way to make the modal bigger, or allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system...
I guess this question applies to other system-provided sheets like the photo picker as well.
Hey Chat,
I'm researching personality analysis using LLMs, and I'm curious about whether Apple’s AI can be allowed access to your messages, Instagram DMs, and similar communications to perform a personality analysis based on your writing style. If anyone has insights on this, I would greatly appreciate your input. Thx a ton
I'm currently trying to add support for Image Playground to our apps. It seems that it's not working in an app that is "Designed for iPad" and runs on a Mac. The modal just shows a spinner and the following is logged to console:
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
GP extension could not be loaded: Extension (platform: 2) could not be found (in update)
dealloc Query controller [C32BA176-6A3E-465D-B3C5-0F8D91068B89]
ImagePlaygroundViewController.isAvailable returns true, however.
In a "real" Mac Catalyst app, it's working. Just not when the app is actually an iPad app.
Is this a bug?
I cannot find the hardware requirements for Image Playground documented anywhere. I'm also not sure if they are identical to devices that support Apple Intelligence.
On the App Store, the only requirement listed for Image Playground is iOS 18.2.
Not knowing the requirements is an issue because I need to be able to clearly state the requirements for the feature in my app description.
Also, I'm sure my mother's current iPad is too old, but I'm not sure what models support it if I were to buy her a new one.
I'm trying to determine the best practice for handling the case where Image Playground is supported but not installed, versus simply not supported.
If ImagePlaygroundViewController.isAvailable is true, I will just display a button to start an Image Playground session. If it is false, does that mean Image Playground is supported but not installed?
If it's supported and not installed, instead of a button to launch it, I want to display something like "Enable Apple Intelligence in Settings" or, better yet, a button that opens the Intelligence settings. Is that possible?
But if it is on a system that doesn't support it, of course, I don't want to instruct the user to enable it. How can I determine if a device cannot install Image Playground?
I read that Apple Intelligence requires iPhone 15 Pro, iPhone 15 Pro Max, and all iPhone 16 models, and no mention of the M1 iPad Pro, yet Image Playground runs on my M1 iPad Pro. What are the hardware requirements for Image Playground?
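Right now the only check I've found is the one below; it tells me whether the feature can be used at all, but not whether the device is unsupported versus merely not set up (minimal sketch, assuming an iOS 18.2 deployment target; the fallback handling is hypothetical):

import UIKit
import ImagePlayground

@MainActor
func presentImagePlaygroundIfPossible(from presenter: UIViewController) {
    guard ImagePlaygroundViewController.isAvailable else {
        // false could mean an unsupported device, Apple Intelligence turned off,
        // or the model assets not downloaded yet; I haven't found an API that
        // distinguishes these cases, so this is where the "Enable Apple
        // Intelligence in Settings" hint would go.
        return
    }
    let playground = ImagePlaygroundViewController()
    presenter.present(playground, animated: true)
}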
So when I was in the Settings app, I couldn't see it, but I updated it and I don't know why it still doesn't show. Is this a glitch? Please fix it. Your friend, Isaiah.
Hi!
I recently updated to the latest 18.2 Beta version of iOS on my iPhone 15 Pro Max. Could you please guide me on how to locate and utilize the Image Search feature powered by Apple Intelligence?
Just a little detail: I went on YouTube, and the instruction was to hold the Camera Control button on the iPhone 16, and the image search appears.
So far, I haven’t been able to replicate these results on my iPhone 15 Pro Max. This is a great capability and I’d really like to try it out.
“Live long and prosper.” -Spock
-Jordan
Hi,
I'm trying to analyze images in my Photos library with the following code:
import UIKit
import Photos
import Vision

func analyzeImages(_ inputIDs: [String]) {
    let manager = PHImageManager.default()
    let option = PHImageRequestOptions()
    option.isSynchronous = true
    option.isNetworkAccessAllowed = true
    option.resizeMode = .none
    option.deliveryMode = .highQualityFormat

    let concurrentTasks = 1
    let clock = ContinuousClock()
    let duration = clock.measure {
        let group = DispatchGroup()
        let sema = DispatchSemaphore(value: concurrentTasks)

        for entry in inputIDs {
            if let asset = PHAsset.fetchAssets(withLocalIdentifiers: [entry], options: nil).firstObject {
                print("analyzing asset: \(entry)")
                group.enter()
                sema.wait()
                manager.requestImage(for: asset, targetSize: PHImageManagerMaximumSize, contentMode: .aspectFit, options: option) { (result, info) in
                    if let result = result {
                        Task {
                            print("retrieved asset: \(entry)")
                            let aestheticsRequest = CalculateImageAestheticsScoresRequest()
                            let fingerprintRequest = GenerateImageFeaturePrintRequest()
                            let inputImage = result.cgImage!
                            let handler = ImageRequestHandler(inputImage)
                            let (aesthetics, fingerprint) = try await handler.perform(aestheticsRequest, fingerprintRequest)
                            // save results
                            print("finished asset: \(entry)")
                            sema.signal()
                            group.leave()
                        }
                    } else {
                        group.leave()
                    }
                }
            }
        }
        group.wait()
    }
    print("analyzeImages: Duration \(duration)")
}
When running this code, only two requests are being processed simultaneously (due to the semaphore)... However, if I call the function with a large list of images (>100), memory usage balloons to over 1.6 GB and the app crashes. If I call it with a smaller number of images, the loop completes and the memory is freed.
When I use Instruments to look for memory leaks, it indicates no memory leaks are found, but there are 150+ VM: IOSurface allocations from CMPhoto, CoreVideo, and CoreGraphics at 35 MB each. Shouldn't each surface be released when the task is complete?
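For comparison, here is a simplified sequential variant of the same loop that I'm considering (same Photos and Vision calls as above, but async/await instead of the DispatchGroup and semaphore, so each CGImage goes out of scope before the next request starts; I don't yet know whether it changes the IOSurface behavior):

import UIKit
import Photos
import Vision

func analyzeImagesSequentially(_ inputIDs: [String]) async throws {
    let manager = PHImageManager.default()
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat

    for entry in inputIDs {
        guard let asset = PHAsset.fetchAssets(withLocalIdentifiers: [entry], options: nil).firstObject else { continue }

        // Bridge the callback-based image request into async/await.
        let cgImage: CGImage? = await withCheckedContinuation { continuation in
            manager.requestImage(for: asset,
                                 targetSize: PHImageManagerMaximumSize,
                                 contentMode: .aspectFit,
                                 options: options) { result, _ in
                continuation.resume(returning: result?.cgImage)
            }
        }
        guard let cgImage else { continue }

        // Same Vision requests as in the original code.
        let handler = ImageRequestHandler(cgImage)
        let (aesthetics, fingerprint) = try await handler.perform(
            CalculateImageAestheticsScoresRequest(),
            GenerateImageFeaturePrintRequest()
        )
        // ... save results ...
        _ = (aesthetics, fingerprint)
        print("finished asset: \(entry)")
    }
}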
I just got my new iPhone 16 Pro and upgraded to the 18.2 developer beta 4. I've set both Siri and the device language to English (United States), but the Apple Intelligence feature still doesn’t appear in my settings.
I tapped Join the Apple Intelligence Waitlist and chose Join Waitlist, but it doesn't show that I've joined the list.