Thanks, very good to know. I'll see if I can reproduce it in a test project (unfortunately the project it's in is huge) and post here and do a FB.
It was more of a general question about whether the API is working correctly for others, which can of course be very useful information - it's pretty common for Apple APIs, especially in SwiftUI, to behave in strange ways and sometimes just not work at all.
So that was the purpose - knowing it works fine for everyone else would be useful. That's what the 'just me' question means.
But sure, here's the code. It's basically the same as the sample in the documentation, and since this modifier just uses a Bool state for presentation, posting it didn't seem that useful.
With the code below:
1. When the button is pressed, the sheet presents as expected.
2. The sheet is blank, i.e. totally white.
3. Wait a little to see what happens (nothing happens).
4. Dismiss the sheet with the interactive pull-to-dismiss gesture.
5. Tap the button again.
6. The sheet appears and the Image Playground content loads.
Code is as follows in case something does jump out. Maybe you'll see something that I do not.
import SwiftUI
#if canImport(ImagePlayground)
import ImagePlayground
#endif

@available(iOS 18.2, *)
struct TripCustomImageGenerationView: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground

    @State private var showImagePlayground: Bool = false
    @State private var data: Data?

    var body: some View {
        if supportsImagePlayground {
            Button("Generate Custom Image") {
                showImagePlayground = true
            }
            .imagePlaygroundSheet(isPresented: $showImagePlayground, concept: "Test Image Concept") { url in
                self.data = try? Data(contentsOf: url)
            }
        } else {
            EmptyView()
        }
    }
}
A little more on this... For my document implementation, I switched from plain text to generating a PDF, using a DataRepresentation instead of a ProxyRepresentation (with the string), and now it works.
No idea why the basic string didn't work, but the PDF approach does.
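Roughly what that change looks like - a minimal sketch with a hypothetical TripDocument type and makePDF() helper, not the actual app code:

import CoreTransferable
import UniformTypeIdentifiers

// Hypothetical document type standing in for the real one.
struct TripDocument: Transferable {
    var text: String

    static var transferRepresentation: some TransferRepresentation {
        // Export rendered PDF data rather than proxying the plain string.
        DataRepresentation(exportedContentType: .pdf) { document in
            document.makePDF()
        }
        // The earlier (non-working) approach was roughly:
        // ProxyRepresentation(exporting: { $0.text })
    }

    // Placeholder for the real PDF rendering (e.g. via UIGraphicsPDFRenderer).
    func makePDF() -> Data {
        Data()
    }
}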
Hi Ed,
I haven't split it out into a test project yet but that may be the next step. This is a complex app which always makes the extraction a bit more of a process.
One interesting note - I have this functionality allowing Siri to 'read' the screen in this way for two types of content. One is in the app's photo gallery, which is very similar to the sample code. That one works - when Siri shares with ChatGPT, it correctly identifies the type as 'photo' rather than 'screenshot', and my Transferable implementation is called.
The second one, which isn't working, uses the .reader.document schema, i.e. I want to share a text document with Siri, but instead it only wants to share a screenshot.
Looking at the code, other than the schema type being different, the mechanics are basically the same, which is where I'm confused. I was curious if anyone else had done this and could be helpful.
Perhaps a sample is the next thing...
This is a developer forum - sounds like you might want user support. Apple does have user support forums which are probably a better fit - we talk about APIs here.
The feature you're talking about is iPhone 16 only. It requires the Camera Control button.
You need to include the CoreSpotlightContinuation key in your Info.plist - it's a boolean that you should set to YES.
https://developer.apple.com/documentation/corespotlight/csquerycontinuationactiontype
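For reference, handling the continuation on the app side looks roughly like this - a sketch assuming a UIKit app delegate, not code from this thread:

import CoreSpotlight
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        // With CoreSpotlightContinuation set to YES, Spotlight hands the
        // in-progress query to the app via this activity type.
        guard userActivity.activityType == CSQueryContinuationActionType,
              let query = userActivity.userInfo?[CSSearchQueryString] as? String else {
            return false
        }
        // Continue the search in-app with the text the user typed in Spotlight.
        print("Continuing Spotlight search for: \(query)")
        return true
    }
}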
After spending more time on this: you can do both if you want to, but you don't need to.
After spending some time talking with DTS and reviewing the updated App Intents sample code located here https://developer.apple.com/documentation/appintents/acceleratingappinteractionswithappintents, I've got my answer.
The sample provides one way to integrate Spotlight: indexing a separate model and associating the app entity. What I discovered is that if you want to implement IndexedEntity, you can indeed provide an extended attribute set - you just need to start from the existing defaultAttributeSet rather than creating your own. Pretty simple:
extension HotelEntity: IndexedEntity {
    var attributeSet: CSSearchableItemAttributeSet {
        let existingAttributes = defaultAttributeSet // this is the key
        existingAttributes.displayName = "\(name) displayName"
        existingAttributes.title = "\(name) title"
        existingAttributes.domainIdentifier = "\(name) domainIdentifier"
        existingAttributes.identifier = "\(name) identifier"
        existingAttributes.contentDescription = "\(name) contentDescription"
        existingAttributes.namedLocation = "\(name) namedLocation"
        return existingAttributes
    }
}
With that, it's all working!
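In case it helps anyone else, the entities still need to be handed to Spotlight; the indexing call looks roughly like this (a sketch - the entity values are assumed to come from your model layer):

import AppIntents
import CoreSpotlight

// Hypothetical helper; hotelEntities comes from the app's model layer.
func indexHotels(_ hotelEntities: [HotelEntity]) async throws {
    try await CSSearchableIndex.default().indexAppEntities(hotelEntities)
}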
I was bored, so I played with this a bit - I couldn't make it work with a custom type. It works fine if you use the built-in 'todoItem' type. Seems like a bug, yeah.
Related follow-up question for this integration with Spotlight:
Do I need to implement both CSSearchableItem's new associateAppEntity and a custom attributeSet in my IndexedEntity conformance? It seems duplicative, but I can't tell from the video whether you're supposed to do both or just one or the other.
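For context, the associateAppEntity path I'm asking about looks roughly like this (a sketch based on the video, with illustrative names, not working code from my app):

import AppIntents
import CoreSpotlight
import UniformTypeIdentifiers

// Hypothetical: index a CSSearchableItem and associate the app entity with it.
func indexHotel(_ hotel: HotelEntity) async throws {
    let attributes = CSSearchableItemAttributeSet(contentType: .content)
    attributes.displayName = hotel.name
    let item = CSSearchableItem(uniqueIdentifier: hotel.id, // assuming a String id
                                domainIdentifier: "hotels",
                                attributeSet: attributes)
    item.associateAppEntity(hotel, priority: 1)
    try await CSSearchableIndex.default().indexSearchableItems([item])
}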
I don't think this is live in Siri yet... at least they haven't announced it and none of the other features in the 'personal context' category have shipped. Apple hasn't said when - rumors say 18.3 or 18.4 (i.e. 2025).
Yeah, it's talked about here - all of the testing is meant to be done in the Shortcuts app for now.
https://developer.apple.com/videos/play/wwdc2024/10133/?time=104
Are these features even enabled in any version of iOS 18, shipped or seeded? I thought the @AssistantIntent and related APIs were included in the SDK so we could get ready, but that they weren't active yet in Siri. I can't remember where I got that - the WWDC video, I think?
No, I have not been able to get any prompt to appear.
The first seed has only been out for a week so it's not possible that you've been waiting that long. Check the Feedback app - it has a note posted about testing those features stating that users will be added over the coming weeks (i.e. multiple).
Thanks.
This has been filed as FB15340069.