Here is an AppleScript by Normen Hansen that makes it possible to double-click files in Finder so that they open in Vim: https://github.com/normen/vim-macos-scripts/blob/master/open-file-in-vim.applescript (it must be used from Automator).
I tried to simplify it and also to make it usable without Automator. To use my version, you only need to save it as an application instead, like this:
osacompile -o open-with-vim.app open-with-vim.applescript
and then copy it to /Applications and set your .txt files to be opened using this app.
Here it is; it almost works:
-- https://github.com/normen/vim-macos-scripts/blob/master/open-file-in-vim.applescript
-- opens Vim with file or file list
-- sets current folder to parent folder of first file
on open theFiles
    set command to {}
    if input is null or input is {} or ((item 1 in input) as string) is "" then
        -- no files, open vim without parameters
        set end of command to "vim;exit"
    else
        set firstFile to (item 1 of theFiles) as string
        tell application "Finder" to set pathFolderParent to quoted form of (POSIX path of ((folder of item firstFile) as string))
        set end of command to "cd" & space & (pathFolderParent as string)
        set end of command to ";hx" -- e.g. vi or any other command-line text editor
        set fileList to {}
        repeat with i from 1 to (theFiles count)
            set end of fileList to space & quoted form of (POSIX path of (item i of theFiles as string))
        end repeat
        set end of command to (fileList as string)
        set end of command to ";exit" -- if Terminal > Settings > Profiles > Shell > When the shell exits != Don't close the window
    end if
    set command to command as string
    set myTab to null
    tell application "Terminal"
        if it is not running then
            set myTab to (do script command in window 1)
        else
            set myTab to (do script command)
        end if
        activate
    end tell
    return
end open
The only problem is the if block at the very beginning:
if input is null or input is {} or ((item 1 in input) as string) is "" then
What is wrong here? How can I make it work? (Assuming it already works fine when the AppleScript is used via Automator.)
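I suspect the guard still refers to input (the variable name from the original Automator version), while my handler's parameter is named theFiles, and that item 1 in should be item 1 of. A sketch of the same check rewritten against theFiles (my assumption, untested):

    if theFiles is {} or ((item 1 of theFiles) as string) is "" then
        -- no files, open vim without parameters
        set end of command to "vim;exit"
    else
        -- ... unchanged ...
    end if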
Automation & Scripting
Learn about scripting languages and automation frameworks available on the platform to automate repetitive tasks.
In my case, when two functions that each start a Live Activity (not connected to each other) are performed in a LiveActivityIntent's perform(), it seems that only one starts.
(The same happens when they are started independently in two Task {} blocks.)
And if I set one as an opensIntent, splitting the work by having it open another LiveActivityIntent, the result is the same.
Also, every time I tap the intent directly in the Shortcuts app, one Activity ends within a matter of seconds, even if both exist for a while.
But if openAppWhenRun is set to true, it seems to work without any problems.
I would appreciate it if you could give me a tip to fix this problem.
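For reference, a minimal sketch of the setup (the attribute and intent names are hypothetical placeholders, not my real code):

import ActivityKit
import AppIntents

// Hypothetical attribute types standing in for the two unrelated Live Activities.
struct ActivityAAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable { var text: String }
}

struct ActivityBAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable { var text: String }
}

struct StartBothActivitiesIntent: LiveActivityIntent {
    static var title: LocalizedStringResource = "Start Both Activities"

    func perform() async throws -> some IntentResult {
        // Both requests run inside perform(); as described above,
        // only one Live Activity actually appears unless openAppWhenRun is true.
        _ = try Activity.request(
            attributes: ActivityAAttributes(),
            content: .init(state: .init(text: "A"), staleDate: nil))
        _ = try Activity.request(
            attributes: ActivityBAttributes(),
            content: .init(state: .init(text: "B"), staleDate: nil))
        return .result()
    }
}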
Hi
Here is my problem. I have a large number of portrait pictures in my Adobe Lightroom catalogue which I need to email to the subjects of the portraits.
These people's names and email addresses are stored in my pictures' metadata, and I am able to export the pictures with accompanying text or CSV files containing the names & addresses.
What I want to do is export the pictures in bulk and automagically create and send individual emails for each picture, preferably with the person's name in the salutation at the start of the email.
I have done some googling trying to find a solution to this. I think that I need to use Automator. (The Lightroom option of mailing directly from within the programme isn't going to do the job; each mail would need to be addressed and written by hand.) I have been playing around with it, but can see no way to populate the email address field or add the attachment.
Can anyone help me out, with even just the basic outline, of how this would be achieved please?
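Not a full answer, but here is a minimal AppleScript sketch of the emailing half, assuming an exported CSV whose lines look like Name,email@example.com,/path/to/picture.jpg (the file location and column layout are assumptions):

set csvPath to "/Users/you/Desktop/portraits.csv" -- assumed location of the exported CSV
set csvLines to paragraphs of (read POSIX file csvPath)
repeat with csvLine in csvLines
    if (count of csvLine) > 0 then
        -- split "Name,address,path" into its three fields
        set AppleScript's text item delimiters to ","
        set {personName, personEmail, picturePath} to text items of csvLine
        set AppleScript's text item delimiters to ""
        tell application "Mail"
            set theMessage to make new outgoing message with properties {subject:"Your portrait", content:"Dear " & personName & "," & return & return & "Please find your portrait attached." & return, visible:true}
            tell theMessage
                make new to recipient at end of to recipients with properties {address:personEmail}
                make new attachment with properties {file name:POSIX file picturePath} at after the last paragraph of content
            end tell
            -- send theMessage -- uncomment once the drafts look right
        end tell
    end if
end repeat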
I'm curious: why is DynamicOptionsProvider available on watchOS? Is there any way to present options to the user? For example, in the Emoji Rangers project:
struct EmojiRangerSelection: AppIntent, WidgetConfigurationIntent {
    static let intentClassName = "EmojiRangerSelectionIntent"
    static var title: LocalizedStringResource = "Emoji Ranger Selection"
    static var description = IntentDescription("Select Hero")

    @Parameter(title: "Selected Hero", default: EmojiRanger.cake, optionsProvider: EmojiRangerOptionsProvider())
    var hero: EmojiRanger?

    struct EmojiRangerOptionsProvider: DynamicOptionsProvider {
        func results() async throws -> [EmojiRanger] {
            EmojiRanger.allHeros
        }
    }

    func perform() async throws -> some IntentResult {
        return .result()
    }
}
On watchOS we usually use recommendations() to give the user a predefined choice of configured widgets. Meanwhile, in AppIntentProvider the recommendations are empty:
struct AppIntentProvider: AppIntentTimelineProvider {
    ...
    func recommendations() -> [AppIntentRecommendation<EmojiRangerSelection>] {
        []
    }
}
Does it imply that there's a way to use DynamicOptionsProvider on watchOS somehow? BTW, WidgetConfiguration.promptsForUserConfiguration() is one of the methods that are not available on watchOS.
Also, the Emoji Rangers project doesn't show widgets (complications) on watchOS out of the box.
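For what it's worth, the only way I know to surface choices on watchOS is still recommendations(); a hedged sketch building one preconfigured widget per hero (assuming EmojiRanger has a name property):

func recommendations() -> [AppIntentRecommendation<EmojiRangerSelection>] {
    EmojiRanger.allHeros.map { hero in
        // One preconfigured widget per available hero.
        let intent = EmojiRangerSelection()
        intent.hero = hero
        return AppIntentRecommendation(intent: intent, description: "\(hero.name)")
    }
}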
Hi,
In my application I donate AppIntent instances to the system using the donate() API. I recently came across an article that talks about deleting donations, but it does not mention how to handle AppIntent instances.
I am wondering, when working with dynamic AppIntents (whose properties can change in the future), should I be worried about "outdated" donated AppIntent instances? And if so, how can I delete previously donated AppIntent instances?
I’m trying to develop a widget with a button that triggers an app intent.
I integrated the app intent into my app within a separate app framework. I tested it with Shortcuts and Siri, and it works well—it opens the app on the required screen. However, when I added a button Button(intent: MyIntent()) to my widget, it doesn’t work at all.
The only clue I found is the following message in the Xcode debug console:
“No ConnectionContext found for (some big integer)” when I tap on the widget's button.
However, I see the same message when running it through the Shortcuts app, and in that case, it works fine.
Does anyone know what might be causing this issue?
My Intent:
public struct OpenTextInputIntent: AppIntent {
    public static var title: LocalizedStringResource = "Open text input"
    public static var openAppWhenRun: Bool = true

    @Parameter(title: "Predefined text")
    public var predefinedText: String

    @Dependency private var appCoordinator: AppCoordinatorProtocol

    public init() { }

    public func perform() async throws -> some IntentResult {
        appCoordinator.openAddMessage(predefinedText: predefinedText)
        return .result()
    }
}
My widget's view:
struct SimpleWidgetView: View {
    var entry: SimpleWidgetTimelineProvider.Entry

    var body: some View {
        ZStack(alignment: .leadingTop) {
            button
        }
    }

    private var button: some View {
        Button(intent: OpenTextInputIntent()) {
            Image(systemName: "mic.fill")
                .resizable()
                .aspectRatio(contentMode: .fit)
                .iconFrame()
        }
        .buttonStyle(PlainButtonStyle())
        .foregroundStyle(Color.white)
        .padding(10)
        .background(Circle().fill(Color.accent))
    }
}
Intents Registration in the app target:
struct MyAppPackage: AppIntentsPackage {
    static var includedPackages: [any AppIntentsPackage.Type] {
        [FrameworkIntentsPackage.self]
    }
}

struct MyAppShortcutsProvider: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OpenTextInputIntent(),
            phrases: ["Add message in \(.applicationName)"],
            shortTitle: "Message input",
            systemImageName: "pencil.circle.fill"
        )
    }
}
What am I missing?
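(One hedged guess, since Button(intent:) runs the intent in the widget extension's process: perhaps the widget extension target also needs its own AppIntentsPackage registration, mirroring the app target's. A sketch, reusing the names above:)

// In the widget extension target (hypothetical, mirroring the app target's setup).
struct MyWidgetExtensionPackage: AppIntentsPackage {
    static var includedPackages: [any AppIntentsPackage.Type] {
        [FrameworkIntentsPackage.self]
    }
}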
I’ve created several shortcuts that tell me the stock price of a given company. The shortcut queries Yahoo Finance using Get Contents of URL, with the URL
https://finance.yahoo.com/quote/TICKER SYMBOL/, for example
https://finance.yahoo.com/quote/PLTR/ for Palantir or
https://finance.yahoo.com/quote/AAPL/ for Apple, etc.
Then it uses RegEx to parse out the numbers which it then formats and displays in a notification. Simple. It works great for several stocks, but for some reason, it does not work correctly for Palantir. It shows an older “previous close” price. Oddly, when I go to the website myself, it shows me the current stock price.
So for today, Mar 21, https://finance.yahoo.com/quote/PLTR/ shows me $90.96 (correct) but the shortcut, via Get Contents of URL, shows $87.39 (incorrect). This $87.39 price is listed further down the page as a “previous close” price. I don’t get it.
Here is a link to my Palantir shortcut:
https://www.icloud.com/shortcuts/edea6ee0261245f49b078efc74d632dd
Here is a link to my Apple shortcut:
https://www.icloud.com/shortcuts/54a416393203432aa356fe76373e3f8b
So the question is, why does Get Contents of URL show an old stock price but when I go to the site myself, it shows the correct stock price … and only for Palantir? I have about six shortcuts running correctly. Palantir is the only one that does not work.
Been banging my head on this one for weeks. Any advice would be much appreciated.
Thank you,
Rob
iOS can record the screen with one key and record video with one key, but there is no one-key audio recording function, which is really bad 👎 I hope you guys get on it. 💪
Topic:
App & System Services
SubTopic:
Automation & Scripting
Hello, experts!
I'm working on a VOIP application that handles audio calls and integrates with CallKit. The problem occurs when attempting to redial a previously made audio call from the system's call history. When I try to handle the NSUserActivity in the application(_:continue:restorationHandler:) method, it intercepts the INStartAudioCallIntent instead of the expected INStartCallIntent.
Background
Deprecation Warnings: I'm encountering deprecation warnings when using INStartAudioCallIntent and INStartVideoCallIntent:
'INStartAudioCallIntent' was deprecated in iOS 13.0: INStartAudioCallIntent is deprecated. Please adopt INStartCallIntent instead.
'INStartVideoCallIntent' was deprecated in iOS 13.0: INStartVideoCallIntent is deprecated. Please adopt INStartCallIntent instead.
As a result, I need to migrate to INStartCallIntent instead, but the issue is that when trying to redial a call from the system’s call history, INStartAudioCallIntent is still being triggered.
Working with Deprecated Intents: If I use INStartAudioCallIntent or INStartVideoCallIntent, everything works as expected, but I want to adopt INStartCallIntent to align with the current iOS recommendations.
Configuration:
CXProvider Configuration: The CXProvider is configured as follows:
let configuration = CXProviderConfiguration()
configuration.supportsVideo = true
configuration.maximumCallsPerCallGroup = 1
configuration.maximumCallGroups = 1
configuration.supportedHandleTypes = [.generic]
configuration.iconTemplateImageData = UIImage(asset: .callKitLogo)?.pngData()
let provider = CXProvider(configuration: configuration)
Outgoing Call Handle: When making an outgoing call, the CXHandle is created like this:
let handle = CXHandle(type: .generic, value: callId)
Info.plist Configuration: In the Info.plist, the following key is defined:
<key>NSUserActivityTypes</key>
<array>
<string>INStartCallIntent</string>
</array>
Problem:
When trying to redial an audio call from the system's call history, the NSUserActivity received in the application(_:continue:restorationHandler:) method wraps an INStartAudioCallIntent instead of an INStartCallIntent. This happens even though INStartCallIntent is listed in NSUserActivityTypes in the Info.plist and I want to migrate to the newer intent as recommended for iOS 13+.
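For reference, a sketch of the interim handling I'm considering (startOutgoingCall is a placeholder for my own call-start logic): accept both intents until the system stops delivering the deprecated one.

func application(_ application: UIApplication,
                 continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
    switch userActivity.interaction?.intent {
    case let intent as INStartCallIntent:
        // The modern intent (iOS 13+).
        startOutgoingCall(handleValue: intent.contacts?.first?.personHandle?.value)
        return true
    case let intent as INStartAudioCallIntent:
        // Still delivered when redialing from the system call history.
        startOutgoingCall(handleValue: intent.contacts?.first?.personHandle?.value)
        return true
    default:
        return false
    }
}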
Device:
iPhone 13 mini
iOS version 17.6.1
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Foundation
CallKit
Intents
App Intents
Platform and Version
iOS
Development Environment: Xcode 16.2.0, macOS 15.3
Run-time Configuration: iOS 18.3, 17.x
Description of Problem
We have started migrating some of the app’s core functionality over to App Intents.
Our first release of App Intent support focused on two settings a user can modify on their Bose products, Audio Modes and Immersive Audio, giving users the ability to modify these settings via Siri and shortcuts. The implementation uses a separate shortcut for each setting type, with each shortcut supporting a single Siri phrase: “Change my Bose mode to …” and “Change my Bose immersive audio to …”. Each shortcut uses its own App Intent, and each App Intent supports optionally providing both a product and a setting when performing the intent. Failing to provide a device, which happens when the intent is performed via Siri, simply auto-selects a currently connected Bose product. Failing to provide a setting, as when a user says “Change my Bose …” without naming a setting, will simply have Siri confirm the setting the user wants to change before changing it. We are using AppEntity to identify a Bose product for both App Intents. Because the App Intent for the Audio Modes setting has a larger number of supported values (up to 15 maximum), we are also using AppEntity to identify these settings. We are using AppEnum to identify available settings for the Immersive Audio App Intent, as only 3 static values are supported.
Our original implementation of App Intent support had quite a few phrases supported for each shortcut. We had explicit support for direct synonyms of the verb “Change”, supporting words like “Switch” and “Set”. We also had support for words that are similar to “Change” but not directly related, like “Toggle”. And we had support for phrases with or without the setting in each phrase. However, early on we had a lot of trouble with phrase detection with Siri. Siri had a hard time identifying which shortcut was being requested, as well as identifying which setting the user was providing for the setting parameter of each App Intent. While researching potential fixes, we found a response to a thread in the Apple forums (https://developer.apple.com/forums/thread/759909) that seemed to indicate that Siri phrase recognition is very much an aggregate process, with the total number of supported phrases, multiplied by the available settings for each phrase, compounding the total number of phrases Siri needs to learn to recognize for each shortcut. So, to hopefully improve Siri phrase detection, we added logic to limit the number of Audio Mode settings supported based on which Audio Modes the user had set up on their Bose products. But, more importantly, we limited the number of explicit phrases supported for each shortcut to just a single phrase. In our testing, not only did this improve phrase recognition, but synonyms like “Set” or “Switch” seemed to implicitly still be recognized by Siri.
The issues we ran into with Siri phrase detection above have us a bit concerned about scaling App Intent support to other settings and features for our products in the future. Our app supports the ability to modify a large number of settings on users' Bose products, with support constantly expanding to new products as they are released. Our roadmap for App Intent support was initially very ambitious, supporting much more than just the two settings mentioned above. But our initial experience with App Intents has us tempering our expectations a little as far as how much can be supported in total.
One thing we also noticed is the less-than-optimal display of default shortcuts in the Shortcuts app. The default shortcuts appeared like so, with shortcuts displayed based on the available settings for each shortcut:
However, we could not find a way to indicate to users that one particular section pertains specifically to the Audio Mode setting and the other to the Immersive Audio setting. The only information the user has to make this determination is the available settings (or shortcuts) in each section. This may not be immediately clear to a new customer who might be using one of our products for the first time. This display of default shortcuts in the Shortcuts app has us wondering whether our shortcuts implementation is what is intended as far as support for the Shortcuts app is concerned. We surveyed the default shortcuts displayed by other third-party applications; they mostly dealt with navigation, with a single section of default options clearly indicating where the user can navigate with a shortcut. We couldn't find an example of an application supporting the ability to change different setting types, with each setting type having its own available values.
So, to summarize the questions we have concerning App Intent support:
What can we do with our App Intents and Shortcuts implementation to guarantee optimal performance with Siri?
What is an ideal number of phrases to support for each shortcut?
What limitations should we place on the total number of available settings for each shortcut?
Are there phrases that might work better than others for what we’re trying to achieve with App Intent support?
i.e. Is “Change my Bose mode” or “Change my Bose immersive audio” a good phrase to use for this kind of functionality? Or should we be using different verbs or wording?
Assuming optimal support for each shortcut above, what is a reasonable expectation for how many different shortcuts we can scale to support at the same time?
One issue we ran into early on was Siri confusing one shortcut with the other and triggering the wrong App Intent at times. While this was ultimately resolved, this outcome seems much more likely the greater the number of individual shortcuts supported.
Are there any recommendations on how to display these App Intents to customers as far as default shortcuts in the Shortcuts app is concerned?
Is what we currently display for default shortcuts in the Shortcuts app what was initially intended for third party support for App Intents?
If what we are currently displaying is expected, would it be possible to support the ability to provide additional context to each section of default shortcuts displayed? We would like to indicate to the user that one set of shortcuts pertains to the Audio Modes settings, and the other to Immersive Audio. Something along the lines of a section header like some of the first-party apps use.
Are there any recommendations or tips for supporting App Intents, particularly phrases for Siri, in other languages?
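For reference, a minimal sketch of the single-phrase setup described above (the intent and parameter names here are hypothetical stand-ins for our real code):

struct BoseShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        // One explicit phrase per shortcut; in our testing Siri still
        // implicitly recognized close synonyms such as "Set" and "Switch".
        AppShortcut(
            intent: ChangeAudioModeIntent(),
            phrases: ["Change my \(.applicationName) mode to \(\.$mode)"],
            shortTitle: "Change Mode",
            systemImageName: "slider.horizontal.3"
        )
        AppShortcut(
            intent: ChangeImmersiveAudioIntent(),
            phrases: ["Change my \(.applicationName) immersive audio to \(\.$setting)"],
            shortTitle: "Immersive Audio",
            systemImageName: "headphones"
        )
    }
}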
When using NSScriptCommand, is there any way to create an NSAppleEventDescriptor from an NSDictionary with arbitrary keys without using keyASUserRecordFields?
Am I correct in thinking that this constant is deprecated? I ask because there is still active documentation using it.
Is there another way to return a record where the keys aren't known at compile-time?
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Foundation
Frameworks
macOS
AppleScript
While playing around with AppShortcuts I've been encountering some problems getting the invocation phrase detected and/or the parameter recognized after the invocation phrase via Siri. I've found some solutions and explanations here in other posts (Siri not recognizing the parameter in the phrase & Inform iOS about AppShortcutsProvider), but I still have one issue, and it's about consistency.
For context, I've defined the parameter to be an AppEntity with its respective query conforming to the EntityStringQuery protocol, in order to be able to fetch entities with the string given by Siri:
struct AnIntent: AppIntent {
    // other parts hidden for clarity
    @Parameter
    var entity: ModelEntity
}
For an invocation phrase akin to "Do something with … in …", if the user uses the phrase with an entity previously donated via suggestedEntities(), the AppShortcut gets executed without problems. If the user uses a phrase with no parameter, like "do something with …", the user gets asked to input the missing parameter; after inputting one, it may or may not get recognized, and the user may be asked to input a parameter again, as if in a loop. This happens even if the parameter given is one that was donated.
I've found that when this happens, the entities(matching string: String) function in the EntityQuery doesn't get called. The input can be one word or sometimes two and it will not be called. In other words, entities(matching string: String) does not get called on every user parameter input.
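For context, a minimal sketch of the query conformance (ModelStore is a stand-in for my real data source, and I'm assuming ModelEntity has a name property):

struct ModelEntityQuery: EntityStringQuery {
    func entities(for identifiers: [ModelEntity.ID]) async throws -> [ModelEntity] {
        ModelStore.shared.all.filter { identifiers.contains($0.id) }
    }

    func suggestedEntities() async throws -> [ModelEntity] {
        ModelStore.shared.all
    }

    func entities(matching string: String) async throws -> [ModelEntity] {
        // As described above, this does not get called on every parameter input.
        ModelStore.shared.all.filter { $0.name.localizedCaseInsensitiveContains(string) }
    }
}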
Is this behavior correct?
Do parameters have some restrictions on length or anything?
Does Siri show the user suggested entities when asking for entity input? It doesn't on my end.
Additional question related to AppShortcuts:
In the AppShortcut definition, where is the summary inside the parameter presentation used? I saw it defined in the AppIntentsSampleApp for the GetTrailInfo intent but didn't find where it is used.
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Siri and Voice
Shortcuts
App Intents
I'd like to display a list of items to disambiguate for a full-text search intent. Using the Apple AppIntentsSampleApp, I added TrailSearch.swift:
import AppIntents

@AssistantIntent(schema: .system.search)
struct TrailSearch: AppIntent {
    static let title: LocalizedStringResource = "Search Trail"
    static let description = IntentDescription("Search trail by name.",
                                               categoryName: "Discover",
                                               resultValueName: "Trail")

    @Parameter(title: "Trail")
    var criteria: StringSearchCriteria

    func perform() async throws -> some IntentResult & ReturnsValue<TrailEntity> {
        if criteria.term.isEmpty {
            throw $criteria.needsValueError(IntentDialog("need value"))
        }
        let trails = TrailDataManager.shared.trails { trail in
            trail.name.contains(criteria.term)
        }
        if trails.count > 1 {
            throw $criteria.needsDisambiguationError(among: trails.map { StringSearchCriteria(term: $0.name) })
        } else if let firstTrail = trails.first {
            return .result(value: TrailEntity(trail: firstTrail))
        }
        throw $criteria.needsValueError(IntentDialog("Nothing found"))
    }
}
Now when I type "trail", which matches several trails and thus enters the disambiguation code path, the Shortcuts app just displays the dialog title but no disambiguation items to pick from.
Is this by design or a bug?
(filed as FB17412220)
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Siri and Voice
Shortcuts
Intents
App Intents
The built-in Books and iMessages apps on the latest macOS cannot handle Shortcuts properly.
If Books (whether on the home screen or the reading screen) or iMessages is the currently focused application, Shortcuts doesn't work. Once I move away and focus shifts to Finder or any other app, Shortcuts works properly.
An exception is that when I pin the shortcut to the menu bar, the menu bar one works, while the one in the application's menu doesn't. I have no idea why this happens. Could it be part of privilege control or something?
Hi Community,
I'm new to Siri intents and I'm trying to add a Siri intent for Car Commands to my app. The objective is to list my app's cars in Apple Maps. Currently I've created my own target with its corresponding intent handlers, but in the .intentdefinition file of my app I'm not able to find the List Cars intent.
https://developer.apple.com/documentation/sirikit/car-commands
Do I need some auth?
Also I share my info.plist from the IntentExtension.
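(For reference, my understanding is that Car Commands intents are built-in SiriKit intents rather than custom ones, which would explain why they don't appear in a .intentdefinition file; a hedged sketch of adopting the handler protocol directly:)

import Intents

class ListCarsHandler: NSObject, INListCarsIntentHandling {
    func handle(intent: INListCarsIntent,
                completion: @escaping (INListCarsIntentResponse) -> Void) {
        let response = INListCarsIntentResponse(code: .success, userActivity: nil)
        // Populate response.cars with an INCar for each vehicle the app knows about.
        completion(response)
    }
}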
Thank you very much,
David.
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Swift
SiriKit
Intents
App Intents
Sorry if the topic is not exact.
I write Ainu in various Roman/Latin scripts in TextEdit, on an English-GUI Catalina system.
Ainu words can resemble English words: e.g. 'an' in Ainu means 'exist', as in 'Ne Ainu itak an' ('the Ainu language exists'). So the spell checker will not red-dot many words, and some Ainu words also look like other foreign words.
I open LocalDictionary and find it blank, so I open TextEdit and Show Spelling and Grammar: 100 words out of 200 are red-dotted! The others are not learned. So I press 'Learn' and it skips over some words (not all); after 100 it stops. Then I go to LocalDictionary and see all those words in alphabetical order. Great! But what about the rest? Why does it select only half of the words, and only part of a phrase, e.g. 'Itak a-e-yay-/han-nok-kar-a' = to study a language by oneself?
Has anyone here used the PlayVideoIntent protocol while implementing App Intents?
If yes, can you please walk me through what purpose it serves and what features and functionality I can unlock with it?
Link to apple's documentation -> https://developer.apple.com/documentation/appintents/playvideointent
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Video
App Intents
Apple Intelligence
I am creating scripts to automatically switch the wallpapers on my multiple displays. System Events exposes almost all of the options accessible in the Wallpaper pane of System Settings, but not the option to "Show on all Spaces".
I want to add that option to the following script:
tell application "System Events"
set intervalSeconds to 900.0
set wpDir to POSIX file "/Path/to/Folder/"
set picture rotation of every desktop to 1
set random order of every desktop to true
set pictures folder of every desktop to wpDir
set change interval of every desktop to intervalSeconds
do shell script ("killall Dock")
end tell
Also, the foregoing script does not seem to successfully set the interval value, although it does not throw an error. Not sure why that does not work.
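One hedged guess for the interval problem: try applying the interval to each desktop individually rather than to every desktop at once, e.g.:

tell application "System Events"
    -- set the interval one desktop at a time, in case "every desktop"
    -- doesn't propagate the change interval
    repeat with d in desktops
        set change interval of d to 900.0
    end repeat
end tell
do shell script "killall Dock"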
Any thoughts or insights would be welcome.
Thank you
I asked a question similar to this earlier, but I think this is probably the better question.
I have a food-ordering app. When the user wants to pick up food, I'd like for Apple Maps to automatically display the location of the restaurant that the user is driving to.
Calendar does something similar: if there is an event coming up soon, the location in the calendar event shows up in Apple Maps. I'd like to do the same thing.
When the user makes an order, they'll need to drive to the location fairly quickly. So I'd like to launch Apple Maps, see the location of the restaurant where I'm picking up food, and then get directions to it. Bonus points if this also works with CarPlay.
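In case it helps frame answers: the closest thing I know of is explicitly launching Maps with driving directions via MapKit (sketch below, with a hypothetical function name and coordinates), rather than the Calendar-style automatic suggestion, which I haven't found a public API for.

import MapKit

func openMapsForPickup(at coordinate: CLLocationCoordinate2D, named name: String) {
    let mapItem = MKMapItem(placemark: MKPlacemark(coordinate: coordinate))
    mapItem.name = name
    // Launches Apple Maps with driving directions from the user's current location.
    mapItem.openInMaps(launchOptions: [
        MKLaunchOptionsDirectionsModeKey: MKLaunchOptionsDirectionsModeDriving
    ])
}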
Hello Shortcuts community!
I want to obtain a list of my notes, and well, update them, delete them if needed, and so on. These are simple actions that I can already do.
For this, I saw that Shortcuts was pretty simple, and I could get what I wanted and pipe it through the terminal. However, even though I'm a programmer, there's a lot that I'm missing, since I cannot pipe anything to the terminal.
I made a simple shortcut to give me some text, and I could run it via shortcuts run "Example" | cat, which, well, gave me the output, but with a %:
aaa%
Now, I guess this works; the important thing is for me to obtain something from Shortcuts so that I can configure simple things like obtaining a note or a mail, running some JavaScript in the browser, and so on, while obtaining some output via the terminal.
So, I configured something like this:
While I do get a dictionary (only in the Shortcuts app, not in the terminal) like:
{ "Title": "Some title" }
and actually a list of them, I don't have them in an array that I could use for my command. And for some reason I've only been able to obtain either the name or the body.
Now, I put them into a text with Get Text from Repeated Results, but I don't think I have a valid dictionary (JSON) array that I can use, since the terminal doesn't receive anything.
So far I've tried:
echo $(shortcuts run "Find Notes")
echo $(shortcuts run "Find Notes" --output-type public.utf8-plain-text -o -)
shortcuts run "Find Notes" | xargs
I wonder what I'm missing. I'm not creating the array of dictionaries like I'd like, nor outputting it.
On the other hand, I have some AppleScripts that work. However, given that I cannot find much information about the support status of AppleScript, I thought I'd move to Shortcuts, which is still getting updates, and so I'm trying to do this simple example in Shortcuts.
Thanks for taking a look!