I have:
an Automator workflow saved as an application "workflow.app"
an AppleScript saved as an application "script.app"
a PDF file residing on the Desktop, "test.pdf"
How do I launch workflow.app and pass test.pdf as the input for workflow from inside script.app?
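A minimal sketch of what I have in mind from inside script.app (assuming the Automator applet passes any items it is asked to open to its workflow as input):

set pdfFile to (path to desktop as text) & "test.pdf"
tell application "workflow"
    open alias pdfFile
end tell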
Hi,
I am trying to make an app that uses Spotify's web API to play songs. For the web API to work, Spotify needs to be running, and my Mac has to be recognized as an active device. For my Mac to be recognized as an active device, I have to play a song for a very short amount of time (under a second).
I want to make my app do that automatically on launch. I already wrote the AppleScript in Automator, and it worked: it launched Spotify, played a song for 0.5 seconds, then hid itself. After writing the code, I tried to implement it in my app to run on startup, but I ran into a problem. The app only started Spotify on my Mac and gave me an error telling me it wasn't running.
What do I do? Is this an issue with the permissions of the app, or something else? I have given the app the "Apple Events" entitlement.
This is the error I am getting. Note that the app opens Spotify, after which it gives me this.
Error: { NSAppleScriptErrorAppName = Spotify; NSAppleScriptErrorBriefMessage = "Application isn\U2019t running."; NSAppleScriptErrorMessage = "Spotify got an error: Application isn\U2019t running."; NSAppleScriptErrorNumber = "-600"; NSAppleScriptErrorRange = "NSRange: {31, 8}"; }
This is the function I am trying to use to do the actions with Spotify:
func runAppleScript() {
    let appleScript = """
    tell application "Spotify"
        activate
        if player state is not playing then
            play track "spotify:track:5XSKC4d0y0DfcGbvDOiL93"
            delay 1
            pause
        end if
    end tell
    tell application "System Events"
        tell process "Spotify"
            set frontmost to true
            delay 1
            keystroke "h" using {command down}
        end tell
    end tell
    """

    var error: NSDictionary?
    if let scriptObject = NSAppleScript(source: appleScript) {
        scriptObject.executeAndReturnError(&error)
    }
    if let error = error {
        print("Error: \(error)")
    }
}
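For context, my understanding is that a sandboxed or hardened-runtime app needs both of the following before NSAppleScript can deliver Apple events to Spotify (a sketch; the usage string is only a placeholder):

<!-- .entitlements -->
<key>com.apple.security.automation.apple-events</key>
<true/>

<!-- Info.plist -->
<key>NSAppleEventsUsageDescription</key>
<string>Controls Spotify briefly so this Mac registers as an active device.</string>

With those in place, macOS shows the Automation consent prompt on first use, and the app should then appear under Privacy & Security > Automation.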
Any help is appreciated. Thank you in advance.
Topic:
App & System Services
SubTopic:
Automation & Scripting
Hi folks,
I've got some music that I want playing in iTunes all the time on an older system, but it'll sometimes stop. I tried making an AppleScript to check and play the music/playlist again if it stops, but I keep getting a timeout error.
This is the AppleScript:
repeat
    tell application "iTunes"
        if player state is paused then
            tell application "iTunes" to play
        end if
        delay 30
    end tell
end repeat
I get this error:
AppleEvent timed out.
iTunes got an error: AppleEvent timed out. (-1712)
I can't figure out why I'm getting a timeout error... anyone have any ideas?
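For reference, one variant worth comparing keeps the delay outside the tell block and wraps the player-state check in an explicit timeout (just a sketch, not yet tried on that system):

repeat
    with timeout of 10 seconds
        tell application "iTunes"
            if player state is paused then play
        end tell
    end timeout
    delay 30
end repeat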
We are trying to open an application, "xyz.app". It worked fine up through version 15.1.1, but we are facing issues with 15.2 and 15.3.
The application works fine when we navigate to xyz.app/Contents/MacOS/ and run the applet in that directory, but the error "Not authorized to send Apple events to Terminal" occurs when we try to open the app directly.
We have tried all the available solutions, such as giving Full Disk Access to Terminal and the application, and adding the application to Automation in the Privacy & Security settings.
Any help would be appreciated.
Thanks!
Hi everyone,
I'm trying to use Automator to batch-process PDF files. I have hundreds of academic journal article PDFs whose page sizes vary from 5 x 7 inches to A4 (8.27 x 11.69 inches). I want to scale all of the PDFs to US Letter size (8.5 x 11.0 inches), so that smaller originals stay at 100% scale but are centered on the larger page, and larger originals are scaled down to fit the page.
Manually opening a PDF in Preview.app, scaling it to US Letter paper, and using the Save as PDF option is producing the desired output for me. I captured my workflow using the Watch Me Do action in Automator, then adjusted it into the following AppleScript.
-- a Get Specified Finder Items action to specify the input PDFs precedes this script
on run {input, parameters}
    repeat with filePath in input
        -- Open the file in Preview
        tell application "Preview"
            open filePath
            activate
        end tell
        -- Give Preview some time to open the file
        delay 2.0
        -- Press ⌘P
        set timeoutSeconds to 0.25
        set uiScript to "keystroke \"p\" using command down"
        my doWithTimeout(uiScript, timeoutSeconds)
        -- Click the “Scale to Fit:” radio button.
        delay 2.0
        set timeoutSeconds to 2.0
        set uiScript to "click radio button \"Scale to Fit:\" of radio group 1 of group 1 of group 2 of scroll area 2 of splitter group 1 of sheet 1 of window 1 of application process \"Preview\""
        my doWithTimeout(uiScript, timeoutSeconds)
        -- Click the “<fill in title>” menu button.
        delay 4.0
        set timeoutSeconds to 2.000000
        set uiScript to "click menu button 1 of group 2 of splitter group 1 of sheet 1 of window 1 of application process \"Preview\""
        my doWithTimeout(uiScript, timeoutSeconds)
        -- Save as PDF…
        delay 2.0
        set timeoutSeconds to 2.0
        set uiScript to "click menu item 2 of menu 1 of splitter group 1 of sheet 1 of window 1 of application process \"Preview\""
        my doWithTimeout(uiScript, timeoutSeconds)
        -- Click the “Save” button.
        delay 8.0
        set timeoutSeconds to 2.0
        set uiScript to "click UI Element \"Save\" of sheet 1 of sheet 1 of window 1 of application process \"Preview\""
        my doWithTimeout(uiScript, timeoutSeconds)
        -- Press ⌘W to close the file
        delay 0.25
        set timeoutSeconds to 2.0
        set uiScript to "keystroke \"w\" using command down"
        my doWithTimeout(uiScript, timeoutSeconds)
    end repeat
    return input
end run

on doWithTimeout(uiScript, timeoutSeconds)
    set endDate to (current date) + timeoutSeconds
    repeat
        try
            run script "tell application \"System Events\"
" & uiScript & "
end tell"
            exit repeat
        on error errorMessage
            if ((current date) > endDate) then
                error "Can not " & uiScript
            end if
        end try
    end repeat
end doWithTimeout
My problem arises at the Save as PDF step. When this action runs, I see the PDF menu pop open, but the Save as PDF… menu item doesn't get clicked. Instead, I get an error.
Can anyone advise on how to overcome this error?
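In case a non-UI-scripting route is acceptable, here is a sketch that redraws each page onto a US Letter media box with Core Graphics instead of driving Preview's print sheet (it ignores page rotation, assumes a zero-origin media box, and the file URLs are placeholders):

import Foundation
import CoreGraphics

// US Letter in PDF points (72 points per inch)
var letterBox = CGRect(x: 0, y: 0, width: 8.5 * 72, height: 11.0 * 72)

func scaleToLetter(input: URL, output: URL) {
    guard let source = CGPDFDocument(input as CFURL),
          source.numberOfPages > 0,
          let context = CGContext(output as CFURL, mediaBox: &letterBox, nil) else { return }

    for pageNumber in 1...source.numberOfPages {
        guard let page = source.page(at: pageNumber) else { continue }
        let pageBox = page.getBoxRect(.mediaBox)

        // Shrink larger pages to fit Letter; leave smaller pages at 100%
        let scale = min(1.0, min(letterBox.width / pageBox.width,
                                 letterBox.height / pageBox.height))

        context.beginPDFPage(nil)
        context.saveGState()
        // Center the (possibly scaled) page on the Letter-sized page
        context.translateBy(x: (letterBox.width - pageBox.width * scale) / 2,
                            y: (letterBox.height - pageBox.height * scale) / 2)
        context.scaleBy(x: scale, y: scale)
        context.drawPDFPage(page)
        context.restoreGState()
        context.endPDFPage()
    }
    context.closePDF()
}

Something like this could be called once per input file from a small command-line tool rather than clicking through the print sheet for each PDF.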
Recently I've been making shortcuts, and I noticed that I can prompt ChatGPT from a shortcut to add extra functionality. I like this idea, but there is one flaw with it: there is no way to remove chats automatically. That would be fine, except that every time the shortcut is run it creates a new chat. Normally I would have the shortcut remove the chat after it finishes, but since I can't, the chats pile up, and it would be tedious to delete them all manually.
Also, sorry if I chose the wrong topic and subtopic; I couldn't find a topic about Shortcuts.
I want to create automations that run when the first person arrives home.
When setting up the automation on the owner's device, everything seems to be correct, yet the automation doesn't work properly. My girlfriend is listed as an admin.
When she looks at the automation, the location of my phone is unknown.
The issue is that this leads to the following behaviour: when I come home, the automation checks whether my girlfriend is home. All good. But when my girlfriend comes home, the automation doesn't check where I am and just executes directly.
Hello,
Relatively new to AppleScript in its current generation (I used it back in the 2010s) and would like some help if someone can point me in the right direction.
1. Is AppleScript the best/only way to interact with the Notes application? (I'm on Sequoia.)
1.1 I've tried using an LLM to generate a Swift app, but it still calls out to AppleScript, so I'm wondering if I'm missing something.
1.2 If I'm going down a rabbit hole, I'd like to stop, since I want to finish this quick task and move on, and/or fall deeply in love with AppleScript... whichever comes first.
2. Is there a better way to write these scripts? Script Editor is still a minimal IDE; I'd love to find something that does some autocompletion/suggestions, because the documentation in Script Editor is still a tad weak. (I'm used to interpreted languages like bash, Ruby, etc., where if I don't understand something I can just dig into the code, instead of terse documentation that just exposes public endpoints and doesn't tell you much more. :( )
My problem: I'd like to set up a cron job that periodically checks my notes and cleans up the shared ones. Basically it's a shared set of notes with checklists on them (weekly chores, etc.). I want to read the notes, find out which items have been marked as checked; reset the ones that are done, leave unfinished ones alone, and reset the old ones.
This is how far I've gotten:
let appleScript = """
tell application "Notes"
    set targetNote to note "Test" of default account
    return body of targetNote
end tell
"""
That works like a charm. It's kind of dumb, though, because I'd rather use an ID for the note rather than the name :(
It returns the following:
<div><b><span style=\\"font-size: 24px\\">Test</span></b></div>
<div><br></div>
<ul>
<li> Not Done</li>
<li>Done</li>
<li>Not Done yet</li>
</ul>
<div><br></div>
<div>Single line</div>
Which is a good start!
Issues:
There is no way to tell which item is marked "Checked" and which one is not :(
Any help is greatly appreciated!
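As a side note on the ID point, a sketch of looking the note up by its id instead of its name (assuming the standard AppleScript by-id reference form works for Notes, and that the id was captured earlier, e.g. from get id of note "Test" of default account):

tell application "Notes"
    -- theNoteID is a previously captured id string, e.g. "x-coredata://..."
    set targetNote to note id theNoteID of default account
    return body of targetNote
end tell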
I'll ask Siri: "What is the weather?"
and will get a valid response.
I'll ask Siri to execute a shortcut my app has created
I get "what is the order?" (a phrase nowhere in my app)
I'll repeat the question about the weather
I now get "what is the order?"
***?
We have a watchOS app that provides many configurable widgets. Those widgets are configured and installed with the help of an AppIntent:
public struct RectComplAppIntent: AppIntent, WidgetConfigurationIntent, CustomIntentMigratedAppIntent {
    @Parameter(title: "Style")
    var style: String?
    ....
}
However, when I print the WidgetInfos from getCurrentConfigurations(), I sometimes get nil for the configuration, and at the same time the widgets are not loaded. Exact steps:
The user installs the pre-configured .watchface.
The complications are not loaded because the configuration is missing. Printing getCurrentConfigurations() gives entries like this:
WidgetInfo:
- configuration: nil
- widgetConfigurationIntent: nil
- family: accessoryRectangular
- kind: Rectangle
Then the user force-touches the face and opens editing mode. Returning to the watch app and printing the infos shows:
WidgetInfo:
- configuration: <INIntent: 0x780d290> {
style = vol1Logo;
}
- widgetConfigurationIntent: nil
- family: accessoryRectangular
- kind: Rectangle
Suddenly the intent appears with the correct style and the complications start to show up.
Why do you think this happens? Why does every WidgetInfo have a nil intent (configuration) right after the .watchface install, and what makes them load later?
You can try this face yourself: https://cdn.watchfaces.co/watchfaces/glance-minimalist.watchface
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
WatchKit
watchOS
WidgetKit
App Intents
Hi, I want to open an email message with AppleScript. Everything is working correctly, but in the Mail app, instead of focusing on targetMessage, it highlights the email after the target message.
When I use:
tell targetMessage to open
the correct email opens in a new window, but the wrong email is highlighted in the message list in the Mail app.
tell application "Mail"
activate
set targetAccount to missing value
repeat with anAccount in every account
if name of anAccount is "AccountName" then
set targetAccount to anAccount
exit repeat
end if
end repeat
if targetAccount is not missing value then
set targetBox to missing value
repeat with aBox in mailboxes of targetAccount
if name of aBox is "MailboxName" then
set targetBox to aBox
exit repeat
end if
end repeat
if targetBox is not missing value then
set targetMessage to missing value
set oneWeekAgo to (current date) - (7 * days)
set filteredMessages to (every message of targetBox whose date received ≥ oneWeekAgo)
repeat with aMessage in filteredMessages
try
if message id of aMessage is "MessageID" then
set targetMessage to aMessage
exit repeat
end if
end try
end repeat
if targetMessage is not missing value then
if (count of message viewers) > 0 then
set mailViewer to message viewer 1
else
set mailViewer to make new message viewer
end if
tell mailViewer
set selected mailboxes to {targetBox}
delay 0.2
set selected messages to {targetMessage}
end tell
return "Found"
else
return "Message Not found"
end if
else
return "Folder Not found"
end if
else
return "Account Not found"
end if
end tell
Why is this behavior happening?
I have an AppIntent in one of my applications, which shows up in Shortcuts. I would like to run this intent/shortcut from my other application without jumping to Shortcuts with a deeplink. Is this possible?
I have a custom intent. When my app is unable to complete the resolution of a parameter within the app extension, I need to be able to continue within the app. I am unable to figure out the correct Objective-C syntax to make execution continue in the app. Here is what I have tried:
completion([[PickWoodIntentResponse init] initWithCode:PickWoodIntentResponseCodeContinueInApp userActivity:nil]);
This results in the following error:
Implicit conversion from enumeration type 'enum PickWoodIntentResponseCode' to different enumeration type 'INAnswerCallIntentResponseCode' (aka 'enum INAnswerCallIntentResponseCode')
I have no idea why it is referring to the enum type of 'INAnswerCallIntentResponseCode' which is unrelated to my app.
I have also tried:
PickWoodIntentResponse *response = [[PickWoodIntentResponse init] initWithCode:PickWoodIntentResponseCodeContinueInApp userActivity:nil];
completion(response);
but that results in 2 errors:
Implicit conversion from enumeration type 'enum PickWoodIntentResponseCode' to different enumeration type 'INAnswerCallIntentResponseCode' (aka 'enum INAnswerCallIntentResponseCode')
and
Incompatible pointer types passing 'PickWoodIntentResponse *' to parameter of type 'INStringResolutionResult *'
The relevant autogenerated code provided to me with the creation of my intent is as follows:
@class PickWoodIntentResponse;
@protocol PickWoodIntentHandling <NSObject>
- (void)resolveVarietyForPickWood:(PickWoodIntent *)intent withCompletion:(void (^)(INStringResolutionResult *resolutionResult))completion NS_SWIFT_NAME(resolveVariety(for:with:)) API_AVAILABLE(ios(13.0), macos(11.0), watchos(6.0));
@end
typedef NS_ENUM(NSInteger, PickWoodIntentResponseCode) {
PickWoodIntentResponseCodeUnspecified = 0,
PickWoodIntentResponseCodeReady,
PickWoodIntentResponseCodeContinueInApp,
PickWoodIntentResponseCodeInProgress,
PickWoodIntentResponseCodeSuccess,
PickWoodIntentResponseCodeFailure,
PickWoodIntentResponseCodeFailureRequiringAppLaunch
}
@interface PickWoodIntentResponse : INIntentResponse
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithCode:(PickWoodIntentResponseCode)code userActivity:(nullable NSUserActivity *)userActivity NS_DESIGNATED_INITIALIZER;
@property (readonly, NS_NONATOMIC_IOSONLY) PickWoodIntentResponseCode code;
@end
Am I overlooking something? What would be the proper syntax to have within the completion block to satisfy the compiler?
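A sketch of the direction the declarations above seem to point (it assumes the generated protocol also declares a handlePickWood:completion: method whose completion takes a PickWoodIntentResponse, which isn't shown in the excerpt; variety is a placeholder for the resolved string):

// The resolve completion expects an INStringResolutionResult, not an intent response:
completion([INStringResolutionResult successWithResolvedString:variety]);

// Continuing in the app would instead be signaled from the handle method,
// whose completion block does take a PickWoodIntentResponse (note alloc, not init):
- (void)handlePickWood:(PickWoodIntent *)intent
            completion:(void (^)(PickWoodIntentResponse *response))completion {
    completion([[PickWoodIntentResponse alloc] initWithCode:PickWoodIntentResponseCodeContinueInApp
                                               userActivity:nil]);
}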
I am trying to write a unit test for an AppIntent and override the AppDependencyManager so I can inject dependencies for the purposes of testing. When I run a test, the app crashes with:
AppIntents/AppDependencyManager.swift:120: Fatal error: AppDependency of type Int.Type was not initialized prior to access. Dependency values can only be accessed inside of the intent perform flow and within types conforming to _SupportsAppDependencies unless the value of the dependency is manually set prior to access.
App Intent:
import AppIntents
struct TestAppIntent: AppIntent {
    @AppDependency var count: Int

    static var title: LocalizedStringResource { "Test App Intent " }

    func perform() async throws -> some IntentResult {
        print("\(count)")
        return .result()
    }
}

extension TestAppIntent {
    init(dependencyManager: AppDependencyManager) {
        _count = AppDependency(manager: dependencyManager)
    }
}
Unit Test:
import Testing
import AppIntents
@testable import AppIntentTesting

struct TestAppIntentTests {
    @Test("test")
    func test() async throws {
        let dependencyManager = AppDependencyManager()
        dependencyManager.add(dependency: 5)

        let appIntent = TestAppIntent(dependencyManager: dependencyManager)
        _ = try await appIntent.perform()
    }
}
I am trying to add certain shortcuts based on user eligibility for a feature; however, I get an error if I try to use conditions in appShortcuts of my AppShortcutsProvider. If I instead build a static list and return it from appShortcuts, the shortcuts are not displayed. Can anyone help with an example of updating the appShortcuts of an AppShortcutsProvider dynamically? Is that supported at all?
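A sketch of one shape this could take, with hypothetical names (FeatureIntent, FeatureFlags, and the error type are placeholders). Conditions in the appShortcuts builder typically aren't supported and the phrases are extracted at build time, so the eligibility gate lives inside the intent rather than in the shortcut list:

import AppIntents

struct FeatureShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        // Always declare the shortcut here...
        AppShortcut(
            intent: FeatureIntent(),
            phrases: ["Use the feature in \(.applicationName)"],
            shortTitle: "Feature",
            systemImageName: "star"
        )
    }
}

struct FeatureIntent: AppIntent {
    static var title: LocalizedStringResource = "Use Feature"

    func perform() async throws -> some IntentResult {
        // ...and gate eligibility at run time instead (hypothetical check).
        guard FeatureFlags.isEligible else {
            throw FeatureError.notEligible
        }
        return .result()
    }
}

enum FeatureFlags { static var isEligible = false }
enum FeatureError: Error { case notEligible }

// After entity-backed parameter data changes, parameterized phrases can be
// refreshed with FeatureShortcuts.updateAppShortcutParameters().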
Topic:
App & System Services
SubTopic:
Automation & Scripting
I've created an app that grabs the current URL and Title/name from the frontmost window/tab of Safari or any of a number of Chromium browsers, using NSAppleScript. The app sits in the menu bar and can be summoned by shortcut key combo.
let script = """
tell application \"Safari\"
    if not (exists front window) then return {\"\", \"\"}
    set theTab to current tab of front window
    set theURL to URL of theTab
    set theTitle to name of theTab
    return {theURL, theTitle}
end tell
"""
if let appleScript = NSAppleScript(source: script) {
    let output = appleScript.executeAndReturnError(&error)
    if output.numberOfItems == 2 {
        let url = output.atIndex(1)?.stringValue
        let title = output.atIndex(2)?.stringValue
        if let url = url, !url.isEmpty {
            return (url, title)
        }
    }
}
If I sign an archived build and run it locally it works beautifully, no matter which browser I am using.
But the URL/title grabbing breaks in the sandbox due to permissions.
I've read, and have been told, that I need to use the com.apple.security.scripting-targets entitlement. The example for this is from WWDC 2012 and covers accessing the Mail compose window.
<key>com.apple.security.scripting-targets</key>
<dict>
    <key>com.apple.mail</key>
    <array>
        <string>com.apple.mail.compose</string>
    </array>
</dict>
However, I don't want to control the app or use any access groups, as I've looked through the sdef and Safari/Chrome do not provide any access groups whose contents I'm interested in.
I just want to get the property/values of a window/tab. So I think I could be quite restrictive about the read-only access to two properties or objects that I need.
That said, I'm going back and forth with TestFlight review kind of shooting in the dark. I need help!
So I figure it's time to ask: what content should my entitlement have?
Or am I on the wrong path entirely?
I know it's possible because an app called Neptunes does it to get properties from Music.app
Many thanks in advance,
matt
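For what it's worth, the only entitlement form I know of that names Safari directly is the temporary exception, shown here as a sketch (whether App Review accepts it for a sandboxed build is a separate question):

<key>com.apple.security.temporary-exception.apple-events</key>
<array>
    <string>com.apple.Safari</string>
</array>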
When we use the "Find All Reminders" shortcut, there are two filters: "Is Completed" and "Is Not Completed".
When I implement this in my app, the best I can get is just "Completed" and "Not Completed"; I can't figure out how to add the "Is" in front.
In my entity:
@Property(title: "Completed")
var completed : Bool
In the EntityPropertyQuery:
static var properties = QueryProperties {
    Property(\GTDItemAppEntity.$list) {
        EqualToComparator { NSPredicate(format: "list.uuid = %@", $0.id as NSUUID) }
    }
    Property(\GTDItemAppEntity.$text) {
        ContainsComparator { NSPredicate(format: "text CONTAINS[cd] %@", $0) }
        EqualToComparator { NSPredicate(format: "text = %@", $0) }
    }
    Property(\GTDItemAppEntity.$completed) {
        EqualToComparator { NSPredicate(format: $0 ? "completed = YES" : "completed = NO") }
    }
}
If I change the property to
@Property(title: "Is Completed")
var completed : Bool
Then it will show as "Is Completed" and "Not Is Completed" in the filter!
Reminder: (screenshot)
My App: (screenshot)
This implementation works very well for Spotlight and App Shortcuts, but the voice commands through Siri don't work.
AppShortcutsProvider
import AppIntents

struct CustomerAppIntentProvider: AppShortcutsProvider {
    @AppShortcutsBuilder static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StoresAppIntent(),
            phrases: ["Mostre as lojas do \(.applicationName)"],
            shortTitle: LocalizedStringResource("Lojas"),
            systemImageName: "storefront"
        )
    }
}
Example of the AppIntent:
import AppIntents
import Foundation
import Loyalty
import ResourceKit
import UIKit

struct StoresAppIntent: AppIntent {
    static var title: LocalizedStringResource = "Mostrar as lojas"
    static var description: IntentDescription? = "Este atalho mostra as lojas disponiveis no app"
    static var openAppWhenRun: Bool = true
    static var isDiscoverable: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        if let url = URL(string: "app://path") {
            UIApplication.shared.open(url, options: [:], completionHandler: { (success) in
                if success {
                    print("Opened \(url)")
                } else {
                    print("Failed to open \(url)")
                }
            })
        }
        return .result()
    }
}
Basically that's what I did
Our apps have a minimum deployment target of iOS 17, and I tested on an iPhone 11 with the device language set to Portuguese and Siri in Portuguese.
Hi,
I'm developing an app which, just like the Clock app, uses multiple counters.
I want to speak Siri commands, such as “Siri, count for one hour”. ‘count’ is the alternative app name.
My AppIntent has a parameter, and Siri understands if I say “Siri, count” and asks for duration in a separate step. It runs fine, but I can’t figure out how to run the command with the duration specified upfront, without any subsequent questions from Siri.
Clock App has this functionality, so it can be done.
    // title
    // perform()
    @Parameter(title: "Duration")
    var minutes: Measurement<UnitDuration>
}
I have a struct ShortcutsProvider: AppShortcutsProvider, but phrases accept only parameters of type AppEnum or AppEntity.
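A sketch of the direction that seems to fit that constraint: fold the duration into the spoken phrase via an AppEnum parameter (the enum, its cases, and the counter call are hypothetical):

import AppIntents
import Foundation

// Hypothetical fixed set of durations so the value can appear in the phrase
enum CountDuration: String, AppEnum {
    case fifteenMinutes, thirtyMinutes, oneHour

    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Duration")
    static var caseDisplayRepresentations: [CountDuration: DisplayRepresentation] = [
        .fifteenMinutes: "15 minutes",
        .thirtyMinutes: "30 minutes",
        .oneHour: "one hour"
    ]
}

struct CountIntent: AppIntent {
    static var title: LocalizedStringResource = "Count"

    @Parameter(title: "Duration")
    var duration: CountDuration

    func perform() async throws -> some IntentResult {
        let seconds: TimeInterval
        switch duration {
        case .fifteenMinutes: seconds = 15 * 60
        case .thirtyMinutes: seconds = 30 * 60
        case .oneHour: seconds = 60 * 60
        }
        // CounterStore.shared.start(for: seconds)  // hypothetical call into the app
        _ = seconds
        return .result()
    }
}

struct CounterShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: CountIntent(),
            phrases: ["\(.applicationName) for \(\.$duration)"],
            shortTitle: "Count",
            systemImageName: "timer"
        )
    }
}

With that phrase and "count" as the alternative app name, "Siri, count for one hour" should carry the duration up front instead of triggering a follow-up question.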
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
Siri and Voice
SiriKit
App Intents
Hello,
I’m working on integrating SiriKit with my music app using INPlayMediaIntent. My app is live on TestFlight, and the Siri command is being recognized, but mediaItems is always empty in my Intent
Demo Project
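For reference, a sketch of the handler shape this flow usually expects (the catalog lookup is a placeholder; Siri often delivers only a mediaSearch that the handler must resolve into concrete items itself):

import Intents

class PlayMediaHandler: NSObject, INPlayMediaIntentHandling {
    func resolveMediaItems(for intent: INPlayMediaIntent,
                           with completion: @escaping ([INPlayMediaMediaItemResolutionResult]) -> Void) {
        guard let search = intent.mediaSearch else {
            completion([INPlayMediaMediaItemResolutionResult.unsupported()])
            return
        }
        // Hypothetical lookup in the app's own catalog
        let item = INMediaItem(identifier: "song-123",
                               title: search.mediaName,
                               type: .song,
                               artwork: nil)
        completion(INPlayMediaMediaItemResolutionResult.successes(with: [item]))
    }

    func handle(intent: INPlayMediaIntent,
                completion: @escaping (INPlayMediaIntentResponse) -> Void) {
        completion(INPlayMediaIntentResponse(code: .handleInApp, userActivity: nil))
    }
}

If mediaItems is empty at handle time, it's worth checking whether resolveMediaItems is implemented and actually being called, since the items returned there are what end up on the intent.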