Hi,
I'm trying to create a launch daemon that uses XPC to receive requests from an unprivileged app. Ultimately both components will be written in Go. For now I'm trying to write a PoC in Objective-C to make sure I get everything right, so I'm compiling / signing from the CLI and writing plist files by hand -- I'm not using Xcode.
My current daemon code is pretty much the same as the boilerplate that Xcode generates when creating a new 'XPC Service':
#import <stdio.h>
#include <stdlib.h>
#include <xpc/xpc.h>

int main(int argc, char *argv[]) {
    xpc_rich_error_t error = NULL;
    dispatch_queue_t queue = dispatch_queue_create("com.foobar.daemon", DISPATCH_QUEUE_SERIAL);
    xpc_listener_t listener = xpc_listener_create(
        "com.foobar.daemon",
        queue,
        XPC_LISTENER_CREATE_NONE,
        ^(xpc_session_t _Nonnull peer) {
            xpc_session_set_incoming_message_handler(peer, ^(xpc_object_t _Nonnull message) {
                int64_t firstNumber = xpc_dictionary_get_int64(message, "firstNumber");
                int64_t secondNumber = xpc_dictionary_get_int64(message, "secondNumber");

                // Create a reply and send it back to the client.
                xpc_object_t reply = xpc_dictionary_create_reply(message);
                xpc_dictionary_set_int64(reply, "result", firstNumber + secondNumber);

                xpc_rich_error_t replyError = xpc_session_send_message(peer, reply);
                if (replyError) {
                    printf("Reply failed, error: %s\n", xpc_rich_error_copy_description(replyError));
                }
            });
        },
        &error);
    if (error != NULL) {
        printf("ERROR: %s\n", xpc_rich_error_copy_description(error));
        exit(1);
    }
    printf("Created listener: %s\n", xpc_listener_copy_description(listener));

    // dispatch_main() never returns; the listener keeps servicing peers on `queue`.
    dispatch_main();
    return 0;
}
I'm compiling, signing and installing my daemon with the following commands:
build_foobar() {
    clang -Wall -x objective-c -o com.foobar.daemon poc/main.m
    codesign --force --verify --verbose --options=runtime \
        --identifier="com.foobar.daemon" \
        --sign="Mac Developer: Albin Kerouanton (XYZ)" \
        --entitlements=poc/entitlements.plist \
        com.foobar.daemon
}

install_foobar() {
    sudo cp com.foobar.daemon /Library/PrivilegedHelperTools/com.foobar.daemon
    sudo cp poc/com.foobar.daemon.plist /Library/LaunchDaemons/com.foobar.daemon.plist
    sudo launchctl bootout system/com.foobar.daemon || true
    sudo launchctl bootstrap system /Library/LaunchDaemons/com.foobar.daemon.plist
}
Here's the content of my entitlements.plist file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.application-identifier</key>
    <string>ABCD.com.foobar.daemon</string>
</dict>
</plist>
And finally, here's my launchd plist file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.foobar.daemon</string>
    <key>Program</key>
    <string>/Library/PrivilegedHelperTools/com.foobar.daemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Library/PrivilegedHelperTools/com.foobar.daemon</string>
    </array>
    <key>RunAtLoad</key>
    <false/>
    <key>StandardOutPath</key>
    <string>/tmp/com.foobar.daemon.out.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/com.foobar.daemon.err.log</string>
    <key>Debug</key>
    <true/>
</dict>
</plist>
Whenever I start my service using sudo launchctl start com.foobar.daemon, it exits with the following error message:
ERROR: Unable to activate listener: failed at listener activation with error 1 - Operation not permitted
System logs don't show anything interesting -- they just repeat the same error message. I tried adding / removing some properties from both the entitlements file and the launchd plist, but to no avail.
Any idea what's going wrong?
Processes & Concurrency
Discover how the operating system manages multiple applications and processes simultaneously, ensuring smooth multitasking performance.
Hi,
I have a requirement in iOS where an application needs to run in the background.
It can be a simple hello-world program running in the background.
Could you shed some light on what the expected behaviour is, and whether this is allowed on iOS?
We would be creating N NWListener objects and M NWConnection objects in our process's communication subsystem: server sockets, client sockets accepted on the server, and client sockets on the clients.
Both NWConnection and NWListener rely on a DispatchQueue to deliver state changes, incoming connections, send/recv completions, etc.
What DispatchQueues should I use, and why?
The global concurrent dispatch queue (and which QoS?) for all NWConnection and NWListener objects?
One custom concurrent queue (which QoS?) for all NWConnection and NWListener objects? (Does that get targeted to one of the global queues anyway?)
One custom concurrent queue per NWConnection and NWListener, all targeted to the global concurrent dispatch queue (and which QoS?)?
One custom concurrent queue per NWConnection and NWListener, all targeted to a single custom concurrent target queue?
For every option above, how am I impacted in terms of parallelism, concurrency, throughput and latency, and how is the overall system impacted (with other processes also running)?
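For illustration only (not an answer to the QoS questions above), here is a rough sketch of one arrangement we are considering: a queue for the listener and one queue per accepted connection, all targeting a single root queue. Labels and the port are placeholders.

import Network

// Sketch: listener on its own queue, one queue per accepted connection,
// everything funneled through a shared root queue.
func startServer() throws -> NWListener {
    let rootQueue = DispatchQueue(label: "com.example.net.root", qos: .userInitiated)
    let listenerQueue = DispatchQueue(label: "com.example.net.listener", target: rootQueue)

    let listener = try NWListener(using: .tcp, on: 4242)
    listener.newConnectionHandler = { connection in
        // A dedicated queue per connection keeps that connection's callbacks ordered.
        let connectionQueue = DispatchQueue(label: "com.example.net.connection", target: rootQueue)
        connection.stateUpdateHandler = { state in
            print("connection state:", state)
        }
        connection.start(queue: connectionQueue)
    }
    listener.start(queue: listenerQueue)
    return listener   // the caller must keep a reference to keep the listener alive
}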
Separate questions (sorry for the digression):
Are global concurrent queues specific to a process or shared across all processes on a device?
Can I safely use setSpecific on global dispatch queues in our app?
I understand that GCD and its underlying implementation have evolved over time, and many things have not been spelled out explicitly in Apple's documentation.
I understand most of the concepts of DispatchQueue (serial and concurrent queues), DispatchQoS, target queues, and the system-provided queues (main and global).
I have some doubts and questions to clarify:
[Main Dispatch Queue] [Link] Because the main queue doesn't behave entirely like a regular serial queue, it may have unwanted side-effects when used in processes that are not UI apps (daemons). For such processes, the main queue should be avoided. What does this mean? Can you elaborate?
[Global Concurrent Dispatch Queues] Are they global to a process or shared across processes on a device? I believe it is the former, but just wanted to be sure.
[Global Concurrent Dispatch Queues] Does the system create 4 (one per QoS) * 2 (overcommitting and non-overcommitting) = 8 queues in all? When does each type of queue come into play?
[Custom Queue][Target Queue concept] [swift-corelibs-libdispatch/man/dispatch_queue_create.3] QUOTE The default target queue of all dispatch objects created by the application is the default priority global concurrent queue. UNQUOTE Is this still true?
We could not find a mention of this in any recent official Apple documentation (though some old forum threads (one more) and GitHub code documentation indicate the same).
The official documentation only says:
[dispatch_set_target_queue] QUOTE If you want the system to provide a queue that is appropriate for the current object UNQUOTE
[dispatch_queue_create_with_target] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT to set the target queue to the default type for the current dispatch queue. UNQUOTE
[Dispatch>DispatchQueue>init] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT if you want the system to provide a queue that is appropriate for the current object. UNQUOTE
What is the difference between passing target queue as 'nil' vs 'DISPATCH_TARGET_QUEUE_DEFAULT' to DispatchQueue init?
[Custom Queue][Target Queue concept] [dispatch_set_target_queue] QUOTE The system doesn't allocate threads to the dispatch queue if it has a target queue, unless that target queue is a global concurrent queue. UNQUOTE
The system does allocate threads to custom dispatch queues that have a global concurrent queue as their default target.
What does that mean? What does targeting a global concurrent queue imply in that case?
[System / GCD Thread Pool] that executes work items from a DispatchQueue: Is this thread pool per queue? Per process, shared across its queues? Or per device, shared across processes?
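To make the target-queue part of these questions concrete, here is a small sketch (labels are hypothetical) of what targeting one custom queue at another looks like:

import Dispatch

// A concurrent queue that targets a serial queue is still constrained by that
// serial queue, so its blocks end up executing one at a time. The serial root,
// in turn, draws its threads from the GCD thread pool.
let root = DispatchQueue(label: "com.example.root", qos: .utility)   // serial
let workers = DispatchQueue(label: "com.example.workers",
                            attributes: .concurrent,
                            target: root)

for i in 0..<4 {
    workers.async { print("work item \(i)") }   // serialized through `root`
}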
When using the continuation API, we're required to call resume exactly once. While withCheckedContinuation helps catch runtime issues during debugging, I'm looking for ways to catch such errors at compile time or through tools like Instruments.
Is there any tool or technique that can help enforce or detect this requirement more strictly than runtime checks? Or would creating custom abstractions around Continuation be the only option to ensure safety? Any suggestions or best practices are appreciated.
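The kind of custom abstraction I have in mind would funnel every completion path through one generic helper, so the single resume call lives in exactly one place. A rough sketch (illustrative only; double or missing calls are still only caught at runtime):

// `operation` stands for any legacy completion-handler API that reports a Result once.
func awaitResult<T>(
    _ operation: (@escaping @Sendable (Result<T, Error>) -> Void) -> Void
) async throws -> T {
    try await withCheckedThrowingContinuation { continuation in
        operation { result in
            continuation.resume(with: result)   // the only resume on this path
        }
    }
}

// Usage, assuming a hypothetical legacy API `legacyLoad(completion:)`:
// let data = try await awaitResult { legacyLoad(completion: $0) }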
I am writing an app which mainly is used to update data used by other apps on the device. After the user initializes some values in the app, they almost never have to return to it (occasionally to add a "friend"). The app needs to run a background task at least daily, however, without the user's intervention (or even awareness, once they've given permission). My understanding of background refresh tasks is that if the user doesn't activate the app in the foreground periodically, the scheduled background tasks may never run. If this is true, do I want to use a background processing task instead, or is there a better solution (or have I misunderstood entirely)?
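For context, this is roughly the scheduling pattern I mean (the identifier, interval, and work are placeholders, and the identifier would also have to be listed under BGTaskSchedulerPermittedIdentifiers in Info.plist):

import BackgroundTasks
import Foundation

enum DailyRefresh {
    static let taskID = "com.example.app.daily-refresh"   // placeholder

    // Call once, early during app launch.
    static func register() {
        BGTaskScheduler.shared.register(forTaskWithIdentifier: taskID, using: nil) { task in
            schedule()                                  // always queue the next run first
            // ... perform the daily update work here ...
            task.setTaskCompleted(success: true)
        }
    }

    // Call whenever the app runs, and again from the launch handler above.
    // BGProcessingTaskRequest could be substituted for longer-running work.
    static func schedule() {
        let request = BGAppRefreshTaskRequest(identifier: taskID)
        request.earliestBeginDate = Date(timeIntervalSinceNow: 24 * 60 * 60)   // ~daily
        try? BGTaskScheduler.shared.submit(request)
    }
}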
DESCRIPTION OF PROBLEM
Logs and data from our application indicate various errors that strongly suggest that our application is being launched in a state in which the device is likely locked. We are looking for guidance on how to identify, debug, reproduce, and fix these cases.
Our application does not use any of the common mechanisms for background activity, such as Background App Refresh, Navigation, Audio, etc.
Errors we get in our logs, such as "authorization denied (code: 23)" when trying to access a file in our app's container on disk (a simple disk cache for data our application uses), strongly suggest that the device is in a state, such as being locked, where our application lacks the permissions it would normally have during operation. Furthermore, attempts to access authentication information stored in the keychain also fail. We use kSecAttrAccessibleWhenUnlocked when accessing items we store in the keychain.
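For reference, this is roughly how the items in question are stored (the account name and data are placeholders, not our real values):

import Foundation
import Security

// Illustrative only: a token that is readable only while the device is unlocked.
func storeToken(_ token: Data) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: "example-auth-token",              // placeholder
        kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlocked,
        kSecValueData as String: token,
    ]
    return SecItemAdd(query as CFDictionary, nil)
}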
We have investigated "Prewarming", as well as our notification extension that helps process incoming push notifications, but cannot find any way to recreate this behavior.
Are there any steps Apple engineers can recommend to triage and debug this?
Some additional questions that would help us:
What are all of the symptoms that we can look for if prewarming escapes the intended execution context?
What are all of the circumstances in which we would be unauthorized to access the app’s documents/file directories even if it works correctly in normal operation?
STEPS TO REPRODUCE
Unfortunately, we are unable to forcibly reproduce this behavior in our application, so we're looking for guidance on how we might simulate this behavior in Xcode / Instruments.
Are there tools that Apple provides that would allow us to simulate certain behaviors like prewarming to verify our application's functionality?
Are there other reasons our application might be launched while the device is locked? Are there other reasons we would receive security errors when accessing the keychain or disk that are unrelated to the device being locked?
I have a video: after recording there is no sound, and after a few seconds there is only the audio that I recorded.
Hello,
My team has developed a DNS proxy for macOS. We have this set up with a system extension that interacts with the OS, and an always-running daemon that does all the heavy lifting. Communication between the two is DNS request and response packet traffic.
With this architecture, what are the best practices for how the system extension should communicate with the daemon?
We tried making the daemon a socket server, but the system extension could not connect to it.
We tried using XPC but it did not work and we could not understand the errors that were returned.
So what is the best way to do this sort of thing?
Recently, after updating the Developer app to the latest version, my iPad has been unable to open this app as it crashes immediately upon launch. Prior to the update, the app functioned normally. My device is an 11-inch iPad Pro from 2021, running iPadOS 17.3. I have tried troubleshooting steps such as reinstalling the app and restarting the device, but these actions have not resolved the issue. However, I need to use this specific version of the system, iPadOS 17.3, for software testing purposes and cannot upgrade the system. Other apps on my device work normally without any issues. Is there a solution to this problem? I have attempted to contact the developer support team in China, but they were also unable to provide a resolution. This issue is reproducible 100% of the time on my iPad.
I am writing a SwiftData/SwiftUI app in which the user saves simple records, tagged with their current location. Core Location can take up to 10 seconds to retrieve the current location from its requestLocation() call.
In the main app I have wrapped the CLLocationManager calls with async implementations. I kick off a Task when a new record is created and write the location to my @Model on the main thread when it completes.
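A simplified sketch of that kind of wrapper (illustrative, not my exact code):

import CoreLocation

final class AsyncLocationProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var continuation: CheckedContinuation<CLLocation, Error>?

    func currentLocation() async throws -> CLLocation {
        try await withCheckedThrowingContinuation { continuation in
            self.continuation = continuation
            manager.delegate = self
            manager.requestLocation()   // can take up to ~10 seconds to call back
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let location = locations.last else { return }
        continuation?.resume(returning: location)
        continuation = nil
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        continuation?.resume(throwing: error)
        continuation = nil
    }
}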
A realistic use of the share extension doesn't give the task enough time to complete.
I can use performExpiringActivity to complete background processing after the share extension closes but this needs to be a synchronous block. Is there some way of using performExpiringActivity when relying on a delegate callback from something like Core Location?
I would like to implement the effect shown in the following WWDC video, where content pops out of a window into an immersive space.
The video introduces the new features of visionOS 2.0 by reworking Apple's sample app BOT-anist.
https://developer.apple.com/jp/videos/play/wwdc2024/10153/?time=1252
In the video, it looks like ImmersiveSpaceAppModel is newly implemented.
However, the key code is not mentioned anywhere.
You pass appModel.robot as the from argument to the transform method of RealityViewContent.
It seems that BOT-anist has been updated once and can be downloaded from the following URL, but there is no class such as ImmersiveSpaceAppModel implemented in this app either.
https://developer.apple.com/documentation/visionos/bot-anist
Has it been further updated again?
Frankly, I'm not sure if it is possible to proceed as per the WWDC video.
Translated with DeepL.com (free version)
I am trying to migrate a WatchConnectivity app to Swift 6 and I found an issue with my replyHandler callback for sendMessageData.
I am wrapping sendMessageData in withCheckedThrowingContinuation, so that I can await the response of the reply. I then update a Main Actor ObservableObject that keeps track of the count of connections that have not replied yet, before returning the data using continuation.resume.
...
@preconcurrency import WatchConnectivity

actor ConnectivityManager: NSObject, WCSessionDelegate {
    private var session: WCSession = .default
    private let connectivityMetaInfoManager: ConnectivityMetaInfoManager
    ...
    private func sendMessageData(_ data: Data) async throws -> Data? {
        Logger.shared.debug("called on Thread \(Thread.current)")
        await connectivityMetaInfoManager.increaseOpenSendConnectionsCount()
        return try await withCheckedThrowingContinuation({ continuation in
            self.session.sendMessageData(
                data,
                replyHandler: { data in
                    Task {
                        await self.connectivityMetaInfoManager
                            .decreaseOpenSendConnectionsCount()
                    }
                    continuation.resume(returning: data)
                },
                errorHandler: { (error) in
                    Task {
                        await self.connectivityMetaInfoManager
                            .decreaseOpenSendConnectionsCount()
                    }
                    continuation.resume(throwing: error)
                }
            )
        })
    }
Calling sendMessageData somehow causes the app to crash and display the debug message: Incorrect actor executor assumption.
The code runs under Swift 5 with SWIFT_STRICT_CONCURRENCY = complete.
However, when I switch to Swift 6, the code crashes.
I rebuilt a simple version of the App. Adding bit by bit until I was able to cause the crash.
See Broken App
Awaiting sendMessageData, wrapping it in a Task, and adding the @Sendable attribute to the continuation solves the crash.
See Fixed App
But I do not understand why yet.
Is this intended behaviour?
Should the compiler warn you about this?
Is it a WatchConnectivity issue?
I initially posted on forums.swift.org, but was told to repost here.
UIImage imageNamed crash
App Start Crashed
I'm trying to monitor all processes with an ES client, but I found that the process name differs from what Activity Monitor displays. As shown in the picture below, Activity Monitor shows ShareSheetUI (Pages) and ShareSheetUI (Finder) processes, whereas I only ever get the name ShareSheetUI. I've tried many ways to obtain the name in parentheses, but nothing worked. Is there a way to get the process name the way Activity Monitor displays it?
I'm using Flutter's inappwebview. Everything works on my simulator and phone, but when I submit for review, I get a response saying the app loads infinitely. How can that happen? I don't know what the problem is.
The support URL provided in App Store Connect must direct to a support page with links to a loan services privacy policy. The support page must also reference the lender or lending license.
The privacy policy provided in App Store Connect must include references to the lender.
The verified email domains associated with your Apple Developer Program account must match domains for the submitting company or partnered financial institution.
One challenging aspect of Swift concurrency is flow control, aka backpressure. I was explaining this to someone today and thought it better to post that explanation here, for the benefit of all.
If you have questions or comments, start a new thread in App & System Services > Processes & Concurrency and tag with Swift and Concurrency.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Why is flow control important?
In Swift concurrency you often want to model data flows using AsyncSequence. However, that’s not without its challenges. A key issue is flow control, aka backpressure.
Imagine you have a network connection with a requests property that returns an AsyncSequence of Request values. The core of your networking code might be a loop like this:
func processRequests(connection: Connection) async throws {
    for try await request in connection.requests {
        let response = responseForRequest(request)
        try await connection.reply(with: response)
    }
}
Flow control is important in both the inbound and outbound cases. Let’s start with the inbound case.
If the remote peer is generating requests very quickly, the network is fast, and responseForRequest(_:) is slow, it’s easy to fall foul of unbounded memory growth. For example, if you use AsyncStream to implement the requests property, its default buffering policy is .unbounded. So the code receiving requests from the connection will continue to receive them, buffering them in the async stream, without any bound. In the worst case scenario that might run your process out of memory. In a more typical scenario it might result in a huge memory spike.
The outbound case is similar. Imagine that the remote peer keeps sending requests but stops receiving them. If the reply(with:) method isn’t implemented correctly, this might also result in unbounded memory growth.
The solution to this problem is flow control. This flow control operates independently on the send and receive side:
On the send side, the code sending responses should notice that the network connection has asserted flow control and stop sending responses until that flow control lifts. In an async method, like the reply(with:) example shown above, it can simply not return until the network connection has space to accept the reply.
On the receive side, the code receiving requests from the connection should monitor how many are buffered. If that gets too big, it should stop receiving. That causes the requests to pile up in the connection itself. If the network connection implements flow control properly [1], this will propagate to the remote peer, which should stop generating requests.
[1] TCP and QUIC both implement flow control. Use them! If you’re tempted to implement your own protocol directly on top of UDP, consider how it should handle flow control.
Flow control and Network framework
Network framework has built-in support for flow control. On the send side, it uses a ‘push’ model. When you call send(content:contentContext:isComplete:completion:) the connection buffers the message. However, it only calls the completion handler when it’s passed that message to the network for transmission [2]. If you send a message and don’t receive this completion callback, it’s time to stop sending more messages.
On the receive side, Network framework uses a ‘pull’ model. The receiver calls a receive method, like receiveMessage(completion:), which calls a completion handler when there’s a message available. If you’ve already buffered too many messages, just stop calling this receive method.
These techniques are readily adaptable to Swift concurrency using Swift’s CheckedContinuation type. That works for both send and receive, but there’s a wrinkle. If you want to model receive as an AsyncSequence, you can’t use AsyncStream. That’s because AsyncStream doesn’t support flow control. So, you’ll need to come up with your own AsyncSequence implementation [3].
[2] Note that this doesn’t mean that the data has made it to the remote peer, or has even been sent on the wire. Rather, it says that Network framework has successfully passed the data to the transport protocol implementation, which is then responsible for getting it to the remote peer.
[3] There’s been a lot of discussion on Swift Evolution about providing such an implementation but none of that has come to fruition yet. Specifically:
The Swift Async Algorithms package provides AsyncChannel, but my understanding is that this is not yet ready for prime time.
I believe that the SwiftNIO folks have their own infrastructure for this. They’re driving this effort to build such support into Swift Async Algorithms.
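To make that CheckedContinuation adaptation concrete, here is one minimal sketch (an illustration only; cancellation and content-context handling are omitted):

import Foundation
import Network

extension NWConnection {
    // Returns only once Network framework has accepted the message for
    // transmission; that's the natural point to apply send-side flow control.
    func sendAsync(_ data: Data) async throws {
        return try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Void, Error>) in
            self.send(content: data, completion: .contentProcessed({ error in
                if let error = error {
                    continuation.resume(throwing: error)
                } else {
                    continuation.resume()
                }
            }))
        }
    }

    // Pulls a single message; the caller decides when to ask for the next one,
    // which is what keeps the receive side from buffering without bound.
    func receiveMessageAsync() async throws -> Data? {
        return try await withCheckedThrowingContinuation { continuation in
            self.receiveMessage { content, _, _, error in
                if let error = error {
                    continuation.resume(throwing: error)
                } else {
                    continuation.resume(returning: content)
                }
            }
        }
    }
}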
Avoid the need for flow control
In some cases you can change your design to avoid the need for control. Imagine that your UI needs to show the state of a remote button. The network connection sends you a message every time the button is depressed or released. However, your UI only cares about the current state.
If you forward every message from the network to your UI, you have to worry about flow control. To eliminate that worry:
Have your networking code translate the message to reflect the current state.
Use AsyncStream with a buffering policy of .bufferingNewest(1).
That way there’s only ever one value in the stream and, if the UI code is slow for some reason, while it might miss some transitions, it always knows about the latest state.
2024-12-13 Added a link to the MultiProducerSingleConsumerChannel PR.
2024-12-10 First posted.
All the nuances of when and whether a background task runs aside, does launching the app cancel the currently scheduled refresh task? As an example, consider the following case:
8AM - user launches app. This launch schedules a background refresh for 12 hours later, at 8PM
12PM (noon) - user launches the app, views some content, then exits the app.
Does the scheduled refresh for 8PM still exist, or does the launch at noon invalidate that task, since the refresh could conceivably be handled during that noon launch?
Hopefully this is articulated clearly enough, but I'm trying to understand the specifics of background refresh behavior, since I don't want to run that refresh every time the app is opened. However, if opening the app invalidates scheduled refreshes, I will need to include logic that will reschedule the refresh accordingly.
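For what it's worth, I can inspect what's still pending around launch with something like this sketch, to see whether an earlier request survived:

import BackgroundTasks

func logPendingRefreshTasks() {
    BGTaskScheduler.shared.getPendingTaskRequests { requests in
        for request in requests {
            print("pending:", request.identifier,
                  "earliestBeginDate:", String(describing: request.earliestBeginDate))
        }
    }
}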