https://developer.apple.com/forums/thread/708877
XPC is the preferred inter-process communication (IPC) mechanism on Apple platforms. XPC has three APIs:
The high-level NSXPCConnection API, for Objective-C and Swift
The low-level Swift API, introduced with macOS 14
The low-level C API, which, while callable from all languages, works best with C-based languages
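For orientation, here is a minimal, hypothetical round trip using the NSXPCConnection API; the service name and protocol below are made up for illustration, not taken from any real project:

import Foundation

// Hypothetical protocol shared between the client and the XPC service.
@objc protocol CalculatorProtocol {
    func add(_ a: Int, _ b: Int, reply: @escaping (Int) -> Void)
}

// Client side: connect to a bundled XPC service and call it.
let connection = NSXPCConnection(serviceName: "com.example.CalculatorService")
connection.remoteObjectInterface = NSXPCInterface(with: CalculatorProtocol.self)
connection.resume()

let proxy = connection.remoteObjectProxyWithErrorHandler { error in
    print("XPC error: \(error)")
} as? CalculatorProtocol

proxy?.add(2, 3) { sum in
    print("2 + 3 = \(sum)")
}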
General:
Forums subtopic: App & System Services > Processes & Concurrency
Forums tag: XPC
Creating XPC services documentation
NSXPCConnection class documentation
Low-level API documentation
XPC has extensive man pages — For the low-level API, start with the xpc man page; this is the original source for the XPC C API documentation and still contains titbits that you can’t find elsewhere. Also read the xpcservice.plist man page, which documents the property list format used by XPC services.
Daemons and Services Programming Guide archived documentation
WWDC 2012 Session 241 Cocoa Interprocess Communication with XPC — This is no longer available from the Apple Developer website )-:
Technote 2083 Daemons and Agents — It hasn’t been updated in… well… decades, but it’s still remarkably relevant.
TN3113 Testing and Debugging XPC Code With an Anonymous Listener
XPC and App-to-App Communication forums post
Validating Signature Of XPC Process forums post
This forums post summarises the options for bidirectional communication
This forums post explains the meaning of the privileged flag
Related tags include:
Inter-process communication, for other IPC mechanisms
Service Management, for installing and uninstalling Service Management login items, launchd agents, and launchd daemons
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Swift Concurrency Resources:
DevForums tags: Concurrency
The Swift Programming Language > Concurrency documentation
Migrating to Swift 6 documentation
WWDC 2022 Session 110351 Eliminate data races using Swift Concurrency — This ‘sailing on the sea of concurrency’ talk is a great introduction to the fundamentals.
WWDC 2021 Session 10134 Explore structured concurrency in Swift — The table that starts rolling out at around 25:45 is really helpful.
Swift Async Algorithms package
Swift Concurrency Proposal Index DevForum post
Why is flow control important? DevForums post
Matt Massicotte’s blog
Dispatch Resources:
DevForums tags: Dispatch
Dispatch documentation — Note that the Swift API and C API, while generally aligned, are different in many details. Make sure you select the right language at the top of the page.
Dispatch man pages — While the standard Dispatch documentation is good, you can still find some great tidbits in the man pages. See Reading UNIX Manual Pages. Start by reading dispatch in section 3.
WWDC 2015 Session 718 Building Responsive and Efficient Apps with GCD [1]
WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage [1]
Avoid Dispatch Global Concurrent Queues DevForums post
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] These videos may or may not be available from Apple. If not, the URL should help you locate other sources of this info.
One challenging aspect of Swift concurrency is flow control, aka backpressure. I was explaining this to someone today and thought it better to post that explanation here, for the benefit of all.
If you have questions or comments, start a new thread in App & System Services > Processes & Concurrency and tag with Swift and Concurrency.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Why is flow control important?
In Swift concurrency you often want to model data flows using AsyncSequence. However, that’s not without its challenges. A key issue is flow control, aka backpressure.
Imagine you have a network connection with a requests property that returns an AsyncSequence of Request values. The core of your networking code might be a loop like this:
func processRequests(connection: Connection) async throws {
    for try await request in connection.requests {
        let response = responseForRequest(request)
        try await connection.reply(with: response)
    }
}
Flow control is important in both the inbound and outbound cases. Let’s start with the inbound case.
If the remote peer is generating requests very quickly, the network is fast, and responseForRequest(_:) is slow, it’s easy to fall foul of unbounded memory growth. For example, if you use AsyncStream to implement the requests property, its default buffering policy is .unbounded. So the code receiving requests from the connection will continue to receive them, buffering them in the async stream, without any bound. In the worst case scenario that might run your process out of memory. In a more typical scenario it might result in a huge memory spike.
The outbound case is similar. Imagine that the remote peer keeps sending requests but stops receiving them. If the reply(with:) method isn’t implemented correctly, this might also result in unbounded memory growth.
The solution to this problem is flow control. This flow control operates independently on the send and receive side:
On the send side, the code sending responses should notice that the network connection has asserted flow control and stop sending responses until that flow control lifts. In an async method, like the reply(with:) example shown above, it can simply not return until the network connection has space to accept the reply.
On the receive side, the code receiving requests from the connection should monitor how many are buffered. If that gets too big, it should stop receiving. That causes the requests to pile up in the connection itself. If the network connection implements flow control properly [1], this will propagate to the remote peer, which should stop generating requests.
[1] TCP and QUIC both implement flow control. Use them! If you’re tempted to implement your own protocol directly on top of UDP, consider how it should handle flow control.
Flow control and Network framework
Network framework has built-in support for flow control. On the send side, it uses a ‘push’ model. When you call send(content:contentContext:isComplete:completion:) the connection buffers the message. However, it only calls the completion handler when it’s passed that message to the network for transmission [2]. If you send a message and don’t receive this completion callback, it’s time to stop sending more messages.
On the receive side, Network framework uses a ‘pull’ model. The receiver calls a receive method, like receiveMessage(completion:), which calls a completion handler when there’s a message available. If you’ve already buffered too many messages, just stop calling this receive method.
These techniques are readily adaptable to Swift concurrency using Swift’s CheckedContinuation type. That works for both send and receive, but there’s a wrinkle. If you want to model receive as an AsyncSequence, you can’t use AsyncStream. That’s because AsyncStream doesn’t support flow control. So, you’ll need to come up with your own AsyncSequence implementation [3].
[2] Note that this doesn’t mean that the data has made it to the remote peer, or has even been sent on the wire. Rather, it says that Network framework has successfully passed the data to the transport protocol implementation, which is then responsible for getting it to the remote peer.
[3] There’s been a lot of discussion on Swift Evolution about providing such an implementation but none of that has come to fruition yet. Specifically:
The Swift Async Algorithms package provides AsyncChannel, but my understanding is that this is not yet ready for prime time.
I believe that the SwiftNIO folks have their own infrastructure for this. They’re driving this effort to build such support into Swift Async Algorithms.
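To make the CheckedContinuation adaptation mentioned above concrete, here is a minimal sketch (not code from this post) that wraps Network framework's pull-model receive and push-model send in async methods; error handling and strict-concurrency annotations are simplified, and the method names are made up:

import Network

extension NWConnection {
    // Pull model: ask for one message and suspend until it arrives.
    // Only call this again once you're ready to buffer another message.
    func receiveOneMessage() async throws -> Data? {
        try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Data?, Error>) in
            self.receiveMessage { data, _, _, error in
                if let error {
                    continuation.resume(throwing: error)
                } else {
                    continuation.resume(returning: data)
                }
            }
        }
    }

    // Push model: suspend until the connection has accepted the message,
    // which is the point at which it's safe to send more.
    func sendMessage(_ data: Data) async throws {
        try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Void, Error>) in
            self.send(content: data, completion: .contentProcessed { error in
                if let error {
                    continuation.resume(throwing: error)
                } else {
                    continuation.resume()
                }
            })
        }
    }
}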
Avoid the need for flow control
In some cases you can change your design to avoid the need for flow control. Imagine that your UI needs to show the state of a remote button. The network connection sends you a message every time the button is depressed or released. However, your UI only cares about the current state.
If you forward every message from the network to your UI, you have to worry about flow control. To eliminate that worry:
Have your networking code translate the message to reflect the current state.
Use AsyncStream with a buffering policy of .bufferingNewest(1).
That way there’s only ever one value in the stream and, if the UI code is slow for some reason, while it might miss some transitions, it always knows about the latest state.
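Here is a minimal sketch of that latest-value-only pattern; ButtonState and the yields standing in for the networking code are hypothetical:

enum ButtonState { case up, down }

// The stream only ever buffers the most recent state.
var continuation: AsyncStream<ButtonState>.Continuation!
let buttonStates = AsyncStream<ButtonState>(bufferingPolicy: .bufferingNewest(1)) {
    continuation = $0
}

// Networking code: translate each message into the *current* state and yield it.
// With .bufferingNewest(1), older unconsumed states are simply dropped.
continuation.yield(.down)
continuation.yield(.up)

// UI code: always sees the latest state, even if it misses some transitions.
Task {
    for await state in buttonStates {
        print("button is now \(state)")
    }
}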
2024-12-13 Added a link to the MultiProducerSingleConsumerChannel PR.
2024-12-10 First posted.
The support URL provided in App Store Connect must direct to a support page with links to a loan services privacy policy. The support page must also reference the lender or lending license.
The privacy policy provided in App Store Connect must include references to the lender.
The verified email domains associated with your Apple Developer Program account must match domains for the submitting company or partnered financial institution.
Topic:
App & System Services
SubTopic:
Processes & Concurrency
I am writing a SwiftData/SwiftUI app in which the user saves simple records, tagged with their current location. Core Location can take up to 10 seconds to retrieve the current location from its requestLocation() call.
In the main app I have wrapped the CLLocationManager calls with async implementations. I kick off a Task when a new record is created, and write the location to my @Model on the main thread when it completes.
A realistic use of the share extension doesn't give the task enough time to complete.
I can use performExpiringActivity to complete background processing after the share extension closes but this needs to be a synchronous block. Is there some way of using performExpiringActivity when relying on a delegate callback from something like Core Location?
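(For reference, the kind of async Core Location wrapper described above might look roughly like the following sketch; the type name is made up and a single in-flight request is assumed.)

import CoreLocation

// Illustrative sketch: a delegate bridges CLLocationManager's callbacks into a
// CheckedContinuation. Location authorization is assumed to be handled elsewhere.
final class AsyncLocationFetcher: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var continuation: CheckedContinuation<CLLocation, Error>?

    func currentLocation() async throws -> CLLocation {
        try await withCheckedThrowingContinuation { continuation in
            self.continuation = continuation
            manager.delegate = self
            manager.requestLocation()   // may take several seconds to complete
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let location = locations.last else { return }
        continuation?.resume(returning: location)
        continuation = nil
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        continuation?.resume(throwing: error)
        continuation = nil
    }
}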
Topic:
App & System Services
SubTopic:
Processes & Concurrency
I have a video; after recording, there is no sound at first, and after a few seconds only the audio that I recorded plays.
Topic:
App & System Services
SubTopic:
Processes & Concurrency
Hello,
I recently implemented a conditional debounce publisher using Swift's Combine.
If a string with a length less than 2 is passed, the event is sent downstream immediately without delay. If a string with a length of 2 or more is passed, the event is emitted downstream with a 0.2-second delay.
While writing test logic related to this, I noticed a strange phenomenon: sometimes the publisher, which should emit events with a 0.2-second delay, does not emit an event.
The test code below should have all indices from 1 to 100 in the array, but sometimes some indices are missing, causing the assertion to fail. Even after observing completion, cancel, and output events through handleEvents, I couldn't find any cause. Am I using Combine incorrectly, or is there a bug in Combine?
I would appreciate it if you could let me know.
import Foundation
import Combine

var cancellables: Set<AnyCancellable> = []

@MainActor func text(index: Int, completion: @escaping () -> Void) {
    let subject = PassthroughSubject<String, Never>()
    let textToSent = "textToSent"

    subject
        .map { text in
            if text.count >= 2 {
                return Just<String>(text)
                    .delay(for: .seconds(0.2), scheduler: RunLoop.main)
                    .eraseToAnyPublisher()
            } else {
                return Just<String>(text)
                    .eraseToAnyPublisher()
            }
        }
        .switchToLatest()
        .sink {
            if $0.count >= 2 {
                completion()
            }
        }.store(in: &cancellables)

    for i in 0..<textToSent.count {
        let stringIndex = textToSent.index(textToSent.startIndex, offsetBy: i)
        let stringToSent = String(textToSent[textToSent.startIndex...stringIndex])
        subject.send(stringToSent)
    }
}

var array = [Int]()
for i in 1...100 {
    text(index: i) {
        array.append(i)
    }
}

DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
    for i in 1...100 {
        assert(array.contains(i))
    }
}

RunLoop.main.run(until: .now + 10)
When using the old withTaskCancellationHandler(operation:onCancel:isolation:) to run background tasks, you were notified that the background task was being cancelled via the handler being called. SwiftUI provides the backgroundTask(_:action:) modifier, which looks quite handy. However, how can I check whether the background task will be cancelled, to avoid being terminated by the system?
I have tried to check that via Task.isCancelled but this always returns false no matter what.
Is this not possible when using the modifier, in which case should I file a bug report?
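(For reference, the setup being described is roughly the following sketch; the task identifier, app structure, and refresh function are hypothetical placeholders, not the poster's code.)

import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Hello")
        }
        .backgroundTask(.appRefresh("com.example.refresh")) {
            // The question is how to detect cancellation here:
            // Task.isCancelled reportedly stays false in this action.
            await withTaskCancellationHandler {
                await refreshContent()
            } onCancel: {
                // Expected to run when the system cancels the background task.
            }
        }
    }
}

// Hypothetical placeholder for the actual refresh work.
func refreshContent() async { /* … */ }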
Thanks for your help
Topic:
App & System Services
SubTopic:
Processes & Concurrency
Tags:
SwiftUI
Background Tasks
WidgetKit
I'm working on implementing file moving with NSFileCoordinator. I'm using the slightly newer asynchronous API with the NSFileAccessIntents. My question is, how do I go about notifying the coordinator about the item move? Should I simply create a new instance in the asynchronous block? Or does it need to be the same coordinator instance?
let writeQueue = OperationQueue()

public func saveAndMove(data: String, to newURL: URL) {
    let oldURL = presentedItemURL!
    let sourceIntent = NSFileAccessIntent.writingIntent(with: oldURL, options: .forMoving)
    let destinationIntent = NSFileAccessIntent.writingIntent(with: newURL, options: .forReplacing)
    let coordinator = NSFileCoordinator()

    coordinator.coordinate(with: [sourceIntent, destinationIntent], queue: writeQueue) { error in
        if let error {
            return
        }
        do {
            // ERROR: Can't access NSFileCoordinator because it is not Sendable (Swift 6)
            coordinator.item(at: oldURL, willMoveTo: newURL)
            try FileManager.default.moveItem(at: oldURL, to: newURL)
            coordinator.item(at: oldURL, didMoveTo: newURL)
        } catch {
            print("Failed to move to \(newURL)")
        }
    }
}
Topic:
App & System Services
SubTopic:
Processes & Concurrency
Tags:
Files and Storage
Swift
iCloud Drive
Concurrency
General:
DevForums subtopic: App & System Services > Processes & Concurrency
Processes & concurrency covers a number of different technologies:
Background Tasks Resources
Concurrency Resources — This includes Swift concurrency.
Service Management Resources
XPC Resources
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Topic:
App & System Services
SubTopic:
Processes & Concurrency
Hello,
I am trying to implement a subscriber which specifies its own demand for how many elements it wants to receive from a publisher.
My code is below:
import Combine

var array = [1, 2, 3, 4, 5, 6, 7]

struct ArraySubscriber<T>: Subscriber {
    typealias Input = T
    typealias Failure = Never

    let combineIdentifier = CombineIdentifier()

    func receive(subscription: any Subscription) {
        subscription.request(.max(4))
    }

    func receive(_ input: T) -> Subscribers.Demand {
        print("input,", input)
        return .max(4)
    }

    func receive(completion: Subscribers.Completion<Never>) {
        switch completion {
        case .finished:
            print("publisher finished normally")
        case .failure(let failure):
            print("publisher failed due to, ", failure)
        }
    }
}

let subscriber = ArraySubscriber<Int>()
array.publisher.subscribe(subscriber)
According to Apple's documentation, I specify the demand inside the receive(subscription: any Subscription) method, see link.
But when I run this code I get the following output:
input, 1
input, 2
input, 3
input, 4
input, 5
input, 6
input, 7
publisher finished normally
Instead, I expect the subscriber to only "receive" elements 1, 2, 3, 4 from the array.
How can I accomplish this?
Some users of my Mac app are complaining of redrawing delays. Based on what I see in logs, my GCD timer event handlers are not being run in a timely manner although the runloop is still pumping events: sometimes 500ms pass before a 15ms timer runs. During this time, many keypresses are routed through -[NSApplication sendEvent:], which is how I know it's not locked up in synchronous code.
This issue has not been reported in older versions of macOS.
I start the timer like this:
_gcdUpdateTimer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue());
dispatch_source_set_timer(_gcdUpdateTimer,
                          dispatch_time(DISPATCH_TIME_NOW, period * NSEC_PER_SEC),
                          period * NSEC_PER_SEC,
                          0.0005 * NSEC_PER_SEC);
dispatch_source_set_event_handler(_gcdUpdateTimer, ^{
    …redraw…
});