Dispatch


Use Dispatch to execute code concurrently on multicore hardware by submitting work to dispatch queues managed by the system.
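For readers new to the API, a minimal sketch of that gesture in Swift (the queue label and the work items are illustrative only):

import Dispatch

// A concurrent queue; the system schedules its work onto pooled worker threads.
let queue = DispatchQueue(label: "com.example.work", attributes: .concurrent)

queue.async {
    // Runs off the calling thread; several such blocks may run in parallel.
    print("background work")
}
queue.sync {
    // Blocks the caller until this work item finishes.
    print("synchronous work")
}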

Posts under Dispatch tag

32 Posts

Post

Replies

Boosts

Views

Activity

Concurrency Resources
Swift Concurrency Resources:
- DevForums tags: Concurrency
- The Swift Programming Language > Concurrency documentation
- Migrating to Swift 6 documentation
- WWDC 2022 Session 110351 Eliminate data races using Swift Concurrency — This ‘sailing on the sea of concurrency’ talk is a great introduction to the fundamentals.
- WWDC 2021 Session 10134 Explore structured concurrency in Swift — The table that starts rolling out at around 25:45 is really helpful.
- Swift Async Algorithms package
- Swift Concurrency Proposal Index DevForum post
- Why is flow control important? DevForums post
- Matt Massicotte’s blog
Dispatch Resources:
- DevForums tags: Dispatch
- Dispatch documentation — Note that the Swift API and C API, while generally aligned, are different in many details. Make sure you select the right language at the top of the page.
- Dispatch man pages — While the standard Dispatch documentation is good, you can still find some great tidbits in the man pages. See Reading UNIX Manual Pages. Start by reading dispatch in section 3.
- WWDC 2015 Session 718 Building Responsive and Efficient Apps with GCD [1]
- WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage [1]
- Avoid Dispatch Global Concurrent Queues DevForums post
Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] These videos may or may not be available from Apple. If not, the URL should help you locate other sources of this info.
0
0
1.6k
Dec ’24
DispatchSourceTimer Not Firing in Local Push Connectivity Extension When App Is in Foreground and Device Is Locked
Hi, I’m using a Local Push Connectivity Extension and encountering an issue with DispatchSourceTimer. In my extension, I create a DispatchSourceTimer that is supposed to fire every 1 second. It works as expected at first. However, when the app is in the foreground and the device is locked, the timer eventually stops firing after 1–3 hours. The extension process is still alive, and no errors are thrown. Has anyone experienced this behavior? Is this a known limitation for timers inside NEAppPushProvider, or is the extension being deprioritized silently by the system? Any insights or suggestions would be greatly appreciated. Thanks!
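For reference, a minimal sketch of the kind of repeating timer the post describes, with placeholder names (none of this is taken from the poster's project); one classic gotcha is that the source must be resumed and kept strongly referenced or events silently stop:

import Dispatch

let timerQueue = DispatchQueue(label: "com.example.pushprovider.timer")  // illustrative label
let timer = DispatchSource.makeTimerSource(queue: timerQueue)
timer.schedule(deadline: .now(), repeating: .seconds(1), leeway: .milliseconds(100))
timer.setEventHandler {
    // Per-second work goes here; keep it short so the handler never backs up the queue.
}
timer.resume()
// Keep `timer` alive (for example, store it in a property of the provider).
// If it is deallocated or cancelled it stops firing, which is a different
// failure mode from the system-level throttling the post asks about.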
1
0
21
3h
XPC Error: Underlying connection interrupted
I'm using the low-level C XPC API <xpc/xpc.h> and I get this error when I run it: Underlying connection interrupted. I know this error stems from the call to xpc_session_send_message_with_reply_sync(session, message, &reply_err);. I have no previous experience with XPC or Dispatch, I find the XPC docs very limited, and I also found next to no code examples online. Can somebody take a look at my code and tell me what I did wrong and how to fix it? Thank you in advance.
Main code:

#include <stdio.h>
#include <xpc/xpc.h>
#include <dispatch/dispatch.h>

// the context passed to mainf()
struct context {
    char* text;
    xpc_session_t sess;
};

// This is for later implementation and the name is also rudimentary
void mainf(void* c) {
    //char * text = ((struct context*)c)->text;
    xpc_session_t session = ((struct context*)c)->sess;
    dispatch_queue_t messageq = dispatch_queue_create("y.ddd.main", DISPATCH_QUEUE_SERIAL);
    xpc_object_t message = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_string(message, "test", "eeeee");
    if (session == NULL) {
        printf("Session is NULL\n");
        exit(1);
    }
    __block xpc_rich_error_t reply_err = NULL;
    __block xpc_object_t reply;
    dispatch_sync(messageq, ^{
        reply = xpc_session_send_message_with_reply_sync(session, message, &reply_err);
        if (reply_err != NULL)
            printf("Reply Error: %s\n", xpc_rich_error_copy_description(reply_err));
    });
    if (reply != NULL)
        printf("Reply: %s\n", xpc_dictionary_get_string(reply, "test"));
    else
        printf("Reply is NULL\n");
}

int main(int argc, char* argv[]) {
    // Create seperate queue for mainf()
    dispatch_queue_t mainq = dispatch_queue_create("y.ddd.main", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t xpcq = dispatch_queue_create("y.ddd.xpc", NULL);
    // Create the context being sent to mainf
    struct context* c = malloc(sizeof(struct context));
    c->text = malloc(sizeof("Hello"));
    strcpy(c->text, "Hello");
    xpc_rich_error_t sess_err = NULL;
    xpc_session_t session = xpc_session_create_xpc_service("y.getFilec", xpcq, XPC_SESSION_CREATE_INACTIVE, &sess_err);
    if (sess_err != NULL) {
        printf("Session Create Error: %s\n", xpc_rich_error_copy_description(sess_err));
        xpc_release(sess_err);
        exit(1);
    }
    xpc_release(sess_err);
    xpc_session_set_incoming_message_handler(session, ^(xpc_object_t message) {
        printf("message recieved\n");
    });
    c->sess = session;
    xpc_rich_error_t sess_ac_err = NULL;
    xpc_session_activate(session, &sess_ac_err);
    if (sess_err != NULL) {
        printf("Session Activate Error: %s\n", xpc_rich_error_copy_description(sess_ac_err));
        xpc_release(sess_ac_err);
        exit(1);
    }
    xpc_release(sess_ac_err);
    xpc_retain(session);
    dispatch_async_f(mainq, (void*)c, mainf);
    xpc_release(session);
    dispatch_main();
}

XPC Service code:

#include <stdio.h>
#include <xpc/xpc.h>
#include <dispatch/dispatch.h>

int main(void) {
    xpc_rich_error_t lis_err = NULL;
    xpc_listener_t listener = xpc_listener_create("y.getFilec", NULL, XPC_LISTENER_CREATE_INACTIVE, ^(xpc_session_t sess){
        printf("Incoming Session: %s\n", xpc_session_copy_description(sess));
        xpc_session_set_incoming_message_handler(sess, ^(xpc_object_t mess) {
            xpc_object_t repl = xpc_dictionary_create_empty();
            xpc_dictionary_set_string(repl, "test", "test");
            xpc_rich_error_t send_repl_err = xpc_session_send_message(sess, repl);
            if (send_repl_err != NULL)
                printf("Send Reply Error: %s\n", xpc_rich_error_copy_description(send_repl_err));
        });
        xpc_rich_error_t sess_ac_err = NULL;
        xpc_session_activate(sess, &sess_ac_err);
        if (sess_ac_err != NULL)
            printf("Session Activate: %s\n", xpc_rich_error_copy_description(sess_ac_err));
    }, &lis_err);
    if (lis_err != NULL) {
        printf("Listener Error: %s\n", xpc_rich_error_copy_description(lis_err));
        xpc_release(lis_err);
    }
    xpc_rich_error_t lis_ac_err = NULL;
    xpc_listener_activate(listener, &lis_ac_err);
    if (lis_ac_err != NULL) {
        printf("Listener Activate Error: %s\n", xpc_rich_error_copy_description(lis_ac_err));
        xpc_release(lis_ac_err);
    }
    dispatch_main();
}
1
0
229
1w
How to implement thread-safe property wrapper notifications across different contexts in Swift?
I’m trying to create a property wrapper that can manage shared state across any context, and that can get notified if changes happen from somewhere else. I'm using a mutex, and getting and setting values works great. However, I can't find a way to create an observer pattern that the property wrappers can use. The problem is that I can’t trigger a notification from a different thread/context and have that notification get called on the correct thread of the parent object that the property wrapper is used within. I would like the property wrapper to work from anywhere: a SwiftUI view, an actor, or from a class that is created in the background. The notification preferably would get called synchronously if triggered from the same thread or actor, or asynchronously otherwise. I don’t have to worry about race conditions from the notification because the state only needs to reach eventual consistency. Here's the simplified pseudo code of what I'm trying to accomplish:

// A single source of truth storage container.
final class MemoryShared<Value>: Sendable {
    let state = Mutex<Value>(0)
    func withLock(_ action: (inout Value) -> Void) {
        state.withLock(action)
        notifyObservers()
    }
    func get() -> Value
    func notifyObservers()
    func addObserver()
}

// Some shared state used across the app
static let globalCount = MemoryShared<Int>(0)

// A property wrapper to access the shared state and receive changes
@propertyWrapper
struct SharedState<Value> {
    public var wrappedValue: T {
        get { state.get() }
        nonmutating set {
            // Can't set directly
        }
    }
    var publisher: Publisher {}
    init(state: MemoryShared) {
        // ...
    }
}

// I'd like to use it in multiple places:
@Observable class MyObservable {
    @SharedState(globalCount) var count: Int
}
actor MyBackgroundActor {
    @SharedState(globalCount) var count: Int
}
@MainActor struct MyView: View {
    @SharedState(globalCount) var count: Int
}

What I’ve tried: All of the examples below use the property wrapper within a @MainActor class. However, the same issue happens no matter what context I use the wrapper in: the notification callback is never called on the context the property wrapper was created with.

I’ve tried using @isolated(any) to capture the context of the wrapper and save it in the state container (an unchecked Sendable) to be called later, which doesn’t work:

final class MemoryShared<Value: Sendable>: Sendable {
    // Stores the callback for later.
    public func subscribe(callback: @escaping @isolated(any) (Value) -> Void) -> Subscription
}

@propertyWrapper
struct SharedState<Value> {
    init(state: MemoryShared<Value>) {
        MainActor.assertIsolated() // Works!
        state.subscribe {
            MainActor.assertIsolated() // Fails
            self.publisher.send()
        }
    }
}

I’ve tried capturing the isolation within a task with AsyncStream. This actually compiles with no sendable issues, but still fails:

@propertyWrapper
struct SharedState<Value> {
    init(isolation: isolated (any Actor)? = #isolation, state: MemoryShared<Value>) {
        let (taskStream, continuation) = AsyncStream<Value>.makeStream()
        // The shared state sends new values to the continuation.
        subscription = state.subscribe(continuation: continuation)
        MainActor.assertIsolated() // Works!
        let task = Task {
            _ = isolation
            for await value in taskStream {
                _ = isolation
                MainActor.assertIsolated() // Fails
            }
        }
    }
}

I’ve tried using multiple Combine subjects and publishers:

final class MemoryShared<Value: Sendable>: Sendable {
    let subject: PassthroughSubject<T, Never>
    // ...
    var publisher: Publisher {}
    // ...
}

@propertyWrapper
final class SharedState<Value> {
    var localSubject: Subject
    init(state: MemoryShared<Value>) {
        MainActor.assertIsolated() // Works!
        handle = localSubject.sink {
            MainActor.assertIsolated() // Fails
        }
        stateHandle = state.publisher.subscribe(localSubject)
    }
}

I’ve also tried: using NotificationCenter; making the property wrapper a class; using NSKeyValueObserving; using a box class that is stored within the wrapper; using @_inheritActorContext. None of these work, because the event is never called from the thread the property wrapper resides in. Is it possible at all to create an observation system that notifies the observer from the same context as where the observer was created? Any help would be greatly appreciated!
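For readers who want to experiment with the container itself, here is a sketch of the Mutex-backed storage the post outlines, assuming the Synchronization framework's Mutex (iOS 18 / macOS 15 era); the observer list is deliberately naive and does not answer the isolation question above, since where the callbacks run is exactly the open problem:

import Synchronization

final class MemoryShared<Value: Sendable>: Sendable {
    private let state: Mutex<Value>
    // Observers are plain @Sendable closures; nothing here preserves the
    // isolation of the context that registered them.
    private let observers = Mutex<[@Sendable (Value) -> Void]>([])

    init(_ initial: Value) {
        state = Mutex(initial)
    }

    func get() -> Value {
        state.withLock { $0 }
    }

    func withLock(_ action: (inout Value) -> Void) {
        state.withLock { action(&$0) }
        notifyObservers()
    }

    func addObserver(_ observer: @escaping @Sendable (Value) -> Void) {
        observers.withLock { $0.append(observer) }
    }

    private func notifyObservers() {
        let snapshot = get()
        for observer in observers.withLock({ $0 }) {
            observer(snapshot)   // runs on whatever thread mutated the state
        }
    }
}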
2
0
300
2w
Unexpected behavior of dispatch_main on macOS
Hi! We are seeing somewhat surprising behavior of dispatch_main on macOS, where it seems to spawn a different thread instead of preserving the one it gets called from. We managed to reproduce it in a completely empty command-line tool project in Xcode:

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_main();
        return 0;
    }
}

I put a breakpoint on the line with dispatch_main and see that I am on Thread 1 and inside the main function. That makes sense. I resume execution and pause again. Looking at the Thread output in Xcode, I can only see Thread 2. Thread 1 is gone and the executable keeps on running. So dispatch_main did what was expected (prevented the process from terminating) but threw away the thread it was called from and created a new one? Is that behavior expected or am I missing something? Just a brain teaser at this point. But we could not make sense of it. :)
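A Swift rendition of the same experiment, purely for illustration (it only reproduces the observation, it does not explain it):

import Dispatch
import Foundation

print("about to call dispatchMain() from \(Thread.current)")
DispatchQueue.main.async {
    // With no run loop running, main-queue blocks are serviced after
    // dispatchMain() parks the process; print the thread to see which one.
    print("main-queue block running on \(Thread.current)")
}
dispatchMain()   // documented to never return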
4
1
462
Jan ’25
SwiftUI crashes EXC_BREAKPOINT at _dispatch_semaphore_dispose
Based on crash reports for our app in production, we're seeing these SwiftUI crashes but couldn't figure out why it is there. These crashes are pretty frequent (>20 crashed per day). Would really appreciate it if anyone has any insight on why this happens. Based on the stacktrace, i can't really find anything that links back to our app (replaced with MyApp in the stacktrace). Thank you in advance! Crashed: com.apple.main-thread 0 libdispatch.dylib 0x39dcc _dispatch_semaphore_dispose.cold.1 + 40 1 libdispatch.dylib 0x4c1c _dispatch_semaphore_signal_slow + 82 2 libdispatch.dylib 0x2d30 _dispatch_dispose + 208 3 SwiftUICore 0x77f788 destroy for StoredLocationBase.Data + 64 4 libswiftCore.dylib 0x3b56fc swift_arrayDestroy + 196 5 libswiftCore.dylib 0x13a60 UnsafeMutablePointer.deinitialize(count:) + 40 6 SwiftUICore 0x95f374 AtomicBuffer.deinit + 124 7 SwiftUICore 0x95f39c AtomicBuffer.__deallocating_deinit + 16 8 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56 9 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160 10 SwiftUICore 0x77e53c StoredLocation.deinit + 32 11 SwiftUICore 0x77e564 StoredLocation.__deallocating_deinit + 16 12 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56 13 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160 14 MyApp 0x1673338 objectdestroyTm + 6922196 15 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56 16 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160 17 SwiftUICore 0x650290 _AppearanceActionModifier.MergedBox.__deallocating_deinit + 32 18 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56 19 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160 20 SwiftUICore 0x651b44 closure #1 in _AppearanceActionModifier.MergedBox.update()partial apply + 28 21 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56 22 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160 23 libswiftCore.dylib 0x3b56fc swift_arrayDestroy + 196 24 libswiftCore.dylib 0xa2a54 _ContiguousArrayStorage.__deallocating_deinit + 96 25 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56 26 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160 27 SwiftUICore 0x4a6c4c type metadata accessor for _ContiguousArrayStorage<CVarArg> + 120 28 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56 29 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160 30 SwiftUICore 0x4a5d88 static Update.dispatchActions() + 1332 31 SwiftUICore 0xa0db28 closure #2 
in closure #1 in ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 132 32 SwiftUICore 0xa0d928 closure #1 in ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 708 33 SwiftUICore 0xa0b0d4 ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 556 34 SwiftUI 0x8f1634 UIHostingViewBase.renderForPreferences(updateDisplayList:) + 168 35 SwiftUI 0x8f495c closure #1 in UIHostingViewBase.requestImmediateUpdate() + 72 36 SwiftUI 0xcc700 thunk for @escaping @callee_guaranteed () -> () + 36 37 libdispatch.dylib 0x2370 _dispatch_call_block_and_release + 32 38 libdispatch.dylib 0x40d0 _dispatch_client_callout + 20 39 libdispatch.dylib 0x129e0 _dispatch_main_queue_drain + 980 40 libdispatch.dylib 0x125fc _dispatch_main_queue_callback_4CF + 44 41 CoreFoundation 0x56204 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16 42 CoreFoundation 0x53440 __CFRunLoopRun + 1996 43 CoreFoundation 0x52830 CFRunLoopRunSpecific + 588 44 GraphicsServices 0x11c4 GSEventRunModal + 164 45 UIKitCore 0x3d2eb0 -[UIApplication _run] + 816 46 UIKitCore 0x4815b4 UIApplicationMain + 340 47 SwiftUI 0x101f98 closure #1 in KitRendererCommon(_:) + 168 48 SwiftUI 0xe2664 runApp<A>(_:) + 100 49 SwiftUI 0xe5490 static App.main() + 180 50 MyApp 0x8a7828 main + 4340250664 (MyApp.swift:4340250664) 51 ??? 0x1ba496ec8 (Missing)
0
0
430
Jan ’25
What DispatchQueues should I use for my app's communication subsystem?
We would be creating N NWListener objects and M NWConnection objects in our process's communication subsystem to create server sockets, accepted client sockets on the server, and client sockets on clients. Both NWConnection and NWListener rely on a DispatchQueue to deliver state changes, incoming connections, send/recv completions, etc. What DispatchQueues should I use and why?
- The global concurrent dispatch queue (and which QoS?) for all NWConnection and NWListener objects?
- One custom concurrent queue (which QoS?) for all NWConnection and NWListener objects? (Does that get targeted to one of the global queues anyway?)
- One custom concurrent queue per NWConnection and NWListener, all targeted to the global concurrent dispatch queue (and which QoS?)?
- One custom concurrent queue per NWConnection and NWListener, all targeted to a single custom concurrent target queue?
For every option above, how am I impacted in terms of parallelism, concurrency, throughput and latency, and how is the overall system impacted (with other processes also running)? Separate questions (sorry for the digression): Are global concurrent queues specific to a process or shared across all processes on a device? Can I safely use setSpecific on global dispatch queues in our app?
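Not an answer from the thread, just a hedged sketch of one common arrangement for comparison: a single serial queue owned by the networking subsystem, passed to every listener and connection, so all callbacks are serialized while still running on the system's pooled worker threads (the label, port, and handlers are illustrative):

import Network

let networkQueue = DispatchQueue(label: "com.example.network", qos: .userInitiated)  // serial

guard let listener = try? NWListener(using: .tcp, on: 4567) else {
    fatalError("could not create listener")
}
listener.newConnectionHandler = { connection in
    connection.stateUpdateHandler = { state in
        // Delivered on networkQueue, like every other callback below.
        print("connection state: \(state)")
    }
    connection.start(queue: networkQueue)
}
listener.start(queue: networkQueue)

With a single serial queue the QoS and over-commit questions largely fall away, since callbacks cannot race one another; a per-connection serial queue that targets a shared serial queue behaves much the same from a threading point of view.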
13
0
777
Jan ’25
Some fundamental doubts about DispatchQueue and GCD
I understand that GCD and its underlying implementations have evolved over time, and many things have not been shared explicitly in the Apple documentation, most of all the concepts around DispatchQueue (serial and concurrent queues), DispatchQoS, target queues, and the system-provided queues (main, globals, etc.). I have some doubts and questions to clarify:
[Main Dispatch Queue] [Link] Because the main queue doesn't behave entirely like a regular serial queue, it may have unwanted side-effects when used in processes that are not UI apps (daemons). For such processes, the main queue should be avoided. What does this mean? Can you elaborate?
[Global Concurrent Dispatch Queues] Are they global to a process or shared across processes on a device? I believe it is the first case but just wanted to be sure.
[Global Concurrent Dispatch Queues] Does the system create 4 (one for each QoS) * 2 (over-committing and non-over-committing queues) = 8 queues in all? When does which type of queue come into play?
[Custom Queue][Target Queue concept] [swift-corelibs-libdispatch/man/dispatch_queue_create.3] QUOTE The default target queue of all dispatch objects created by the application is the default priority global concurrent queue. UNQUOTE Is this still true? We could not find a mention of this in any recent official Apple documentation (though some old forum threads (one more) and GitHub code documentation indicate the same). The official documentation only says: [dispatch_set_target_queue] QUOTE If you want the system to provide a queue that is appropriate for the current object UNQUOTE [dispatch_queue_create_with_target] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT to set the target queue to the default type for the current dispatch queue. UNQUOTE [Dispatch>DispatchQueue>init] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT if you want the system to provide a queue that is appropriate for the current object. UNQUOTE What is the difference between passing the target queue as 'nil' vs 'DISPATCH_TARGET_QUEUE_DEFAULT' to the DispatchQueue init?
[Custom Queue][Target Queue concept] [dispatch_set_target_queue] QUOTE The system doesn't allocate threads to the dispatch queue if it has a target queue, unless that target queue is a global concurrent queue. UNQUOTE The system does allocate threads to custom dispatch queues that have a global concurrent queue as the default target. What does that mean? What does targeting a global concurrent queue mean in that case?
[System / GCD Thread Pool] The pool that executes work items from a DispatchQueue: is this thread pool per queue, across queues per process, or across processes per device?
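To make the target-queue questions concrete, a small hedged sketch (labels invented): two serial queues funnelled into a shared serial target, which itself ultimately drains on the global pool:

import Dispatch

let bottleneck = DispatchQueue(label: "com.example.target")                    // serial
let networkQueue = DispatchQueue(label: "com.example.net", target: bottleneck)
let diskQueue = DispatchQueue(label: "com.example.disk", target: bottleneck)

networkQueue.async { print("network work") }
diskQueue.async { print("disk work") }
// Because both queues share a serial target, these two blocks never run
// concurrently; the worker threads that eventually execute them come from
// the process-wide GCD pool, not from the queues themselves.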
14
0
949
Feb ’25
Help Needed: How to Make iOS Timer More Stable?
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin. The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds. When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating. I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance! P.S. I’ve already enabled background audio permissions. The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
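Not from the thread, but for comparison, a hedged sketch of a timer configured for minimum slop: the .strict flag asks the system not to coalesce or defer it, and the leeway is kept tiny. This mainly helps in the foreground; it does not by itself explain or cure the background behaviour described above (names are illustrative):

import Dispatch

let beatQueue = DispatchQueue(label: "com.example.metronome", qos: .userInteractive)
let beatTimer = DispatchSource.makeTimerSource(flags: .strict, queue: beatQueue)
beatTimer.schedule(deadline: .now() + .milliseconds(50),
                   repeating: .milliseconds(50),
                   leeway: .milliseconds(1))
beatTimer.setEventHandler {
    // Trigger the click here. For sample-accurate timing, metronome apps
    // often schedule beats against the audio render clock rather than a GCD timer.
}
beatTimer.resume()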
2
0
402
Jan ’25
What are Dispatch workloops?
I’ve been experimenting with Dispatch, and workloops in particular. I gather that they’re similar to serial queues, except that they reorder work items by QoS. I suspect there’s more to workloops than meets the eye, though; calling dispatch_set_target_queue on them has no effect, in spite of the <dispatch/workloop.h> saying that workloops “can be passed to all APIs accepting a dispatch queue, except for functions from the dispatch_sync() family”. Workloops keep showing up in odd places like Metal and Network.framework backtraces, and <dispatch/workloop.h> includes functionality for tying workloops to os_workgroups (?!). What exactly is a workloop beyond just a serial queue with priority ordering, and why can’t I set the target queue of one?
2
0
644
Dec ’24
TaskExecutor and Swift 6 question
I have the following TaskExecutor code in Swift 6 and am getting the following error:

// Error: Passing closure as a sending parameter risks causing data races between main actor-isolated code and concurrent execution of the closure.

May I know what is the best way to approach this? This is the default code generated by Xcode when creating a Vision Pro app using Metal as the Immersive Renderer.

Renderer:

@MainActor
static func startRenderLoop(_ layerRenderer: LayerRenderer, appModel: AppModel) {
    Task(executorPreference: RendererTaskExecutor.shared) { // Error
        let renderer = Renderer(layerRenderer, appModel: appModel)
        await renderer.startARSession()
        await renderer.renderLoop()
    }
}

final class RendererTaskExecutor: TaskExecutor {
    private let queue = DispatchQueue(label: "RenderThreadQueue", qos: .userInteractive)

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedTaskExecutor {
        return UnownedTaskExecutor(ordinary: self)
    }

    static let shared: RendererTaskExecutor = RendererTaskExecutor()
}
1
0
747
Dec ’24
Crash occurs in both @MainActor classes and functions on iOS 14
A crash occurs in @MainActor classes and functions on iOS 14. Apps built and distributed with Xcode 16 in Swift 6 language mode crash on iOS 14 devices. We create a static library and link it into our app. The crash occurs in every class or function of the static library that is marked @MainActor. It does not occur on iOS/iPadOS 15 or later. If we set the static library's minimum supported version to iOS 11, the crash occurs; if we set it to iOS 14, it does not. Is there a way to keep the minimum version of the static library at iOS 11 and prevent the crashes?
1
0
561
Dec ’24
More DispatchIO problems -- cleanup handler isn't called
I create a DispatchIO object (in Swift) from a socketpair, set the low/high water marks to 1, and then call read on it. Elsewhere (multi-threaded, of course), I get data from somewhere, and write to the other side of it. Then when my data is done, I call dio?.close() The cleanup handler never gets called. What am I missing? (ETA: Ok, I can get it to work by calling dio?.close(flags: .stop) so that may be what I was missing.) (Also, I really wish it would get all the data available at once for the read, rather than 1 at a time.)
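For context, a hedged sketch of the shape described in the post: a channel over one end of a socket pair, a low-water mark of 1, and a close with .stop so the cleanup handler runs even while a read is pending (identifiers are illustrative, not the poster's):

import Dispatch
import Darwin

let ioQueue = DispatchQueue(label: "com.example.dispatchio")
var fds: [Int32] = [0, 0]
socketpair(AF_UNIX, SOCK_STREAM, 0, &fds)

let channel = DispatchIO(type: .stream, fileDescriptor: fds[0], queue: ioQueue) { error in
    // Cleanup handler: runs once the channel has relinquished the descriptor.
    print("cleanup handler, error code \(error)")
}
channel.setLimit(lowWater: 1)
channel.read(offset: 0, length: Int.max, queue: ioQueue) { done, data, error in
    if let data { print("read \(data.count) byte(s)") }
    if done { print("read complete, error code \(error)") }
}
// Without .stop, a pending read keeps the channel busy and the cleanup
// handler is not invoked promptly, which appears to match the behaviour in the post.
channel.close(flags: .stop)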
2
0
522
Nov ’24
Is it guaranteed that tasks in DispatchQueue.global() will not be discarded when switching back to the app later, assuming the app was not terminated
Let's say I queue some tasks on DispatchQueue.global() and then switch to another app or lock the screen for a while. The app was not terminated but stayed in the background. Is there a chance that some tasks that were queued but not yet started could be discarded, even if the app hasn't been terminated, after switching to another app or locking the screen for a while?
2
0
628
Nov ’24
DispatchSemaphore freeze
I'm calling the following function in a SwiftUI View modifier in Xcode 16.1:

nonisolated func f() -> CGFloat {
    let semaphore = DispatchSemaphore(value: 0)
    var a: CGFloat = 0
    DispatchQueue.main.async {
        a = ...
        semaphore.signal()
    }
    semaphore.wait()
    return a
}

The app freezes, and the code in the main queue is never executed.
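The snippet above blocks whatever thread calls it; when that caller is already the main thread (a SwiftUI view modifier runs there), the main queue can never drain, so semaphore.wait() never returns, which is a classic deadlock. Purely as an illustration of a shape that avoids waiting on the queue you are standing on (computeValue() is a hypothetical stand-in for the real work):

import Foundation

func computeValue() -> CGFloat { 42 }   // placeholder

func f() -> CGFloat {   // nonisolated in the poster's code
    if Thread.isMainThread {
        // Already on the main thread: do the work directly, no hop needed.
        return computeValue()
    }
    // Off the main thread: a synchronous hop is fine as long as the main
    // thread itself is not blocked waiting on this function.
    return DispatchQueue.main.sync { computeValue() }
}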
2
0
625
Oct ’24
Crash with Progress type, Swift 6, iOS 18
We are getting a crash _dispatch_assert_queue_fail when the cancellationHandler on NSProgress is called. We do not see this with iOS 17.x, only on iOS 18. We are building in Swift 6 language mode and do not have any compiler warnings. We have a type whose init looks something like this:

init(
    request: URLRequest,
    destinationURL: URL,
    session: URLSession
) {
    progress = Progress()
    progress.kind = .file
    progress.fileOperationKind = .downloading
    progress.fileURL = destinationURL
    progress.pausingHandler = { [weak self] in self?.setIsPaused(true) }
    progress.resumingHandler = { [weak self] in self?.setIsPaused(false) }
    progress.cancellationHandler = { [weak self] in self?.cancel() }

When the progress is cancelled and the cancellation handler is invoked, we get the crash. The crash is not reproducible 100% of the time, but it happens significantly often, especially after cleaning and rebuilding and running our tests.

* thread #4, queue = 'com.apple.root.default-qos', stop reason = EXC_BREAKPOINT (code=1, subcode=0x18017b0e8)
* frame #0: 0x000000018017b0e8 libdispatch.dylib`_dispatch_assert_queue_fail + 116
frame #1: 0x000000018017b074 libdispatch.dylib`dispatch_assert_queue + 188
frame #2: 0x00000002444c63e0 libswift_Concurrency.dylib`swift_task_isCurrentExecutorImpl(swift::SerialExecutorRef) + 284
frame #3: 0x000000010b80bd84 MyTests`closure #3 in MyController.init() at MyController.swift:0
frame #4: 0x000000010b80bb04 MyTests`thunk for @escaping @callee_guaranteed @Sendable () -> () at <compiler-generated>:0
frame #5: 0x00000001810276b0 Foundation`__20-[NSProgress cancel]_block_invoke_3 + 28
frame #6: 0x00000001801774ec libdispatch.dylib`_dispatch_call_block_and_release + 24
frame #7: 0x0000000180178de0 libdispatch.dylib`_dispatch_client_callout + 16
frame #8: 0x000000018018b7dc libdispatch.dylib`_dispatch_root_queue_drain + 1072
frame #9: 0x000000018018bf60 libdispatch.dylib`_dispatch_worker_thread2 + 232
frame #10: 0x00000001012a77d8 libsystem_pthread.dylib`_pthread_wqthread + 224

Any thoughts on why this is crashing and what we can do to work around it? I have not been able to extract our code into a simple reproducible case yet. And I mostly see it when running our code in a testing environment (XCTest). Although I have been able to reproduce it running an app a few times, it's just less common.
23
7
2.3k
Oct ’24
Question regarding thread safety for dispatch queues and Network framework completion callbacks
Hi there, I have some thread-related questions regarding Network framework completion callbacks. In short, how should I process cross-thread data in the completion callbacks? Here are more details. I have a background serial dispatch queue (call it dispatch queue A) to sequentially process the nw_connection and any network I/O events. Meanwhile, user inputs are handled by a serial dispatch queue (dispatch queue B). How should I handle the cross-thread user data in this case? (I've written some simplified sample code below.)

struct {
    int client_status;
    char* message_to_sent;
} user_data;

nw_connection_t nw_connection;
dispatch_queue_t dispatch_queue_A

static void send_message(){
    dispatch_data_t data = dispatch_data_create(message, len(message), dispath_event_loop->dispatch_queue, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
    nw_connection_send(
        nw_connection,
        data,
        NW_CONNECTION_DEFAULT_MESSAGE_CONTEXT,
        false,
        ^(nw_error_t error) {
            user_data.client_status = SENT;
            mem_release(user_data.message_to_sent);
        });
    });
}

static void setup_connection(){
    dispatch_queue_A= dispatch_queue_create("unique_id_a", DISPATCH_QUEUE_SERIAL);
    nw_connection = nw_connection_create(endpoint, params);
    nw_connection_set_state_changed_handler(){
        if (state == nw_connection_state_ready) {
            user_data.client_status = CONNECTED
        }
        // ... other operations ...
    }
    nw_connection_start(nw_connection);
    nw_retain(nw_connection);
}

static void user_main(){
    setup_connection()
    user_data.client_status = INIT;
    dispatch_queue_t dispatch_queue_B = dispatch_queue_create("unique_id_b", DISPATCH_QUEUE_SERIAL);

    // write socket
    dispatch_async(dispatch_queue_B, ^(){
        if (user_data.client_status != CONNECTED ) return;
        user_data.message_to_sent = malloc(XX,***)
        // I would like to have all io events processed on dispatch queue A so that the io events would not interact with the user events
        dispatch_async_f(dispatch_queue_A, send_message);

    // Disconnect block
    dispatch_async(dispatch_queue_B, ^(){
        dispatch_async_f(dispatch_queue_A, ^(){
            nw_connection_cancel(nw_connection)
        });
        user_data.client_status = DISCONNECTING;
    });

    // clean up connection and so on...
}

To be more specific, my questions would be: As I am using serial dispatch queues, I didn't protect user_data here. However, on which thread would the send completion handler get called? Would it be a data race where the Disconnect block and the send completion handler both access user_data? If I protect user_data with a lock, it might block the thread. How does the dispatch queue make sure it would NOT put a related execution block onto the "blocked thread"?
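Not an answer from the thread; as a hedged sketch of one design that sidesteps the question, the Swift Network framework version below confines all mutable state to the same serial queue the connection was started on, so completion handlers and user-initiated work never touch it concurrently (types and names are illustrative):

import Foundation
import Network

final class EchoClient {
    private let queue = DispatchQueue(label: "com.example.client")   // serial
    // Only ever read or written on `queue`.
    private var status = "idle"
    private var connection: NWConnection?

    func connect(to endpoint: NWEndpoint) {
        queue.async {
            let connection = NWConnection(to: endpoint, using: .tcp)
            connection.stateUpdateHandler = { [weak self] state in
                // Also delivered on `queue`, so this write cannot race.
                if case .ready = state { self?.status = "connected" }
            }
            connection.start(queue: self.queue)
            self.connection = connection
        }
    }

    func send(_ data: Data) {
        queue.async {
            self.connection?.send(content: data, completion: .contentProcessed({ error in
                // The completion is delivered on `queue` as well.
                self.status = (error == nil) ? "sent" : "error"
            }))
        }
    }
}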
4
0
702
Sep ’24
serial dispatch_queue_t crashed
background info: I dispatch async task to main queue in an es_handler_block_t(client subscribe open, create, exit, close events and mute all processes except DesktopServicesHelper). crash happened kinda randomly. most likely to happen when I copy a folder(contains a lot of files) in a volume to another volume. here's the crashed part of the diagnostic report . Thread 9 Crashed:: Dispatch queue: com.apple.main-thread 0 libsystem_kernel.dylib 0x18c6e2a60 __pthread_kill + 8 1 libsystem_pthread.dylib 0x18c71ac20 pthread_kill + 288 2 libsystem_c.dylib 0x18c627a20 abort + 180 3 libc++abi.dylib 0x18c6d1d30 abort_message + 132 4 libc++abi.dylib 0x18c6c1fe8 demangling_terminate_handler() + 348 5 libobjc.A.dylib 0x18c3601d0 _objc_terminate() + 144 6 libc++abi.dylib 0x18c6d10f4 std::__terminate(void (*)()) + 16 7 libc++abi.dylib 0x18c6d1098 std::terminate() + 108 8 libdispatch.dylib 0x18c56a3fc _dispatch_client_callout + 40 9 libdispatch.dylib 0x18c571a14 _dispatch_lane_serial_drain + 748 10 libdispatch.dylib 0x18c572578 _dispatch_lane_invoke + 432 11 libdispatch.dylib 0x18c57bea8 _dispatch_root_queue_drain + 392 12 libdispatch.dylib 0x18c57c6b8 _dispatch_worker_thread2 + 156 13 libsystem_pthread.dylib 0x18c716fd0 _pthread_wqthread + 228 14 libsystem_pthread.dylib 0x18c715d28 start_wqthread + 8 Thread 9 crashed with ARM Thread State (64-bit): x0: 0x0000000000000000 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x0000000000000000 x4: 0x000000018c6d62cb x5: 0x000000016c1eed20 x6: 0x000000000000006e x7: 0x0000000000000000 x8: 0x851ef9fdee51098d x9: 0x851ef9fc824ff98d x10: 0x0000000000000200 x11: 0x000000000000000b x12: 0x0000000000000000 x13: 0x00000000001ff800 x14: 0x00000000000007fb x15: 0x00000000a5a0204e x16: 0x0000000000000148 x17: 0x00000001fe792c30 x18: 0x0000000000000000 x19: 0x0000000000000006 x20: 0x000000016c1ef000 x21: 0x0000000000004003 x22: 0x000000016c1ef0e0 x23: 0x000000016c1ef0e0 x24: 0x00000001f442b6a8 x25: 0x0000000000000000 x26: 0x0000000000000000 x27: 0x0000600003664800 x28: 0x0000000000000000 fp: 0x000000016c1eec90 lr: 0x000000018c71ac20 sp: 0x000000016c1eec70 pc: 0x000000018c6e2a60 cpsr: 0x40001000 far: 0x0000000000000000 esr: 0x56000080 Address size fault
1
0
791
Jul ’24
DispatchIO crashes, tips on debugging?
BUG IN CLIENT OF LIBDISPATCH: Unexpected EV_VANISHED (do not destroy random mach ports or file descriptors)
Which, OK, is clear: somehow a file descriptor is being closed before DispatchIO.close() is called, yes? Only I can't figure out where it is being closed. I am currently using change_fdguard_np() to prevent closes anywhere else, and every single place where I call Darwin.close() is preceded by another call to change_fdguard_np and then DispatchIO.close(). E.g.:

self.unguardSocket()
self.readDispatcher?.close()
Darwin.close(self.socket)
self.socket = -1
self.completion(self)
11
0
826
Jul ’24
EndpointSecurity system extension crashing due to deadline
Hi , Greetings of the day! I would like to get help to avoid the Endpoint Security System Extension crash due to below reason: Termination Reason: Namespace ENDPOINTSECURITY, Code 2 EndpointSecurity client terminated because it failed to respond to a message before its deadline Couple of events we have subscribed and for AUTH related events we are receiving deadline of 14 seconds in Sonoma and to avoid above issue we have implemented a queue to provide verdict within the deadline to avoid the OS killing of our extension however sometime we observe that we are getting crash with below message: Termination Reason: Namespace ENDPOINTSECURITY, Code 2 EndpointSecurity client terminated because it failed to respond to a message before its deadline **Dispatch Thread Soft Limit Reached: 64** (too many dispatch threads blocked in synchronous operations) There is no GCD API to check whether queue is reached to soft limit so we need help here to know or check whether queue is reached to soft limit 64. if we can check above then we should avoid adding the new tasks in it until its free to accept the tasks. And for NOTIFY_CLOSE, we are getting big value in seconds as deadline however we are adding all the processing of NOTIFY_CLOSE with dispatch_async however still receiving the crash. Here is code for AUTH_OPEN : dispatch_queue_t gNotifyCloseQueue = dispatch_queue_create( "com.example.notify_close_queue", dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL, QOS_CLASS_UTILITY, 0)); dispatch_queue_t gAuthOpenQueue = dispatch_queue_create("com.example.auth_open_queue",dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL,QOS_CLASS_USER_INTERACTIVE, 0)); BOOL AuthOpenEventHandler(es_message_t *pesMsg) { //Some Processing we are doing here like Calculate the deadline in seconds etc. and we are receiving 14 seconds in Sonoma // deadline - 14 seconds if ( deadlineInSeconds < 10 ) { dispatch_time_t triggerTime = dispatch_time(pesMsg->deadline, (int64_t)(-1 * NSEC_PER_SEC)); __block es_message_t *pesTempMsg; pesTempMsg = es_copy_message(pesMsg); dispatch_after(triggerTime, gAuthOpenQueue, ^{ if (pesTempMsg != NULL) { esRespondRes = es_respond_flags_result(pesClt,pesMsg,pesMsg->event.open.fflag,false); if(ES_RESPOND_RESULT_SUCCESS != esRespondRes) { es_free_message(pesTempMsg); return; } if (pesTempMsg != NULL) { es_free_message(pesTempMsg); } } return; }); } // Some Processing we are doing here to provide verdict and we are making sure that within 11 seconds we are setting the verdict // we are setting iRetFlag here based on verdict if (NULL != pesMsg) { esRespondRes = es_respond_flags_result(pesClt,pesMsg,iRetFlag,false); if(ES_RESPOND_RESULT_SUCCESS != esRespondRes) { es_free_message(pesMsg); return FALSE; } } return TRUE; } Here is the code for NOTIFY_CLOSE: BOOL NotifyEventHandler(es_message_t *pesMessage) { if (pesMessage->event_type == ES_EVENT_TYPE_NOTIFY_CLOSE && YES == pesMessage->event.close.modified) { __block es_message_t *pesTempMsg; pesTempMsg = es_copy_message(pesMessage); dispatch_async(gNotifyCloseQueue, ^{ // Performing Some processing on es_message_t if (pesTempMsg != NULL) { es_free_message(pesTempMsg); } }); if (pesMessage != NULL) { es_free_message(pesMessage); } } else { es_free_message(pesMessage); } return TRUE; } It would be helpful if someone help us to identify what could be wrong we are doing in above code and how to address/solve those problems (code snippet would be helpful) to avoid all possible crashes. ... 
Thanks & Regards, Mohamed Vasim
1
0
1.1k
Jul ’24