Post not yet marked as solved
This code can be compiled as a command-line tool for macOS.
import Foundation

@main
struct App {
    static var counter = 0

    static func main() async throws {
        print("Thread: \(Thread.current)")
        let task1 = Task { @MainActor () -> Void in
            print("Task1 before await Task.yield(): \(Thread.current)")
            await Task.yield()
            print("Task1 before await increaseCounter(): \(Thread.current)")
            await increaseCounter()
            print("Task1 after await increaseCounter(): \(Thread.current)")
        }
        let task2 = Task { @MainActor () -> Void in
            print("Task2 before await Task.yield(): \(Thread.current)")
            await Task.yield()
            print("Task2 before await decreaseCounter(): \(Thread.current)")
            await decreaseCounter()
            print("Task2 after await decreaseCounter(): \(Thread.current)")
        }
        _ = await (task1.value, task2.value)
        print("Final counter value: \(counter)")
    }

    static func increaseCounter() async {
        for i in 0..<999 {
            counter += 1
            print("up step \(i), counter: \(counter), thread: \(Thread.current)")
            await Task.yield()
        }
    }

    static func decreaseCounter() async {
        for i in 0..<999 {
            counter -= 1
            print("down step \(i), counter: \(counter), thread: \(Thread.current)")
            await Task.yield()
        }
    }
}
My understanding is:
static func main() async throws inherits the MainActor context and should always run on the main thread (and it really does seem to).
Task is created with its regular initializer, so it inherits the surrounding actor context, and I would therefore expect it to run on the main thread. Correct?
Moreover, the Task closures are annotated with @MainActor, so I would expect all the more that they run on the main thread.
I would expect that static func main() async throws inherits the MainActor context and prevents data races, so the final counter value should always be zero. But it is not.
Both task1 and task2 really do start running on the main thread; however, the async functions increaseCounter() and decreaseCounter() run on threads other than the main thread, so the Task does not prevent data races as I expected.
When I annotate increaseCounter() and decreaseCounter() with @MainActor it works correctly, but that is what I do not want to do; I would expect Task to take care of it.
Can anyone explain, why this works as it does, please?
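For what it's worth, the isolation being described can also be sketched with a dedicated actor rather than MainActor (the names below are illustrative, not from the post): because an actor serializes all access to its state, interleaved increments and decrements always net out to zero.

```swift
import Foundation

// A dedicated actor serializes all access to `value`, so concurrent
// increments and decrements cannot race, and the final value is zero.
actor Counter {
    private(set) var value = 0
    func increase() { value += 1 }
    func decrease() { value -= 1 }
}
```

Annotating increaseCounter() and decreaseCounter() with @MainActor, as the post mentions, achieves the same thing by isolating them to the main actor instead of a private one.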
Post not yet marked as solved
I was wondering if anyone else was having this issue when distributing their iOS App for Ad Hoc or Development deployment with Xcode 13.3 RC.
Xcode generates the following error 'ipatool failed with an exception' with references to 'libswift_Concurrency'.
It only seems to happen when the following is true:
The App utilises Swift Concurrency
The Deployment Target is set to 14.0
Disabling rebuilding with Bitcode seems to work around the issue; however, this is less than ideal.
Feel free to reference my feedback FB9951218 where I've attached a minimal project that reproduces the issue.
I'm new to iOS development. I've created an app in SwiftUI that fetches data from Firebase and saves it to Core Data. I've used Task {}, async, and await for my JSON conversions, Core Data fetches/saves, etc.
The app is working fine and has been uploaded to the App Store too. Only on some devices with memory pressure does performance degrade. Though the app never crashes on any device, the UI blocks for a fraction of a second while some data is being processed.
After a few hours of research regarding background threads, concurrency, managed objects, etc., I found it's not a good practice to use the viewContext for all the purposes.
So, here is why this happens: all the changes, fetch requests, batch deletes, etc. are performed on the viewContext from the PersistenceController. This file is auto-generated by Xcode when creating the project.
We need to use different context for saving and reading. We need to perform the CPU intensive operations in a background thread.
I did NOT find any codelabs/tutorials concerning Core Data. The Apple documentation explains building the Core Data stack with AppDelegate, controller, and similar files, which are not SwiftUI-related.
Being new to iOS, I've learnt only SwiftUI and not the others. Please provide any links or documentation for recommended/best practices for coredata in SwiftUI.
Post not yet marked as solved
Assuming I have defined a global actor:
@globalActor actor MyActor {
    static let shared = MyActor()
}
And I have a class, in which a couple of methods need to act under this:
class MyClass {
    @MyActor func doSomething(undoManager: UndoManager?) {
        // Do something here
        undoManager?.registerUndo(withTarget: self) {
            $0.reverseSomething(undoManager: undoManager)
        }
    }

    @MyActor func reverseSomething(undoManager: UndoManager?) {
        // Do the reverse of something here
        print(Thread.isMainThread) /// Prints true when called from undo stack
        undoManager?.registerUndo(withTarget: self) {
            $0.doSomething(undoManager: undoManager)
        }
    }
}
Assume the code gets called from a SwiftUI view:
struct MyView: View {
    @Environment(\.undoManager) private var undoManager: UndoManager?
    let myObject: MyClass

    var body: some View {
        Button("Do something") { myObject.doSomething(undoManager: undoManager) }
    }
}
Note that when the action is undone, the 'reversing' func is called on the main thread. Is the correct way to prevent this to wrap the undo action in a task? As in:
@MyActor func reverseSomething(undoManager: UndoManager?) {
    // Do the reverse of something here
    print(Thread.isMainThread) /// Prints true
    undoManager?.registerUndo(withTarget: self) { target in
        Task { target.doSomething(undoManager: undoManager) }
    }
}
Post not yet marked as solved
I tried running the following code and it raises the following error every time:
DispatchQueue.main.sync { }
Thread 1: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
I found this post on stackoverflow that says to never run synchronous code on the main queue:
DispatchQueue crashing with main.sync in Swift
I had assumed that since the sync { } method exists, there must be some context in which it can be used. Is there really no use for executing synchronous code on the main queue?
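There is a legitimate use: the crash above comes specifically from calling sync on the queue you are already running on, which makes the queue wait on a block it can never start. Calling main.sync from a different queue is fine. A minimal sketch of the distinction using a private serial queue (the same rule applies to the main queue):

```swift
import Foundation

let serial = DispatchQueue(label: "example.serial")
var result = 0

// We are in the context of a *different* queue here, so sync-ing onto
// `serial` is safe: the caller just waits for `serial` to run the block.
DispatchQueue.global().sync {
    serial.sync { result = 42 }
}
// Calling serial.sync from a block already running on `serial`
// (or main.sync from the main queue) is what deadlocks/crashes.
```

So `DispatchQueue.main.sync { }` is usable, but only from code that is not itself on the main queue.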
Post not yet marked as solved
Does a synchronous operation have to be run on a separate thread? I think so; otherwise it would deadlock right from the beginning if it's run on, and called from, the main thread. What about if it's run on, and called from, a thread other than the main thread? Will it deadlock?
This question arose when I was reading the following content at
Documentation/Foundation/Task Management/Operation
Asynchronous Versus Synchronous Operations
When you add an operation to an operation queue, the queue ignores the value of the isAsynchronous property and always calls the start() method from a separate thread. Therefore, if you always run operations by adding them to an operation queue, there is no reason to make them asynchronous.
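A small sketch of what that paragraph says: a synchronous BlockOperation added to an OperationQueue is started from one of the queue's own threads, so the thread that added it can even block waiting for it without deadlocking.

```swift
import Foundation

let queue = OperationQueue()
var executed = false

// A synchronous operation: all of its work runs inside start()/main().
let op = BlockOperation {
    executed = true
}

queue.addOperation(op)
// Safe: the queue calls start() on one of its own threads, not this one,
// so waiting here does not deadlock.
queue.waitUntilAllOperationsAreFinished()
```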
Post not yet marked as solved
Hi,
if values are PUBLISHED rapidly, then ALL are present in the Combine sink, but SOME of them are absent from the async loop. Why the difference?
For example, in the code below, tapping repeatedly 4 times gives the output:
INPUT 24, INPUT 9, INPUT 31, INPUT 45, SINK 24, SINK 9, LOOP 24, SINK 31, SINK 45, LOOP 31.
import SwiftUI
import Combine
import PlaygroundSupport

var subject = PassthroughSubject<Int, Never>()

struct ContentView: View {
    @State var bag = [AnyCancellable]()
    @State var a = [String]()

    var body: some View {
        Text("TAP A FEW TIMES RAPIDLY")
            .frame(width: 160, height: 160)
            .onTapGesture {
                Task {
                    let anyInt = Int.random(in: 1..<100)
                    print("INPUT \(anyInt)")
                    try await Task.sleep(nanoseconds: 3_000_000_000)
                    subject.send(anyInt)
                }
            }
            .task {
                for await anyInt in subject.values {
                    print(" LOOP \(anyInt)")
                }
            }
            .onAppear {
                subject.sink { anyInt in
                    print(" SINK \(anyInt)")
                }.store(in: &bag)
            }
    }
}

PlaygroundPage.current.setLiveView(ContentView())
Thank you.
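The asymmetry is consistent with back-pressure: a sink receives every value, while the AsyncSequence view of a publisher only takes a value when the loop is actually awaiting one, so values sent in between can be dropped. If every value must reach the async loop, one option is a source that buffers, such as AsyncStream (whose buffering policy is unbounded by default). A minimal sketch:

```swift
import Foundation

// AsyncStream buffers yielded values (unbounded by default), so a slow
// or late consumer still receives every element, in order.
var continuation: AsyncStream<Int>.Continuation!
let stream = AsyncStream<Int> { continuation = $0 }

// Produce faster than anyone is consuming:
for i in 1...4 { continuation.yield(i) }
continuation.finish()
```

Combine also offers buffer(size:prefetch:whenFull:), which can be placed before .values if you want to stay with the PassthroughSubject.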
Post not yet marked as solved
Hello,
When attempting to assign the UNNotificationResponse to a Published property on the main thread inside UNUserNotificationCenterDelegate's method
func userNotificationCenter(_ center: UNUserNotificationCenter, didReceive response: UNNotificationResponse) async
both Task { @MainActor in } and await MainActor.run throw an NSInternalInconsistencyException: 'Call must be made on main thread'.
I thought both of them were essentially doing the same thing, i.e. call their closure on the main thread. So why is this exception thrown? Is my understanding of the MainActor still incorrect, or is this a bug?
Thank you
Note: Task { await MainActor.run { ... } } and DispatchQueue.main.async don't throw any exception.
Post not yet marked as solved
Hi. I've been watching this video that probably many of you watched and noticed something strange during updates to the save functionality.
There is this snippet of code used in the result functionality (I cleaned it up and left only essentials for this question)
public func addDrink(mgCaffeine: Double, onDate date: Date) {
    …
    currentDrinks = drinks
    Task {
        await self.drinksUpdated()
    }
}

private func drinksUpdated() async {
    ...
    await store.save(currentDrinks)
}
addDrink is always called on the main actor, so we are safe updating the instance variable currentDrinks. Everything is fine here. But afterwards we schedule a task to save the current state. We await the store because it is an actor, and that await is a suspension point.
Now imagine that several calls to addDrink happen faster than the store is able to save. This means there will be several Tasks waiting for access to the store, and there is no guarantee which task will run first, so we can't be sure which state of currentDrinks will be written to the store last.
The same example in a bit different words.
We have 0 values in currentDrinks
addDrink is getting called
Now we have 1 value in currentDrinks
We fire a Task (lets call it Task1) to save to the store and it starts running
addDrink is getting called again
Now we have 2 values in currentDrinks
We schedule another Task (Task2) to save to the store while currentDrinks has 2 values and it suspends at the await store.save(currentDrinks) (first task is still running)
addDrink is getting called again
Now we have 3 values in currentDrinks
We schedule another Task (Task3) to save to the store while currentDrinks has 3 values and it suspends at the await store.save(currentDrinks) (first task is still running)
Task1 finishes
Now one of the tasks from Task2 and Task3 will get the chance to run but we don't control which one.
Task3 runs first and saves 3 values into store
Task2 runs and saves 2 values into store overwriting what Task3 did
We end up with 3 values in currentDrinks and 2 values in store
If we relaunch the app after this, we will basically lose the 3rd drink that we added in the previous run.
Am I correct that this sample code contains a bug or am I missing something about Swift Concurrency?
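Your reading looks right to me: unstructured Tasks awaiting the same actor are not guaranteed to resume in submission order. One way to rule the reordering out (sketched below with stand-in names, not the video's code) is to push every snapshot through a single AsyncStream and save from one consumer loop, which processes snapshots strictly in the order they were produced:

```swift
import Foundation

// All snapshots go through one stream; a single consumer "saves" them
// one at a time, in production order, so a later snapshot can never be
// overwritten by an earlier one.
var submit: AsyncStream<[String]>.Continuation!
let snapshots = AsyncStream<[String]> { submit = $0 }

// Producer side (stand-in for addDrink): just yield the current state.
submit.yield(["espresso"])
submit.yield(["espresso", "latte"])
submit.yield(["espresso", "latte", "mocha"])
submit.finish()
```

The consumer is a single `for await` loop that performs the save for each snapshot before pulling the next one.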
Post not yet marked as solved
Is the following code accurate/safe?
final class ObjectWithPublished: @unchecked Sendable {
    @Published
    var property: Int?
}
I've tried testing this with the following test, which passes:
final class PublishedThreadSafetyTests: XCTestCase {
    func testSettingValueFromMultipleThreads() {
        let object = ObjectWithPublished()
        let iterations = 1000

        let completesIterationExpectation = expectation(description: "Completes iterations")
        completesIterationExpectation.expectedFulfillmentCount = iterations

        let receivesNewValueExpectation = expectation(description: "Received new value")
        receivesNewValueExpectation.expectedFulfillmentCount = iterations

        var cancellables: Set<AnyCancellable> = []
        object
            .$property
            .dropFirst()
            .sink { _ in
                receivesNewValueExpectation.fulfill()
            }
            .store(in: &cancellables)

        DispatchQueue.concurrentPerform(iterations: iterations) { iteration in
            object.property = iteration
            completesIterationExpectation.fulfill()
        }

        waitForExpectations(timeout: 1)
    }
}
There are also no warnings when using the address sanitizer, so it seems like subscribing to and updating @Published values is thread-safe, but the Published type is not marked Sendable, so I can't be 100% sure.
If this isn't safe, what do I need to protect to have the Sendable conformance be correct?
Post not yet marked as solved
Previously we could have some code in deinit that tears down local state:
class ViewController: UIViewController {
    var displayLink: CADisplayLink?

    deinit {
        displayLink?.invalidate()
    }
}
However this doesn't work in Swift 6 because we cannot access property 'displayLink' with a non-sendable type 'CADisplayLink?' from non-isolated context in deinit.
What's the right way to resolve this? Is the following a reasonable approach using Task to create an async context?
deinit {
    Task {
        await MainActor.run {
            displayLink?.invalidate()
        }
    }
}
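One pattern that sidesteps the error is to capture the resource's value rather than self, so the task neither touches the deinitializing object nor needs it to be Sendable. A sketch with a stand-in type (in the real code the task body would also hop to the main actor, e.g. Task { @MainActor in ... }):

```swift
import Foundation

final class Resource {            // stand-in for CADisplayLink
    var invalidated = false
    func invalidate() { invalidated = true }
}

final class Owner {
    var resource: Resource?
    deinit {
        // Capture the value, not `self`: the task owns the resource
        // directly and never touches the half-torn-down Owner.
        let resource = self.resource
        Task {
            resource?.invalidate()
        }
    }
}
```

Whether this is the "right" way probably depends on whether the cleanup can tolerate running slightly after deallocation; an explicit teardown method called before release avoids the question entirely.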
Post not yet marked as solved
In my TestApp I run the following code, to calculate every pixel of a bitmap concurrently:
private func generate() async {
    for x in 0 ..< bitmap.width {
        for y in 0 ..< bitmap.height {
            let result = await Task.detached(priority: .userInitiated) {
                return iterate(x, y)
            }.value
            displayResult(result)
        }
    }
}
This works and does not give any warnings or runtime issues.
After watching the WWDC talk "Visualize and optimize Swift concurrency" I used instruments to visualize the Tasks:
The number of active tasks continuously rises until 2740 and stays constant at this value, even after all 64,000 pixels have been calculated and displayed.
What am I doing wrong?
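For comparison: the loop above awaits each detached task's value before creating the next one, so the pixels are effectively computed one at a time. A task group both runs the children concurrently and reclaims each child as soon as its result has been consumed. A sketch with a hypothetical compute function (not your iterate):

```swift
import Foundation

// Compute every (x, y) cell concurrently and reduce the results.
// Child tasks are structured: the group awaits and frees each one.
func computeAll(width: Int, height: Int,
                compute: @escaping @Sendable (Int, Int) -> Int) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for x in 0..<width {
            for y in 0..<height {
                group.addTask { compute(x, y) }
            }
        }
        var total = 0
        for await value in group { total += value }
        return total
    }
}
```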
Post not yet marked as solved
Hey there, I have a problem running multiple tasks in parallel in a SwiftUI view.
struct ModelsView: View {
    @StateObject var tasks = TasksViewModel()

    var body: some View {
        NavigationView {
            ScrollView {
                ForEach(Array(zip(tasks.tasks.indices, tasks.tasks)), id: \.0) { task in
                    NavigationLink(destination: ModelView()) {
                        ModelPreviewView(model_name: "3dobject.usdz")
                            .onAppear {
                                if task.0 == tasks.tasks.count - 2 {
                                    Task {
                                        print(tasks.tasks.count)
                                        await tasks.fetch_tasks(count: 4)
                                    }
                                }
                            }
                    }
                }
            }.navigationTitle("3D modelle")
        }.onAppear {
            Task {
                await tasks.fetch_tasks(count: 5)
                await tasks.watch_for_new_tasks()
            }
        }
    }
}
In my view, I spawn a task as soon as the view appears which first fetches 5 tasks from the database (this works fine) and then starts watching for new tasks.
In the scroll view, right before the bottom is reached, I start loading new tasks.
The problem is that the asynchronous function fetch_tasks(count: 4) only continues once the asynchronous function watch_for_new_tasks() stops blocking.
actor TasksViewModel: ObservableObject {
    @MainActor @Published private(set) var tasks: [Tasks.Task] = []
    private var last_fetched_id: String? = nil

    func fetch_tasks(count: UInt32) async {
        do {
            let tasks_data = try await RedisClient.shared.xrevrange(streamName: "tasks", end: last_fetched_id ?? "+", start: "-", count: count)
            last_fetched_id = tasks_data.last?.id
            let fetched_tasks = tasks_data.compactMap { Tasks.Task(from: $0.data) }
            await MainActor.run {
                withAnimation(.easeInOut) {
                    self.tasks.append(contentsOf: fetched_tasks)
                }
            }
        } catch {
            print("Error fetching tasks \(error)")
        }
    }

    func watch_for_new_tasks() async {
        while !Task.isCancelled {
            do {
                let tasks_data = try await RedisClient.shared.xread(streams: "tasks", ids: "$")
                let new_tasks = tasks_data.compactMap { Tasks.Task(from: $0.data) }
                await MainActor.run {
                    for new_task in new_tasks.reversed() {
                        withAnimation {
                            self.tasks.insert(new_task, at: 0)
                        }
                    }
                }
            } catch {
                print(error)
            }
        }
    }

    ...
}
The asynchronous function watch_for_new_tasks() uses RedisClient.shared.xread(streams: "tasks", ids: "$"), which blocks until at least one task is added to the Redis stream.
I tried running watch_for_new_tasks() in a detached task (Task.detached), but that also blocks.
To be honest, I have no idea why this blocks, and I could use your help.
Thank you in Advance,
Michael
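One thing worth separating out: the two awaits in .onAppear run strictly in sequence inside a single Task, so watch_for_new_tasks() can only begin after fetch_tasks() returns, and anything that must not wait for the watcher should not share a Task with it. (Separately, if xread truly blocks a thread rather than suspending, it can starve the cooperative pool, and no restructuring of tasks will fix that.) A minimal sketch of launching the two pieces of work independently (the helper name is made up):

```swift
import Foundation

// Two independent unstructured tasks: neither waits for the other, so a
// long-running (or never-returning) watcher cannot hold up other work.
func launchIndependently(_ first: @escaping @Sendable () async -> Void,
                         _ second: @escaping @Sendable () async -> Void) {
    Task { await first() }
    Task { await second() }
}
```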
Post not yet marked as solved
// continuous version
let continuousClock = ContinuousClock()
let continuousElapsed = try await continuousClock.measure {
    try await Task.sleep(until: .now + .seconds(5), clock: .continuous)
}
print(continuousElapsed)

// suspending version
let suspendingClock = SuspendingClock()
let suspendingElapsed = try await suspendingClock.measure {
    try await Task.sleep(until: .now + .seconds(5), clock: .suspending)
}
print(suspendingElapsed)
result:
0.000126 seconds
5.324980708 seconds
Swift version: Apple Swift version 5.7 (swiftlang-5.7.0.113.202 clang-1400.0.16.2)
Platform: macOS 12.4, MacBook Pro 2.3GHz Quad-Core i5, 16GB RAM
I'm trying to read an OSLog concurrently because it is big and I don't need any of the data in order. Because the return from .getEntries is a sequence, the most efficient approach seems to be to iterate through it.
Iterating the sequence from start to finish can take a long time, so I thought I could split it up into days and concurrently process each day. I'm using a task group to do this, and it works as long as the number of tasks is less than 8. When it works, I do get the result faster, but not a lot faster. I guess there is a lot of overhead, but actually it seems that my log file is dominated by the processing of one of the days.
Ideally I want more concurrent tasks to break up the day into smaller blocks. But as soon as I try to create 8 or more tasks, I get a lockup with the following error posted to the console.
enable_updates_common timed out waiting for updates to reenable
Here are my tests.
First, a pure iterative approach with no tasks. Completion of this routine on my i5 quad-core takes 229 s.
func scanLogarchiveIterative(url: URL) async {
    do {
        let timer = Date()
        let logStore = try OSLogStore(url: url)
        let last5days = logStore.position(timeIntervalSinceEnd: -3600*24*5)
        let filteredEntries = try logStore.getEntries(at: last5days)
        var processedEntries: [String] = []
        for entry in filteredEntries {
            processedEntries.append(entry.composedMessage)
        }
        print("Completed iterative scan in: ", timer.timeIntervalSinceNow)
    } catch {
    }
}
Next is a concurrent approach using a TaskGroup that creates 5 child tasks. Completion takes 181 s. Faster, but not by much, since most of the time is spent in the single task processing the last day.
func scanLogarchiveConcurrent(url: URL) async {
    do {
        let timer = Date()
        var processedEntries: [String] = []
        try await withThrowingTaskGroup(of: [String].self) { group in
            let timestep = 3600*24
            for logSectionStartPosition in stride(from: 0, to: -3600*24*5, by: -1*timestep) {
                group.addTask {
                    let logStore = try OSLogStore(url: url)
                    let filteredEntries = try logStore.getEntries(at: logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition)))
                    var processedEntriesConcurrent: [String] = []
                    let endDate = logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition + timestep)).value(forKey: "date") as? Date
                    for entry in filteredEntries {
                        if entry.date > (endDate ?? Date()) {
                            break
                        }
                        processedEntriesConcurrent.append(entry.composedMessage)
                    }
                    return processedEntriesConcurrent
                }
            }
            for try await processedEntriesConcurrent in group {
                print("received task completion")
                processedEntries.append(contentsOf: processedEntriesConcurrent)
            }
        }
        print("Completed concurrent scan in: ", timer.timeIntervalSinceNow)
    } catch {
    }
}
If I split this further to concurrently process half days, the app locks up. The console periodically prints enable_updates_common timed out waiting for updates to reenable. If I pause the debugger, it seems there is a wait on a semaphore that must be internal to the concurrency runtime.
func scanLogarchiveConcurrentManyTasks(url: URL) async {
    do {
        let timer = Date()
        var processedEntries: [String] = []
        try await withThrowingTaskGroup(of: [String].self) { group in
            let timestep = 3600*12
            for logSectionStartPosition in stride(from: 0, to: -3600*24*5, by: -1*timestep) {
                group.addTask {
                    let logStore = try OSLogStore(url: url)
                    let filteredEntries = try logStore.getEntries(at: logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition)))
                    var processedEntriesConcurrent: [String] = []
                    let endDate = logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition + timestep)).value(forKey: "date") as? Date
                    for entry in filteredEntries {
                        if entry.date > (endDate ?? Date()) {
                            break
                        }
                        processedEntriesConcurrent.append(entry.composedMessage)
                    }
                    return processedEntriesConcurrent
                }
            }
            for try await processedEntriesConcurrent in group {
                print("received task completion")
                processedEntries.append(contentsOf: processedEntriesConcurrent)
            }
        }
        print("Completed concurrent scan in: ", timer.timeIntervalSinceNow)
    } catch {
    }
}
I read that it may be possible to get more insight into concurrency issues by setting the environment variable LIBDISPATCH_COOPERATIVE_POOL_STRICT=1.
This stops the lockup, but only because each task is then run sequentially, so there is no benefit from concurrency anymore.
I cannot see where to go next, apart from accepting the linear processing time. It also feels like doing any concurrency here (even with fewer than 8 tasks) is risky, as there is no documentation to suggest that is a limit.
Could it be that the sequence from OSLog's getEntries is not suitable for concurrent access and shouldn't be processed this way? Again, I don't see any documentation to suggest this is the case.
Finally, the processing of each entry is so light that there is little benefit to offloading just the processing to other tasks. The time taken seems to be dominated purely by iterating the sequence. In reality I do use a predicate in getEntries, which helps a bit, but it's not enough, and concurrency would still be valuable if I could process 1-hour blocks concurrently.
Post not yet marked as solved
On an iMac Retina 5K, 27-inch, rendering is ridiculously slow and the UI unresponsive.
It's not the graph, even though data arrives 3 times per second with 920 data points. I've tested this in standalone code, and it's fast.
The text and buttons need to be updated at a similar speed.
Although this may appear to be working, it's far from being responsive. In fact, it's only possible to quit the program with a Command-Q, and displaying a basic About box takes forever.
I think my Views are well structured, with most modifiers factored out, so should I conclude that SwiftUI is not suitable for anything beyond a simple REST app running on iOS? I hope not. If Apple were writing something similar, how would they go about it?
I've spent over a year developing this app, wishing and expecting Apple to come up with something better that doesn't run the entire UI on one single thread, and is perhaps able to execute sub-views concurrently, as one would expect.
I don't wish to sound critical. I just want to know how to get this UI rendering faster, so I'm crying out for help and advice.
Post not yet marked as solved
I have a Networking client that contains some shared mutable state. I decided to make this class an actor to synchronize access to this state. Since it's a networking client, it needs to make calls into URLSession's async data functions.
struct MiddlewareManager {
    var middleware: [Middleware] = []
    func finalRequest(for request: URLRequest) -> URLRequest { ... }
    func finalResponse(for response: URLResponse) -> URLResponse { ... }
}
actor Networking {
    var middleware = MiddlewareManager()
    var session: URLSession

    func data(from request: URLRequest) async throws -> (Data, URLResponse) {
        let finalRequest = middleware.finalRequest(for: request)
        let (data, response) = try await session.data(for: finalRequest)
        let finalResponse = middleware.finalResponse(for: response)
        // ...
        return (data, finalResponse)
    }
}
Making a network request could be a long-running operation. From my understanding, long-running or blocking operations should be avoided inside actors. I'm pretty sure this is non-blocking due to the await: the task just becomes suspended and the thread continues on, but it will inherit the actor's context. As long as URLSession creates a new child task, this is OK, since the request isn't actually running on the actor's task?
My alternative approach after watching the tagged WWDC 2022 videos was to add the nonisolated tag to the function call to allow the task to not inherit the actor context, and only go to the actor when needed:
nonisolated func data(from request: URLRequest) async throws -> (Data, URLResponse) {
    let finalRequest = await middleware.finalRequest(for: request)
    let (data, response) = try await session.data(for: finalRequest)
    let finalResponse = await middleware.finalResponse(for: response)
    // ...
    return (data, finalResponse)
}
Is this the proper or more correct way to solve this?
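Regarding the first concern: an await inside an actor method does not block the actor. At the suspension point the actor is free to run other work, which can be observed directly. A small stand-alone sketch (names made up, not your networking code):

```swift
import Foundation

actor Worker {
    private(set) var entered = 0

    func longOperation() async {
        entered += 1
        // Suspension point: while this call sleeps, the actor is free,
        // so a second longOperation() can enter before the first returns.
        try? await Task.sleep(nanoseconds: 500_000_000)
    }
}
```

This reentrancy is why the await on session.data does not hold the actor; the nonisolated variant mainly changes where the non-await parts of the function run.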
Post not yet marked as solved
Hello, I have an architectural question. Imagine this sample
https://developer.apple.com/documentation/swift/asyncstream
slightly modified so it doesn't use a static var
extension QuakeMonitor {
    var quakes: AsyncStream<Quake> {
        AsyncStream { continuation in
            let monitor = QuakeMonitor()
            monitor.quakeHandler = { quake in
                continuation.yield(quake)
            }
            continuation.onTermination = { @Sendable _ in
                monitor.stopMonitoring()
            }
            monitor.startMonitoring()
        }
    }
}
Suppose multiple receivers are interested in getting updates via the quakes stream; how would one architect such a solution? E.g. let's say we have two views that exist at the same time, as below. Is there a way both can get updates simultaneously?
Alternatively, is it possible to "hand off" a stream from one object to another?
class MyFirstView: UIView {
    private var quakeMonitor: QuakeMonitor

    init(quakeMonitor: QuakeMonitor) {
        self.quakeMonitor = quakeMonitor
    }

    func readQuakeData() async {
        for await quake in quakeMonitor.quakes {
            print("Quakes in first view: \(quake.date)")
        }
    }
}

class MySecondView: UIView {
    private var quakeMonitor: QuakeMonitor

    init(quakeMonitor: QuakeMonitor) {
        self.quakeMonitor = quakeMonitor
    }

    func readQuakeData() async {
        for await quake in quakeMonitor.quakes {
            print("Quakes in second view: \(quake.date)")
        }
    }
}
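On the multicast question: each AsyncStream delivers its elements to a single iterator, so one approach is to hand each consumer its own stream and fan incoming values out to all live continuations. A hypothetical sketch (names are made up, and the locking is deliberately simple):

```swift
import Foundation

final class QuakeBroadcaster<Element: Sendable> {
    private var continuations: [UUID: AsyncStream<Element>.Continuation] = [:]
    private let lock = NSLock()

    // Each consumer gets its own stream; values sent later are fanned
    // out to every stream that is still alive.
    func stream() -> AsyncStream<Element> {
        AsyncStream { continuation in
            let id = UUID()
            self.lock.lock()
            self.continuations[id] = continuation
            self.lock.unlock()
            continuation.onTermination = { _ in
                self.lock.lock()
                self.continuations.removeValue(forKey: id)
                self.lock.unlock()
            }
        }
    }

    func send(_ value: Element) {
        lock.lock()
        let current = Array(continuations.values)
        lock.unlock()
        for continuation in current { continuation.yield(value) }
    }

    func finish() {
        lock.lock()
        let current = Array(continuations.values)
        lock.unlock()
        for continuation in current { continuation.finish() }
    }
}
```

The underlying QuakeMonitor would then be started once, with its quakeHandler calling send(_:), instead of creating one monitor per stream as in the sample above.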
Post not yet marked as solved
I'm fairly new to Swift and struggling to go beyond the basics with SwiftUI's state management. My app uses Firebase to authenticate itself with a server, so it goes through a series of state changes when logging in that the UI needs to respond to.
Because logging in goes through these multiple stages, I want to use async/await to make my code more readable than using a series of completion callbacks. The trouble is I can't update my state variables from an async function if SwiftUI is observing them via @ObservedObject or similar.
One way to fix that is to never update state directly from an async function and use DispatchQueue.main.async instead. The trouble is, this adds noise, and Apple discourages it. Instead of changing how you set the state, you're supposed to change how you listen to it, by using .receive(on:).
The trouble is, I think the point where I would need to do that is buried somewhere in what @ObservedObject does behind the scenes. Why doesn't it already use .receive(on:)? Can I get or write a version that does without having to spend ages learning intricate details of SwiftUI or Combine?
Post not yet marked as solved
I have a long established app that uses a threadsafe var to protect an array of URLs from concurrency races, etc…
class ThreadsafeVar<T>
{
    // (The queue declaration is implied by the original post; presumably a concurrent queue.)
    private let queue = DispatchQueue(label: "ThreadsafeVar", attributes: .concurrent)
    private var value = Optional<T>.none

    func callAsFunction() -> T?
    {
        queue.sync(flags: .barrier) { [unowned self] in value }
    }

    func set(_ value: T?)
    {
        queue.async(flags: .barrier) { [unowned self] in self.value = value }
    }
}
What I want to do is substitute an Actor, which I understand will be intrinsically threadsafe. So, I wrote one like this…
actor UrlManager
{
    private var value = [URL]()

    func callAsFunction() -> [URL]
    {
        value
    }

    func set(_ value: [URL])
    {
        self.value = value
    }
}
The calling code is in one of four classes, all of which implement a protocol, which contains a read-write var…
protocol FolderProtocol
{
    …
    var fileURLs: [URL] { get set }
}
In this particular class, the var is implemented as read-only…
private var urls = ThreadsafeVar<[URL]>()

var fileURLs: [URL]
{
    get
    {
        urls()?.sorted(using: .finder, pathComponent: .fullPath) ?? []
    }
    set { }
}
… the private var being updated from a metadata updating notification handler.
Calling the set(_:[URL]) method on the actor poses no problem…
Task { await urls.set(foundURLs) }
But no matter what I try (continuation, try, await, etc.), I don't seem to be able to read back the array of URLs from the actor's callAsFunction() method.
The fileURLs var is called on the main thread in response to NSCollectionView delegate methods.
Other implementing classes don't need the concurrency management as they are fixed once and never changed.
Can anyone help me with this one?
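Reading an actor's state back is inherently asynchronous: the call compiles only in an async context, with an await. A sketch against the UrlManager actor from the post (the helper function name is made up):

```swift
import Foundation

actor UrlManager {
    private var value = [URL]()
    func callAsFunction() -> [URL] { value }
    func set(_ value: [URL]) { self.value = value }
}

// The read has to be awaited from an async context; there is no way to
// read the actor synchronously on the main thread without blocking it.
func currentURLs(of manager: UrlManager) async -> [URL] {
    await manager()    // callAsFunction, awaited like any actor method
}
```

For the NSCollectionView delegate path, which is synchronous on the main thread, that usually means keeping a main-thread copy of the array (updated from a Task whenever the metadata notification fires) rather than reading the actor in-line from fileURLs.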