Hi everybody!
I'm desperately looking for help as I'm stuck with a rather fundamental problem regarding StoreKit2 - and maybe Swift Concurrency in general:
While renovating several freemium apps I'd like to move from local receipt validation with Receigen / OpenSSL to StoreKit 2. These apps use a dedicated "StoreManager" class that encapsulates all App Store-related operations like fetching products, performing purchases and listening for updates. For this purpose the StoreManager holds an array property with the IDs of all purchased products, which is checked when a user invokes a premium function. This array can have various states during the app's life cycle:
Immediately after app launch (before the receipt / entitlements are checked) the array is empty
After checking the receipt the array holds all (locally registered) purchases
Later on it might change if an "Ask to Buy" purchase was approved or a purchase was performed
It is important that the array can be read synchronously from other (Objective-C) classes to reflect the "point in time" state of purchased products - basically acting like a cache: no async calls, completion handlers, notification observers etc.
When moving to StoreKit 2 the same logic applies, but the relevant API calls are (of course) asynchronous: Transaction.updates triggers Transaction.currentEntitlements, which needs to update the array property. But Xcode 16 raises a strict-concurrency error because of potential data races when the instance variable is accessed from an asynchronous function / actor.
What is the way to propagate the IDs of purchased products app-wide without requiring every calling function to be asynchronous? I'm sure I'm missing a general point with Swift Concurrency: every example I found works with callbacks / await, and although this WWDC 2021 talk addresses "protecting mutable state", I couldn't apply its outcomes to my problem. What am I missing?
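For reference, a minimal sketch of the setup described above, with hypothetical names (StoreManager, start() and refreshEntitlements() are illustrations, not the original project's API): the purchased-ID cache is confined to the main actor, so main-thread code can keep reading it synchronously while Transaction.updates refreshes it.

import StoreKit

// Minimal sketch (hypothetical names): a main-actor cache of purchased product IDs.
@MainActor
final class StoreManager {
    static let shared = StoreManager()

    /// "Point in time" cache of purchased product identifiers.
    private(set) var purchasedProductIDs: [String] = []

    private var updatesTask: Task<Void, Never>?

    /// Call once at launch: refresh entitlements, then keep listening for updates.
    func start() {
        updatesTask = Task {
            await refreshEntitlements()
            for await _ in Transaction.updates {
                // Real code would also verify and finish each transaction here.
                await refreshEntitlements()
            }
        }
    }

    private func refreshEntitlements() async {
        var ids: [String] = []
        for await result in Transaction.currentEntitlements {
            if case .verified(let transaction) = result {
                ids.append(transaction.productID)
            }
        }
        purchasedProductIDs = ids // main-actor assignment, so readers see a consistent snapshot
    }
}

Exposing this cache to Objective-C would still need an NSObject-based wrapper, and any caller that is not on the main actor has to hop there first, which is exactly the tension the question describes.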
                    
                  
Concurrency
Concurrency is the notion of multiple things happening at the same time.
Posts under Concurrency tag (133 posts)
I am trying to migrate a WatchConnectivity app to Swift 6 and I found an issue with my replyHandler callback for sendMessageData.
I am wrapping sendMessageData in withCheckedThrowingContinuation, so that I can await the response of the reply. I then update a Main Actor ObservableObject that keeps track of the count of connections that have not replied yet, before returning the data using continuation.resume.
...
@preconcurrency import WatchConnectivity
actor ConnectivityManager: NSObject, WCSessionDelegate {
  private var session: WCSession = .default
  private let connectivityMetaInfoManager: ConnectivityMetaInfoManager
  ...
  private func sendMessageData(_ data: Data) async throws -> Data? {
    Logger.shared.debug("called on Thread \(Thread.current)")
    await connectivityMetaInfoManager.increaseOpenSendConnectionsCount()
    return try await withCheckedThrowingContinuation({
      continuation in
      self.session.sendMessageData(
        data,
        replyHandler: { data in
          Task {
            await self.connectivityMetaInfoManager
              .decreaseOpenSendConnectionsCount()
          }
          continuation.resume(returning: data)
        },
        errorHandler: { (error) in
          Task {
            await self.connectivityMetaInfoManager
              .decreaseOpenSendConnectionsCount()
          }
          continuation.resume(throwing: error)
        }
      )
    })
  }
Calling sendMessageData somehow causes the app to crash and display the debug message: Incorrect actor executor assumption.
The code runs on swift 5 with SWIFT_STRICT_CONCURRENCY = complete.
However when I switch to swift 6 the code crashes.
I rebuilt a simple version of the App. Adding bit by bit until I was able to cause the crash.
See Broken App
Awaiting sendMessageData, wrapping it in a task, and adding the @Sendable attribute to the continuation solves the crash.
See Fixed App
But I do not understand why yet.
Is this intended behaviour?
Should the compiler warn you about this?
Is it a WatchConnectivity issue?
I initially posted on forums.swift.org, but was told to repost here.
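For what it's worth, here is a hedged sketch of one shape that workaround can take; the linked Fixed App is not reproduced in the post, so the details below are assumptions rather than the poster's actual fix. The idea is to make the reply and error handlers explicitly @Sendable (and therefore nonisolated), so that resuming the continuation from WatchConnectivity's callback queue no longer trips the actor-executor assertion. The method is meant to drop into the ConnectivityManager actor shown above.

  private func sendMessageData(_ data: Data) async throws -> Data? {
    await connectivityMetaInfoManager.increaseOpenSendConnectionsCount()
    return try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Data?, Error>) in
      self.session.sendMessageData(
        data,
        replyHandler: { @Sendable reply in
          // Bookkeeping hops back to the actor; the continuation is resumed directly.
          Task { await self.connectivityMetaInfoManager.decreaseOpenSendConnectionsCount() }
          continuation.resume(returning: reply)
        },
        errorHandler: { @Sendable error in
          Task { await self.connectivityMetaInfoManager.decreaseOpenSendConnectionsCount() }
          continuation.resume(throwing: error)
        }
      )
    }
  }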
                    
                  
                
              
                
              
              
                
Topic: App & System Services. SubTopic: Processes & Concurrency. Tags: Watch Connectivity, Swift, Concurrency.
                    
                      Hey everyone,
I’m learning async/await and trying to fetch an image from a URL off the main thread to avoid overloading it, while updating the UI afterward. Before starting the fetch, I want to show a loading indicator (UI-related work). I’ve implemented this in two different ways using Task and Task.detached, and I have some doubts:
Is using Task { @MainActor the better approach?
I added @MainActor because, after await, the resumed execution might not return to the Task's original actor. Is this the right way to ensure UI updates are done safely?
Does calling fetchImage() on @MainActor force it to run entirely on the main thread?
I used an async data fetch function (not explicitly marked with any actor). If I were to use a completion handler instead, would the function run on the main thread?
Is using Task.detached overkill here?
I tried Task.detached to ensure the fetch runs on a non-main actor. However, it seems to involve unnecessary actor hopping since I still need to hop back to the main actor for UI updates. Is there any scenario where Task.detached would be a better fit?
import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // MARK: First approach
        Task { @MainActor in
            showLoading()
            let image = try? await fetchImage() // Will the image fetch happen on the main thread?
            updateImageView(image: image)
            hideLoading()
        }
        // MARK: Second approach
        Task { @MainActor in
            showLoading()
            let detachedTask = Task.detached {
                try await self.fetchImage()
            }
            updateImageView(image: try? await detachedTask.value)
            hideLoading()
        }
    }

    func fetchImage() async throws -> UIImage {
        let url = URL(string: "https://via.placeholder.com/600x400.png?text=Example+Image")!
        // Async data function call
        let (data, response) = try await URLSession.shared.data(from: url)
        guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
            throw URLError(.badServerResponse)
        }
        guard let image = UIImage(data: data) else {
            throw URLError(.cannotDecodeContentData)
        }
        return image
    }

    func showLoading() {
        // Show loader handling
    }

    func hideLoading() {
        // Hides the loader
    }

    func updateImageView(image: UIImage?) {
        // Image view updated
    }
}
                    
                  
                
                    
                      Given the below code with Swift 6 language mode, Xcode 16.2
If running with iOS 18+: the app crashes due to _dispatch_assert_queue_fail
If running with iOS 17 and below: there is a warning: warning: data race detected: @MainActor function at Swift6Playground/PublishedValuesView.swift:12 was not called on the main thread
Could anyone please help explain what's wrong here?
import SwiftUI
import Combine
@MainActor
class PublishedValuesViewModel: ObservableObject {
    @Published var count = 0
    @Published var content: String = "NA"
    private var cancellables: Set<AnyCancellable> = []
    
    func start() async {
        let publisher = $count
            .map { String(describing: $0) }
            .removeDuplicates()
        
        for await value in publisher.values {
            content = value
        }
    }
}
struct PublishedValuesView: View {
    @ObservedObject var viewModel: PublishedValuesViewModel
    
    var body: some View {
        Text("Published Values: \(viewModel.content)")
            .task {
                await viewModel.start()
            }
    }
}
                    
                  
                
                    
After updating to Xcode 16.2 Beta 1, a lot of my code that used onPreferenceChange in views to change @State properties of those views (such as some notion of a measured width) now complains about mutating @MainActor-isolated properties from Sendable closures.
Now I've got to hoop-jump to change @State properties from onPreferenceChange? OK, but it seems like a bit of extra churn.
                    
                  
                
                    
                      I have an issue where a very specific configuration of .overlay, withAnimation, and a bindable state can freeze the app when the state changes.
I've isolated the problematic source code into a sample project that demonstrates the issue, which can be found here:
https://github.com/katagaki/IcyOverlay
Steps to Reproduce
To reproduce the issue, tap the 'Simulate Content Load' button.
Once the progress bar has completed, a switch is toggled to hide the progress view, which causes the overlay to disappear, and the app to freeze.
Any help and/or advice will be appreciated!
Development Environment
Xcode Version 16.2 (16C5032a), macOS 15.2(24C101)
iOS SDK: 18.2 (22C146), Simulator: 18.2 (22C150)
                    
                  
                
                    
                      Hello,
I'm currently migrating my app's location service to use the new CLLocationUpdate.Updates.
I'm trying to understand what can make this AsyncSequence fail. Based on the previous CLError cases, I thought authorization was one of them, for example, but it turns out that this is handled by CLLocationUpdate itself, where we can check different properties.
So, is there a list of errors available somewhere?
Thanks
Axel, @alpennec
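For context, a minimal sketch of the consumption pattern in question; the diagnostic properties checked here (e.g. authorizationDenied) assume the iOS 18 API surface, and whatever the sequence can throw ends up in the catch block, which is what the question is asking about.

import CoreLocation

func monitorLocation() async {
    do {
        for try await update in CLLocationUpdate.liveUpdates() {
            // Authorization problems surface as properties on the update
            // rather than as thrown CLErrors.
            if update.authorizationDenied {
                print("Location authorization denied")
            }
            if let location = update.location {
                print("Location: \(location)")
            }
        }
    } catch {
        // Open question: which error types can land here?
        print("Location updates stopped with error: \(error)")
    }
}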
                    
                  
                
                    
                      I have enabled runtime concurrency warnings to check for future problems concerning concurrency: Build Setting / Other Swift Flags:
-Xfrontend -warn-concurrency -Xfrontend -enable-actor-data-race-checks
When trying to call the async form of PHPhotoLibrary.shared().performChanges{} I get the following runtime warning: warning: data race detected: @MainActor function at ... was not called on the main thread in the line containing performChanges.
My sample code inside a default Xcode multi platform app template is as follows:
import SwiftUI
import Photos
@MainActor
class FotoChanger {
    func addFotos() async throws {
        await PHPhotoLibrary.requestAuthorization(for: .addOnly)
        try! await PHPhotoLibrary.shared().performChanges {
            let data = NSDataAsset(name: "Swift")!.data
            let creationRequest = PHAssetCreationRequest.forAsset()
            creationRequest.addResource(with: .photo, data: data, options: PHAssetResourceCreationOptions())
        }
    }
}
struct ContentView: View {
    var body: some View {
        ProgressView()
            .task {
                try! await FotoChanger().addFotos()
            }
    }
}
You would have to have a Swift data asset inside the asset catalog to run the above code, but the error can even be recreated if the data is invalid.
But what am I doing wrong? I have not found a way to run performChanges, the block, or whatever causes the error, on the main thread.
PS: This is only test code to show the problem, don't mind the forced unwraps.
                    
                  
                
                    
                      One challenging aspect of Swift concurrency is flow control, aka backpressure.  I was explaining this to someone today and thought it better to post that explanation here, for the benefit of all.
If you have questions or comments, start a new thread in App & System Services > Processes & Concurrency and tag with Swift and Concurrency.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Why is flow control important?
In Swift concurrency you often want to model data flows using AsyncSequence.  However, that’s not without its challenges.  A key issue is flow control, aka backpressure.
Imagine you have a network connection with a requests property that returns an AsyncSequence of Request values.  The core of your networking code might be a loop like this:
func processRequests(connection: Connection) async throws {
    for try await request in connection.requests {
        let response = responseForRequest(request)
        try await connection.reply(with: response)
    }
}
Flow control is important in both the inbound and outbound cases.  Let’s start with the inbound case.
If the remote peer is generating requests very quickly, the network is fast, and responseForRequest(_:) is slow, it’s easy to fall foul of unbounded memory growth.  For example, if you use AsyncStream to implement the requests property, its default buffering policy is .unbounded.  So the code receiving requests from the connection will continue to receive them, buffering them in the async stream, without any bound.  In the worst case scenario that might run your process out of memory.  In a more typical scenario it might result in a huge memory spike.
The outbound case is similar.  Imagine that the remote peer keeps sending requests but stops receiving them.  If the reply(with:) method isn’t implemented correctly, this might also result in unbounded memory growth.
The solution to this problem is flow control.  This flow control operates independently on the send and receive side:
On the send side, the code sending responses should notice that the network connection has asserted flow control and stop sending responses until that flow control lifts.  In an async method, like the reply(with:) example shown above, it can simply not return until the network connection has space to accept the reply.
On the receive side, the code receiving requests from the connection should monitor how many are buffered.  If that gets too big, it should stop receiving.  That causes the requests to pile up in the connection itself.  If the network connection implements flow control properly [1], this will propagate to the remote peer, which should stop generating requests.
[1] TCP and QUIC both implement flow control.  Use them!  If you’re tempted to implement your own protocol directly on top of UDP, consider how it should handle flow control.
Flow control and Network framework
Network framework has built-in support for flow control.  On the send side, it uses a ‘push’ model.  When you call send(content:contentContext:isComplete:completion:) the connection buffers the message.  However, it only calls the completion handler when it’s passed that message to the network for transmission [2].  If you send a message and don’t receive this completion callback, it’s time to stop sending more messages.
On the receive side, Network framework uses a ‘pull’ model.  The receiver calls a receive method, like receiveMessage(completion:), which calls a completion handler when there’s a message available.  If you’ve already buffered too many messages, just stop calling this receive method.
These techniques are readily adaptable to Swift concurrency using Swift’s CheckedContinuation type.  That works for both send and receive, but there’s a wrinkle.  If you want to model receive as an AsyncSequence, you can’t use AsyncStream.  That’s because AsyncStream doesn’t support flow control.  So, you’ll need to come up with your own AsyncSequence implementation [3].
[2] Note that this doesn’t mean that the data has made it to the remote peer, or has even been sent on the wire.  Rather, it says that Network framework has successfully passed the data to the transport protocol implementation, which is then responsible for getting it to the remote peer.
[3] There’s been a lot of discussion on Swift Evolution about providing such an implementation but none of that has come to fruition yet.  Specifically:
The Swift Async Algorithms package provides AsyncChannel, but my understanding is that this is not yet ready for prime time.
I believe that the SwiftNIO folks have their own infrastructure for this.  They’re driving this effort to build such support into Swift Async Algorithms.
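To make the CheckedContinuation adaptation mentioned above concrete, here is a minimal sketch (with error handling kept to a minimum) of wrapping Network framework's send and receive calls:

import Foundation
import Network

// Send side: the async function doesn't return until the connection has
// accepted the message (the .contentProcessed callback); that pause is the
// flow control.
func send(_ message: Data, over connection: NWConnection) async throws {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Void, Error>) in
        connection.send(content: message, completion: .contentProcessed { error in
            if let error {
                continuation.resume(throwing: error)
            } else {
                continuation.resume()
            }
        })
    }
}

// Receive side: pull one message at a time; a caller that has buffered too
// much simply stops calling this, leaving the data in the connection.
func receiveMessage(over connection: NWConnection) async throws -> Data? {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Data?, Error>) in
        connection.receiveMessage { data, _, _, error in
            if let error {
                continuation.resume(throwing: error)
            } else {
                continuation.resume(returning: data)
            }
        }
    }
}

Neither function addresses the AsyncSequence-with-flow-control case; as noted above, AsyncStream can't provide that, so a custom AsyncSequence implementation is still needed there.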
Avoid the need for flow control
In some cases you can change your design to avoid the need for flow control.  Imagine that your UI needs to show the state of a remote button.  The network connection sends you a message every time the button is depressed or released.  However, your UI only cares about the current state.
If you forward every message from the network to your UI, you have to worry about flow control.  To eliminate that worry:
Have your networking code translate the message to reflect the current state.
Use AsyncStream with a buffering policy of .bufferingNewest(1).
That way there’s only ever one value in the stream and, if the UI code is slow for some reason, while it might miss some transitions, it always knows about the latest state.
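A minimal sketch of that approach, using a hypothetical ButtonState type:

// Hypothetical ButtonState; the stream never holds more than the newest value.
enum ButtonState: Sendable {
    case up, down
}

let (buttonStates, continuation) = AsyncStream.makeStream(
    of: ButtonState.self,
    bufferingPolicy: .bufferingNewest(1)
)

// Networking side: translate each message into the current state and yield it.
continuation.yield(.down)
continuation.yield(.up)   // replaces .down if the UI hasn't consumed it yet

// UI side: a slow consumer may miss transitions but always sees the latest state.
Task { @MainActor in
    for await state in buttonStates {
        print("Button is now \(state)")
    }
}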
2024-12-13 Added a link to the MultiProducerSingleConsumerChannel PR.
2024-12-10 First posted.
                    
                  
                
                    
Hello, I have been implementing Face ID authentication using LocalAuthentication, and I've noticed that with Swift 5 this code compiles, but when I change to Swift 6 it fails with a compile error:
I have just created this project to reproduce the error, so this is my codebase:
import LocalAuthentication
import SwiftUI

struct ContentView: View {
    @State private var isSuccess: Bool = false

    var body: some View {
        VStack {
            if isSuccess {
                Text("Succeeded")
            } else {
                Text("Not succeeded")
            }
        }
        .onAppear(perform: authenticate)
    }

    func authenticate() {
        let context = LAContext()
        var error: NSError?

        if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
            let reason = "We need your face to open the app"
            context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, localizedReason: reason) { succeeded, error in
                if succeeded {
                    let success = succeeded
                    Task { @MainActor [success] in
                        isSuccess = success
                    }
                } else {
                    print(error?.localizedDescription as Any)
                }
            }
        } else {
            print(error as Any)
        }
    }
}

#Preview {
    ContentView()
}
I have also tried not using the Task block and it gives me the same error. I think it could be something about the LAContext NSObject not yet being adapted for Swift 6 concurrency?
I also tried setting strict concurrency checking to minimal, but it's the same error.
I'm using Xcode 16.1 (16B40) on an M1 Mac running macOS Sequoia 15.0.1.
Help.
                    
                  
                
                    
                      Hello. I am re-writing our way of storing data into Core Data in our app, so it can be done concurrently.
The solution I opted for is to have a singleton actor that takes an API model, and maps it to a Core Data object and saves it.
For example, to store an API order model, I have something like this:
func store(
  order apiOrder: APIOrder,
  currentContext: NSManagedObjectContext?
) -> NSManagedObjectID? {
  let context = currentContext ?? self.persistentContainer.newBackgroundContext()
  // …
}
In the arguments, there is a context you can pass, in case you need to create additional models and relate them to each other. I am not sure this is how you're supposed to do it, but it seemed to work.
From what I've understood of Core Data and using multiple contexts, the appropriate way to use them is with context.perform or context.performAndWait.
However, since my storage helper is an actor, @globalActor actor Storage2 { … }, my storage's methods are actor-isolated.
This gives me warnings / errors in Swift 6 when I try to pass the context to another of my actor's methods.
let context = …
return context.performAndWait {
  // …
  if let apiBooking = apiOrder.booking {
    self.store(booking: apiBooking, context: context)
    /* causes warning:
     Sending 'context' risks causing data races; this is an error in the Swift 6 language mode
        'self'-isolated 'context' is captured by a actor-isolated closure. actor-isolated uses in closure may race against later nonisolated uses
        Access can happen concurrently
    */
  }
  // …
}
From what I understand this is because my methods are actor-isolated, but the closure of performAndWait does not execute in a thread safe environment.
With all this, what are my options? I've extracted the store(departure:context:) into its own method to avoid duplicated code, but since I can't call it from within performAndWait I am not sure what to do.
Can I ditch the performAndWait? Removing that makes the warning "go away", but I don't feel confident enough with Core Data to know the answer.
I would love to get any feedback on this, hoping to learn!
                    
                  
                
                    
                      I have this actor
actor ConcurrentDatabase: ModelActor {
    nonisolated let modelExecutor: any ModelExecutor
    nonisolated let modelContainer: ModelContainer
    init(modelContainer: ModelContainer) {
        self.modelExecutor = DefaultSerialModelExecutor(modelContext: ModelContext(modelContainer))
        self.modelContainer = modelContainer
    }
    /// Save pending changes in the model context.
    private func save() {
        if self.modelContext.hasChanges {
            do {
                try self.modelContext.save()
            } catch {
                ...
            }
        }
    }
}
I am getting a runtime crash on:
try self.modelContext.save()
when trying to insert something into the database and save
Thread 1: Fatal error: Incorrect actor executor assumption; Expected same executor as MainActor.
Can anyone explain why this is happening?
                    
                  
                
                    
                      Has anyone found a thread-safe pattern that can extract results from completerDidUpdateResults(MKLocalSearchCompleter) in the MKLocalSearchCompleterDelegate ?
I've downloaded the code sample from Interacting with nearby points of interest and notice the conformance throws multiple errors in Xcode 16 Beta 5 with Swift 6:
extension SearchDataSource: MKLocalSearchCompleterDelegate {
    nonisolated func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        Task {
            let suggestedCompletions = completer.results
            await resultStreamContinuation?.yield(suggestedCompletions)
        }
    }
Error: Task-isolated value of type '() async -> ()' passed as a strongly transferred parameter; later accesses could race
and
Error: Sending 'suggestedCompletions' risks causing data races
Is there another technique I can use to share state of suggestedCompletions outside of the delegate in the code sample?
                    
                  
                
                    
                      When using conformance to ObservableObject and then doing async work in a Task, you will get a warning courtesy of Combine if you then update an @Published or @State var from anywhere but the main thread. However, if you are using @Observable there is no such warning.
Also, Thread.current is unavailable in asynchronous contexts, so says the warning. And I have read that in a sense you simply aren't concerned with what thread an async task is on.
So for me, that raises a question. Is the lack of a warning, which when using Combine is rather important since ignoring it could lead to crashes, a pretty major bug that Apple seemingly should have addressed long ago? Or is it just not an issue to update state from another thread, because Xcode is doing that work for us behind the scenes too, just as it manages what thread the async task is running on when we don't specify?
I see a lot of posts about this from around the initial release of async/await talking about using await MainActor.run {} at the point the state variable is updated, usually also complaining about the lack of a warning. But now, years later, there is still no warning, and I have to wonder if this is actually a non-issue. In some ways it is similar to the fact that many of the early posts I have seen related to @Observable have examples of an @Observable view model instantiated in the view as an @State variable, but in fact this is not needed, as that is addressed behind the scenes for all properties of an @Observable type.
At least, that is my understanding now, but I am learning Swift coming from a PowerShell background so I question my understanding a lot.
                    
                  
                
              
                
              
              
                
Topic: App & System Services. SubTopic: Processes & Concurrency. Tags: Concurrency, Swift, SwiftUI, Combine.
                    
                      Hello,
I have these two errors in this particular block of code: Capture of 'self' with non-sendable type 'MusicPlayer?' in a @Sendable closure and Capture of 'localUpdateProgress' with non-sendable type '(Double, Double) -> Void' in a @Sendable closure
@MainActor
func startProgressTimer(updateProgress: @escaping (Double, Double) -> Void) {
    timer?.invalidate() // Stop any existing timer
    let localUpdateProgress = updateProgress
    timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
        guard let self = self,
              let audioPlayer = self.audioPlayer,
              let currentItem = audioPlayer.currentItem else {
            print("currentItem is nil or audioPlayer is unavailable")
            return
        }
        let currentTime = currentItem.currentTime().seconds
        let duration = currentItem.duration.seconds
        localUpdateProgress(currentTime, duration)
    }
}
I've tried nearly every solution and can't think of one that works. Any help is greatly appreciated :)
                    
                  
                
                    
                      Hello,
I am exploring real-time object detection, and its replacement/overlay with another shape, on live video streams for an iOS app using Core ML and Vision frameworks. My target is to achieve high-speed, real-time detection without noticeable latency, similar to what’s possible with PageFault handling and Associative Caching in OS, but applied to video processing.
Given that this requires consistent, real-time model inference, I’m curious about how well the Neural Engine or GPU can handle such tasks on A-series chips in iPhones versus M-series chips (specifically M1 Pro and possibly M4) in MacBooks. Here are a few specific points I’d like insight on:
Hardware Suitability: How feasible is it to perform real-time object detection with Core ML on the Neural Engine (i.e., can it maintain low latency)? Would the M-series chips (e.g., M1 Pro or newer) offer a tangible benefit for this type of task compared to the A-series in mobile devices? Which A- and M-series chips would be the minimum feasible recommendation for such a task?
Performance Expectations: For continuous, live video object detection, what would be the expected frame rate or latency using an optimized Core ML model? Has anyone benchmarked such applications, and is the M-series required to achieve smooth, real-time processing?
Differences Across Apple Hardware: How does performance scale between the A-series Neural Engine and M-series GPU and Neural Engine? Is the M-series vastly superior for real-time Core ML tasks like object detection on live video feeds?
If anyone has attempted live object detection on these chips, any insights on real-time performance, limitations, or optimizations would be highly appreciated.
Please refer: Apple APIs
Thank you in advance for your help!
                    
                  
                
              
                
              
              
                
Topic: Machine Learning & AI. SubTopic: Core ML. Tags: Machine Learning, Core ML, Performance, Concurrency.
          
                    
I have the following code in my ObservableObject class, and recently Xcode started giving purple-coloured runtime issues with it (probably in iOS 18):
 Issue 1: Performing I/O on the main thread can cause slow launches.  
 Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.
Here is the code:
@Published var cameraAuthorization:AVAuthorizationStatus
@Published var micAuthorization:AVAuthorizationStatus
@Published var photoLibAuthorization:PHAuthorizationStatus
@Published var locationAuthorization:CLAuthorizationStatus
var locationManager:CLLocationManager
override init() {
    // Issue 1 (Performing I/O on the main thread can cause slow launches.)
    
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video) 
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)
 //Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    
    locationAuthorization = locationManager.authorizationStatus
    
    super.init()
    
  //Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}
And also in the route-change notification handler for AVAudioSession.routeChangeNotification:
    //Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
    let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)
I wonder how checking authorization status can cause these issues. What is the fix here?
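One possible mitigation, as a hedged sketch rather than a confirmed fix (it also leaves the CLLocationManager part out, since that object is usually tied to the main thread): keep the initializer cheap and query the authorization statuses from a background task after construction, publishing the results back on the main actor. PermissionsModel and refreshAuthorizationStatuses() are hypothetical names.

import AVFoundation
import Combine
import Photos

@MainActor
final class PermissionsModel: ObservableObject {
    @Published var cameraAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var micAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var photoLibAuthorization: PHAuthorizationStatus = .notDetermined

    /// Query the statuses off the main thread, then publish on the main actor.
    func refreshAuthorizationStatuses() async {
        let (camera, mic, photos) = await Task.detached(priority: .utility) {
            (AVCaptureDevice.authorizationStatus(for: .video),
             AVCaptureDevice.authorizationStatus(for: .audio),
             PHPhotoLibrary.authorizationStatus(for: .addOnly))
        }.value
        cameraAuthorization = camera
        micAuthorization = mic
        photoLibAuthorization = photos
    }
}

A view could then call it from a .task modifier, for example .task { await model.refreshAuthorizationStatuses() }, so the statuses arrive shortly after launch without blocking the first frame.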
                    
                  
                
              
                
              
              
                
Topic: Media Technologies. SubTopic: Photos & Camera. Tags: Core Location, AVFoundation, Concurrency, Xcode Sanitizers and Runtime Issues.
          
                    
                      Hi, I am stuck moving one of my projects from Xcode 15 to Xcode 16. This is a SwiftUI application that uses in some places classic Threads and locks/conditions for synchronization. I was hoping that in Swift 5 mode, I could compile and run this app also with Xcode 16 so that I can start migrating it towards Swift 6.
Unfortunately, my application crashes with EXC_BREAKPOINT (code=1, subcode=0x1800eb31c) whenever some blocking operation, e.g. condition.wait() or DispatchQueue.main.sync { ... }, is invoked from within the same module (I haven't seen this happen in frameworks that use the same code but are linked in dynamically). I have copied an abstraction below that I am using, to give an example of the kind of code I am talking about. I have verified that Swift 5 is used, "strict concurrency checking" is set to "minimal", etc.
I have not found a workaround and thus, I'm curious to hear if others were facing similar challenges? Any hints on how to proceed are welcome.
Thanks,
Matthias
Example abstraction that is used in my app. It's needed because I have synchronous computations that require a large stack. It's crashing whenever condition.wait() is executed.
import Foundation

public final class TaskSerializer: Thread {
  
  /// Condition to synchronize access to `tasks`.
  private let condition = NSCondition()
  
  /// The tasks queue.
  private var tasks = [() -> Void]()
  
  public init(stackSize: Int = 8_388_608, qos: QualityOfService = .userInitiated) {
    super.init()
    self.stackSize = stackSize
    self.qualityOfService = qos
  }
  
  /// Don't call directly; this is the main method of the serializer thread.
  public override func main() {
    super.main()
    while !self.isCancelled {
      self.condition.lock()
      while self.tasks.isEmpty {
        if self.isCancelled {
          self.condition.unlock()
          return
        }
        self.condition.wait()
      }
      let task = self.tasks.removeFirst()
      self.condition.unlock()
      task()
    }
    self.condition.lock()
    self.tasks.removeAll()
    self.condition.unlock()
  }
  
  /// Schedule a task in this serializer thread.
  public func schedule(task: @escaping () -> Void) {
    self.condition.lock()
    self.tasks.append(task)
    self.condition.signal()
    self.condition.unlock()
  }
}
                    
                  
                
                    
                      Crash occurs in @MainActor class or function in iOS 14
Apps built and distributed with Xcode 16 using Swift 6 crash on iOS 14 devices.
We create a static library and include it in our app.
The crash occurs in all classes or functions of the static library that are marked @MainActor.
It does not occur on iOS / iPadOS 15 or later.
If you change the minimum supported version of the static library to iOS 11, a crash occurs, and if you change it to iOS 14, a crash does not occur.
Is there a way to keep the minimum version of the static library at iOS 11 and prevent crashes?
                    
                  
                
                    
I have the following TaskExecutor code in Swift 6 and am getting the following error:
//Error
Passing closure as a sending parameter risks causing data races between main actor-isolated code and concurrent execution of the closure.
May I know what is the best way to approach this?
This is the default code generated by Xcode when creating a Vision Pro App using Metal as the Immersive Renderer.
Renderer
@MainActor
static func startRenderLoop(_ layerRenderer: LayerRenderer, appModel: AppModel) {
    Task(executorPreference: RendererTaskExecutor.shared) { //Error
        
        let renderer = Renderer(layerRenderer, appModel: appModel)
        await renderer.startARSession()
        await renderer.renderLoop()
         
    }
}
final class RendererTaskExecutor: TaskExecutor {
    private let queue = DispatchQueue(label: "RenderThreadQueue", qos: .userInteractive)

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedTaskExecutor {
        return UnownedTaskExecutor(ordinary: self)
    }

    static let shared: RendererTaskExecutor = RendererTaskExecutor()
}