Swift Concurrency Resources:
DevForums tags: Concurrency
The Swift Programming Language > Concurrency documentation
Migrating to Swift 6 documentation
WWDC 2022 Session 110351 Eliminate data races using Swift Concurrency — This ‘sailing on the sea of concurrency’ talk is a great introduction to the fundamentals.
WWDC 2021 Session 10134 Explore structured concurrency in Swift — The table that starts rolling out at around 25:45 is really helpful.
Swift Async Algorithms package
Swift Concurrency Proposal Index DevForum post
Why is flow control important? DevForums post
Matt Massicotte’s blog
Dispatch Resources:
DevForums tags: Dispatch
Dispatch documentation — Note that the Swift API and C API, while generally aligned, are different in many details. Make sure you select the right language at the top of the page.
Dispatch man pages — While the standard Dispatch documentation is good, you can still find some great tidbits in the man pages. See Reading UNIX Manual Pages. Start by reading dispatch in section 3.
WWDC 2015 Session 718 Building Responsive and Efficient Apps with GCD [1]
WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage [1]
Avoid Dispatch Global Concurrent Queues DevForums post
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] These videos may or may not be available from Apple. If not, the URL should help you locate other sources of this info.
Concurrency
Concurrency is the notion of multiple things happening at the same time.
Posts under Concurrency tag
I am trying to migrate a WatchConnectivity app to Swift 6, and I found an issue with my replyHandler callback for sendMessageData.
I am wrapping sendMessageData in withCheckedThrowingContinuation, so that I can await the response of the reply. I then update a Main Actor ObservableObject that keeps track of the count of connections that have not replied yet, before returning the data using continuation.resume.
...
@preconcurrency import WatchConnectivity

actor ConnectivityManager: NSObject, WCSessionDelegate {
    private var session: WCSession = .default
    private let connectivityMetaInfoManager: ConnectivityMetaInfoManager
    ...

    private func sendMessageData(_ data: Data) async throws -> Data? {
        Logger.shared.debug("called on Thread \(Thread.current)")
        await connectivityMetaInfoManager.increaseOpenSendConnectionsCount()
        return try await withCheckedThrowingContinuation { continuation in
            self.session.sendMessageData(
                data,
                replyHandler: { data in
                    Task {
                        await self.connectivityMetaInfoManager
                            .decreaseOpenSendConnectionsCount()
                    }
                    continuation.resume(returning: data)
                },
                errorHandler: { error in
                    Task {
                        await self.connectivityMetaInfoManager
                            .decreaseOpenSendConnectionsCount()
                    }
                    continuation.resume(throwing: error)
                }
            )
        }
    }
Calling sendMessageData somehow causes the app to crash with the debug message: Incorrect actor executor assumption.
The code runs fine in Swift 5 with SWIFT_STRICT_CONCURRENCY = complete.
However, when I switch to Swift 6, the code crashes.
I rebuilt a simple version of the app, adding bit by bit until I was able to reproduce the crash.
See Broken App
Awaiting sendMessageData wrapped in a Task, and adding the @Sendable attribute to the continuation closures, solves the crash.
See Fixed App
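For reference, here is roughly what that fix looks like as I read the description; a sketch, not the exact code from the linked project:

    // The continuation closures are marked @Sendable, so resuming is
    // no longer assumed to happen on the actor's executor.
    return try await withCheckedThrowingContinuation { continuation in
        self.session.sendMessageData(
            data,
            replyHandler: { @Sendable data in
                Task { await self.connectivityMetaInfoManager.decreaseOpenSendConnectionsCount() }
                continuation.resume(returning: data)
            },
            errorHandler: { @Sendable error in
                Task { await self.connectivityMetaInfoManager.decreaseOpenSendConnectionsCount() }
                continuation.resume(throwing: error)
            }
        )
    }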
But I do not understand why yet.
Is this intended behaviour?
Should the compiler warn you about this?
Is it a WatchConnectivity issue?
I initially posted on forums.swift.org, but was told to repost here.
Hello,
I recently published an app that uses SwiftData as its primary data storage. The app uses concurrency, background threads, async/await, and BLE communication.
Sadly, my app incurs many fringe crashes involving EXC_BAD_ACCESS, KERN_INVALID_ADDRESS, EXC_BREAKPOINT, etc.
I followed these guidelines (sketched in code below):
One ModelContainer that is stored as a global variable and used throughout.
ModelContexts are created separately for each task, changes are saved manually, and models are not passed around.
Threads with different ModelContexts might manipulate and/or read the same data simultaneously.
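In code, that setup looks roughly like the sketch below; Item and its touched property are hypothetical stand-ins:

    import Foundation
    import SwiftData

    // One container, global; a fresh context per task; manual saves.
    let sharedContainer = try! ModelContainer(for: Item.self) // hypothetical model

    func updateItems() async throws {
        let context = ModelContext(sharedContainer)
        let items = try context.fetch(FetchDescriptor<Item>())
        for item in items {
            item.touched = Date() // hypothetical property
        }
        try context.save()
    }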
I was under the impression this meets the usage requirements.
I suspect the issue lies in my usage of contexts within a single async function, which might be suspended and resumed on a different thread (although on the same execution path). Is that the case? If so, how should SwiftData be used in async scopes?
Is there anything else particularly wrong in my approach?
Given the code below, built with the Swift 6 language mode in Xcode 16.2:
If running on iOS 18+: the app crashes due to _dispatch_assert_queue_fail.
If running on iOS 17 and below: there is a warning: data race detected: @MainActor function at Swift6Playground/PublishedValuesView.swift:12 was not called on the main thread.
Could anyone please help explain what's wrong here?
import SwiftUI
import Combine

@MainActor
class PublishedValuesViewModel: ObservableObject {
    @Published var count = 0
    @Published var content: String = "NA"
    private var cancellables: Set<AnyCancellable> = []

    func start() async {
        let publisher = $count
            .map { String(describing: $0) }
            .removeDuplicates()
        for await value in publisher.values {
            content = value
        }
    }
}

struct PublishedValuesView: View {
    @ObservedObject var viewModel: PublishedValuesViewModel

    var body: some View {
        Text("Published Values: \(viewModel.content)")
            .task {
                await viewModel.start()
            }
    }
}
I have an issue where a very specific configuration of .overlay, withAnimation, and a bindable state can freeze the app when the state changes.
I've isolated the problematic source code into a sample project that demonstrates the issue:
https://github.com/katagaki/IcyOverlay
Steps to Reproduce
To reproduce the issue, tap the 'Simulate Content Load' button.
Once the progress bar has completed, a switch is toggled to hide the progress view, which causes the overlay to disappear and the app to freeze.
Any help and/or advice will be appreciated!
Development Environment
Xcode Version 16.2 (16C5032a), macOS 15.2 (24C101)
iOS SDK: 18.2 (22C146), Simulator: 18.2 (22C150)
Hello,
I'm currently migrating my app's location service to use the new CLLocationUpdate.Updates.
I'm trying to understand what can fail in this AsyncSequence. Based on the previous CLError values, I thought authorization was one of them, for example, but it turns out that this is handled by CLLocationUpdate itself, where we can check different properties.
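For what it's worth, my mental model of that API is the sketch below; authorizationDenied is one of the iOS 18 diagnostic properties, so treat the property names as assumptions:

    import CoreLocation

    func monitorLocation() async throws {
        for try await update in CLLocationUpdate.liveUpdates() {
            if update.authorizationDenied {
                // Authorization problems surface as properties on the
                // update, not as thrown errors.
                continue
            }
            if let location = update.location {
                print("New location: \(location)")
            }
        }
    }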
So, is there a list of errors available somewhere?
Thanks
Axel, @alpennec
Hey everyone,
I’m learning async/await and trying to fetch an image from a URL off the main thread to avoid overloading it, while updating the UI afterward. Before starting the fetch, I want to show a loading indicator (UI-related work). I’ve implemented this in two different ways using Task and Task.detached, and I have some doubts:
Is using Task { @MainActor in ... } the better approach?
I added @MainActor because, after await, the resumed execution might not return to the Task's original actor. Is this the right way to ensure UI updates are done safely?
Does calling fetchImage() on @MainActor force it to run entirely on the main thread?
I used an async data fetch function (not explicitly marked with any actor). If I were to use a completion handler instead, would the function run on the main thread?
Is using Task.detached overkill here?
I tried Task.detached to ensure the fetch runs on a non-main actor. However, it seems to involve unnecessary actor hopping since I still need to hop back to the main actor for UI updates. Is there any scenario where Task.detached would be a better fit?
class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // MARK: First approach
        Task { @MainActor in
            showLoading()
            let image = try? await fetchImage() // Will the image fetch happen on the main thread?
            updateImageView(image: image)
            hideLoading()
        }

        // MARK: Second approach
        Task { @MainActor in
            showLoading()
            let detachedTask = Task.detached {
                try await self.fetchImage()
            }
            updateImageView(image: try? await detachedTask.value)
            hideLoading()
        }
    }

    func fetchImage() async throws -> UIImage {
        let url = URL(string: "https://via.placeholder.com/600x400.png?text=Example+Image")!
        // Async data function call
        let (data, response) = try await URLSession.shared.data(from: url)
        guard let httpResponse = response as? HTTPURLResponse, httpResponse.statusCode == 200 else {
            throw URLError(.badServerResponse)
        }
        guard let image = UIImage(data: data) else {
            throw URLError(.cannotDecodeContentData)
        }
        return image
    }

    func showLoading() {
        // Show loader handling
    }

    func hideLoading() {
        // Hides the loader
    }

    func updateImageView(image: UIImage?) {
        // Image view updated
    }
}
Hello. I am rewriting how our app stores data into Core Data, so it can be done concurrently.
The solution I opted for is to have a singleton actor that takes an API model, and maps it to a Core Data object and saves it.
For example, to store an API order model, I have something like this:
func store(
    order apiOrder: APIOrder,
    currentContext: NSManagedObjectContext?
) -> NSManagedObjectID? {
    let context = currentContext ?? self.persistentContainer.newBackgroundContext()
    // …
}
In the arguments, there is a context you can pass, in case you need to create additional models and relate them to each other. I am not sure this is how you're supposed to do it, but it seemed to work.
From what I've understood of Core Data and using multiple contexts, the appropriate way to use them is with context.perform or context.performAndWait.
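For readers following along, the basic shape of that, outside of any actor, is sketched below; Order is a hypothetical entity:

    // Do the Core Data work on the context's own queue and save there.
    let context = persistentContainer.newBackgroundContext()
    try await context.perform {
        let order = Order(context: context) // hypothetical entity
        try context.save()
    }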
However, since my storage helper is an actor, @globalActor actor Storage2 { … }, my storage's methods are actor-isolated.
This gives me warnings/errors in Swift 6 when I try to pass the context to another of my actor's methods.
let context = …
return context.performAndWait {
    // …
    if let apiBooking = apiOrder.booking {
        self.store(booking: apiBooking, context: context)
        /* causes warning:
           Sending 'context' risks causing data races; this is an error in the Swift 6 language mode
           'self'-isolated 'context' is captured by a actor-isolated closure. actor-isolated uses in closure may race against later nonisolated uses
           Access can happen concurrently
         */
    }
    // …
}
From what I understand, this is because my methods are actor-isolated, but the closure of performAndWait does not execute in a thread-safe environment.
With all this, what are my options? I've extracted store(departure:context:) into its own method to avoid duplicated code, but since I can't call it from within performAndWait, I am not sure what to do.
Can I ditch the performAndWait? Removing that makes the warning "go away", but I don't feel confident enough with Core Data to know the answer.
I would love to get any feedback on this, hoping to learn!
Hello, I have been implementing Face ID authentication using LocalAuthentication, and I've noticed that this code compiles in Swift 5, but when I change to Swift 6 it fails with a compile error:
I created this project just to reproduce this error, so this is my entire codebase:
import LocalAuthentication
import SwiftUI

struct ContentView: View {
    @State private var isSuccess: Bool = false

    var body: some View {
        VStack {
            if isSuccess {
                Text("Succed")
            } else {
                Text("not succeed")
            }
        }
        .onAppear(perform: authenticate)
    }

    func authenticate() {
        let context = LAContext()
        var error: NSError?
        if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
            let reason = "We need to your face to open the app"
            context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, localizedReason: reason) { sucexd, error in
                if sucexd {
                    let success = sucexd
                    Task { @MainActor [success] in
                        isSuccess = success
                    }
                } else {
                    print(error?.localizedDescription as Any)
                }
            }
        } else {
            print(error as Any)
        }
    }
}

#Preview {
    ContentView()
}
I have also tried not using the Task block, and it gives me the same error. I think it could be something about LAContext being an NSObject that is not yet adapted for Swift 6 concurrency?
I also tried setting strict concurrency checking to minimal, but it's the same error.
I'm using Xcode 16.1 (16B40) on an M1 Mac running macOS Sequoia 15.0.1.
Help.
One challenging aspect of Swift concurrency is flow control, aka backpressure. I was explaining this to someone today and thought it better to post that explanation here, for the benefit of all.
If you have questions or comments, start a new thread in App & System Services > Processes & Concurrency and tag with Swift and Concurrency.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Why is flow control important?
In Swift concurrency you often want to model data flows using AsyncSequence. However, that’s not without its challenges. A key issue is flow control, aka backpressure.
Imagine you have a network connection with a requests property that returns an AsyncSequence of Request values. The core of your networking code might be a loop like this:
func processRequests(connection: Connection) async throws {
    for try await request in connection.requests {
        let response = responseForRequest(request)
        try await connection.reply(with: response)
    }
}
Flow control is important in both the inbound and outbound cases. Let’s start with the inbound case.
If the remote peer is generating requests very quickly, the network is fast, and responseForRequest(_:) is slow, it’s easy to fall foul of unbounded memory growth. For example, if you use AsyncStream to implement the requests property, its default buffering policy is .unbounded. So the code receiving requests from the connection will continue to receive them, buffering them in the async stream, without any bound. In the worst case scenario that might run your process out of memory. In a more typical scenario it might result in a huge memory spike.
The outbound case is similar. Imagine that the remote peer keeps sending requests but stops receiving them. If the reply(with:) method isn’t implemented correctly, this might also result in unbounded memory growth.
The solution to this problem is flow control. This flow control operates independently on the send and receive side:
On the send side, the code sending responses should notice that the network connection has asserted flow control and stop sending responses until that flow control lifts. In an async method, like the reply(with:) example shown above, it can simply not return until the network connection has space to accept the reply.
On the receive side, the code receiving requests from the connection should monitor how many are buffered. If that gets too big, it should stop receiving. That causes the requests to pile up in the connection itself. If the network connection implements flow control properly [1], this will propagate to the remote peer, which should stop generating requests.
[1] TCP and QUIC both implement flow control. Use them! If you’re tempted to implement your own protocol directly on top of UDP, consider how it should handle flow control.
Flow control and Network framework
Network framework has built-in support for flow control. On the send side, it uses a ‘push’ model. When you call send(content:contentContext:isComplete:completion:) the connection buffers the message. However, it only calls the completion handler when it’s passed that message to the network for transmission [2]. If you send a message and don’t receive this completion callback, it’s time to stop sending more messages.
On the receive side, Network framework uses a ‘pull’ model. The receiver calls a receive method, like receiveMessage(completion:), which calls a completion handler when there’s a message available. If you’ve already buffered too many messages, just stop calling this receive method.
These techniques are readily adaptable to Swift concurrency using Swift’s CheckedContinuation type. That works for both send and receive, but there’s a wrinkle. If you want to model receive as an AsyncSequence, you can’t use AsyncStream. That’s because AsyncStream doesn’t support flow control. So, you’ll need to come up with your own AsyncSequence implementation [3].
[2] Note that this doesn’t mean that the data has made it to the remote peer, or has even been sent on the wire. Rather, it says that Network framework has successfully passed the data to the transport protocol implementation, which is then responsible for getting it to the remote peer.
[3] There’s been a lot of discussion on Swift Evolution about providing such an implementation but none of that has come to fruition yet. Specifically:
The Swift Async Algorithms package provides AsyncChannel, but my understanding is that this is not yet ready for prime time.
I believe that the SwiftNIO folks have their own infrastructure for this. They’re driving this effort to build such support into Swift Async Algorithms.
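To make the receive side concrete, here is a minimal sketch of that CheckedContinuation technique for a single receive (the receiveOneMessage name is mine; a full AsyncSequence needs more machinery, as noted above):

    import Network

    extension NWConnection {
        // One async receive; flow control falls out naturally because
        // the next receive doesn't start until this is called again.
        func receiveOneMessage() async throws -> (Data?, NWConnection.ContentContext?) {
            try await withCheckedThrowingContinuation { continuation in
                self.receiveMessage { content, context, isComplete, error in
                    if let error {
                        continuation.resume(throwing: error)
                    } else {
                        continuation.resume(returning: (content, context))
                    }
                }
            }
        }
    }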
Avoid the need for flow control
In some cases you can change your design to avoid the need for flow control. Imagine that your UI needs to show the state of a remote button. The network connection sends you a message every time the button is depressed or released. However, your UI only cares about the current state.
If you forward every message from the network to your UI, you have to worry about flow control. To eliminate that worry:
Have your networking code translate the message to reflect the current state.
Use AsyncStream with a buffering policy of .bufferingNewest(1).
That way there’s only ever one value in the stream and, if the UI code is slow for some reason, while it might miss some transitions, it always knows about the latest state.
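A sketch of that pattern, with ButtonState standing in for your own state type:

    enum ButtonState { case up, down }

    let (states, continuation) = AsyncStream.makeStream(
        of: ButtonState.self,
        bufferingPolicy: .bufferingNewest(1)
    )

    // Networking side: each yield overwrites the single buffered value.
    continuation.yield(.down)

    // UI side: may miss transitions, but always sees the latest state.
    Task { @MainActor in
        for await state in states {
            print("Button is now \(state)")
        }
    }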
2024-12-13 Added a link to the MultiProducerSingleConsumerChannel PR.
2024-12-10 First posted.
I have this actor:
actor ConcurrentDatabase: ModelActor {
    nonisolated let modelExecutor: any ModelExecutor
    nonisolated let modelContainer: ModelContainer

    init(modelContainer: ModelContainer) {
        self.modelExecutor = DefaultSerialModelExecutor(modelContext: ModelContext(modelContainer))
        self.modelContainer = modelContainer
    }

    /// Save pending changes in the model context.
    private func save() {
        if self.modelContext.hasChanges {
            do {
                try self.modelContext.save()
            } catch {
                ...
            }
        }
    }
}
I am getting a runtime crash on:
try self.modelContext.save()
when trying to insert something into the database and save it:
Thread 1: Fatal error: Incorrect actor executor assumption; Expected same executor as MainActor.
Can anyone explain why this is happening?
I have the following code in my ObservableObject class, and recently Xcode started giving purple-colored runtime issues with it (probably in iOS 18):
Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.
Here is the code:
@Published var cameraAuthorization: AVAuthorizationStatus
@Published var micAuthorization: AVAuthorizationStatus
@Published var photoLibAuthorization: PHAuthorizationStatus
@Published var locationAuthorization: CLAuthorizationStatus
var locationManager: CLLocationManager

override init() {
    // Issue 1: Performing I/O on the main thread can cause slow launches.
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)

    // Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    locationAuthorization = locationManager.authorizationStatus

    super.init()

    // Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}
And also in the route-change notification handler for AVAudioSession.routeChangeNotification:
// Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)
I wonder how checking authorization status can cause these issues? What is the fix here?
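One possible mitigation, sketched below as an assumption rather than a confirmed fix: start from placeholder values and read the real statuses off the main thread, accepting briefly stale values at launch. PermissionsModel and the property subset are stand-ins for your class.

    import AVFoundation
    import Photos

    @MainActor
    final class PermissionsModel: ObservableObject {
        @Published var cameraAuthorization: AVAuthorizationStatus = .notDetermined
        @Published var photoLibAuthorization: PHAuthorizationStatus = .notDetermined

        init() {
            Task.detached { [weak self] in
                // The potentially slow I/O and IPC now happen off the main thread.
                let camera = AVCaptureDevice.authorizationStatus(for: .video)
                let photos = PHPhotoLibrary.authorizationStatus(for: .addOnly)
                await MainActor.run {
                    self?.cameraAuthorization = camera
                    self?.photoLibAuthorization = photos
                }
            }
        }
    }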
I have the following TaskExecutor code in Swift 6 and am getting the following error:
//Error
Passing closure as a sending parameter risks causing data races between main actor-isolated code and concurrent execution of the closure.
May I know what is the best way to approach this?
This is the default code generated by Xcode when creating a Vision Pro App using Metal as the Immersive Renderer.
Renderer
@MainActor
static func startRenderLoop(_ layerRenderer: LayerRenderer, appModel: AppModel) {
    Task(executorPreference: RendererTaskExecutor.shared) { // Error
        let renderer = Renderer(layerRenderer, appModel: appModel)
        await renderer.startARSession()
        await renderer.renderLoop()
    }
}

final class RendererTaskExecutor: TaskExecutor {
    private let queue = DispatchQueue(label: "RenderThreadQueue", qos: .userInteractive)

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedTaskExecutor {
        return UnownedTaskExecutor(ordinary: self)
    }

    static let shared: RendererTaskExecutor = RendererTaskExecutor()
}
Hello,
I’ve encountered a warning while working with UITableViewDiffableDataSource. Here’s the exact message:
Warning: applying updates in a non-thread confined manner is dangerous and can lead to deadlocks. Please always submit updates either always on the main queue or always off the main queue - view=<UITableView: 0x7fd79192e200; frame = (0 0; 375 667); clipsToBounds = YES; autoresize = W+H; gestureRecognizers = <NSArray: 0x600003f3c9f0>; backgroundColor = <UIDynamicProviderColor: 0x60000319bf80; provider = <NSMallocBlock: 0x600003f0ce70>>; layer = <CALayer: 0x6000036e8fa0>; contentOffset: {0, -116}; contentSize: {375, 20}; adjustedContentInset: {116, 0, 49, 0}; dataSource: <TtGC5UIKit29UITableViewDiffableDataSourceOC17ArticleManagement21DiscardItemsViewModel17SectionIdentifierSS: 0x600003228270>>
OS: iOS 17+,
Xcode Version: 16.0,
Frameworks: UIKit, Diffable Data Source,
View: UITableView used with a UITableViewDiffableDataSource.
Steps to Reproduce:
Using a diffable data source with a table view.
Applying snapshot updates to the data source from the main thread.
Warning occurs intermittently during snapshot application.
Expected Behavior:
The snapshot should apply without warnings, provided the updates are on the main thread.
Actual Behavior:
The warning suggests thread safety issues when applying updates on non-thread-confined queues.
Questions:
Is there a recommended best practice to handle apply calls in diffable data sources with thread safety in mind?
Could this lead to potential deadlocks if not addressed?
Note: I confirm I am always reloading/reconfiguring the data source on the main thread. Please find the attached screenshots for reference.
Any guidance or clarification would be greatly appreciated!
Crash occurs in @MainActor class or function in iOS 14
Apps built and distributed with Xcode 16 in the Swift 6 language mode crash on iOS 14 devices.
We build a static library and include it in our app.
The crash occurs in all classes and functions of the static library that are marked @MainActor.
It does not occur on iOS/iPadOS 15 or later.
If we set the minimum supported version of the static library to iOS 11, the crash occurs; if we set it to iOS 14, it does not.
Is there a way to keep the minimum version of the static library at iOS 11 and prevent crashes?
Hello,
I have these two errors in this particular block of code: Capture of 'self' with non-sendable type 'MusicPlayer?' in a @Sendable closure, and Capture of 'localUpdateProgress' with non-sendable type '(Double, Double) -> Void' in a @Sendable closure.
@MainActor
func startProgressTimer(updateProgress: @escaping (Double, Double) -> Void) {
    timer?.invalidate() // Stop any existing timer
    let localUpdateProgress = updateProgress
    timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
        guard let self = self,
              let audioPlayer = self.audioPlayer,
              let currentItem = audioPlayer.currentItem else {
            print("currentItem is nil or audioPlayer is unavailable")
            return
        }
        let currentTime = currentItem.currentTime().seconds
        let duration = currentItem.duration.seconds
        localUpdateProgress(currentTime, duration)
    }
}
I've tried nearly every solution and can't think of one that works. Any help is greatly appreciated :)
Hi, I am stuck moving one of my projects from Xcode 15 to Xcode 16. This is a SwiftUI application that in some places uses classic threads and locks/conditions for synchronization. I was hoping that in Swift 5 mode I could compile and run this app with Xcode 16 as well, so that I can start migrating it towards Swift 6.
Unfortunately, my application crashes via EXC_BREAKPOINT (code=1, subcode=0x1800eb31c) whenever some blocking operation, e.g. condition.wait() or DispatchQueue.main.sync { ... }, is invoked from within the same module (I haven't seen this happen for frameworks that use the same code but are linked in dynamically). I have copied an abstraction below that I am using, to give an example of the kind of code I am talking about. I have verified that Swift 5 is used, "strict concurrency checking" is set to "minimal", etc.
I have not found a workaround and thus, I'm curious to hear if others were facing similar challenges? Any hints on how to proceed are welcome.
Thanks,
Matthias
Example abstraction that is used in my app. It's needed because I have synchronous computations that require a large stack. It's crashing whenever condition.wait() is executed.
public final class TaskSerializer: Thread {
    /// Condition to synchronize access to `tasks`.
    private let condition = NSCondition()

    /// The tasks queue.
    private var tasks = [() -> Void]()

    public init(stackSize: Int = 8_388_608, qos: QualityOfService = .userInitiated) {
        super.init()
        self.stackSize = stackSize
        self.qualityOfService = qos
    }

    /// Don't call directly; this is the main method of the serializer thread.
    public override func main() {
        super.main()
        while !self.isCancelled {
            self.condition.lock()
            while self.tasks.isEmpty {
                if self.isCancelled {
                    self.condition.unlock()
                    return
                }
                self.condition.wait()
            }
            let task = self.tasks.removeFirst()
            self.condition.unlock()
            task()
        }
        self.condition.lock()
        self.tasks.removeAll()
        self.condition.unlock()
    }

    /// Schedule a task in this serializer thread.
    public func schedule(task: @escaping () -> Void) {
        self.condition.lock()
        self.tasks.append(task)
        self.condition.signal()
        self.condition.unlock()
    }
}
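For context, usage of this abstraction presumably looks like the sketch below; being a Thread subclass, the serializer has to be started explicitly, and deeplyRecursiveComputation is a hypothetical stand-in for the large-stack work:

    let serializer = TaskSerializer()
    serializer.start() // kicks off main(), which drains the task queue

    serializer.schedule {
        // Runs on the serializer thread, with its large stack.
        let result = deeplyRecursiveComputation() // hypothetical
        print(result)
    }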
Hi, I'm trying to modify the ScreenCaptureKit Sample code by implementing an actor for Metal rendering, but I'm experiencing issues with frame rendering sequence.
My app workflow is:
ScreenCapture -> createFrame -> setRenderData
Metal draw callback -> renderAsync (getData from renderData)
I've added timestamps to verify frame ordering, and I'm also using a binary search to insert frames by timestamp. While the timestamps appear to be in sequence, the actual rendering output seems out of order.
// ScreenCaptureKit sample
func createFrame(for sampleBuffer: CMSampleBuffer) async {
    if let surface: IOSurface = getIOSurface(for: sampleBuffer) {
        await renderer.setRenderData(surface, timeStamp: sampleBuffer.presentationTimeStamp.seconds)
    }
}
class Renderer {
...
func setRenderData(_ surface: IOSurface, timeStamp: Double) async { // unlabeled first parameter, matching the call above
_ = await renderSemaphore.getSetBuffers(
isGet: false,
surface: surface,
timeStamp: timeStamp
)
}
func draw(in view: MTKView) {
Task {
await renderAsync(view)
}
}
func renderAsync(_ view: MTKView) async {
guard await renderSemaphore.beginRender() else { return }
guard let frame = await renderSemaphore.getSetBuffers(
isGet: true, surface: nil, timeStamp: nil
) else {
await renderSemaphore.endRender()
return }
guard let texture = await renderSemaphore.getRenderData(
device: self.device,
surface: frame.surface) else {
await renderSemaphore.endRender()
return
}
guard let commandBuffer = _commandQueue.makeCommandBuffer(),
let renderPassDescriptor = await view.currentRenderPassDescriptor,
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
await renderSemaphore.endRender()
return
}
// Shaders ..
renderEncoder.endEncoding()
commandBuffer.addCompletedHandler() { @Sendable (_ commandBuffer)-> Swift.Void in
updateFPS()
}
// commit frame in actor
let success = await renderSemaphore.commitFrame(
timeStamp: frame.timeStamp,
commandBuffer: commandBuffer,
drawable: view.currentDrawable!
)
if !success {
print("Frame dropped due to out-of-order timestamp")
}
await renderSemaphore.endRender()
}
}
actor RenderSemaphore {
private var frameBuffers: [FrameData] = []
private var lastReadTimeStamp: Double = 0.0
private var lastCommittedTimeStamp: Double = 0
private var activeTaskCount = 0
private var activeRenderCount = 0
private let maxTasks = 3
private var textureCache: CVMetalTextureCache?
init() {
}
func initTextureCache(device: MTLDevice) {
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &self.textureCache)
}
func beginRender() -> Bool {
guard activeRenderCount < maxTasks else { return false }
activeRenderCount += 1
return true
}
func endRender() {
if activeRenderCount > 0 {
activeRenderCount -= 1
}
}
private var isTextureLoaded = false // assumed backing state for the setter below

func setTextureLoaded(_ loaded: Bool) {
    isTextureLoaded = loaded
}
func getSetBuffers(isGet: Bool, surface: IOSurface?, timeStamp: Double?) -> FrameData? {
if isGet {
if !frameBuffers.isEmpty {
let frame = frameBuffers.removeFirst()
if frame.timeStamp > lastReadTimeStamp {
lastReadTimeStamp = frame.timeStamp
print(frame.timeStamp)
return frame
}
}
return nil
} else {
// Set
let frameData = FrameData(
surface: surface!,
timeStamp: timeStamp!
)
// insert to the right position
let insertIndex = binarySearch(for: timeStamp!)
frameBuffers.insert(frameData, at: insertIndex)
return frameData
}
}
private func binarySearch(for timeStamp: Double) -> Int {
var left = 0
var right = frameBuffers.count
while left < right {
let mid = (left + right) / 2
if frameBuffers[mid].timeStamp > timeStamp {
right = mid
} else {
left = mid + 1
}
}
return left
}
// for setRenderDataNormalized
func tryEnterTask() -> Bool {
guard activeTaskCount < maxTasks else { return false }
activeTaskCount += 1
return true
}
func exitTask() {
activeTaskCount -= 1
}
func commitFrame(timeStamp: Double,
commandBuffer: MTLCommandBuffer,
drawable: MTLDrawable) async -> Bool {
guard timeStamp > lastCommittedTimeStamp else {
print("Drop frame at commit: \(timeStamp) <= \(lastCommittedTimeStamp)")
return false
}
commandBuffer.present(drawable)
commandBuffer.commit()
lastCommittedTimeStamp = timeStamp
return true
}
func getRenderData(
device: MTLDevice,
surface: IOSurface,
depthData: [Float]
) -> (MTLTexture, MTLBuffer)? {
let _textureName = "RenderData"
var px: Unmanaged<CVPixelBuffer>?
let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &px)
guard status == kCVReturnSuccess, let screenImage = px?.takeRetainedValue() else {
return nil
}
CVMetalTextureCacheFlush(textureCache!, 0)
var texture: CVMetalTexture? = nil
let width = CVPixelBufferGetWidthOfPlane(screenImage, 0)
let height = CVPixelBufferGetHeightOfPlane(screenImage, 0)
let result2 = CVMetalTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
self.textureCache!,
screenImage,
nil,
MTLPixelFormat.bgra8Unorm,
width,
height,
0, &texture)
guard result2 == kCVReturnSuccess,
let cvTexture = texture,
let mtlTexture = CVMetalTextureGetTexture(cvTexture) else {
return nil
}
mtlTexture.label = _textureName
let depthBuffer = device.makeBuffer(bytes: depthData, length: depthData.count * MemoryLayout<Float>.stride)!
return (mtlTexture, depthBuffer)
}
}
Above is my code. Could someone point out what might be wrong?
I ran into a memory issue, and I don't understand why it could happen. To me, it seems like ARC doesn't guarantee thread safety.
Let's look at the code below.
@propertyWrapper
public struct AtomicCollection<T> {
    private var value: [T]
    private var lock = NSLock()

    public var wrappedValue: [T] {
        set {
            lock.lock()
            defer { lock.unlock() }
            value = newValue
        }
        get {
            lock.lock()
            defer { lock.unlock() }
            return value
        }
    }

    public init(wrappedValue: [T]) {
        self.value = wrappedValue
    }
}

final class CollectionTest: XCTestCase {
    func testExample() throws {
        let rounds = 10000
        let exp = expectation(description: "test")
        exp.expectedFulfillmentCount = rounds

        @AtomicCollection var array: [Int] = []

        for i in 0..<rounds {
            DispatchQueue.global().async {
                array.append(i)
                exp.fulfill()
            }
        }
        wait(for: [exp])
    }
}
It will crash for various reasons (see screenshots below).
I know that the test doesn't reflect typical application usage. My app is quite different from a traditional app, so the code above is just the simplest form that demonstrates the issue.
One more thing to mention: array.count won't be equal to 10,000 as expected (probably because of the copy-on-write snapshot).
So my questions are:
Is this a bug/undefined behavior/expected behavior of Swift/Obj-C ARC?
Why could this happen?
Any suggested solutions? (One sketch follows below.)
How do you usually deal with thread-safe collections (array, dictionary, set)?
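For what it's worth, one likely factor, offered as my reading rather than an official diagnosis: array.append(i) through a property wrapper is not one locked operation. It expands to a get (copy out under the lock), a mutation of the copy, and a set (write back under the lock), so concurrent appends interleave between the two halves; that both loses elements (hence the count below 10,000) and concurrently mutates the captured local wrapper, which is a race regardless of ARC. A common alternative is a reference type that holds the lock across the whole mutation, sketched here:

    import Foundation

    // A locked collection that keeps the entire read-modify-write
    // inside one critical section.
    final class LockedArray<T>: @unchecked Sendable {
        private var storage: [T] = []
        private let lock = NSLock()

        func append(_ element: T) {
            lock.lock()
            defer { lock.unlock() }
            storage.append(element)
        }

        var snapshot: [T] {
            lock.lock()
            defer { lock.unlock() }
            return storage
        }
    }

With this shape, the test's closures share one instance (let array = LockedArray<Int>()), every append is atomic, and the final snapshot.count reaches the expected 10,000.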
Hi everyone,
I’m planning to develop a cross-platform PDF viewer app for iOS and macOS that will read PDFs from local storage and cloud services (Google Drive, iCloud, WorkDrive, etc.). The app should be read-only and display both the PDF content and its metadata (author, title, creation date, etc.).
Key Features:
View PDFs: Local and remote (cloud storage integration).
Display metadata: Title, author, page count, etc.
Cloud integration: Google Drive, iCloud, Zoho WorkDrive, etc.
Read-only mode: No editing features, just viewing.
Questions:
Xcode Template: Should I use the Document App or Generic App template for this?
PDF Metadata: Any built-in libraries for extracting PDF metadata in a read-only app? (See the sketch after this list.)
Performance: Any advice or documentation on handling large PDFs or cloud fetching efficiently?
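On the metadata question, PDFKit ships on both platforms and exposes document attributes directly; a minimal sketch (the metadata(for:) helper name is mine):

    import PDFKit

    // Reads a few common attributes; documentAttributes is a loosely
    // typed dictionary, so each value needs a cast.
    func metadata(for url: URL) -> (title: String?, author: String?, pages: Int)? {
        guard let document = PDFDocument(url: url) else { return nil }
        let attributes = document.documentAttributes
        return (
            attributes?[PDFDocumentAttribute.titleAttribute] as? String,
            attributes?[PDFDocumentAttribute.authorAttribute] as? String,
            document.pageCount
        )
    }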
Thanks in advance for any advice or resources!
Hi everyone,
I'm working on integrating object recognition from live video feeds into my existing app by following Apple's sample code. My original project captures video and records it successfully. However, after integrating the Vision-based object detection components (VNCoreMLRequest), no detections occur, and the callback for the request is never triggered.
To debug this issue, I’ve added the following functionality:
Set up AVCaptureVideoDataOutput for processing video frames.
Created a VNCoreMLRequest using my Core ML model.
The video recording functionality works as expected, but no object detection happens. I’d like to know:
How to debug this further? Which key debug points or logs could help identify where the issue lies?
Have I missed any key configurations? Below is a diff of the modifications I’ve made to my project for the new feature.
Diff of Changes:
(Attach the diff provided above)
Specific Observations:
The captureOutput method is invoked correctly, but there is no output or error from the Vision request callback.
Print statements in my setup function setForVideoClassify() show that the setup executes without errors.
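One thing worth double-checking before suspecting the model, sketched below under assumed names (visionRequest stands for your stored VNCoreMLRequest): the request callback only fires if the request is actually performed for each frame via a VNImageRequestHandler inside the delegate method.

    import AVFoundation
    import Vision

    // AVCaptureVideoDataOutputSampleBufferDelegate callback.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        do {
            try handler.perform([visionRequest]) // without this, the completion never runs
        } catch {
            print("Vision perform failed: \(error)")
        }
    }

If perform(_:) is never reached, or throws, the VNCoreMLRequest completion stays silent, which would match the observations above.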
Questions:
Could this be due to issues with my Core ML model compatibility or configuration?
Is the VNCoreMLRequest setup incorrect, or do I need to ensure specific image formats for processing?
Platform:
Xcode 16.1, iOS 18.1, Swift 5, SwiftUI, iPhone 11,
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any guidance or advice is appreciated! Thanks in advance.