Relevant parts of Swift Evolution in relation to real-time audio

Would somebody mind sharing which parts/proposals of Swift Evolution I should be following, with respect to Swift one day being usable for real-time audio?


I've read through the various manifestos, but I'm struggling to see where the major changes need to occur.

I think it's the material in The Ownership Manifesto that matters most. It looks like Swift 5 will include some of the key features from that manifesto. The Core Audio team hasn't made any statements about it; maybe it's worth posting the question on the swift-evolution mailing list.
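
To make the connection concrete, here's a minimal sketch assuming the kind of features the manifesto describes, which later shipped as noncopyable types (SE-0390) and the borrowing/consuming parameter conventions (SE-0377). The `AudioBlock` type here is hypothetical, not anything Core Audio provides:

```swift
// A sketch assuming noncopyable types (SE-0390) and the
// borrowing/consuming conventions (SE-0377), Swift 5.9+.
// `AudioBlock` is a made-up illustrative type.
struct AudioBlock: ~Copyable {
    private let samples: UnsafeMutableBufferPointer<Float>

    init(frameCount: Int) {
        samples = .allocate(capacity: frameCount)
        samples.initialize(repeating: 0)
    }

    // Noncopyable structs may define deinit: deterministic cleanup,
    // with no reference counting on the audio thread.
    deinit {
        samples.deallocate()
    }

    // `borrowing` grants read access with no implicit copy and no
    // retain/release traffic, which is what a render callback needs.
    borrowing func peak() -> Float {
        samples.reduce(0) { max($0, abs($1)) }
    }
}
```

The point for audio is the guarantee: a borrowed value can be handed to the render thread without allocation, copy-on-write, or refcount operations sneaking in.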

I would also watch for any discussion of memory ordering constraints: whether read/write barrier instructions can be passed through to the hardware, whether write and read ordering around memory barriers can be specified in any way, and whether any multiprocessor cache coherency issues or options are either specifiable or observable from Swift. These might affect whether and how data structures can be passed between deterministic-latency or lock-free real-time threads.
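
To make that concrete, here's a minimal sketch of a lock-free single-producer/single-consumer ring buffer, written against the swift-atomics package (the `SPSCRingBuffer` type is hypothetical; nothing like it is promised by Swift Evolution). The acquire/release orderings on `head` and `tail` are exactly the barrier semantics in question:

```swift
import Atomics

// Hypothetical SPSC ring buffer: one non-real-time producer thread,
// one real-time consumer thread, no locks, no allocation after init.
final class SPSCRingBuffer {
    private let buffer: UnsafeMutablePointer<Float>
    private let capacity: Int
    private let head = ManagedAtomic<Int>(0) // next slot to read; written only by consumer
    private let tail = ManagedAtomic<Int>(0) // next slot to write; written only by producer

    init(capacity: Int) {
        self.capacity = capacity
        buffer = .allocate(capacity: capacity)
        buffer.initialize(repeating: 0, count: capacity)
    }

    deinit {
        buffer.deallocate()
    }

    /// Producer side (non-real-time thread).
    func push(_ value: Float) -> Bool {
        let t = tail.load(ordering: .relaxed) // only we write tail
        let next = (t + 1) % capacity
        // Acquire pairs with the consumer's release of `head`, so slots
        // the consumer has finished reading are visible before reuse.
        guard next != head.load(ordering: .acquiring) else { return false } // full
        buffer[t] = value
        // Release publishes the write to buffer[t] before the new tail.
        tail.store(next, ordering: .releasing)
        return true
    }

    /// Consumer side (real-time audio thread): never blocks, never allocates.
    func pop() -> Float? {
        let h = head.load(ordering: .relaxed) // only we write head
        // Acquire pairs with the producer's release of `tail`.
        guard h != tail.load(ordering: .acquiring) else { return nil } // empty
        let value = buffer[h]
        head.store((h + 1) % capacity, ordering: .releasing)
        return value
    }
}
```

Whether orderings like these compile down to the expected barrier instructions on a given target, and whether a real-time thread can rely on that, is precisely the kind of guarantee worth watching the evolution discussions for.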
