You don't say what the context is, but if this is something like a vertical list of controls, you could try using a horizontal UIStackView to place the UISwitch (without a label) on the left, followed by a UILabel on the right.
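A minimal sketch of that layout, assuming UIKit; the function name and spacing are illustrative, not from the original question:

```swift
import UIKit

// Build a "switch + label" row: a label-less UISwitch on the left,
// followed by a UILabel, inside a horizontal stack view.
func makeSwitchRow(title: String) -> UIStackView {
    let toggle = UISwitch()
    let label = UILabel()
    label.text = title

    let row = UIStackView(arrangedSubviews: [toggle, label])
    row.axis = .horizontal
    row.alignment = .center
    row.spacing = 8
    return row
}
```

Rows built this way can then be placed inside a vertical stack view to form the list.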
No, this isn't safe, but not because of @Published. Your property is mutable, which means that two different threads with references to the same instance could modify the property simultaneously. That's a potentially catastrophic failure, in general.
In this case, @Published is a code smell, though. It's only used for mutable stored properties of a class, and such properties prevent the class from being safely Sendable.
OTOH, sendability isn't a relevant issue here, because you're not using Swift concurrency (in this code, at least).
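To illustrate the hazard and one common fix — this is a sketch with hypothetical type names, not the asker's code:

```swift
import Combine
import Foundation

// The hazard: a mutable @Published stored property on a class is
// unprotected shared state. Two threads mutating `count` is a data race.
final class UnsafeModel: ObservableObject {
    @Published var count = 0
}

// One common fix: isolate the whole class to the main actor, so every
// access to the property is serialized on the main thread.
@MainActor
final class SafeModel: ObservableObject {
    @Published var count = 0
}
```

With the @MainActor annotation, the compiler enforces that all reads and writes happen from main-actor-isolated contexts.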
I don't see a race condition here. drinksUpdated doesn't capture the value of currentDrinks, it retrieves the value from self whenever the function executes. Since the Task closure is isolated to the main thread in this case (because addDrink is isolated to the main thread, and this kind of task inherits the execution context where it is created), retrieving the value of currentDrinks is safe.
So, in step 14 of your scenario, 3 values are written back to the store, not 2, and they're the same 3 that were written in step 13. There's a little bit of inefficiency here, but no race condition.
If, hypothetically, the value of currentDrinks were passed as a parameter into drinksUpdated, then that would be a race condition, but this code doesn't make that mistake.
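A sketch of the safe pattern being described, with hypothetical names standing in for the asker's code:

```swift
// Because addDrink is isolated to the main actor, the unstructured Task
// created inside it inherits that isolation. drinksUpdated therefore
// reads currentDrinks on the main actor at execution time, serialized
// with any writes — no captured stale value, no race.
@MainActor
final class DrinkStore {
    private(set) var currentDrinks: [String] = []

    func addDrink(_ drink: String) {
        currentDrinks.append(drink)
        Task {
            // Runs on the main actor; retrieves the current value from
            // self when it executes, not when the task was created.
            drinksUpdated()
        }
    }

    private func drinksUpdated() {
        print("Now storing \(currentDrinks.count) drinks")
    }
}
```

The unsafe variant would be `Task { drinksUpdated(with: currentDrinks) }` combined with a non-isolated store, where the captured snapshot could be stale by the time it's written back.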
The browser toolbar role is typically for units of content that you navigate to over time, similar to a browser visiting web pages. The document role is typically for content you treat as a single unit, such as a document stored in a file. They look similar, but the default menu on the title contains items for navigation (browser) or items for manipulating storage (document).
If your app doesn't exactly fit into one of the 3 roles, that's OK. You can start with a role that's most similar to the functionality you want, then customize the toolbar from there.
The global withUnsafeMutablePointer(to:) function isn't the right one to use here. Use Array's withUnsafeBufferPointer(_:) function instead:
floats.withUnsafeBufferPointer {
…
}
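A complete, self-contained example of the suggested call; here the closure just sums the elements through the buffer pointer:

```swift
// withUnsafeBufferPointer gives the closure a contiguous view of the
// array's elements without copying them.
let floats: [Float] = [1.5, 2.5, 3.0]
let total = floats.withUnsafeBufferPointer { buffer -> Float in
    var sum: Float = 0
    for value in buffer { sum += value }
    return sum
}
// total == 7.0
```

The buffer pointer is only valid inside the closure; don't let it escape.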
Error -12927 is an internal error code that can mean several different things, but probably indicates something incorrect in your playlist. If I had to guess, I'd suggest that the value of your EXT-X-KEY attributes might be malformed.
The first step is always to check your playlist using mediastreamvalidator/hlsreport in Terminal. If that doesn't help, create a bug report using Feedback Assistant, and post the bug number here.
This error suggests that your playlist contains some invalid characters, or contains some invalid syntax such as malformed comments. Perhaps a good place to start would be to use mediastreamvalidator/hlsreport to validate the file?
There's no direct API solution to finding silence, but I recommend you take a look at this sample code project:
https://developer.apple.com/documentation/avfaudio/audio_engine/using_voice_processing
The PowerMeter class shows how to calculate the average and peak power levels in ongoing audio. It's intended to drive a "meter" view, but you should be able to check for levels below a threshold over a short period of time, and recognize silence that way.
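A hedged sketch of the averaging idea, independent of the sample project: compute the average power of a block of PCM samples in decibels, then treat values below a threshold as silence. The threshold and the function names here are my assumptions, not part of the sample code.

```swift
import Foundation

// Average power of a block of PCM samples, in dBFS.
// The mean square is clamped to avoid log10(0) for all-zero input.
func averagePowerDB(_ samples: [Float]) -> Float {
    guard !samples.isEmpty else { return -160.0 }
    let meanSquare = samples.reduce(0) { $0 + $1 * $1 } / Float(samples.count)
    return 10 * log10(max(meanSquare, 1e-16))
}

// Hypothetical threshold: -50 dBFS. Tune for your material, and require
// the level to stay below it for some duration before declaring silence.
func isSilent(_ samples: [Float], thresholdDB: Float = -50.0) -> Bool {
    averagePowerDB(samples) < thresholdDB
}
```

In practice you'd feed this from an AVAudioEngine tap, one buffer at a time, and count how long the level stays below the threshold.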
Could you file a bug report via Feedback Assistant (https://developer.apple.com/bug-reporting/) and post the feedback number in this thread? Please include a sample stream URL in your bug report, and indicate what playback mechanism you're using (AVPlayer, Safari, etc).
It's unlikely that calling setCategory:error: is itself going to trigger the watchdog, though it's possible that if you're doing a lot of other work at initialization time this might send your usage "over the top".
Anyway, app delegate initialization seems way too early to set up your audio session. You should wait at least until the "didFinishLaunching" callback, or — better still, as recommended by the note in the documentation — wait until you first need to play or record some audio, so that you don't interrupt other apps' audio until/unless you need to.
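A sketch of deferring session setup until first playback; the class and flag are illustrative, and real code should handle the thrown errors rather than discarding them:

```swift
import AVFoundation

final class AudioController {
    private var sessionConfigured = false

    func play() {
        if !sessionConfigured {
            // Configure and activate the session only when audio is
            // actually needed, so other apps' audio isn't interrupted
            // at launch.
            try? AVAudioSession.sharedInstance().setCategory(.playback)
            try? AVAudioSession.sharedInstance().setActive(true)
            sessionConfigured = true
        }
        // … start playback …
    }
}
```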
HLS Steering was introduced at WWDC 2021, so it's available with iOS/tvOS 15 and above. I don't know if Apple TV+ uses it, but it's an internal detail that I couldn't comment on anyway.
It's kind of a 2-level problem. If you're seeking to 10s, you're going to skip quite a lot of data (e.g. at a sample rate of 44.1 kHz, you'd be skipping 441,000 samples, or nearly 1 MB of data). So, the first level you have to solve is to find the next getAudioByteData result that contains the actual start of the data you want to play. That may involve calling it multiple times, if you don't have a better way.
The second level is that you don't want to use all of the data in the buffer. In your above code, the simplest approach would be to change your memcpy call to start at an offset and copy less data. (You would need to keep track of the incoming data's frame count, as well as the "trimmed" frame count, which would affect much of the above code.)
However, this would be better solved in your internal library. Instead of requesting data to ignore, it seems more efficient to pass in an optional offset, so that the library can skip the unwanted data for you.
Either way, this would give you an initial PCM buffer that's smaller than the ones that follow.
BTW, your byteCount variable appears to be a frame count (a count of Int16 values). It's really off-putting to see this called a byte count, especially when you're using "unsafe" functions (in the Swift sense) like memcpy. 🙂
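A hedged sketch of the "start at an offset, copy less data" approach for that first buffer after a seek. It assumes 16-bit mono PCM, and the function name and parameters are mine, not from the original code:

```swift
import AVFoundation

// Build a PCM buffer from the source frames, skipping the frames before
// the seek point, so the first buffer after a seek is shorter than the
// ones that follow.
func trimmedBuffer(from source: [Int16],
                   skippingFrames skip: Int,
                   format: AVAudioFormat) -> AVAudioPCMBuffer? {
    let remaining = source.count - skip
    guard remaining > 0,
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(remaining)),
          let dest = buffer.int16ChannelData?[0] else { return nil }

    source.withUnsafeBufferPointer { src in
        // Copy only the frames at and after the seek point.
        memcpy(dest, src.baseAddress! + skip, remaining * MemoryLayout<Int16>.size)
    }
    buffer.frameLength = AVAudioFrameCount(remaining)
    return buffer
}
```

Note the size calculation: `remaining` is a frame count, and it's multiplied by the element size to get bytes — keeping the two units distinct is exactly the point of the naming complaint above.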
AVSpeechSynthesizer doesn't need to go off-device to generate speech. SFSpeechRecognitionRequest (requiresOnDeviceRecognition) supports on-device recognition, when the device reports it as available via SFSpeechRecognizer (supportsOnDeviceRecognition).
Note that SFSpeechRecognizer may sometimes need an Internet connection, because some of its static data may change from time to time and it needs to update its local copy. However, requiresOnDeviceRecognition is sufficient to keep recognition tasks strictly on-device.
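A sketch of the two checks described above — confirm on-device support, then require it so the task never leaves the device. The helper function is hypothetical:

```swift
import Speech

// Returns a recognition request pinned to on-device processing, or nil
// if the current recognizer doesn't support it.
func makeOnDeviceRequest(for url: URL) -> SFSpeechURLRecognitionRequest? {
    guard let recognizer = SFSpeechRecognizer(),
          recognizer.supportsOnDeviceRecognition else { return nil }
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true
    return request
}
```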
There's nothing documented on this, so there's no behavior you can rely on that couldn't change between OS releases. I suspect that information about the previous stream doesn't affect variant selection on a new stream, but historical information about network quality and throughput may affect the initial variant selection.
You can play more than one stream simultaneously (using multiple players) but it's not going to scale very well if your table view has a lot of items. It would be better to architect your app (and UI) with the assumption that some kind of sharing is going to be involved.
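One way to apply the sharing idea — a single AVPlayer reused across cells, swapping in the visible row's item instead of creating a player per row. This is a sketch; the class name and API shape are illustrative:

```swift
import AVFoundation

// A single shared player: as the user scrolls, the currently visible
// cell hands its URL to the shared player rather than owning one itself.
final class SharedVideoPlayer {
    static let shared = SharedVideoPlayer()
    let player = AVPlayer()

    func play(url: URL) {
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        player.play()
    }
}
```

Each cell would then attach an AVPlayerLayer to `SharedVideoPlayer.shared.player` when it becomes the active row, which keeps resource usage flat no matter how many rows the table has.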
We don't know how IGTV is playing videos. HTTP Live Streaming startup times can be fast, but maybe they've devised a clever custom strategy for improving on the default behavior.