I have a UICollectionView with a diffable data source and a global list header in a compositional layout (i.e., one header for the whole layout, not one per section).
let layout = UICollectionViewCompositionalLayout { ... }

let listHeader = NSCollectionLayoutBoundarySupplementaryItem(
    layoutSize: NSCollectionLayoutSize(widthDimension: .fractionalWidth(1),
                                       heightDimension: .estimated(1)),
    elementKind: UICollectionView.elementKindSectionHeader,
    alignment: .top
)

// A global header is attached to the layout's configuration, not to a section.
let config = UICollectionViewCompositionalLayoutConfiguration()
config.boundarySupplementaryItems = [listHeader]
layout.configuration = config
Sometimes, the list header changes its size, and I'd love to animate that change. However, with a diffable data source I cannot use reconfigureItems(_:) for the header, since it only applies to items, not supplementary views. If the new snapshot is identical to the old snapshot, the header size doesn't update in the list (even if I call layoutIfNeeded on the header view directly). If there are changes between the snapshots, the header size updates abruptly, without animation.
How can I update and animate the size change of the global list header, regardless of whether the snapshot has changed or not?
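For reference, the kind of animated update I'm after looks roughly like this (a sketch only, not a working fix; `collectionView` is assumed to be the collection view driven by the layout above, and the duration is arbitrary):

```swift
// Hypothetical sketch: wrap a full layout invalidation in an animation block,
// hoping the header's new self-sized height is resolved with a transition.
// In practice this still doesn't animate the global header for me.
UIView.animate(withDuration: 0.3) {
    self.collectionView.collectionViewLayout.invalidateLayout()
    self.collectionView.layoutIfNeeded()
}
```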
When compiling my project for a physical device, Xcode does not reuse the build cache but instead recompiles every file. When compiling for a simulator target instead, Xcode properly uses the build cache, and incremental builds are lightning fast.
Is there a configuration I can check to enable incremental builds for physical devices, too?
We're in the process of moving our app's storage from the app container to an app group container so that the data can be accessed by app extensions (such as share extensions) as well.
However, we found that Xcode does not back up app group containers via the Devices and Simulators window. How can I include the app group in the backup and restore process?
Similarly, could you confirm that app groups are indeed included in the iCloud backup and restore process?
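For context, we resolve the shared container the standard way (a sketch; `group.com.example.app` is a placeholder for our real app group identifier from the entitlements):

```swift
import Foundation

// Placeholder identifier; the real one is defined in the app's entitlements.
let groupID = "group.com.example.app"

// Returns nil if the app (or extension) lacks the app group entitlement.
if let containerURL = FileManager.default
    .containerURL(forSecurityApplicationGroupIdentifier: groupID) {
    print("App group container:", containerURL.path)
}
```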
Background
We're writing a small recording app - think Voice Memos for the sake of argument. In our app, users should always record with the built-in iPhone microphone.
Our Problem
Our setup works fine when using just the speakers, or in combination with Bluetooth headsets. However, it doesn't work well with AirPlay. One of two things can happen:
The app records just silence.
The app crashes when trying to connect the inputNode to the recorderNode (see code below), complaining that IsFormatSampleRateAndChannelCountValid == false.
Our testing environment is an iPhone XS connected to an AirPlay 2-compatible Sonos amp.
Code
We use the following code to set up the AVAudioSession (simplified, without error handling):
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay])
try AVAudioSession.sharedInstance().setActive(true)
Every time we record, we configure the audio session to use the built-in mic, and then create a fresh AVAudioEngine.
let session = AVAudioSession.sharedInstance()
let builtInMicInput = session.availableInputs!.first(where: { $0.portType == .builtInMic })
try session.setPreferredInput(builtInMicInput)
let sampleRate: Double = 44100
let numChannels: AVAudioChannelCount = isStereoEnabled ? 2 : 1
let recordingOutputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate, channels: numChannels, interleaved: false)!
let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()
// This sets the node's input volume within its destination node (mainMixerNode)
// to 0, so the recording is not played back. The node's raw outputVolume remains 1,
// so a tap on it still receives the samples. If you set outputVolume = 0 instead,
// the tap would only receive zeros.
recorderNode.volume = 0
engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))
// and later
try engine.start()
We install a tap on the recorderNode to save the recorded audio into a file. The tap works fine; it is out of scope for this question and thus not included here.
Questions
How do we route/configure the audio engine correctly to avoid this problem?
Do you have any advice on how to debug such issues in the future? Which variables/states should we inspect?
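For reference, the state we currently dump when reproducing the issue looks roughly like this (logging only, a sketch assuming the shared session and an engine set up as above):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
let engine = AVAudioEngine()

// Hardware-facing formats: a sample-rate or channel-count mismatch here seems
// to be what IsFormatSampleRateAndChannelCountValid complains about.
print("Input format:", engine.inputNode.inputFormat(forBus: 0))
print("Output format:", engine.outputNode.outputFormat(forBus: 0))

// Current route: which ports are actually active while AirPlay is connected.
for input in session.currentRoute.inputs {
    print("Route input:", input.portType.rawValue, input.portName)
}
for output in session.currentRoute.outputs {
    print("Route output:", output.portType.rawValue, output.portName)
}

print("Session sample rate:", session.sampleRate,
      "input channels:", session.inputNumberOfChannels)
```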
Thank you so much in advance!