AVAssetWriter: appending audio/video sample buffers concurrently in a real-time recording setup

In most of Apple's older sample code that uses AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to `append(_:)` are either synchronised with an NSLock or funnelled through a single dispatch queue, which prevents concurrent writes. However, I can't find any documentation stating that `assetWriterInput.append(sampleBuffer)` must not be called concurrently for different media types such as audio and video. Is it invalid for these calls to execute in parallel? For instance:

`videoSamplesAssetWriterInput.append(videoSampleBuffer)` from DispatchQueue 1 

`audioSamplesAssetWriterInput.append(audioSampleBuffer)` from DispatchQueue 2
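For reference, this is a minimal sketch of the serialised pattern I see in the sample code, where both capture callbacks route their appends through one serial queue so no two `append(_:)` calls ever overlap. The class and method names are hypothetical; output settings are omitted (pass-through) for brevity:

```swift
import AVFoundation

final class RecordingCoordinator {
    // Serial by default, so appends from audio and video callbacks never overlap.
    private let writerQueue = DispatchQueue(label: "com.example.assetWriterQueue")

    private let assetWriter: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput

    init(outputURL: URL) throws {
        assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: nil)
        audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
        videoInput.expectsMediaDataInRealTime = true
        audioInput.expectsMediaDataInRealTime = true
        assetWriter.add(videoInput)
        assetWriter.add(audioInput)
    }

    // Called from the video capture callback, on whatever queue it uses.
    func appendVideo(_ sampleBuffer: CMSampleBuffer) {
        writerQueue.async { [self] in
            if videoInput.isReadyForMoreMediaData {
                videoInput.append(sampleBuffer)
            }
        }
    }

    // Called from the audio capture callback, on whatever queue it uses.
    func appendAudio(_ sampleBuffer: CMSampleBuffer) {
        writerQueue.async { [self] in
            if audioInput.isReadyForMoreMediaData {
                audioInput.append(sampleBuffer)
            }
        }
    }
}
```

My question is whether this serialisation is actually required, or whether the two inputs could each be fed directly from their own queue as in the snippets above.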