AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

Post

Replies

Boosts

Views

Activity

Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly--possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well suited to compression by video codecs, since they're highly similar to one another. I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts, which simply weren't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist. Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce the storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (a 5000%+ increase), on top of there being fatal errors when using async.
18
0
372
1w
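A note on the question above: HEVC, including hevcWithAlpha, is a lossy codec, so a byte-exact PNG round trip is not achievable that way; some of the visible artifacts, however, can come from frame selection rather than from compression. A minimal sketch (frame(at:from:) is a hypothetical helper, a 30 fps archive movie is assumed) that forces AVAssetImageGenerator to decode the exact requested frame instead of a nearby sync frame:

import AVFoundation
import UIKit

// Hypothetical helper, not part of the post: decode frame `index` of a 30 fps archive movie.
// Zero time tolerance forces the exact frame to be decoded rather than a nearby sync frame.
func frame(at index: Int, from movieURL: URL) async throws -> UIImage {
    let asset = AVURLAsset(url: movieURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    generator.appliesPreferredTrackTransform = true
    let time = CMTime(value: CMTimeValue(index), timescale: 30)
    let (cgImage, _) = try await generator.image(at: time)
    return UIImage(cgImage: cgImage)
}

If the remaining artifacts after this change are pure codec loss, a video container cannot serve as a lossless PNG archive, whatever the settings.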
Best practices for live-streaming MV-HEVC content?
I was wondering if anyone had guidance on how to "livestream" MV-HEVC content. More specifically, I have a left- and right-eye view for stereoscopic content (perhaps, for example, the views were taken from a stereoscopic video being passed through an AVPlayer). I know, based on sample code, that I can convert the stereoscopic video into an MV-HEVC file using AVAssetWriter. However, how would I take the stereoscopic video and encode it, in real time, to a stream that could then leverage the HLS Tools to deliver to clients? Is AVFoundation capable of this directly? Or is there an API within VideoToolbox that can help with this?
0
1
160
2w
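On the VideoToolbox side, a minimal sketch of what a real-time MV-HEVC encoder setup might look like, assuming the MV-HEVC compression properties introduced alongside spatial video. The layer/view IDs of 0 and 1 for left/right are an assumed convention; the output handler, the per-frame encode calls, and the packaging into segments for LL-HLS are all omitted:

import VideoToolbox
import CoreMedia

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1920,
    height: 1080,
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,      // encode with the output-handler variants instead
    refcon: nil,
    compressionSessionOut: &session)

if status == noErr, let session {
    // Two layers/views: 0 = left eye, 1 = right eye (assumed convention).
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCVideoLayerIDs, value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCViewIDs, value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs, value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
}

For delivery, one route worth investigating (not confirmed here) is feeding the encoded sample buffers to AVAssetWriter's segmented output so it emits fMP4 segments that an LL-HLS origin can serve.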
Add 30 frames per second in AVAssetWriter
Hello, I have converted UIImage to CVPixelBuffer. I am creating a video-writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more. However, I need to add 30 CVPixelBuffers per second, because for the video to work on social media it must be 30 frames per second. The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error. The error is something like "Operation cannot be completed". Give me an example of a loop to add 30 CVPixelBuffers per second to a video that is currently being written. Example:

while true {
    if videoInput.isReadyForMoreMediaData { break }
    if videoInput.isReadyForMoreMediaData, let buffer = videoProvider.getNextFrame() {
        adaptor.append(buffer, withPresentationTime: CMTime(value: 1, timescale: 30))
    }
}

I await your response.
0
0
193
2w
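A sketch of the kind of pull-driven append loop the post above asks for, assuming a 30 fps timescale and a hypothetical nextFrame() closure that returns the pixel buffer to display next (repeating a buffer for frames that should stay on screen). Using requestMediaDataWhenReady(on:using:) instead of a busy while-true loop respects the writer's back-pressure, which matters for 50-minute renders, and the presentation time advances with every frame instead of staying constant:

import AVFoundation
import CoreVideo

func appendFrames(to videoInput: AVAssetWriterInput,
                  adaptor: AVAssetWriterInputPixelBufferAdaptor,
                  totalFrames: Int,
                  nextFrame: @escaping () -> CVPixelBuffer?) {
    var frameIndex = 0
    let queue = DispatchQueue(label: "video.writer.queue")
    videoInput.requestMediaDataWhenReady(on: queue) {
        // Append only while the writer asks for more data; respecting this back-pressure
        // avoids the unbounded memory growth that tends to break very long exports.
        while videoInput.isReadyForMoreMediaData && frameIndex < totalFrames {
            guard let buffer = nextFrame() else { break }
            // Frame N is presented at N/30 s, so every appended frame advances the clock.
            let time = CMTime(value: CMTimeValue(frameIndex), timescale: 30)
            guard adaptor.append(buffer, withPresentationTime: time) else { break }
            frameIndex += 1
        }
        if frameIndex >= totalFrames {
            videoInput.markAsFinished()
        }
    }
}

It is also worth testing whether a held image can simply be appended once, with the next different buffer's presentation time landing two seconds later, rather than appending 60 identical buffers.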
builtInTripleCamera is not automatically switching from one camera to another
Hey! I'm working on a camera app and I've noticed that the .builtInTripleCamera doesn't behave anything like the native app. Tested on iPhone 15 Pro Max and iPhone 12 Pro Max. The documentation states the following, but that seems quite different from what is happening in the app:

Automatic switching from one camera to another occurs when the zoom factor, light level, and focus position allow.

So, does it automatically switch like the native camera, or do I need to do something?

[Comparison: Custom Camera vs Native Camera]

The code was adapted from Apple's AVCamFilter project. Just download AVCamFilter and update videoDeviceDiscoverySession:

private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTripleCamera],
    mediaType: .video,
    position: .unspecified
)
1
0
164
4d
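For the post above, one thing worth checking (a sketch under the assumption that the discovery-session change is the only edit made to AVCamFilter): the constituent-device switching behavior can be set explicitly on the virtual device, and the switch-over zoom factors show where switching is even considered.

import AVFoundation

func enableAutoSwitching(on device: AVCaptureDevice) {
    guard device.isVirtualDevice else { return }
    do {
        try device.lockForConfiguration()
        // Ask the virtual device to pick its constituent camera automatically.
        device.setPrimaryConstituentDeviceSwitchingBehavior(.auto,
                                                            restrictedSwitchingBehaviorConditions: [])
        device.unlockForConfiguration()
        // Zoom factors at which the triple camera even considers switching lenses.
        print("Switch-over zoom factors:", device.virtualDeviceSwitchOverVideoZoomFactors)
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}

Even with this set, switching still depends on zoom factor, light level, and focus position, as the quoted documentation says, so a comparison with the Camera app is only meaningful at matching zoom factors.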
Why is `isDepthDataDeliverySupported` returning false on an iPad Pro using `builtInDualWideCamera`?
I am trying to use the AVCamFilter Apple sample project discussed in this WWDC session to get depth data using the dual camera. The project has built-in features to get depth data from the dual camera. When the sample project was written, builtInDualWideCamera didn't exist yet, and the project only tries to get builtInDualCamera and builtInWideAngleCamera. When I run the project on my iPad Pro, it doesn't show any of the depth-related UI because the device doesn't have a builtInDualCamera device. So I added builtInDualWideCamera to the videoDeviceDiscoverySession, and it seems to get that device properly, but isDepthDataDeliverySupported is still returning false. Is there some reason why isDepthDataDeliverySupported is false even though I seem to be using a dual-camera device? I know the device has a builtInLiDARDepthCamera, but I wanted to try out the dual-camera depth data to see how it performs at shorter distances. I wouldn't have expected dual-camera depth data delivery to be made unavailable on the device just because the LiDAR sensor is already available. Using iPadOS 17.5.1, iPad Pro 11-inch 4th generation. The depth feature of this sample app works fine on an iPhone 15 I tested. I also tried an iPhone 15 Pro and it worked, even though that device also has a LiDAR sensor, so the issue is presumably not related to the fact that the iPad Pro has a LiDAR sensor.
4
0
286
3w
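A minimal sketch for the post above (assumed standalone session setup, not the AVCamFilter code): depth support is reported by the photo output, and only after both the camera input and the output have been added to a session whose preset and active format actually offer depth streams.

import AVFoundation

let session = AVCaptureSession()
session.beginConfiguration()
session.sessionPreset = .photo

if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: device),
   session.canAddInput(input) {
    session.addInput(input)

    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) {
        session.addOutput(photoOutput)
    }

    // If this array is empty, the active format itself offers no depth streams on this device.
    print("Depth formats on active format:", device.activeFormat.supportedDepthDataFormats)

    if photoOutput.isDepthDataDeliverySupported {
        photoOutput.isDepthDataDeliveryEnabled = true
    }
}
session.commitConfiguration()

Checking supportedDepthDataFormats per format at least distinguishes "this format has no depth" from "the output was queried too early"; whether the iPad's dual wide pair exposes depth at all is a device-level question this sketch cannot answer.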
"Microphone Recording Fails When Launching App from Shortcut (Error Code 561015905)"
I'm experiencing an issue with microphone recording in my app when it is launched from a Shortcut. The app works correctly when launched directly, but launching it through the Shortcut results in a "Session activation failed" error (code 561015905). Here's what I've done so far: My app has microphone permission granted. The startRecording function sets the audio session category to .playAndRecord. I've implemented error handling within startRecording to catch the error code. The Shortcut workflow includes an action to launch the app (no explicit microphone permission request within the Shortcut). Xcode version: 15.2. iPhone iOS version: 17.4.1.
1
0
223
2w
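One avenue worth testing for the post above (purely an assumption, not a confirmed cause): when an app is launched by a Shortcut it may not yet be foreground-active at the moment startRecording runs, and AVAudioSession activation can fail in that window. A sketch that defers the start until the app reports itself active; RecordingStarter and the start closure are hypothetical names.

import AVFoundation
import UIKit

// Hypothetical helper: delay audio-session activation until the app is foreground-active.
final class RecordingStarter {
    private var observer: NSObjectProtocol?

    func startWhenActive(_ start: @escaping () -> Void) {
        guard UIApplication.shared.applicationState != .active else {
            start()
            return
        }
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            start()
            if let observer = self?.observer {
                NotificationCenter.default.removeObserver(observer)
                self?.observer = nil
            }
        }
    }
}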
Properly rotate/mirror video in AVCaptureVideoDataOutput and flipping input devices
Hey all, I have a pretty complicated camera setup so bear with me. You know how Instagram's camera supports recording a video and flipping camera devices while recording? I built the same thing using AVCaptureVideoDataOutput and it works fine, but it does not support rotation. (Neither does Instagram, but I still need it, lol.) So there are two ways to implement rotation (and mirroring) in AVCaptureVideoDataOutput:

1. Set it on the AVCaptureConnection

Rotation and vertical mirror mode can be set directly on the AVCaptureVideoDataOutput's connection to the camera:

let output = AVCaptureVideoDataOutput(...)
cameraSession.addOutput(output)
for connection in output.connections {
    connection.videoRotation = 90
    connection.isVideoMirrored = true
}

But according to the documentation this is expensive and comes with a performance overhead. I haven't really benchmarked it yet, but I assume rotating and mirroring 4K buffers isn't cheap. I'm building a camera library that is used by a lot of people, so all performance decisions have a big impact.

2. Set it on AVAssetWriter

Instead of physically rotating large pixel buffers, we can also just set the AVAssetWriter's transform property to some affine transformation - which is comparable to how EXIF tags work. We can set both rotation and mirror modes using CGAffineTransforms. Obviously this is much more efficient and does not come with a performance overhead on the camera pipeline at all, so I'd prefer to go this route.

Problem

The problem is that when I start recording with the front camera (AVAssetWriter.transform has a mirror on the CGAffineTransform) and then flip to the back camera, the back camera is also mirrored. Now I thought I could just avoid rotation on my buffers and only use isVideoMirrored on the AVCaptureConnection when we are using the front camera, which is a fair performance compromise - but this won't work, because isVideoMirrored applies mirroring along the vertical axis - and since the video stream is naturally in landscape orientation, this will flip the image upside down instead of mirroring it along the vertical axis... whoops! 😅 This is pretty obvious, as the transform applies to the entire video stream, but now I am not sure if the AVAssetWriter approach will work for my use case. I think I will need to eagerly physically rotate the pixel buffers by setting the AVCaptureConnection's videoRotation and isVideoMirrored properties, but I wanted to ask here in case someone knows any alternatives to doing that, in order to avoid the performance and memory overhead of rotating buffers? Thanks!
0
0
205
4w
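A sketch under assumptions for the post above: the recorded stream is natively landscape, and the goal is a metadata-only rotation plus an optional front-camera mirror. The transform is applied to the AVAssetWriterInput (writerTransform and the writer input name are illustrative, not from the post), and, as the post already concludes, it applies to the whole track, so it cannot express a mid-recording camera flip.

import AVFoundation
import CoreGraphics

func writerTransform(rotationDegrees: CGFloat, mirrored: Bool) -> CGAffineTransform {
    var transform = CGAffineTransform(rotationAngle: rotationDegrees * .pi / 180)
    if mirrored {
        // Concatenate a horizontal flip; this affects the entire track,
        // which is exactly the limitation hit when flipping cameras mid-recording.
        transform = transform.scaledBy(x: -1, y: 1)
    }
    return transform
}

// Usage (hypothetical writer input):
// videoWriterInput.transform = writerTransform(rotationDegrees: 90, mirrored: usingFrontCamera)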
Invalid binary when submitting a build to App Store Connect
When I submit a build from Xcode the process completes normally, but a few minutes later I receive an e-mail saying: "ITMS-90683: Missing purpose string in Info.plist - Your app’s code references one or more APIs that access sensitive user data, or the app has one or more entitlements that permit such access. The Info.plist file for the “***.app” bundle should contain a NSMicrophoneUsageDescription key with a user-facing purpose string explaining clearly and completely why your app needs the data. If you’re using external libraries or SDKs, they may reference APIs that require a purpose string. While your app might not use these APIs, a purpose string is still required." So the problem is the description of the use of the microphone, right? As the attached image shows, I have already done this, and I continue to receive this error. Even when I remove the part of the AVFoundation code that uses the microphone and try to submit the build, the error is still returned.
1
0
231
4w
DestinationVideo -- MV-HEVC Files
In the code example provided there is a bool in the Video object to set a video as 3D:

/// A Boolean value that indicates whether the video contains 3D content.
let is3D: Bool

I have a hosted spatial video that I know works correctly in the AVP player. When I point the Videos.json file to this URL and set is3D=true, my 3D video doesn't show up and I get the following error:

iPVC/1-0 Received playback error: [Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x30227c510 {Error Domain=CoreMediaErrorDomain Code=-12939 "byte range length mismatch - should be length 2 is length 2434" UserInfo={NSDescription=byte range length mismatch - should be length 2 is length 2434, NSURL=https: <omitted for post> }}}]

Can anyone tell me what might be going on? The error is telling me my server is not configured correctly. For context, I'm using Google Drive to deliver dynamic images/videos using:

https://drive.google.com/uc?export=download&id=<file ID>

And the above works great for my images and 2D videos. Is there something I need to do specifically when delivering MV-HEVC videos?
1
0
303
4w
AVFoundation: Strange error while trying to switch camera formats with the touch of a single button.
I'm getting the following output from my iOS app's debug console; note the error on the last line:

Capture format keys: ["600x600@25", "1200x1200@5", "1200x1200@30", "1600x1200@2", "1600x1200@30", "3200x2400@15", "3200x2400@2", "600x600@30"]
Start capture session for 1600x1200@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetPhoto]>
Stop capture session: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
Start capture session for 600x600@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
<AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
<<<< FigSharedMemPool >>>> Fig assert: "blkHdr->useCount > 0" at (FigSharedMemPool.c:591) - (err=0)

This is in response to trying to switch capture formats between the two key modes that my application must use regularly. Below you will find the functions that I use to start and stop capturing frames to my preview layer. I have a UI with three buttons: Off, Mode 1, and Mode 2. If I tap the Off button in between tapping Mode 1 or Mode 2, all is well; I can do this all day. However, if I attempt to jump between Mode 1 and Mode 2 directly, I run into issues. I added a layer of software between the UI and the underlying functions so that I could make sure to turn off the camera before turning it back on in the opposite mode, and was surprised to get this output. Can someone at Apple please tell me what is going on here? For the rest of you, if anyone knows the magic incantation to safely switch camera formats, please paste that code here. Thanks. I've included my code below.

func start(for deviceFormat: String) {
    sessionQueue.async { [unowned self] in
        logger.debug("Start capture session for \(deviceFormat): \(self.captureSession)")
        do {
            guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
            captureSession.stopRunning()
            captureSession.beginConfiguration()       // May not be necessary.
            try captureDevice.lockForConfiguration()  // Without this we get an error.
            captureDevice.activeFormat = format
            captureDevice.unlockForConfiguration()    // Matching function: necessary.
            captureSession.commitConfiguration()      // Matching function: may not be necessary.
            captureSession.startRunning()
        } catch {
            logger.fault("Failed to start camera: \(error.localizedDescription)")
            errorPublisher.send(error)
        }
    }
}

func stop() {
    sessionQueue.async { [unowned self] in
        logger.debug("Stop capture session: \(self.captureSession)")
        captureSession.stopRunning()
    }
}
0
0
248
May ’24
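For comparison with the stop/start approach in the post above, a commonly used pattern (a sketch only, not verified against this external Medwand camera) changes the active format inside a single configuration transaction while the session keeps running:

import AVFoundation

func switchFormat(to format: AVCaptureDevice.Format,
                  on device: AVCaptureDevice,
                  in session: AVCaptureSession) {
    session.beginConfiguration()
    defer { session.commitConfiguration() }
    do {
        try device.lockForConfiguration()
        device.activeFormat = format   // session keeps running; no stopRunning()/startRunning()
        device.unlockForConfiguration()
    } catch {
        print("Failed to lock device for configuration: \(error)")
    }
}

Whether this avoids the FigSharedMemPool assert for this particular device is untested; it simply removes the rapid stop/start cycle from the equation.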
HEIC Image generation broken for iOS 17.5 simulator?
This code to write UIImage data as HEIC works in the iOS simulator with iOS < 17.5:

import AVFoundation
import UIKit

extension UIImage {
    public var heic: Data? { heic() }

    public func heic(compressionQuality: CGFloat = 1) -> Data? {
        let mutableData = NSMutableData()
        guard let destination = CGImageDestinationCreateWithData(mutableData, AVFileType.heic as CFString, 1, nil),
              let cgImage = cgImage
        else { return nil }
        let options: NSDictionary = [
            kCGImageDestinationLossyCompressionQuality: compressionQuality,
            kCGImagePropertyOrientation: cgImageOrientation.rawValue,
        ]
        CGImageDestinationAddImage(destination, cgImage, options)
        guard CGImageDestinationFinalize(destination) else { return nil }
        return mutableData as Data
    }

    public var isHeicSupported: Bool {
        (CGImageDestinationCopyTypeIdentifiers() as! [String]).contains("public.heic")
    }

    var cgImageOrientation: CGImagePropertyOrientation { .init(imageOrientation) }
}

extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        @unknown default: fatalError()
        }
    }
}

But with the iOS 17.5 simulator it seems to be broken. The call to CGImageDestinationFinalize writes this error to the console:

writeImageAtIndex:962: *** CMPhotoCompressionSessionAddImage: err = kCMPhotoError_UnsupportedOperation [-16994] (codec: 'hvc1')

On physical devices it still seems to work. Is there any known workaround for the iOS simulator?
1
1
264
May ’24
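A small workaround sketch for the post above, reusing its own heic(compressionQuality:) extension: fall back to JPEG whenever HEIC encoding returns nil, as it does when the simulator's 'hvc1' encoder is unavailable. This sidesteps the simulator issue rather than fixing it.

import UIKit

// Assumes the UIImage.heic(compressionQuality:) extension from the post above is in scope.
func encodedImageData(for image: UIImage) -> Data? {
    // heic() returns nil when CGImageDestinationFinalize fails, so JPEG becomes
    // the automatic fallback on the 17.5 simulator.
    image.heic(compressionQuality: 0.9) ?? image.jpegData(compressionQuality: 0.9)
}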
Getting 561015905 while trying to initiate recording when the app is in background
I'm trying to start and stop recording periodically while my app is in the background. I implemented it using Timer and DispatchQueue. However, whenever I try to initiate the recording I get this error. The issue does not exist in the foreground. Here is the current state of my app and configuration. I have added the "Background Modes" capability in Signing & Capabilities and checked Audio and Self Care. Here is my Info.plist:

<plist version="1.0">
<dict>
    <key>UIBackgroundModes</key>
    <array>
        <string>audio</string>
    </array>
    <key>WKBackgroundModes</key>
    <array>
        <string>self-care</string>
    </array>
</dict>
</plist>

I also used AVAudioSession with the .record category and activated it. Here is the code snippet:

func startPeriodicMonitoring() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSession.Category.record, mode: .default, options: [.mixWithOthers])
        try session.setActive(true, options: [])
        print("Session Activated")
        print(session)
        // Start recording.
        measurementTimer = Timer.scheduledTimer(withTimeInterval: measurementInterval, repeats: true) { _ in
            self.startMonitoring()
            DispatchQueue.main.asyncAfter(deadline: .now() + self.recordingDuration) {
                self.stopMonitoring()
            }
        }
        measurementTimer?.fire() // Start immediately
    } catch let error {
        print("Unable to set up the audio session: \(error.localizedDescription)")
    }
}

Any thoughts on this? I have tried most approaches, but the issue is still there.
3
0
279
May ’24
How can I disable Camera Reaction Effects on macOS
I have an app that has the camera continuously running, as it is doing its own AI; I have zero need for Apple's video effects, and I am seeing a 200% performance hit after updating to Sonoma. The video effects are the "heaviest stack trace" when profiling my app with the Instruments CPU profiler (see below). Is forcing your software onto developers not something Microsoft would do? Is there really no way to opt out?

6671 Jamscape_exp (23038)
2697 start_wqthread
2697 _pthread_wqthread
2183 _dispatch_workloop_worker_thread
2156 _dispatch_root_queue_drain_deferred_wlh
2153 _dispatch_lane_invoke
2146 _dispatch_lane_serial_drain
1527 _dispatch_client_callout
1493 _dispatch_call_block_and_release
777 __88-[PTHandGestureDetector initWithFrameSize:asyncInitQueue:externalHandDetectionsEnabled:]_block_invoke
777 -[VCPHandGestureVideoRequest initWithOptions:]
508 -[VCPHandGestureClassifier initWithMinHandSize:]
508 -[VCPCoreMLRequest initWithModelName:]
506 +[MLModel modelWithContentsOfURL:configuration:error:]
506 -[MLModelAsset modelWithError:]
506 -[MLModelAsset load:]
506 +[MLLoader loadModelFromAssetAtURL:configuration:error:]
506 +[MLLoader _loadModelFromAssetAtURL:configuration:loaderEvent:error:]
505 +[MLLoader _loadModelFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:]
505 +[MLLoader _loadWithModelLoaderFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:]
505 +[MLLoader _loadModelFromArchive:configuration:modelVersion:compilerVersion:loaderEvent:useUpdatableModelLoaders:loadingClasses:error:]
505 +[MLLoader _loadModelWithClass:fromArchive:modelVersionInfo:compilerVersionInfo:configuration:error:]
445 +[MLMultiFunctionProgramEngine loadModelFromCompiledArchive:modelVersionInfo:compilerVersionInfo:configuration:error:]
333 -[MLMultiFunctionProgramEngine initWithProgramContainer:configuration:error:]
333 -[MLNeuralNetworkEngine initWithContainer:configuration:error:]
318 -[MLNeuralNetworkEngine _setupContextAndPlanWithConfiguration:usingCPU:reshapeWithContainer:error:]
313 -[MLNeuralNetworkEngine _addNetworkToPlan:error:]
313 espresso_plan_add_network
313 EspressoLight::espresso_plan::add_network(char const*, espresso_storage_type_t)
313 EspressoLight::espresso_plan::add_network(char const*, espresso_storage_type_t, std::__1::shared_ptrEspresso::net)
313 Espresso::load_network(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::compute_path, bool)
235 Espresso::reload_network_on_context(std::__1::shared_ptrEspresso::net const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::compute_path)
226 Espresso::load_and_shape_network(std::__1::shared_ptrEspresso::SerDes::generic_serdes_object const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::network_shape const&, Espresso::compute_path, std::__1::shared_ptrEspresso::blob_storage_abstract const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&)
214 Espresso::load_network_layers_internal(std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::network_shape const&, std::__1::basic_istream<char, std::__1::char_traits>, Espresso::compute_path, bool, std::__1::shared_ptrEspresso::blob_storage_abstract const&)
208 Espresso::run_dispatch_v2(std::__1::shared_ptrEspresso::abstract_context, std::__1::shared_ptrEspresso::net, std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, Espresso::network_shape const&, Espresso::compute_path const&, std::__1::basic_istream<char, std::__1::char_traits>)
141 try_dispatch(std::__1::shared_ptrEspresso::abstract_context, std::__1::shared_ptrEspresso::net, std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, Espresso::network_shape const&, Espresso::compute_path const&, std::__1::basic_istream<char, std::__1::char_traits>, Espresso::platform const&, Espresso::compute_path const&)
131 Espresso::get_net_info_ir(std::__1::shared_ptrEspresso::abstract_context, std::__1::shared_ptrEspresso::net, std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, Espresso::network_shape const&, Espresso::compute_path const&, Espresso::platform const&, Espresso::compute_path const&, std::__1::shared_ptrEspresso::cpu_context_transfer_algo_t&, std::__1::shared_ptrEspresso::net_info_ir_t&, std::__1::shared_ptrEspresso::kernels_validation_status_t&)
131 Espresso::cpu_context_transfer_algo_t::create_net_info_ir(std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, std::__1::shared_ptrEspresso::abstract_context, Espresso::network_shape const&, Espresso::compute_path, std::__1::shared_ptrEspresso::net_info_ir_t)
120 Espresso::cpu_context_transfer_algo_t::check_all_kernels_availability_on_context(std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, std::__1::shared_ptrEspresso::abstract_context&, Espresso::compute_path, std::__1::shared_ptrEspresso::net_info_ir_t&)
120 is_kernel_available_on_engine(unsigned long, std::__1::shared_ptrEspresso::base_kernel, Espresso::kernel_info_t const&, std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::shared_ptrEspresso::abstract_context, Espresso::compute_path, std::__1::shared_ptrEspresso::net_info_ir_t, std::__1::shared_ptrEspresso::kernels_validation_status_t)
83 Espresso::ANECompilerEngine::mix_reshape_kernel::is_valid_for_engine(std::__1::shared_ptrEspresso::kernels_validation_status_t, Espresso::base_kernel::validate_for_engine_args_t const&) const
45 int ValidateLayer<ANECReshapeLayerDesc, ZinIrReshapeUnit, ZinIrReshapeUnitInfo, ANECReshapeLayerDescAlternate>(void, ANECReshapeLayerDesc const*, ANECTensorDesc const*, unsigned long, unsigned long*, ANECReshapeLayerDescAlternate**, ANECTensorValueDesc const*)
45 void ValidateLayer_Impl<ANECReshapeLayerDesc, ZinIrReshapeUnit, ZinIrReshapeUnitInfo, ANECReshapeLayerDescAlternate>(void*, ANECReshapeLayerDesc const*, ANECTensorDesc const*, unsigned long, unsigned long*, ANECReshapeLayerDescAlternate**, ANECTensorValueDesc const*)
(...)
3
0
297
May ’24
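For the post above: as far as I know, the reaction and gesture effects are controlled by the user via the system video-effects menu, not by the app, and the class properties below only let an app observe that setting. Treat the property names as an assumption to verify against the macOS 14 SDK; there does not appear to be a per-app opt-out.

import AVFoundation

// A sketch that only observes the user's video-effects settings; it cannot change them.
func logReactionEffectState() {
    print("Reactions enabled by user:", AVCaptureDevice.reactionEffectsEnabled)
    print("Gesture-triggered reactions:", AVCaptureDevice.reactionEffectGesturesEnabled)
}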
AVPlayer and TLS 1.3 compliance for low latency HLS live stream
Hi guys, I'm investigating a failure to play a low-latency live HLS stream and I'm getting the following error:

<AVPlayerItemErrorLog: 0x30367da10>
#Version: 1.0
#Software: AppleCoreMedia/1.0.0.21L227 (Apple TV; U; CPU OS 17_4 like Mac OS X; en_us)
#Date: 2024/05/17 13:11:46.046
#Fields: date time uri cs-guid s-ip status domain comment cs-iftype
2024/05/17 13:11:16.016 https://s2-h21-nlivell01.cdn.xxxxxx.***/..../xxxx.m3u8 -15410 "CoreMediaErrorDomain" "Low Latency: Server must support http2 ECN and SACK" -
2024/05/17 13:11:17.017 -15410 "CoreMediaErrorDomain" "Invalid server blocking reload behavior for low latency" -
2024/05/17 13:11:17.017

The stream works when loading from a dev server with TLS 1.3, but fails on CDN servers with TLS 1.2. Regular live streams and VOD streams work normally on those CDN servers. I tried to configure TLSv1.2 in Info.plist, but that didn't help. When running nscurl --ats-diagnostics --verbose, it passes for the server with TLS 1.3, but fails for the CDN servers with TLS 1.2 due to error Code=-1005 "The network connection was lost." Is TLS 1.3 required or just recommended? Referring to https://developer.apple.com/documentation/http-live-streaming/enabling-low-latency-http-live-streaming-hls and https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis Is it possible to configure AVPlayer to skip ECN and SACK validation? Thanks.
0
0
312
May ’24