VideoToolbox

RSS for tag

Work directly with hardware-accelerated video encoding and decoding capabilities using VideoToolbox.

VideoToolbox Documentation

Posts under VideoToolbox tag

26 Posts
Sort by:
Post not yet marked as solved
0 Replies
26 Views
Hello, is it possible to make https://developer.apple.com/devcenter/download.action?path=/wwdc_2014/wwdc_2014_sample_code/usingVideoToolboxtodecodecompressedsamplebuffers.zip available again?
Posted by
Post not yet marked as solved
0 Replies
107 Views
In the context of an app that uses a VTDecompressionSession to decode incoming TS streams, creating the decompression session always fails with code -12911 for some H.264 streams when running on an M1 iPad (iPad Pro 12.9-inch, 5th generation, iOS 15.5). When the same app is run on my M1 Mac mini, as "My Mac (Designed for iPad)", with the same stream, VTDecompressionSessionCreate succeeds and the stream can be decoded. Any idea what could cause this error on the iPad? The code:

```swift
decompressionSessionCreationStatus = VTDecompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    formatDescription: videoFormatDescription!,
    decoderSpecification: nil,
    imageBufferAttributes: nil,
    outputCallback: nil,
    decompressionSessionOut: &videoDecompressionSession)

if videoDecompressionSession != nil {
    ...
} else {
    NSLog("videoDecompressionSession could not be created (status: %d): video format: %@",
          decompressionSessionCreationStatus,
          (videoFormatDescription != nil) ? (CFCopyDescription(videoFormatDescription!) as String) : "{?}")
}
```

where videoFormatDescription was previously created by extracting the H.264 parameter sets and calling CMVideoFormatDescriptionCreateFromH264ParameterSets.
Output:

```
videoDecompressionSession could not be created (status: -12911): video format: <CMVideoFormatDescription 0x281894360 [0x1dacc01b8]> {
    mediaType:'vide'
    mediaSubType:'avc1'
    mediaSpecific: {
        codecType: 'avc1' dimensions: 1920 x 1080
    }
    extensions: {{
        CVImageBufferChromaLocationBottomField = Left;
        CVImageBufferChromaLocationTopField = Left;
        CVImageBufferColorPrimaries = "ITU_R_709_2";
        CVImageBufferTransferFunction = "ITU_R_709_2";
        CVImageBufferYCbCrMatrix = "ITU_R_709_2";
        CVPixelAspectRatio = {
            HorizontalSpacing = 1;
            VerticalSpacing = 1;
        };
        FullRangeVideo = 0;
        SampleDescriptionExtensionAtoms = {
            avcC = {length = 119, bytes = 0x01640028 ffe10064 67640028 ad90a470 ... 68ff3cb0 fdf8f800 };
        };
    }}
}
```

Any help on this would be greatly appreciated! :)
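Editor's note: when triaging failures like this, it helps to map the raw OSStatus to its named constant. The values below come from VideoToolbox's VTErrors.h; the helper itself is just an illustrative sketch, not part of the original post:

```swift
import VideoToolbox

// Map common VideoToolbox session/decoder OSStatus codes to their VTErrors.h names.
func vtErrorName(_ status: OSStatus) -> String {
    switch status {
    case kVTCouldNotFindVideoDecoderErr:          return "kVTCouldNotFindVideoDecoderErr"          // -12906
    case kVTVideoDecoderBadDataErr:               return "kVTVideoDecoderBadDataErr"               // -12909
    case kVTVideoDecoderUnsupportedDataFormatErr: return "kVTVideoDecoderUnsupportedDataFormatErr" // -12910
    case kVTVideoDecoderMalfunctionErr:           return "kVTVideoDecoderMalfunctionErr"           // -12911
    case kVTVideoDecoderNotAvailableNowErr:       return "kVTVideoDecoderNotAvailableNowErr"       // -12913
    default:                                      return "OSStatus \(status)"
    }
}
```

-12911 here is kVTVideoDecoderMalfunctionErr: the decoder was found but rejected the stream, which is consistent with the iPad's hardware decoder balking at a stream the Mac's decoder accepts.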
Posted by
Post not yet marked as solved
2 Replies
492 Views
I am trying to use AVPlayerViewController, but when I try to play the video it crashes. This is my code:

```swift
import UIKit
import AVKit
import SwiftyJSON
import Alamofire
import EffyIosFramework

class VideoViewController: UIViewController, AVPlayerViewControllerDelegate {
    var player: AVPlayer!
    var vc = AVPlayerViewController()
    var isVideoPlaying = false
    var instance = VideoViewController() // note: this default value creates another VideoViewController, whose init creates another, recursing infinitely

    // View Did Load
    override func viewDidLoad() {
        super.viewDidLoad()
        vc.delegate = self
    }

    // Play Video
    func playVideo(url: URL) {
        player = AVPlayer(url: url)
        vc.player = player
        player.automaticallyWaitsToMinimizeStalling = false
        vc.delegate = self
        vc.showsPlaybackControls = false
        vc.videoGravity = AVLayerVideoGravity.resizeAspectFill
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.opacity = 0
        self.isVideoPlaying = true
        self.vc.player?.play()
        DispatchQueue.main.asyncAfter(wallDeadline: .now() + 0.2) {
            self.vc.view.frame = self.view.window!.bounds
            self.vc.view.backgroundColor = .clear
            DispatchQueue.main.async {
                self.vc.view.addSubview(self.view)
            }
            self.view.window?.addSubview(self.vc.view)
        }
    }
}
```
Posted by
Post not yet marked as solved
1 Reply
185 Views
We have ffmpeg-based code that runs on both iOS and macOS. We've noticed that with the upgrade from iOS 14 to iOS 15, the VideoToolbox decoder started failing to process an H.264 stream. Today we finally dug a bit deeper into this, and it seems the failure is similar on iOS 15 and on macOS 12.3, but only on Apple Silicon for the latter. The same software and input work fine on an Intel Mac. The failure is explained in some detail here: https://trac.ffmpeg.org/ticket/9713 and here: https://trac.ffmpeg.org/ticket/9016. It is quite easily reproduced with ffmpeg.
Posted by
Post not yet marked as solved
0 Replies
168 Views
I am developing a hybrid app using JavaScript and HTML; to build it in Xcode I use Capacitor. The problem is that my app includes videos and I cannot block the native iOS player, which is what I want to do. I found `webview.allowsInlineMediaPlayback = yes;`, but the problem is that it only blocks the native player on iPad, not on iPhone.
Posted by
Post not yet marked as solved
0 Replies
156 Views
How can we use the iOS system's "software encoder" API? We have used the ffmpeg software encoder, and we have also used the iOS VideoToolbox hardware encoder. How can we use a software encoder API of the iOS system itself?
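Editor's note: on macOS, VideoToolbox lets you request a software encoder through the encoder specification, as in the sketch below; these specification keys are documented for macOS, and on iOS VideoToolbox has historically chosen the encoder itself. The dimensions and codec are illustrative assumptions:

```swift
import VideoToolbox

// Sketch (macOS): ask VideoToolbox for a software H.264 encoder by disabling
// hardware acceleration in the encoder specification.
var session: VTCompressionSession?
let spec: [CFString: Any] = [
    kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder: false
]
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1920, height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: spec as CFDictionary,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)
```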
Posted by
Post not yet marked as solved
0 Replies
199 Views
Hi there, when I enable LTR by setting kVTCompressionPropertyKey_EnableLTR to true, and request an LTR refresh frame by setting ForceLTRRefresh to true in the VTCompressionSessionEncodeFrame() call, everything goes well. But I found I cannot tell which encoded frame is the LTR refresh frame. To find out, I dumped the attachments of the CMSampleBuffer frames. A normal P frame's attachments look like this:

```
{
    DependsOnOthers = 1;
    EarlierDisplayTimesAllowed = 0;
    EncodedFrameAvgQP = 35;
    IsDependedOnByOthers = 1;
    NotSync = 1;
    RequireLTRAcknowledgementToken = 9177;
    TemporalID = 0;
}
```

And an LTR refresh frame may look like this:

```
{
    DependsOnOthers = 1;
    EarlierDisplayTimesAllowed = 0;
    EncodedFrameAvgQP = 38;
    FECGroupID = 0;
    FECLastFrameInGroup = 1;
    FECLevelOfProtection = 1;
    IsDependedOnByOthers = 1;
    NotSync = 1;
    ReferenceWasRefreshed = 1;
    RequireLTRAcknowledgementToken = 9180;
    TemporalID = 0;
}
```

The latter has four more key-value pairs than the former:

```
FECGroupID = 0;
FECLastFrameInGroup = 1;
FECLevelOfProtection = 1;
ReferenceWasRefreshed = 1;
```

The key ReferenceWasRefreshed is very noticeable. So, question 1: does this key signify that the current frame is an LTR refresh frame? (Oddly, I didn't find any documentation about this key.) Question 2: if this frame is an LTR refresh frame, how can I know which frame it refers to — more specifically, the corresponding AcknowledgementToken? Question 3 (independent): I found that if I don't set kVTEncodeFrameOptionKey_AcknowledgedLTRTokens for a while (i.e. keep the tokens empty), I also stop getting the AcknowledgementToken from the encoded CMSampleBuffer — that is, the key is no longer contained in the attachments. What should I do so that encoded frames always contain the AcknowledgementToken? Thanks very much!
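Editor's note: since ReferenceWasRefreshed is not a published constant, any check has to use the literal string observed in the dump above. A sketch, treating that string as the hypothetical LTR-refresh marker the dump suggests:

```swift
import CoreMedia
import Foundation

// Sketch: inspect per-sample attachments for the undocumented
// "ReferenceWasRefreshed" key seen in the attachment dumps above.
func isLikelyLTRRefresh(_ sampleBuffer: CMSampleBuffer) -> Bool {
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer,
                                                                    createIfNecessary: false)
            as? [[String: Any]],
          let first = attachments.first else { return false }
    return (first["ReferenceWasRefreshed"] as? NSNumber)?.boolValue ?? false
}
```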
Posted by
Post not yet marked as solved
2 Replies
777 Views
Hi, we use VideoToolbox to encode our videos and upload them to our servers. But since iOS 15.4, all uploaded videos have really bad quality because the bitrate is very low. Apparently, when we create our VTCompressionSessionRef object, we now have to set kVTCompressionPropertyKey_DataRateLimits much higher than before; but if we set it too high, it does not work either. It is very weird. Has anyone seen the same problem? Best regards
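Editor's note: the units of these two properties are a common tripwire and worth double-checking in cases like this. AverageBitRate is in bits per second, while each DataRateLimits pair is [data size in bytes, window in seconds]. A sketch — the 4 Mbps target and 1.5x headroom are illustrative assumptions, not recommendations:

```swift
import VideoToolbox
import Foundation

// Sketch: set a target average bitrate (BITS/sec) plus a hard cap expressed
// as [BYTES per window, window length in SECONDS].
func applyRate(to session: VTCompressionSession, targetBitsPerSecond: Int) {
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_AverageBitRate,
                         value: NSNumber(value: targetBitsPerSecond))
    // Allow ~1.5x the average rate over any 1-second window, converted to bytes.
    let bytesPerWindow = targetBitsPerSecond * 3 / 2 / 8
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_DataRateLimits,
                         value: [NSNumber(value: bytesPerWindow), NSNumber(value: 1)] as CFArray)
}
```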
Posted by
Post not yet marked as solved
0 Replies
171 Views
VTDecompressionSessionCreate fails if instantiated with the predefined constant kVTVideoDecoderSpecification_RequireHardwareAcceleratedVideoDecoder. However, it succeeds if the constant is replaced with the literal CFSTR("RequireHardwareAcceleratedVideoDecoder"). Literally, they have the same value. The sample data (SPS and PPS) is identical in both cases. Could you help, please? Build command:

```
clang -x objective-c -target x86_64-apple-macos10.14 -framework Foundation -framework CoreMedia -framework VideoToolbox -o vt main.m
```

main.m
Posted by
Post not yet marked as solved
0 Replies
236 Views
Does the iOS Camera app use the VideoToolbox framework's methods, e.g. VTCompressionSessionEncodeFrame(), for encoding? Or does Apple have their own separate APIs, apart from the open APIs available to third-party developers in VideoToolbox?
Posted by
Post not yet marked as solved
0 Replies
318 Views
I am trying to decode my H.264 stream using only C++ (not Obj-C or Swift). I wrote the decoding part by following this Stack Overflow post. According to the returned status, the decoding completes successfully, but most of the decoded data is zero when I check it. This suggests the decoding is wrong or incomplete. I think I made a mistake at the decoder initialization stage. I have added my code below.

```cpp
VTDecompressionOutputCallbackRecord cb{};
cb.decompressionOutputCallback = &VisionModuleHandler::decode_cb;
cb.decompressionOutputRefCon = NULL;

CMFormatDescriptionRef format;
CMBlockBufferRef block = NULL;
CMSampleBufferRef buffer = NULL;
VTDecodeInfoFlags flags;
OSStatus status;

/* Initialize the decoder. */
std::vector<uint8_t> sps(encoded_frame.sps_pps_frame.begin() + 4,
                         encoded_frame.sps_pps_frame.begin() + 20);
std::vector<uint8_t> pps(encoded_frame.sps_pps_frame.begin() + 24,
                         encoded_frame.sps_pps_frame.end());
std::size_t sps_pps_size_array[] = {16, 5};
const size_t sampleSizeArray{encoded_frame.encoded_frame.size()};
const uint8_t* const parameterSetPointers[2] = {sps.data(), pps.data()};

status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2,
                                                             parameterSetPointers,
                                                             sps_pps_size_array, 4, &format);
if (status != noErr) spdlog::info("Error while decompression - 1");

status = VTDecompressionSessionCreate(kCFAllocatorDefault, format, NULL, NULL, &cb, &session);
if (status != noErr) spdlog::info("Error while decompression - 2");

status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                            (void*)(encoded_frame.encoded_frame.data()),
                                            encoded_frame.encoded_frame.size(),
                                            kCFAllocatorNull, NULL, 0,
                                            encoded_frame.encoded_frame.size(), 0, &block);
if (status != noErr) spdlog::info("Error while decompression - 3");

status = CMSampleBufferCreate(kCFAllocatorDefault, block, true, NULL, NULL, format,
                              1, 0, NULL, 1, &sampleSizeArray, &buffer);
if (status != noErr) spdlog::info("Error while decompression - 4");

/* Loop over compressed data; our callback will be called with
 * each decoded frame buffer. Passed flags make this asynchronous. */
status = VTDecompressionSessionDecodeFrame(session, buffer, 0, NULL, 0);
if (status != noErr) spdlog::info("Error while decompression - 5");

/* Flush in-process frames. */
status = VTDecompressionSessionFinishDelayedFrames(session);
if (status != noErr) spdlog::info("Error while decompression - 6");

/* Block until our callback has been called with the last frame. */
status = VTDecompressionSessionWaitForAsynchronousFrames(session);
if (status != noErr) spdlog::info("Error while decompression - 7");
```

I also added the callback part of the decoding:

```cpp
if (imageType == CVPixelBufferGetTypeID()) {
    auto a = 1;
}
CVPixelBufferLockBaseAddress(imageBuffer, 0);
OSType format = CVPixelBufferGetPixelFormatType(imageBuffer);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

uint8_t *yBuffer = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
uint8_t *cbCrBuffer = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

void *srcYpData = malloc(height * yPitch);
memcpy(srcYpData, yBuffer, height * yPitch);
void *srcCbCrData = malloc(height * cbCrPitch);
memcpy(srcCbCrData, cbCrBuffer, height * cbCrPitch);

size_t aRgbPitch = width * 4;
uint8_t *aRgbBuffer = (uint8_t*)malloc(height * aRgbPitch);
memset(aRgbBuffer, 0, height * aRgbPitch);

vImage_Buffer srcYp = {srcYpData, height, width, yPitch};
vImage_Buffer srcCbCr = {srcCbCrData, height, width, cbCrPitch};
vImage_Buffer dest = {aRgbBuffer, height, width, aRgbPitch};
/* Editor's note: the 265 below is likely a typo for 235 (video-range Yp max). */
vImage_YpCbCrPixelRange pixelRange = {16, 128, 265, 240, 235, 16, 240, 16};
vImage_YpCbCrToARGB infoYpCbCrToARGB = {};
vImage_Error error = vImageConvert_YpCbCrToARGB_GenerateConversion(kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
                                                                   &pixelRange, &infoYpCbCrToARGB,
                                                                   kvImage420Yp8_CbCr8, kvImageARGB8888,
                                                                   kvImageNoFlags);
uint8_t permuteMap[4] = {0, 1, 2, 3};
error = vImageConvert_420Yp8_CbCr8ToARGB8888(&srcYp, &srcCbCr, &dest, &infoYpCbCrToARGB,
                                             permuteMap, 255, kvImageNoFlags);
```

How can I fix this?
Posted by
Post not yet marked as solved
2 Replies
1.3k Views
I am processing an H.264 encoded video stream from a non-Apple IoT device, and I want to record bits of this video stream. I'm getting an error when I try to save to the photo gallery: "The operation couldn't be completed. (PHPhotosErrorDomain error 3302.)" My code — let me know if I need to share more:

```swift
private func beginRecording() {
    self.handlePhotoLibraryAuth()
    self.createFilePath()
    guard let videoOutputURL = self.outputURL,
          let vidWriter = try? AVAssetWriter(outputURL: videoOutputURL, fileType: AVFileType.mp4),
          self.formatDesc != nil else {
        print("Warning: No Format For Video")
        return
    }
    let vidInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: nil, sourceFormatHint: self.formatDesc)
    guard vidWriter.canAdd(vidInput) else {
        print("Error: Cant add video writer input")
        return
    }
    let sourcePixelBufferAttributes = [
        kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
        kCVPixelBufferWidthKey as String: "1280",
        kCVPixelBufferHeightKey as String: "720"] as [String: Any]
    self.videoWriterInputPixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: vidInput,
        sourcePixelBufferAttributes: sourcePixelBufferAttributes)
    vidInput.expectsMediaDataInRealTime = true
    vidWriter.add(vidInput)
    guard vidWriter.startWriting() else {
        print("Error: Cant write with vid writer")
        return
    }
    vidWriter.startSession(atSourceTime: CMTimeMake(value: self.videoFrameCounter, timescale: self.videoFPS))
    self.videoWriter = vidWriter
    self.videoWriterInput = vidInput
    print("Recording Video Stream")
}
```

Saving the video — the performChanges call is what fails:

```swift
private func saveRecordingToPhotoLibrary() {
    let fileManager = FileManager.default
    guard fileManager.fileExists(atPath: self.path) else {
        print("Error: The file: \(self.path) not exists, so cannot move this file camera roll")
        return
    }
    print("The file: \(self.path) has been save into documents folder, and is ready to be moved to camera roll")
    // This is what fails
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: self.path))
    }) { completed, error in
        guard completed else {
            print("Error: Cannot move the video \(self.path) to camera roll, error: \(String(describing: error?.localizedDescription))")
            return
        }
        print("Video \(self.path) has been moved to camera roll")
    }
}
```

When recording ends, we save the video:

```swift
private func endRecording() {
    guard let vidInput = videoWriterInput, let vidWriter = videoWriter else {
        print("Error, no video writer or video input")
        return
    }
    vidInput.markAsFinished()
    if !vidInput.isReadyForMoreMediaData {
        vidWriter.finishWriting {
            print("Finished Recording")
            guard vidWriter.status == .completed else {
                print("Warning: The Video Writer status is not completed, status: \(vidWriter.status.rawValue)")
                print(vidWriter.error.debugDescription)
                return
            }
            print("VideoWriter status is completed")
            self.saveRecordingToPhotoLibrary()
        }
    }
}
```
Posted by
Post not yet marked as solved
0 Replies
333 Views
This question is similar to "VTCompressionSessionEncodeFrame performance decrease"; I also posted it on Stack Overflow. In my demo, the average execution time of VTCompressionSessionEncodeFrame on an iPhone 12 is 10 ms, while an iPhone XS only costs 6 ms. If I decrease the frequency of calling that function, the execution time also decreases, but the total time (delay + execution time) stays about the same: 11 ms on the iPhone 12 and 7 ms on the iPhone XS. I have tried various configurations of the VTCompressionSession, but the result (iPhone 12 > iPhone XS) never changes! Here is the configuration of the VTCompressionSession:

```cpp
bool VideoToolboxEncoder::InitCompressionSession() {
    CFMutableDictionaryRef sourceImageBufferAttributes =
        CFDictionaryCreateMutable(kCFAllocatorDefault, 0, NULL, NULL);
    CFDictionarySetValue(sourceImageBufferAttributes, kCVPixelBufferOpenGLESCompatibilityKey, kCFBooleanTrue);
    OSType target_pixelformat = kCVPixelFormatType_420YpCbCr8Planar;
    dict_set_i32(sourceImageBufferAttributes, kCVPixelBufferPixelFormatTypeKey, target_pixelformat);
    dict_set_i32(sourceImageBufferAttributes, kCVPixelBufferBytesPerRowAlignmentKey, 16);
    CFDictionarySetValue(sourceImageBufferAttributes, kCVPixelBufferWidthKey,
                         CFNumberCreate(NULL, kCFNumberIntType, &codec_settings.width));
    CFDictionarySetValue(sourceImageBufferAttributes, kCVPixelBufferHeightKey,
                         CFNumberCreate(NULL, kCFNumberIntType, &codec_settings.height));

    OSStatus status = VTCompressionSessionCreate(NULL,
                                                 codec_settings.width,
                                                 codec_settings.height,
                                                 kCMVideoCodecType_HEVC,
                                                 NULL,
                                                 sourceImageBufferAttributes,
                                                 NULL,
                                                 encodeComplete,
                                                 this,
                                                 &compression_session_);

    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_AllowFrameReordering, kCFBooleanFalse);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_ExpectedFrameRate,
                                  (__bridge CFTypeRef)@(29.97));
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_MaxKeyFrameInterval,
                                  (__bridge CFTypeRef)@(codec_settings.gop_size));
    CFStringRef profileRef = kVTProfileLevel_HEVC_Main_AutoLevel;
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_ProfileLevel, profileRef);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_AllowOpenGOP, kCFBooleanFalse);
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_AverageBitRate,
                                  (__bridge CFTypeRef)@(codec_settings.bitrate));
    status = VTSessionSetProperty(compression_session_, kVTCompressionPropertyKey_DataRateLimits,
                                  (__bridge CFTypeRef)@[@20000000, @2]);
    VTCompressionSessionPrepareToEncodeFrames(compression_session_);
    return 0;
}
```

I also tried exporting video with AVAssetWriter and got the same result. It's unexpected that VideoToolbox performance decreases on newer iPhones. I want to figure out whether this problem is due to my incorrect configuration or to the iPhone hardware. Has anyone encountered the same problem? If someone could help with this issue, I would be really grateful!
Posted by
Post not yet marked as solved
0 Replies
466 Views
I have created three related Feedback Assistant issues that haven't been replied to, and I have also found many WebKit bugs entered over the past six months that could be related to this issue (see links below).

FB9688897
FB9666426
FB9554184

Replication in iPadOS (testing on a 1st-gen iPad Pro 13in):
Download - http://files.panomoments.com/bbb_sunflower_2160p_60fps_normal.mp4
Attempt to play the locally stored file on the device, either using the Files app or Safari. Note that the first few seconds play back with many frame drops and pauses (sound is unaffected). After initial playback, try seeking to various places in the timeline and note the frame drops and stuttering.

Replication in macOS (testing on a 2018 MacBook Pro i9):
Download - http://files.panomoments.com/bbb_sunflower_2160p_60fps_normal.mp4
Note that when opening in QuickTime the first frame is black. This also happens in the Finder spacebar preview, but is harder to see. The reason you don't usually see it in the spacebar preview is likely that the preview player has already decoded several frames asynchronously, and you just miss them due to the loading time of the UI. It's a very fast flicker that's easy to ignore (unlike the frozen black frame in QuickTime Player).

Regarding potentially related WebKit issues (it seems there was a ton of video decoding / GPU / WebGL work in iOS 15 and Safari 15), see these links:
https://bugs.webkit.org/show_bug.cgi?id=223740
https://bugs.webkit.org/show_bug.cgi?id=231031
https://bugs.webkit.org/show_bug.cgi?id=216250
https://bugs.webkit.org/show_bug.cgi?id=215908
https://bugs.webkit.org/show_bug.cgi?id=230617
https://bugs.webkit.org/show_bug.cgi?id=231359
https://bugs.webkit.org/show_bug.cgi?id=231424
https://bugs.webkit.org/show_bug.cgi?id=231012
https://bugs.webkit.org/show_bug.cgi?id=227586
https://bugs.webkit.org/show_bug.cgi?id=231354
Posted by
Post not yet marked as solved
0 Replies
526 Views
According to recent reports, it seems Apple will support ProRes in the next-generation iPhone. But currently I couldn't find any documentation about how to decode ProRes with the iOS 15 beta API.
Posted by
Post not yet marked as solved
1 Reply
502 Views
Hello everyone, we have a feature in our application where users can upload a picture or a video for others to look at. We would like to add compression for both media types at upload time, so we can save storage and our users can upload quickly without a long wait. We have tried the iOS native compression, but it degrades the quality of the photo or video. Can you please help us with the best possible solution we can integrate without losing media quality? As an alternative for now, we restrict users to uploading videos of at most 30 seconds, but if we can integrate compression, we would like to allow videos of up to 3 minutes. Please let us know if you need any additional information. Thank you.
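Editor's note: the built-in export presets trade size against quality coarsely; finer control over the size/quality trade-off usually comes from giving AVAssetWriterInput explicit H.264 output settings. A sketch — the 2 Mbps bitrate and 1280x720 size are illustrative assumptions, not recommendations:

```swift
import AVFoundation

// Sketch: compress video by re-encoding through AVAssetWriter with an
// explicit average bitrate instead of a fixed export preset.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 720,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 2_000_000, // bits per second
        AVVideoProfileLevelKey: AVVideoProfileLevelH264HighAutoLevel
    ]
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)
```

Raising or lowering AVVideoAverageBitRateKey is then a direct dial between upload size and visual quality.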
Posted by
Post not yet marked as solved
1 Reply
600 Views
Hey folks - while working with OBS Studio, I noticed that when it's running on my 2019 Mac Pro 7,1, it lists two hardware H.264 encoders as available. If I check the same version of OBS on my 2018-era MacBook Pro (with a discrete AMD GPU), it only lists a single hardware H.264 encoder. My Mac Pro has the single AMD Radeon Pro Vega II GPU (not the dual). It's not really clear why the API presents two hardware encoders, or whether there are actually any differences between the two. One of the OBS devs wrote me a quick-and-dirty C program to dump the list of encoders. The program looks like:

```c
#include <stdlib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <VideoToolbox/VideoToolbox.h>
#include <VideoToolbox/VTVideoEncoderList.h>
#include <CoreMedia/CoreMedia.h>

int main() {
    CFArrayRef encoder_list;
    VTCopyVideoEncoderList(NULL, &encoder_list);
    CFIndex size = CFArrayGetCount(encoder_list);
    for (CFIndex i = 0; i < size; i++) {
        CFDictionaryRef encoder_dict = CFArrayGetValueAtIndex(encoder_list, i);

#define VT_PRINT(key, name) \
        CFStringRef name##_ref = CFDictionaryGetValue(encoder_dict, key); \
        CFIndex name##_len = CFStringGetLength(name##_ref); \
        char *name = malloc(name##_len + 1); \
        memset(name, 0, name##_len + 1); \
        CFStringGetFileSystemRepresentation(name##_ref, name, name##_len);

        VT_PRINT(kVTVideoEncoderList_EncoderName, name);
        printf("Name: %s\n", name);
        VT_PRINT(kVTVideoEncoderList_DisplayName, dn);
        printf("Display Name: %s\n", dn);
        VT_PRINT(kVTVideoEncoderList_EncoderID, id);
        printf("Id: %s\n", id);
        printf("=========================\n");
    }
    CFRelease(encoder_list);
    exit(0);
}
```

Executing that on my Mac Pro, I see, among the output:

```
Name: Apple H.264 (HW)
Display Name: Apple H.264 (HW)
Id: com.apple.videotoolbox.videoencoder.h264.gva.100000abc
=========================
Name: Apple H.264 (HW)
Display Name: Apple H.264 (HW)
Id: com.apple.videotoolbox.videoencoder.h264.gva
```

When I run it on the laptop:

```
=========================
Name: Apple H.264 (HW)
Display Name: Apple H.264 (HW)
Id: com.apple.videotoolbox.videoencoder.h264.gva
```

My question is: what's the difference between those two hardware encoders listed for the Mac Pro? Thanks for any guidance. :-)
Posted by