Core Image


Use built-in or custom filters to process still and video images using Core Image.

Core Image Documentation

Posts under Core Image tag

79 Posts
Post not yet marked as solved
1 Reply
339 Views
Hi, I recently received a crash report from Firebase Crashlytics. It only happens on iOS 15, and I do not really know what the crash means. Can someone please explain it to me? Thank you very much!

```
EXC_BREAKPOINT 0x0000000181064114
Crashed: com.apple.root.utility-qos
0  CoreFoundation           0xbc114   CFDataGetBytes + 156
1  ImageIO                  0x1d48    CGImageGetImageSource + 156
2  UIKitCore                0x1d0954  -[_UIImageCGImageContent dealloc] + 48
3  libobjc.A.dylib          0x755c    AutoreleasePoolPage::releaseUntil(objc_object**) + 200
4  libobjc.A.dylib          0x3928    objc_autoreleasePoolPop + 208
5  libdispatch.dylib        0x463c    _dispatch_last_resort_autorelease_pool_pop + 44
6  libdispatch.dylib        0x16064   _dispatch_root_queue_drain + 1056
7  libdispatch.dylib        0x165f8   _dispatch_worker_thread2 + 164
8  libsystem_pthread.dylib  0x10b8    _pthread_wqthread + 228
9  libsystem_pthread.dylib  0xe94     start_wqthread + 8
```
Post not yet marked as solved
2 Replies
402 Views
Hi. I would like to use – as I thought, possibly – a Core ML model to identify the main colors of an image. The idea is to detect the colors used in fashion images to derive a kind of "color trend" across a set of images. I found this question in the forum already, but it never got an answer (follow-up questions were not answered by the original poster): https://developer.apple.com/forums/thread/94324 Maybe Core ML models are not the way to do this (are they more about objects and text)? Any hints toward other techniques are welcome, too. The only approach I do not want to follow is using online services, as the images would have to be delivered to them – and usually are kept there. I want to realize an on-premise approach. Thanks for any hints!
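For finding the main colors of an image fully on-device, one option that needs no Core ML model at all is k-means clustering over sampled pixels. The following is an illustrative sketch, not a library API; the image decoding and pixel sampling are assumed to happen elsewhere:

```swift
import Foundation

/// Minimal k-means sketch for dominant colors. `pixels` is assumed to be
/// RGB triples in 0...1 sampled from a decoded image (sampling not shown).
struct RGB { var r, g, b: Double }

func dist2(_ a: RGB, _ b: RGB) -> Double {
    let dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b
    return dr * dr + dg * dg + db * db
}

func dominantColors(of pixels: [RGB], k: Int, iterations: Int = 10) -> [RGB] {
    guard k > 0, pixels.count >= k else { return pixels }
    // Start from k evenly spaced samples as initial cluster centers.
    var centers = (0..<k).map { pixels[$0 * pixels.count / k] }
    for _ in 0..<iterations {
        var sums = Array(repeating: RGB(r: 0, g: 0, b: 0), count: k)
        var counts = Array(repeating: 0, count: k)
        for p in pixels {
            // Assign each pixel to the nearest center (squared Euclidean distance).
            let nearest = (0..<k).min { dist2(p, centers[$0]) < dist2(p, centers[$1]) }!
            sums[nearest].r += p.r; sums[nearest].g += p.g; sums[nearest].b += p.b
            counts[nearest] += 1
        }
        // Move each center to the mean of its assigned pixels.
        for i in 0..<k where counts[i] > 0 {
            let n = Double(counts[i])
            centers[i] = RGB(r: sums[i].r / n, g: sums[i].g / n, b: sums[i].b / n)
        }
    }
    return centers
}
```

The returned centers are the "trend" colors; aggregating the centers across many images would give the per-collection trend the post describes.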
Post not yet marked as solved
0 Replies
266 Views
Hello. I am an iOS app developer. I want to get the contents of a QR code from a QR code image. As of iOS 15.0.2, the CIDetector featuresInImage API returns no data. The same API works normally on iOS 14.6. Please check whether there is a bug in CIDetector featuresInImage in iOS 15.0.2. Best regards, Hyoung-jin Kim
Post not yet marked as solved
0 Replies
370 Views
Hello, I am trying to create an animated sequence of HEIC images, but I cannot save the frame duration property. It seems this is a well-known bug: https://github.com/SDWebImage/SDWebImage/issues/3120 The kCGImagePropertyHEICSDictionary is never saved. Here's a sample project to reproduce the bug: ImageIOHEICSEncodeDecodeBug.zip Has anybody managed to save this information in a HEIC sequence? Thanks! Here's how I am writing and reading the image sequence:

```objc
- (void)testHEICSBug {
    // First, load an animated image (GIF).
    // You can also change the type to PNG (animated PNG format); same result.
    NSData *GIFData = [NSData dataWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"image1" ofType:@"gif"]];
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)GIFData, nil);
    NSUInteger frameCount = CGImageSourceGetCount(source);
    NSAssert(frameCount > 1, @"GIF frame count > 1");

    // Split into frames, encode to HEICS.
    NSMutableData *heicsData = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)heicsData, (__bridge CFStringRef)AVFileTypeHEIC, frameCount, nil);

    for (int i = 0; i < frameCount; i++) {
        // First get the GIF input image and duration.
        CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, i, nil);
        NSDictionary *inputProperties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, i, nil);
        NSDictionary *inputDictionary = inputProperties[(__bridge NSString *)kCGImagePropertyGIFDictionary];
        NSTimeInterval duration = [inputDictionary[(__bridge NSString *)kCGImagePropertyGIFUnclampedDelayTime] doubleValue];
        NSAssert(cgImage, @"CGImage not nil");
        NSAssert(duration > 0, @"Input duration > 0");

        // Then, encode into the HEICS animated image.
        NSMutableDictionary *outputDProperties = [NSMutableDictionary dictionary];
        outputDProperties[(__bridge NSString *)kCGImagePropertyHEICSDictionary] = @{(__bridge NSString *)kCGImagePropertyHEICSUnclampedDelayTime : @(duration)};
        CGImageDestinationAddImage(destination, cgImage, (__bridge_retained CFDictionaryRef)outputDProperties);
    }

    // Output HEICS image data.
    BOOL result = CGImageDestinationFinalize(destination);
    NSAssert(result, @"Encode HEICS failed");

    // Next, try to use ImageIO to decode the HEICS and check the duration.
    CGImageSourceRef newSource = CGImageSourceCreateWithData((__bridge CFDataRef)heicsData, nil);
    frameCount = CGImageSourceGetCount(newSource);
    NSAssert(frameCount > 1, @"New HEICS should be an animated image");
    NSUInteger frameIndex = 1; // I pick the 2nd frame; actually any frame shows this issue.
    NSDictionary *newProperties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(newSource, frameIndex, nil);
    NSDictionary *newDictionary = newProperties[(__bridge NSString *)kCGImagePropertyHEICSDictionary];
    NSTimeInterval newDuration = [newDictionary[(__bridge NSString *)kCGImagePropertyHEICSUnclampedDelayTime] doubleValue];
    CGImageRef newImage = CGImageSourceCreateImageAtIndex(newSource, frameIndex, nil);

    // Now, check the HEICS frame duration; however, it's nil :(
    // Only the image is kept.
    NSAssert(newImage, @"frame image is not nil");
    NSAssert(newDuration > 0, @"Decoding the HEICS (encoded from GIF) loses the frame duration");
}
```
Post not yet marked as solved
0 Replies
350 Views
I want to get text by reading a QR code image. However, on iOS 15.0.2, CIDetector featuresInImage returns no data, while on iOS 14.6 it returns data. Please explain what the reason might be.
Post not yet marked as solved
0 Replies
521 Views
We see strange crashes when running our app since the macOS 12 beta (and still on macOS 12.0.1). We have not been able to fully identify the issue, but it seems to happen when video playback continues in an AVPlayer, sometimes after returning from the background and sometimes when playback resumes directly. Xcode points to some code in libsystem_kernel.dylib (it seems different every time and is never in our own code). The log shows:

```
-[MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion 'MTLResource 0x600002293790 (label: (null)), referenced in cmd buffer 0x7f7b2200a000 (label: (null)) is in volatile or empty purgeable state at commit'
```

We tried finding the objects 0x600002293790 and 0x7f7b2200a000, but this gave no additional information as to why the app crashes. We are using a custom video compositor (AVVideoCompositing) and initialise the CIContext for the work done there with these options:

```swift
if let mtlDevice = MTLCreateSystemDefaultDevice() {
    let options: [CIContextOption: Any] = [
        CIContextOption.useSoftwareRenderer: false,
        CIContextOption.outputPremultiplied: false,
    ]
    let context = CIContext(mtlDevice: mtlDevice, options: options)
}
```

We are not sure whether this is an Xcode 13 debug issue, a macOS 12.0.1 Monterey issue, or an actual bug; we have not seen it crash when the app is not built with Xcode. But we have also seen strange crashes on audio/video threads that we could not trace back to our code. The crash never occurred with Xcode 12 or on macOS Big Sur during previous testing. Any information on locating the source of the issue, or a solution, would be awesome.
Post not yet marked as solved
0 Replies
250 Views
I am working on a face-editing feature where I want to smooth the face and whiten the skin tone. I am able to smooth the face in an image using the YUCIHighPassSkinSmoothing library, but I have not been able to whiten the face; I can only smooth it. Here is my code using the YUCIHighPassSkinSmoothing library:

```swift
self.inputCIImage = CIImage(cgImage: self.imgPhotoForEdit.image!.cgImage!)
self.filter.inputImage = self.inputCIImage
self.filter.inputRadius = 6.0
self.filter.inputAmount = NSNumber(value: selectedSmoothingValue)
self.filter.inputSharpnessFactor = 0.0
let outputCIImage = filter.outputImage!
let outputCGImage = self.context.createCGImage(outputCIImage, from: outputCIImage.extent)
let outputUIImage = UIImage(cgImage: outputCGImage!,
                            scale: self.imgPhotoForEdit.image!.scale,
                            orientation: self.imgPhotoForEdit.image!.imageOrientation)
self.imgPhotoForEdit.image = outputUIImage
```
Post not yet marked as solved
1 Reply
591 Views
I am not able to scan or read a 1D barcode in an image from the device photo library, whereas I have achieved this for QR code (2D barcode) images using the code below.

```objc
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                          context:nil
                                          options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
if (detector) {
    CIImage *img = [[CIImage alloc] initWithImage:image];
    NSArray *imgFeatures = [detector featuresInImage:img];
    NSString *contents;
    for (CIQRCodeFeature *imgFeature in imgFeatures) {
        DLog(@"decode %@ ", imgFeature.messageString);
        contents = imgFeature.messageString;
        if (contents) {
            DLog(@"Success");
        } else {
            DLog(@"Failure");
        }
        return;
    }
}
```

As far as I can tell, CIDetector only supports the following detector types: CIDetectorTypeFace, CIDetectorTypeRectangle, CIDetectorTypeQRCode, and CIDetectorTypeText. https://developer.apple.com/documentation/coreimage/cidetector/detector_types?language=objc Please let me know how I can get 1D barcode images from the device photo library read/scanned.
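Since CIDetector exposes no 1D barcode type, one alternative worth considering (available since iOS 11) is the Vision framework: VNDetectBarcodesRequest covers 1D symbologies such as EAN-13 and Code 128 in addition to QR. A minimal sketch, assuming you already have a CGImage from the photo-library UIImage:

```swift
import Vision

// Sketch: detect 1D/2D barcodes in a CGImage with Vision instead of CIDetector.
func readBarcodes(in cgImage: CGImage) {
    let request = VNDetectBarcodesRequest { request, error in
        guard let results = request.results as? [VNBarcodeObservation] else { return }
        for barcode in results {
            // payloadStringValue is the decoded content; symbology says which kind.
            print(barcode.symbology, barcode.payloadStringValue ?? "<no payload>")
        }
    }
    // Optionally restrict request.symbologies to the barcode kinds you expect.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```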
Post marked as solved
2 Replies
437 Views
I am wondering whether it is possible to detect a document or an envelope with an aspect ratio (width / height) of 2.0 or more on iOS 15 using a CIDetector object. Starting from iOS 15, my application stopped detecting envelopes with the aforementioned aspect ratios. I have tried the CIDetectorAspectRatio, CIDetectorFocalLength, and CIDetectorMinFeatureSize options with suitable values to fine-tune the detection, but that didn't solve the problem. The following is the method I'm using to get the detected rectangles. It returns a CIRectangleFeature array with one element when the application runs on an iPhone with a version earlier than iOS 15, but an empty array on iOS 15 or later.

```swift
static func rectangles(inImage image: CIImage) -> [CIRectangleFeature]? {
    let rectangleDetector = CIDetector(ofType: CIDetectorTypeRectangle,
                                       context: CIContext(options: nil),
                                       options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let rectangleFeatures = rectangleDetector?.features(in: image) as? [CIRectangleFeature] else {
        return nil
    }
    return rectangleFeatures
}
```

Thank you in advance.
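If CIDetector's rectangle detector no longer cooperates on iOS 15, one possible workaround (offered as a sketch, not a confirmed fix) is Vision's VNDetectRectanglesRequest, which has an explicit aspect-ratio window. Vision expresses aspect ratio as the shorter side divided by the longer side, in the range 0...1, so a 2:1 envelope corresponds to about 0.5:

```swift
import Vision

// Sketch: rectangle detection with an explicit aspect-ratio window.
func detectWideRectangles(in image: CIImage) -> [VNRectangleObservation] {
    let request = VNDetectRectanglesRequest()
    // Allow elongated shapes: a 2:1 (or wider) envelope has ratio <= 0.5.
    request.minimumAspectRatio = 0.2
    request.maximumAspectRatio = 1.0
    request.maximumObservations = 5
    let handler = VNImageRequestHandler(ciImage: image, options: [:])
    try? handler.perform([request])
    return request.results as? [VNRectangleObservation] ?? []
}
```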
Post not yet marked as solved
0 Replies
241 Views
Hi, apologies, but I am completely new to Apple development, struggling to find the right information, and would really appreciate some pointers from experienced developers on the best approach for a project I am starting. My use case involves using properties of colour to predict the density of a fluid from a photograph. Each photograph will simply be a single colour; the properties of the photograph (colour intensity / brightness / saturation) will vary between photographs as the density of the fluid changes, and I am looking to use these (or possibly other similar properties) to determine a value for the fluid density. What I would like to ask is: 1. Do you think Core ML is the best approach for predicting the density from the colour properties of the photograph, or should I start somewhere else? 2. Can you point me to any helpful related documentation to get started? I hope someone can help. Many thanks in advance, Steve
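Since each photograph is a single colour, it may be worth trying a plain regression on an average-colour measurement before reaching for Core ML. CIAreaAverage is a real built-in Core Image filter that reduces a region to one pixel; the function wrapping it here is an illustrative sketch:

```swift
import CoreImage

// Sketch: reduce a single-colour photo to one average RGBA value with CIAreaAverage.
// A simple fitted curve from these components to measured density may then suffice.
func averageColor(of image: CIImage, context: CIContext) -> [Double]? {
    guard let filter = CIFilter(name: "CIAreaAverage", parameters: [
        kCIInputImageKey: image,
        kCIInputExtentKey: CIVector(cgRect: image.extent)
    ]), let output = filter.outputImage else { return nil }

    // Render the 1x1 result into a 4-byte RGBA buffer.
    var pixel = [UInt8](repeating: 0, count: 4)
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    return pixel.map { Double($0) / 255.0 }  // r, g, b, a in 0...1
}
```

If the colour-to-density relationship turns out to be nonlinear or noisy, that is the point at which a Create ML tabular regressor becomes worth evaluating.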
Post not yet marked as solved
1 Reply
501 Views
I have tried everything, but it appears to be impossible to get MTKView to display the full range of colors of an HDR CIImage made from a CVPixelBuffer (in 10-bit YUV format). Only built-in layers such as AVCaptureVideoPreviewLayer, AVPlayerLayer, and AVSampleBufferDisplayLayer are able to fully display HDR images on iOS. Is MTKView incapable of displaying the full BT.2020 HLG color range? Why does MTKView clip colors even if I set the pixel color format to bgra10_xr or bgra10_xr_srgb?

```swift
convenience init(frame: CGRect, contentScale: CGFloat) {
    self.init(frame: frame)
    contentScaleFactor = contentScale
}

convenience init(frame: CGRect) {
    let device = MetalCamera.metalDevice
    self.init(frame: frame, device: device)
    colorPixelFormat = .bgra10_xr
    self.preferredFramesPerSecond = 30
}

override init(frame frameRect: CGRect, device: MTLDevice?) {
    guard let device = device else {
        fatalError("Can't use Metal")
    }
    guard let cmdQueue = device.makeCommandQueue(maxCommandBufferCount: 5) else {
        fatalError("Can't make Command Queue")
    }
    commandQueue = cmdQueue
    context = CIContext(mtlDevice: device, options: [CIContextOption.cacheIntermediates: false])
    super.init(frame: frameRect, device: device)
    self.framebufferOnly = false
    self.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
}
```

And then the rendering code:

```swift
override func draw(_ rect: CGRect) {
    guard let image = self.image else { return }
    let dRect = self.bounds
    let drawImage: CIImage
    let targetSize = dRect.size
    let imageSize = image.extent.size
    let scalingFactor = min(targetSize.width / imageSize.width, targetSize.height / imageSize.height)
    let scalingTransform = CGAffineTransform(scaleX: scalingFactor, y: scalingFactor)
    let translation = CGPoint(x: (targetSize.width - imageSize.width * scalingFactor) / 2,
                              y: (targetSize.height - imageSize.height * scalingFactor) / 2)
    let translationTransform = CGAffineTransform(translationX: translation.x, y: translation.y)
    let scalingTranslationTransform = scalingTransform.concatenating(translationTransform)
    drawImage = image.transformed(by: scalingTranslationTransform)

    let commandBuffer = commandQueue.makeCommandBufferWithUnretainedReferences()
    guard let texture = self.currentDrawable?.texture else { return }

    var colorSpace: CGColorSpace
    if #available(iOS 14.0, *) {
        colorSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
    } else {
        // Fallback on earlier versions
        colorSpace = drawImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    }
    NSLog("Image \(colorSpace.name), \(image.colorSpace?.name)")
    context.render(drawImage, to: texture, commandBuffer: commandBuffer, bounds: dRect, colorSpace: colorSpace)
    commandBuffer?.present(self.currentDrawable!, afterMinimumDuration: 1.0 / Double(self.preferredFramesPerSecond))
    commandBuffer?.commit()
}
```
Post not yet marked as solved
3 Replies
689 Views
As part of our workflow we update EXIF data using the Image I/O (CGImage) API. Specifically, we get the metadata from an original image, modify it, and then use CGImageDestinationCopyImageSource() to merge the new metadata into a copy of the original image. We have found that if we compile for an iOS 15 device from Xcode 13, the updated metadata is no longer merged in. The following test code demonstrates the problem. (For it to work for you, you'll need to change the image input, and make sure to change an EXIF tag that exists in your image.) The test passes if the target device is below iOS 15, but fails on iOS 15.

```swift
func testCGImageDestinationCopyImageSource() throws {
    guard let imageURL = Bundle(for: self.classForCoder).url(forResource: "Image_000001", withExtension: "jpg") else {
        XCTFail()
        return
    }

    // Work with the image data
    let originalData = try Data(contentsOf: imageURL)

    // Create a source from the data
    guard let imageSource = CGImageSourceCreateWithData(originalData as CFData, nil) else {
        XCTFail()
        return
    }
    guard let UTI: CFString = CGImageSourceGetType(imageSource) else {
        XCTFail()
        return
    }

    // Set up a new destination to copy data to
    let imageData: CFMutableData = CFDataCreateMutable(nil, 0)
    guard let destination = CGImageDestinationCreateWithData(imageData as CFMutableData, UTI, 1, nil) else {
        XCTFail()
        return
    }

    // Get the metadata
    var mutableMetadata: CGMutableImageMetadata
    if let imageMetadata = CGImageSourceCopyMetadataAtIndex(imageSource, 0, nil) {
        mutableMetadata = CGImageMetadataCreateMutableCopy(imageMetadata) ?? CGImageMetadataCreateMutable()
    } else {
        mutableMetadata = CGImageMetadataCreateMutable()
    }

    // Inspect and check the old value
    guard let tag = CGImageMetadataCopyTagMatchingImageProperty(mutableMetadata,
                                                                kCGImagePropertyExifDictionary,
                                                                kCGImagePropertyExifLensModel) else {
        XCTFail()
        return
    }
    guard let originalValue = CGImageMetadataTagCopyValue(tag) as? String else {
        XCTFail()
        return
    }
    XCTAssertEqual(originalValue, "iOS.0")

    // Set a new value in the metadata
    CGImageMetadataSetValueMatchingImageProperty(mutableMetadata,
                                                 kCGImagePropertyExifDictionary,
                                                 kCGImagePropertyExifLensModel, "iOS" as CFString)

    // Ensure the new value is set in the metadata
    guard let newTag = CGImageMetadataCopyTagMatchingImageProperty(mutableMetadata,
                                                                   kCGImagePropertyExifDictionary,
                                                                   kCGImagePropertyExifLensModel) else {
        XCTFail()
        return
    }
    guard let newValue = CGImageMetadataTagCopyValue(newTag) as? String else {
        XCTFail()
        return
    }
    XCTAssertEqual(newValue, "iOS")

    // Combine the new metadata with the original image
    let options = [
        kCGImageDestinationMetadata as String: mutableMetadata,
        kCGImageDestinationMergeMetadata as String: true
    ] as [String: Any]
    guard CGImageDestinationCopyImageSource(destination, imageSource, options as CFDictionary, nil) else {
        XCTFail()
        return
    }

    // Create a new source from the mutated data
    guard let newSource = CGImageSourceCreateWithData(imageData as CFData, nil) else {
        XCTFail()
        return
    }

    // Get a new copy of the metadata
    var mutableMetadata2: CGMutableImageMetadata
    if let imageMetadata2 = CGImageSourceCopyMetadataAtIndex(newSource, 0, nil) {
        mutableMetadata2 = CGImageMetadataCreateMutableCopy(imageMetadata2) ?? CGImageMetadataCreateMutable()
    } else {
        mutableMetadata2 = CGImageMetadataCreateMutable()
    }

    // Inspect and check the changed value
    guard let updatedTag = CGImageMetadataCopyTagMatchingImageProperty(mutableMetadata2,
                                                                       kCGImagePropertyExifDictionary,
                                                                       kCGImagePropertyExifLensModel) else {
        XCTFail()
        return
    }
    guard let updatedValue = CGImageMetadataTagCopyValue(updatedTag) as? String else {
        XCTFail()
        return
    }
    XCTAssertEqual(updatedValue, "iOS")
}
```
Post marked as solved
1 Reply
665 Views
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or on the internet, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image; the bottom-left corner of an image is supposed to be (0, 0). But my experiments show something else. I created a prototype that rotates a CIImage by pi/10 radians on each button click. Here is the code I wrote:

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    imageView.contentMode = .scaleAspectFit
    let uiImage = UIImage(contentsOfFile: imagePath)
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    imageView.image = uiImage
}

private var currentAngle = CGFloat(0)
private var ciImage: CIImage!
private var ciContext = CIContext()

@IBAction func rotateImage() {
    let extent = ciImage.extent
    let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    let uiImage = UIImage(contentsOfFile: imagePath)
    currentAngle = currentAngle + CGFloat.pi / 10
    let rotate = CGAffineTransform(rotationAngle: currentAngle)
    let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
    let transform = translateBack.concatenating(rotate.concatenating(translate))
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    ciImage = ciImage.transformed(by: transform)
    NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
    let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
    imageView.image = UIImage(cgImage: cgImage!)
}
```

But in the logs, I see that the extents of the images have negative origin.x and origin.y. What does that mean? Relative to what is it negative, and where exactly is (0, 0) then? What exactly is the image extent, and how does the Core Image coordinate system work?

```
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
```
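For context, a negative extent origin is expected here: a CIImage lives in an infinite working coordinate space, its extent is simply the bounding box of the image in that space, and rotating about the image's centre pushes part of that box below (0, 0). A small sketch of shifting a transformed image back so its extent starts at the origin (createCGImage(_:from:) with the full extent already accounts for the offset, so this is mainly for clarity):

```swift
import CoreImage

// Sketch: after an arbitrary transform, translate the image so that its
// extent's origin is (0, 0) again in Core Image's working space.
func normalizedToOrigin(_ image: CIImage) -> CIImage {
    let origin = image.extent.origin
    return image.transformed(by: CGAffineTransform(translationX: -origin.x,
                                                   y: -origin.y))
}
```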
Post not yet marked as solved
0 Replies
285 Views
Hi, I am getting a frame of video data from a third-party SDK. The object contains the data buffer, data length, Y buffer, U buffer, V buffer, and a few more bits related to the SDK. The data is I420 (judging by the name of the object). I am using the following code to try to make an NSImage from the data:

```swift
var pseudoVideoData = Data.init(bytes: buffer!, count: Int(bufSize))
let cgImg = pseudoVideoData.withUnsafeMutableBytes { (ptr) -> CGImage in
    let ctx = CGContext(
        data: ptr.baseAddress,
        width: Int(Double(windowWidth) * 0.562),   // Why?
        height: Int(Double(windowHeight) * 0.519), // Why? again???
        bitsPerComponent: 8,
        bytesPerRow: Int(4 * streamWidth),
        space: CGColorSpace(name: CGColorSpace.sRGB)!,
        bitmapInfo: bmInfo
    )!
    return ctx.makeImage()!
}
let imgSize = NSSize(width: CGFloat(windowWidth), height: CGFloat(windowHeight))
let img = NSImage.init(cgImage: cgImg, size: imgSize)
self.pseudoVideoView.image = img
```

I can blast the image into an NSImageView.image, but the image is missing colour. I can get the Y buffer, U buffer, and V buffer, but I don't know how to combine all the data into a properly coloured image. Can someone point me to a URL or some sample code that would get me past this problem? Thanks and best regards, John
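One possible approach, assuming the SDK's planes really are 8-bit I420 with half-resolution chroma: copy the three planes into a planar 4:2:0 CVPixelBuffer and let Core Image perform the YUV-to-RGB conversion. The plane and stride parameter names below are illustrative, not the SDK's:

```swift
import CoreImage
import CoreVideo
import Foundation

// Sketch: wrap raw I420 planes in a CVPixelBuffer and convert via CIImage.
// yPlane/uPlane/vPlane and their strides come from the third-party SDK object.
func ciImageFromI420(yPlane: UnsafeRawPointer, yStride: Int,
                     uPlane: UnsafeRawPointer, uStride: Int,
                     vPlane: UnsafeRawPointer, vStride: Int,
                     width: Int, height: Int) -> CIImage? {
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferIOSurfacePropertiesKey: [:]] as CFDictionary
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                              attrs, &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    // Copy each plane row by row, since the destination stride may differ.
    let planes: [(UnsafeRawPointer, Int, Int)] = [
        (yPlane, yStride, height),            // plane 0: full-size luma
        (uPlane, uStride, (height + 1) / 2),  // plane 1: half-size Cb
        (vPlane, vStride, (height + 1) / 2),  // plane 2: half-size Cr
    ]
    for (index, (src, srcStride, rows)) in planes.enumerated() {
        guard let dst = CVPixelBufferGetBaseAddressOfPlane(buffer, index) else { return nil }
        let dstStride = CVPixelBufferGetBytesPerRowOfPlane(buffer, index)
        let rowBytes = min(srcStride, dstStride)
        for row in 0..<rows {
            memcpy(dst + row * dstStride, src + row * srcStride, rowBytes)
        }
    }
    return CIImage(cvPixelBuffer: buffer)  // render with a CIContext, then wrap in NSImage
}
```

From there, a CIContext's createCGImage(_:from:) gives a correctly coloured CGImage to hand to NSImage.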
Post marked as solved
4 Replies
709 Views
Hello, the Metal compiler crashes for me when attempting to compile a Metal source file that contains Core Image kernel implementations. This is a minimal version of a file that produces the crash:

```metal
extern "C" {
namespace coreimage {

inline void swap(thread float4 &a, thread float4 &b) {
    float4 tmp = a;
    a = min(a, b);
    b = max(tmp, b);
}

typedef sample_t s;

float4 median_reduction_3(s v0, s v1, s v2) {
    swap(v1, v2);
    swap(v0, v2);
    swap(v0, v1);
    return v1;
}

}}
```

Some observations: If inline is removed, the code compiles fine. I'm not sure whether there's a performance impact, as the back-end LLVM compiler might well decide to inline it on its own. If the calls to swap are commented out in the median-reduction function, the code compiles. If the -fcikernel compilation flag is not used, it also compiles fine (doesn't crash); of course, that configuration doesn't allow the functions in the file to be used as Core Image kernels. I'm using the build settings recommended in this WWDC20 session (without indicating the location of the header files, since that setting is empty in my project and the new compiler interprets the argument following -I as a directory).
Post not yet marked as solved
0 Replies
817 Views
I am developing an app that sends pixel buffers from a Broadcast Upload Extension to OpenTok. When I run my broadcast extension, it hits its memory limit in seconds. I have been looking for ways to reduce the size and scale of the CMSampleBuffers: I first convert them to CIImage, then scale the image, and then convert it to a CVPixelBuffer for sending to the OpenTok servers. Unfortunately, the extension still crashes, even though I reduced the pixel buffer. My code follows. First I convert the CMSampleBuffer to a CVPixelBuffer in the processSampleBuffer function of my SampleHandler, then pass the CVPixelBuffer to my function along with the timestamp. There I convert the CVPixelBuffer to a CIImage and scale it using a CIFilter (CILanczosScaleTransform). After that, I generate a pixel buffer from the CIImage using a pixel buffer pool and a CIContext, and then send the new buffer to the OpenTok servers using videoCaptureConsumer.

```swift
func processPixelBuffer(pixelBuffer: CVPixelBuffer, timeStamp ts: CMTime) {
    guard let ciImage = self.scaleFilterImage(inputImage: pixelBuffer.cmIImage,
                                              withAspectRatio: 1.0,
                                              scale: CGFloat(kVideoFrameScaleFactor)) else { return }
    if self.pixelBufferPool == nil || self.pixelBuffer?.size != pixelBuffer.size {
        self.destroyPixelBuffers()
        self.updateBufferPool(newWidth: Int(ciImage.extent.size.width),
                              newHeight: Int(ciImage.extent.size.height))
        guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                 self.pixelBufferPool,
                                                 &self.pixelBuffer) == kCVReturnSuccess else { return }
    }
    context?.render(ciImage, to: pixelBuffer)
    self.videoCaptureConsumer?.consumeImageBuffer(pixelBuffer,
                                                  orientation: .up,
                                                  timestamp: ts,
                                                  metadata: nil)
}
```

If the pixelBufferPool is nil, or there is a change in the size of the pixel buffer, I update the pool:

```swift
private func updateBufferPool(newWidth: Int, newHeight: Int) {
    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: UInt(self.videoFormat),
        kCVPixelBufferWidthKey as String: newWidth,
        kCVPixelBufferHeightKey as String: newHeight,
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    CVPixelBufferPoolCreate(nil, nil, pixelBufferAttributes as NSDictionary?, &pixelBufferPool)
}
```

This is the function I use to scale the CIImage:

```swift
func scaleFilterImage(inputImage: CIImage, withAspectRatio aspectRatio: CGFloat, scale: CGFloat) -> CIImage? {
    scaleFilter?.setValue(inputImage, forKey: kCIInputImageKey)
    scaleFilter?.setValue(scale, forKey: kCIInputScaleKey)
    scaleFilter?.setValue(aspectRatio, forKey: kCIInputAspectRatioKey)
    return scaleFilter?.outputImage
}
```

My question is: why does it still keep crashing, and is there another way to reduce the CVPixelBuffer size without hitting the memory-limit crash? I would appreciate any help on this. Swift or Objective-C, I am open to all suggestions.
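Two things that often help in memory-capped broadcast extensions, offered as a sketch rather than a known fix: wrap each frame's Core Image work in an autoreleasepool, and create the CIContext with intermediate caching disabled. The names here mirror the post's code and are illustrative:

```swift
import CoreImage
import CoreVideo

// Sketch: per-frame rendering with tighter memory hygiene.
let context = CIContext(options: [.cacheIntermediates: false])  // avoid caching full frames

func renderFrame(_ ciImage: CIImage, into target: CVPixelBuffer, pool: CVPixelBufferPool) {
    autoreleasepool {
        // Release the temporaries Core Image autoreleases for this frame.
        context.render(ciImage, to: target)
    }
    // Drop pooled buffers beyond what the pool currently needs.
    CVPixelBufferPoolFlush(pool, [.excessBuffers])
}
```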
Post marked as solved
1 Reply
341 Views
From the imagecapturecore-rs crate here: https://github.com/brandonhamilton/image-capture-core-rs/issues/7 Only didOpenSessionWithError fires when connecting a PTP (Picture Transfer Protocol) device, with None for the error value and an NSArray with a count of 0.

```rust
decl.add_method(
    sel!(device:didOpenSessionWithError:),
    device_did_open_session_with_error as extern "C" fn(&Object, Sel, id, id),
);
println!(" 📸 add_method didCloseSessionWithError");
decl.add_method(
    sel!(device:didCloseSessionWithError:),
    device_did_close_session_with_error as extern "C" fn(&Object, Sel, id, id),
);
println!(" 📸 add_method didRemoveDevice");
decl.add_method(
    sel!(didRemoveDevice:),
    device_did_remove_device as extern "C" fn(&Object, Sel, id),
);
println!(" 📸 add_method withCompleteContentCatalog");
decl.add_method(
    sel!(withCompleteContentCatalog:),
    device_did_become_ready as extern "C" fn(&Object, Sel, id),
);
```

Do I need to be using the fancier cameraDevice.requestOpenSession() with the callback function from here? https://developer.apple.com/documentation/imagecapturecore/icdevice/3142916-requestopensession As seen on Stack Overflow: https://stackoverflow.com/questions/68978185/apple-ptp-withcompletecontentcatalog-not-firing-rust-obj-c
Post not yet marked as solved
0 Replies
406 Views
In the past we tested face anti-spoofing on iOS 13 and iOS 12 with iPhone 6, 6s, and 10, and it worked. However, with iOS 14 we have found that the input from the camera does not work with face anti-spoofing: the image taken from the camera produces poor scores on whether the face (in the image) is a real person. The machine learning model works by reading the pixels and checking many things, including the depth of the face, the background of the head, and whether there appears to be image manipulation in the pixels. We are very confident we have not changed our app in any way, so we are asking whether any changes to the iOS 14 camera could have affected the image delivered to public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection). Currently, the model works great on Android phones.
Post marked as solved
2 Replies
699 Views
This is a weird Xcode 13 beta bug (including beta 5). Metal Core Image kernels fail to load from the library, giving this error:

```
2021-08-26 12:05:23.806226+0400 MetalFilter[23183:1751438] [api] +[CIKernel kernelWithFunctionName:fromMetalLibraryData:options:error:] Cannot initialize kernel with given library data.
[ERROR] Failed to create CIColorKernel: Error Domain=CIKernel Code=6 "(null)" UserInfo={CINonLocalizedDescriptionKey=Cannot initialize kernel with given library data.}
```

But there is no such error with Xcode 12.5; the kernel loads fine. The error only occurs on the Xcode 13 beta.
Post not yet marked as solved
0 Replies
276 Views
As of 08/19/21, when calling the new API writeHEIF10RepresentationOfImage with the same arguments as writeHEIFRepresentationOfImage (minus the format argument), I get the following exception:

```
*** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0] (NSInvalidArgumentException)
```

I am assuming that the new API isn't working, but I'm posting here for visibility. I used Xcode 13.0 beta 5 and tested on an iPhone 12 running iOS 15.0 beta 6.