Image I/O


Read and write most image file formats, manage color, access image metadata using Image I/O.

Image I/O Documentation

Posts under Image I/O tag

64 Posts
Post not yet marked as solved
0 Replies
41 Views
I have an image viewing app with support for avif (and avis) images. I'm trying to figure out if the recent bug in CoreMedia (dav1d) affects my app. The Apple security update: https://support.apple.com/en-gb/HT214097 The vulnerable code path in dav1d is only reached when c->n_fc > 1 (https://code.videolan.org/videolan/dav1d/-/blob/2b475307dc11be9a1c3cc4358102c76a7f386a51/src/decode.c#L2845), where c is the dav1d context. With some reverse engineering, the way I see CMPhoto calling into VideoToolbox (which internally calls into AV1SW.videodecoder, which is a wrapper around dav1d), the max frame delay is hardcoded to 1 in the dav1d settings, which in turn means that c->n_fc in dav1d is always 1. From my understanding, this should mean that my app isn't affected. The Apple security update however clearly states that "Processing an image may lead to arbitrary code execution". Surely I'm missing something?
Posted Last updated
.
Post not yet marked as solved
1 Replies
81 Views
A simple view has misaligned localized content after being converted to an image using ImageRenderer. This is still problematic on a real phone and in TestFlight. I'm not sure what the problem is; I'm assuming it's an ImageRenderer bug. I tried to use UIGraphicsImageRenderer, but UIGraphicsImageRenderer captures the image in an inaccurate position, and it is offset, resulting in a white border. And I don't know why, in some cases, it encounters circular references that result in blank images. "(1) days" is also not converted to "1 day" properly.
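A minimal sketch of the kind of ImageRenderer call involved, for context (ContentCard and the proposed size are placeholders, not the actual view; setting the scale explicitly is an experiment, not a confirmed fix for the misalignment):

import SwiftUI
import UIKit

@MainActor
func renderCardImage() -> UIImage? {
    // ContentCard is a hypothetical stand-in for the localized SwiftUI view.
    let renderer = ImageRenderer(content: ContentCard())
    renderer.scale = UIScreen.main.scale              // match the device scale
    renderer.proposedSize = .init(width: 300, height: 200)  // hypothetical size
    return renderer.uiImage
}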
Posted
by paisleyy.
Last updated
.
Post not yet marked as solved
0 Replies
97 Views
I have some depth map files in TIFF format that I am trying to extract data from programmatically. I see that I can import TIFF format images with NSImage, and from there, can get the raw pixel data. But how do I convert this to real distances? Any help would be appreciated, thanks!
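Not an answer, but a sketch of how the raw samples could be pulled out with Image I/O. This assumes the TIFF stores one 32-bit float per pixel and that those floats are already distances in metres; both depend entirely on whoever produced the depth maps:

import Foundation
import ImageIO

// Sketch only: row padding is ignored, which is only valid when rows are
// tightly packed (width * 4 bytes per row).
func readDepthValues(from url: URL) -> [Float32]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
          let cfData = image.dataProvider?.data else { return nil }

    let data = cfData as Data
    // Reinterpret the raw bytes as Float32 samples (width * height of them).
    return data.withUnsafeBytes { raw in
        Array(raw.bindMemory(to: Float32.self))
    }
}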
Posted
by CrayE.
Last updated
.
Post not yet marked as solved
0 Replies
151 Views
I have a page that needs to display a large PNG image (1024 x 100247). Everything works fine in Chrome and Edge, but it fails in Safari. This is a test image: https://storage-staging.passton.jp/images/2024/03/11/E0R6G8FKd3B3iLPO.png Is there an image size limit in Safari?
Posted Last updated
.
Post not yet marked as solved
3 Replies
226 Views
Under Sonoma 14.4 the compression option doesn't work with PNG images, but it works for JPG/HEIF. Preview can export a PNG file to HEIC with a compression option, so what am I missing? This worked previously. I am trying 0.01 and 0.9 as the compression quality, and the file size is the same for PNG. Is Preview using some trick to convert the image, such as ciContext.createCGImage? PS: A compression option of 1.0 was broken under 14.4 RC and Preview created an empty file.

func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }
    let mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData, "public.heic" as CFString, 1, nil) else { return nil }
    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)
    let success = CGImageDestinationFinalize(imageDestination)
    if success {
        return mutableData as Data
    }
    return nil
}

func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
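For reference, a minimal way to exercise the first function; the input and output paths are placeholders:

import Foundation

let inputURL = URL(fileURLWithPath: "/tmp/input.png")     // placeholder
let outputURL = URL(fileURLWithPath: "/tmp/output.heic")  // placeholder

if let heicData = heifImageDataUsingDestination(at: inputURL, compressionQuality: 0.01) {
    try? heicData.write(to: outputURL)
    print("HEIC size: \(heicData.count) bytes")
}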
Posted
by xhruso00.
Last updated
.
Post not yet marked as solved
0 Replies
137 Views
if balloon == yellow1_balloon {
    soundFile = "Sounds/newblop.wav"
    playSound()
    balloon.isHidden = true
    poppedImages.isHidden = false
    poppedImages.animationImages = ["popyellow-1", "popyellow-2", "popyellow-3", "popyellow-4", "popyellow-5", "popyellow-6", "popyellow-7"]
        .compactMap({ name in UIImage(named: name) })
    let x: CGFloat = yellow1_balloon.frame.origin.x
    let y: CGFloat = yellow1_balloon.frame.origin.y
    poppedImages.frame.origin.x = x
    poppedImages.frame.origin.y = y
    poppedImages.animationDuration = 1.0
    poppedImages.animationRepeatCount = 1
    poppedImages.startAnimating()
    score = score + 10
    scoreLbl.text = String(score)
    return
}

The x, y coordinates are always the same as when yellow1_balloon is first created, not where it ends up after being touched.
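If yellow1_balloon is being moved by a UIView or Core Animation animation, one thing worth checking (a sketch only, based on that assumption) is the presentation layer, since frame keeps reporting the model value while an animation is in flight:

// Sketch: the presentation layer reflects where the view currently appears
// on screen; frame reflects the model (original/final) value.
let onScreenFrame = yellow1_balloon.layer.presentation()?.frame ?? yellow1_balloon.frame
poppedImages.frame.origin = onScreenFrame.origin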
Posted
by DRLeewood.
Last updated
.
Post not yet marked as solved
0 Replies
205 Views
I would like to save the depth map from ARDepthData as .tiff, but notice my output tiff distances are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the tiff:

import UIKit

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)

I am reading the file like this in Python:

import tifffile
import matplotlib.pyplot as plt

depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()

which creates this image: The farthest parts of the room should be around 4 meters, not 2. The dark blue spot on the lower right is closer than half a meter away. Notably the depth map contains distances from the camera plane to each region, not the distance from the camera sensor to the region. Even correcting for this though, the depth map remains about the same. Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
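For debugging, this is the kind of check I would try (just a sketch): read one sample straight from the depth buffer before it is written, then compare it with the value tifffile reports at the same coordinates. sceneDepth.depthMap is a kCVPixelFormatType_DepthFloat32 buffer, so each sample is a Float32 distance in metres.

import CoreVideo

// Sketch: pull a single Float32 depth sample out of the pixel buffer at (x, y).
func depthSample(from buffer: CVPixelBuffer, x: Int, y: Int) -> Float32? {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let rowPointer = base.advanced(by: y * bytesPerRow)
    return rowPointer.load(fromByteOffset: x * MemoryLayout<Float32>.stride, as: Float32.self)
}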
Posted Last updated
.
Post not yet marked as solved
3 Replies
586 Views
Hello, I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution but it doesn't seem to work; my code is included below. I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them. Any hints or pointers would be appreciated.

func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        let gifSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: delay]]
        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, __UTTypeGIF as! CFString, images.count, nil) else {
            finished?(false, nil)
            return
        }
        //logAR.message("\(destination)")
        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }
        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil)
            return
        } else {
            finished?(true, path)
        }
    }
}

My adaptation of the above code for APNGs (doesn't work; outputs an empty file):

func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil) else {
        fatalError("Failed")
    }
    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}
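For what it's worth, a complete write would also finalize the destination, since Image I/O does not write anything to disk until CGImageDestinationFinalize is called. Here is a sketch of what that might look like, assuming outputURL is a writable file URL:

import ImageIO
import UniformTypeIdentifiers
import UIKit

// Sketch under the same assumptions as the post; the key difference from the
// adaptation above is the CGImageDestinationFinalize call at the end.
func writeAPNG(images: [UIImage], delay: Float, loopCount: Int = 0, to outputURL: URL) -> Bool {
    let fileProperties = [kCGImagePropertyPNGDictionary as String:
                              [kCGImagePropertyAPNGLoopCount as String: loopCount]]
    let frameProperties = [kCGImagePropertyPNGDictionary as String:
                               [kCGImagePropertyAPNGDelayTime as String: delay]]

    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL,
                                                            UTType.png.identifier as CFString,
                                                            images.count, nil) else { return false }
    CGImageDestinationSetProperties(destination, fileProperties as CFDictionary)
    for image in images {
        if let cgImage = image.cgImage {
            CGImageDestinationAddImage(destination, cgImage, frameProperties as CFDictionary)
        }
    }
    // Finalize actually writes the file; returns false on failure.
    return CGImageDestinationFinalize(destination)
}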
Posted
by wmk.
Last updated
.
Post not yet marked as solved
0 Replies
161 Views
I am developing an app using a data cable to link a camera. When I enter the page for the first time, I can detect the camera device, and then when I exit the page and enter again, I cannot detect the linked camera.

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.view.backgroundColor = [UIColor whiteColor];
    [self addImageCaptureCore];
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        [self checkCameraConnection];
    });
}

- (void)checkCameraConnection {
    if (@available(iOS 13.0, *)) {
        NSArray<ICDevice *> *connectedDevices = self.browser.devices;
        if (connectedDevices.count > 0) {
            NSLog(@"Camera is connected");
        } else {
            NSLog(@"Camera is not connected");
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    if (@available(iOS 13.0, *)) {
        if (self.cameraDevice) {
            if (self.cameraDevice.hasOpenSession) {
                [self.cameraDevice requestCloseSession];
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                    [self.browser stop];
                    self.browser.delegate = nil;
                    self.browser = nil;
                });
            } else {
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                    [self.browser stop];
                    self.browser.delegate = nil;
                    self.browser = nil;
                });
            }
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)addImageCaptureCore {
    if (@available(iOS 13.0, *)) {
        ICDeviceBrowser *browser = [[ICDeviceBrowser alloc] init];
        browser.delegate = self;
        [browser start];
        self.browser = browser;
    } else {
    }
}

#pragma mark - ICDeviceBrowserDelegate

- (void)deviceBrowser:(ICDeviceBrowser *)browser didAddDevice:(ICDevice *)device moreComing:(BOOL)moreComing API_AVAILABLE(ios(13.0)) {
    NSLog(@"Device name = %@", device.name);
    if ([device isKindOfClass:[ICCameraDevice class]]) {
        if ([device.capabilities containsObject:ICCameraDeviceCanAcceptPTPCommands]) {
            ICCameraDevice *cameraDevice = (ICCameraDevice *)device;
            cameraDevice.delegate = self;
            [cameraDevice requestOpenSession];
            self.cameraDevice = cameraDevice;
        }
    }
}

- (void)deviceBrowser:(ICDeviceBrowser *)browser didRemoveDevice:(ICDevice *)device moreGoing:(BOOL)moreGoing API_AVAILABLE(ios(13.0)) {
    if (self.cameraDevice) {
        if (self.cameraDevice.hasOpenSession) {
            [self.cameraDevice requestCloseSession];
            self.cameraDevice.delegate = nil;
            self.cameraDevice = nil;
        } else {
            self.cameraDevice.delegate = nil;
            self.cameraDevice = nil;
        }
    }
}

#pragma mark - ICCameraDeviceDelegate

- (void)cameraDevice:(ICCameraDevice *)camera didAddItems:(NSArray<ICCameraItem *> *)items API_AVAILABLE(ios(13.0)) {
    if (items.count > 0) {
        ICCameraItem *latestItem = items.lastObject;
        NSLog(@"name = %@", latestItem.name);
    }
}

#pragma mark - ICDeviceDelegate

- (void)device:(ICDevice *)device didOpenSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Failed to open session %@", error.localizedDescription);
    } else {
        NSLog(@"open session success");
    }
}

- (void)device:(ICDevice *)device didCloseSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"close session error = %@", error.localizedDescription);
    } else {
        NSLog(@"didCloseSession");
    }
}

- (void)didRemoveDevice:(ICDevice *)device {
}
Posted
by Nuoyun.
Last updated
.
Post not yet marked as solved
2 Replies
460 Views
I have trained a model to classify some symbols using Create ML. In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data. If I use a CVPixelBuffer obtained from an AVCaptureSession then the classifier runs as I would expect. If I point it at the symbols it will work fairly accurately, so I know the model is trained fairly correctly and works in my app. If I try to use a cgImage that is obtained by cropping a section out of a larger image (from the gallery), then the classifier does not work. It always seems to return the same result (although the confidence is not a 1.0 and varies for each image, it will be within several decimal places of it, e.g. 0.9999). If I pause the app when I have the cropped image and use the debugger to obtain the cropped image (via the little eye icon and then open in preview), then drop the image into the Preview section of the MLModel file or in Create ML, the model correctly classifies the image. If I scale the cropped image to be the same size as I get from my camera, and convert the cgImage to a CVPixelBuffer with the same size and colour space as the camera (1504 x 1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's not accurate, but it returns different results if I specify the 'centerCrop' or 'scaleFit' options, so I know that 'something' is happening, but it's not the correct thing. I was under the impression that passing a cgImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. However, when using the preview tool on the model or in Create ML this conversion is obviously being done behind the scenes because the cropped part is being detected. What am I doing wrong?

tl;dr:
- My model works, as backed up by using video input directly and also dropping cropped images into preview sections.
- Passing the cropped images directly to the VNImageRequestHandler does not work.
- Modifying the cropped images can produce different results, but I cannot see what I should be doing to get reliable results.

I'd like my app to behave the same way the preview part behaves: I give it a cropped part of an image, it does some processing, it goes to the classifier, and it returns a result the same as in Create ML.
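For comparison, this is the shape of the Vision call being described. SymbolClassifier is a placeholder for the Create ML model class, and imageCropAndScaleOption is the setting being experimented with; nothing here is a confirmed fix, just a sketch of where those pieces go:

import Vision
import CoreML

func classify(_ croppedCGImage: CGImage, orientation: CGImagePropertyOrientation) throws {
    // SymbolClassifier is hypothetical; substitute the generated model class.
    let mlModel = try SymbolClassifier(configuration: MLModelConfiguration()).model
    let model = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: model) { request, _ in
        let results = request.results as? [VNClassificationObservation]
        print(results?.first?.identifier ?? "no result",
              results?.first?.confidence ?? 0)
    }
    // This controls how Vision fits the input to the model's expected size.
    request.imageCropAndScaleOption = .centerCrop  // or .scaleFit / .scaleFill

    let handler = VNImageRequestHandler(cgImage: croppedCGImage,
                                        orientation: orientation,
                                        options: [:])
    try handler.perform([request])
}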
Posted
by Bergasms.
Last updated
.
Post not yet marked as solved
2 Replies
382 Views
I'm working on a very simple app where I need to visualize an image on the screen of an iPhone. However, the image has some special properties: it's a 16-bit, yuv422_yuy2 encoded image. I already have all the raw bytes saved in a Data object. After googling for a long time, I still did not figure out the correct way. My current understanding is to first create a CVPixelBuffer to properly represent the encoding information, then convert the CVPixelBuffer to a UIImage. The following is my current implementation.

public func YUV422YUY2ToUIImage(data: Data, height: Int, width: Int, bytesPerRow: Int) -> UIImage {
    return rosImage.data.withUnsafeMutableBytes { rawPointer in
        let baseAddress = rawPointer.baseAddress!
        let tempBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.allocate(capacity: 1)
        CVPixelBufferCreateWithBytes(
            kCFAllocatorDefault,
            width,
            height,
            kCVPixelFormatType_422YpCbCr16,
            baseAddress,
            bytesPerRow,
            nil,
            nil,
            nil,
            tempBufferPointer)
        let ciImage = CIImage(cvPixelBuffer: tempBufferPointer.pointee!)
        return UIImage(ciImage: ciImage)
    }
}

However, when I execute the code, I get the following error:

-[CIImage initWithCVPixelBuffer:options:] failed because its pixel format v216 is not supported.

So it seems CIImage is unhappy. I think I need to convert the encoding from yuv422_yuy2 to something like plain ARGB, but after a long time googling, I didn't find a way to do that. The closest function I can find is https://developer.apple.com/documentation/accelerate/1533015-vimageconvert_422cbypcryp16toarg but the function is too complex for me to understand how to use it. Any help is appreciated. Thank you!
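In case it helps, here is a rough sketch of a purely CPU-side conversion. It assumes YUY2 byte ordering (Y0 U Y1 V, 8 bits per component, 16 bits per pixel on average) and BT.601 full-range math; both are assumptions about the source data, not facts from the question, and this is slow compared to vImage or a GPU path:

import UIKit

func yuy2ToUIImage(data: Data, width: Int, height: Int, bytesPerRow: Int) -> UIImage? {
    var rgba = [UInt8](repeating: 0, count: width * height * 4)

    data.withUnsafeBytes { (src: UnsafeRawBufferPointer) in
        for row in 0..<height {
            let rowStart = row * bytesPerRow
            for pair in 0..<(width / 2) {
                // Each 4-byte group encodes two pixels sharing one U/V sample.
                let i = rowStart + pair * 4
                let y0 = Float(src[i]),     u = Float(src[i + 1]) - 128
                let y1 = Float(src[i + 2]), v = Float(src[i + 3]) - 128
                for (slot, y) in [(pair * 2, y0), (pair * 2 + 1, y1)] {
                    let o = (row * width + slot) * 4
                    rgba[o]     = UInt8(clamping: Int(y + 1.402 * v))
                    rgba[o + 1] = UInt8(clamping: Int(y - 0.344 * u - 0.714 * v))
                    rgba[o + 2] = UInt8(clamping: Int(y + 1.772 * u))
                    rgba[o + 3] = 255
                }
            }
        }
    }

    // Wrap the RGBA bytes in a CGImage, then a UIImage.
    guard let provider = CGDataProvider(data: Data(rgba) as CFData),
          let cgImage = CGImage(width: width, height: height,
                                bitsPerComponent: 8, bitsPerPixel: 32,
                                bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue),
                                provider: provider, decode: nil,
                                shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}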
Posted
by yaoyuh.
Last updated
.
Post not yet marked as solved
0 Replies
220 Views
How do I extract an object from a picture, or remove the background behind an object, just like creating stickers in the Photos app? Is there any official model or library for this, other than using some website's API? (DeepLabV3.mlmodel cannot infer what I need.)
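One possibility, sketched below under the assumption of iOS 17 or later, is Vision's foreground instance mask request, which produces a subject lift similar to the Photos sticker feature:

import Vision
import CoreVideo

// Sketch, assuming iOS 17+/macOS 14+: returns a pixel buffer containing the
// detected subject(s) with the background removed, or nil if nothing is found.
@available(iOS 17.0, *)
func liftSubject(from cgImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else { return nil }
    // Mask out everything except the detected foreground instances.
    return try observation.generateMaskedImage(ofInstances: observation.allInstances,
                                               from: handler,
                                               croppedToInstancesExtent: true)
}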
Posted
by Alex23421.
Last updated
.
Post not yet marked as solved
4 Replies
937 Views
Using the screencapture CLI on macOS Sonoma 14.0 (23A344) results in a 72dpi image file, no matter if it was captured on a retina display or not. For example, using screencapture -i ~/Desktop/test.png in Terminal lets me create a selective screenshot, but the resulting file does not contain any DPI metadata (checked using mdls ~/Desktop/test.png), nor does the image itself have the correct DPI information (should be 144, but it's always 72; checked using Preview.app). I noticed a (new?) flag option, -r, for which the documentation states: -r Do not add screen dpi meta data to captured file. Is that flag somehow automatically set? Setting it myself makes no difference and obviously results in a no-dpi-in-metadata and wrong-dpi-in-image file. The only two ways I got the correct DPI information in a resulting image file were using the default options (forced by -p): screencapture -i -p, and by making the capture go to the clipboard: screencapture -i -c. Sadly, I can't use those in my case. Feedback filed: FB13208235 I'd appreciate any pointers, Matthias
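For anyone comparing the variants, a quick sketch of reading back what Image I/O reports for a captured file (the path is a placeholder):

import Foundation
import ImageIO

func reportDPI(of path: String) {
    let url = URL(fileURLWithPath: path) as CFURL
    guard let source = CGImageSourceCreateWithURL(url, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
    else { return }
    // DPIWidth/DPIHeight are absent when the capture carries no density metadata.
    print("DPI width:", props[kCGImagePropertyDPIWidth] ?? "none",
          "DPI height:", props[kCGImagePropertyDPIHeight] ?? "none",
          "pixel size:", props[kCGImagePropertyPixelWidth] ?? "?",
          "x", props[kCGImagePropertyPixelHeight] ?? "?")
}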
Posted Last updated
.
Post not yet marked as solved
3 Replies
1.5k Views
I take a picture using the iPhone's camera. The taken resolution is 3024.0 x 4032. I then have to apply a watermark to this image. After a bunch of trial and error, the method I decided to use was taking a snapshot of a watermark UIView, and drawing that over the image, like so:

// Create the watermarked photo.
let result: UIImage = UIGraphicsImageRenderer(size: image.size).image(actions: { _ in
    image.draw(in: .init(origin: .zero, size: image.size))
    let watermark: Watermark = .init(
        size: image.size,
        scaleFactor: image.size.smallest / self.frame.size.smallest
    )
    watermark.drawHierarchy(in: .init(origin: .zero, size: image.size), afterScreenUpdates: true)
})

Then with the final image — because the client wanted it to have a filename as well when viewed from within the Photos app and exported from it, and also with much trial and error — I save it to a file in a temporary directory. I then save it to the user's Photo library using that file. The difference as compared to saving the image directly vs saving it from the file is that when saved from the file the filename is used as the filename within the Photos app; in the other case it's just a default photo name generated by Apple. The problem is that in the image saving code I'm getting the following error:

[Metal] 9072 by 12096 iosurface is too large for GPU

And when I view the saved photo it's basically just a completely black image. This problem only started when I changed the AVCaptureSession preset to .photo. Before then there were no errors. Now, the worst problem is that the app just completely crashes on drawing of the watermark view in the first place. When using .photo the resolution is significantly higher, so the image size is larger, so the watermark size has to be commensurately larger as well. iOS appears to be okay with the size of the watermark UIView. However, when I try to draw it over the image the app crashes with this message from Xcode: So there's that problem. But I figured that could be resolved by taking a more manual approach to the drawing of the watermark than using a UIView snapshot. So it's not the most pressing problem. What is, is that even after the drawing code is commented out, I still get the iosurface is too large error. Here's the code that saves the image to the file and then to the Photos library:

extension UIImage {

    /// Save us with the given name to the user's photo album.
    /// - Parameters:
    ///   - filename: The filename to be used for the saved photo. Behavior is undefined if the filename contains characters other than what is represented by this regular expression [A-Za-z0-9-_]. A decimal point for the file extension is permitted.
    ///   - location: A GPS location to save with the photo.
    fileprivate func save(_ filename: String, _ location: Optional<Coordinates>) throws {

        // Create a path to a temporary directory. Adding filenames to the Photos app form of images is accomplished by first creating an image file on the file system, saving the photo using the URL to that file, and then deleting that file on the file system.
        //   A documented way of adding filenames to photos saved to Photos was never found.
        // Furthermore, we save everything to a `tmp` directory as if we just tried deleting individual photos after they were saved, and the deletion failed, it would be a little more tricky setting up logic to ensure that the undeleted files are eventually cleaned up. But by using a `tmp` directory, we can save all temporary photos to it, and delete the entire directory following each taken picture.
        guard
            let tmpUrl: URL = try {
                guard let documentsDirUrl = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first else {
                    throw GeneralError("Failed to create URL to documents directory.")
                }
                let url: Optional<URL> = .init(string: documentsDirUrl + "/tmp/")
                return url
            }()
        else {
            throw GeneralError("Failed to create URL to temporary directory.")
        }

        // A path to the image file.
        let filePath: String = try {

            // Reduce the likelihood of photos taken in quick succession from overwriting each other.
            let collisionResistantPath: String = "\(tmpUrl.path(percentEncoded: false))\(UUID())/"

            // Make sure all directories required by the path exist before trying to write to it.
            try FileManager.default.createDirectory(atPath: collisionResistantPath, withIntermediateDirectories: true, attributes: nil)

            // Done.
            return collisionResistantPath + filename
        }()

        // Create `CFURL` analogue of file path.
        guard let cfPath: CFURL = CFURLCreateWithFileSystemPath(nil, filePath as CFString, CFURLPathStyle.cfurlposixPathStyle, false) else {
            throw GeneralError("Failed to create `CFURL` analogue of file path.")
        }

        // Create image destination object.
        //
        // You can change your exif type here.
        //   This is a note from the original author. Not quite exactly sure what they mean by it. Link in method documentation can be used to refer back to the original context.
        guard let destination = CGImageDestinationCreateWithURL(cfPath, UTType.jpeg.identifier as CFString, 1, nil) else {
            throw GeneralError("Failed to create `CGImageDestination` from file url.")
        }

        // Metadata properties.
        let properties: CFDictionary = {

            // Place your metadata here.
            // Keep in mind that metadata follows a standard. You can not use custom property names here.
            let tiffProperties: Dictionary<String, Any> = [:]

            return [
                kCGImagePropertyExifDictionary as String: tiffProperties
            ] as CFDictionary
        }()

        // Create image file.
        guard let cgImage = self.cgImage else {
            throw GeneralError("Failed to retrieve `CGImage` analogue of `UIImage`.")
        }
        CGImageDestinationAddImage(destination, cgImage, properties)
        CGImageDestinationFinalize(destination)

        // Save to the photo library.
        PHPhotoLibrary.shared().performChanges({
            guard let creationRequest: PHAssetChangeRequest = .creationRequestForAssetFromImage(atFileURL: URL(fileURLWithPath: filePath)) else {
                return
            }
            // Add metadata to the photo.
            creationRequest.creationDate = .init()
            if let location = location {
                creationRequest.location = .init(latitude: location.latitude, longitude: location.longitude)
            }
        }, completionHandler: { _, _ in
            try? FileManager.default.removeItem(atPath: tmpUrl.absoluteString)
        })
    }
}

If anyone can provide some insight as to what's causing the iosurface is too large error and what can be done to resolve it, that'd be awesome.
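One thing that may be relevant (a sketch, assuming the oversized surface comes from the renderer using the screen scale: 3024 x 3 = 9072, which matches the figure in the error) is forcing the renderer format to scale 1 so the bitmap stays at the photo's pixel size:

import UIKit

// Sketch only: render at scale 1 so a 3024 x 4032 photo stays 3024 x 4032
// instead of being multiplied by the screen scale. Watermark drawing is elided.
func watermarked(_ image: UIImage) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1  // 1 point == 1 pixel for this offscreen render
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
        // ... draw the watermark content here ...
    }
}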
Posted Last updated
.
Post not yet marked as solved
3 Replies
371 Views
hdiutil bug? When making a DMG image of the whole content of the user1 profile (meaning using srcFolder = /Users/user1) with hdiutil, the program fails, indicating:

/Users/user1/Library/VoiceTrigger: Operation not permitted
hdiutil: create failed - Operation not permitted

The complete command used was:

sudo hdiutil create -srcfolder /Users/user1 -skipunreadable -format UDZO /Volumes/testdmg/test.dmg

And, of course, the user had local admin rights. I was using Sonoma 14.2.1 and a MacBook Pro (Intel, T2). What I would have expected, assuming that /VoiceTrigger cannot be copied for whatever reason, would be for hdiutil to skip that file or folder and continue the process, and then, at the end, produce a log listing the files/folders not included and the reason for their exclusion. The fact that hdiutil just ended immediately looks to me like a bug. Or what else could explain the problem described?
Posted
by Lautarob1.
Last updated
.
Post not yet marked as solved
0 Replies
260 Views
I use a data cable to connect my Nikon camera to my iPhone. In my project, I use the ImageCaptureCore framework. I can now read the photos on the camera's memory card, but when I press the shutter on the camera to take a picture, the camera does not respond, even though the connection between the camera and the phone is normal. The camera screen then shows a picture of a laptop. I don't know why that is. I hope someone can help me.
Posted
by Nuoyun.
Last updated
.
Post not yet marked as solved
1 Replies
332 Views
I found that the app reported a crash from a pure virtual function call, which I could not reproduce. A third-party library is referenced (https://github.com/lincf0912/LFPhotoBrowser) to achieve smearing, blurring, and mosaic processing of images. Crash code:

if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}

- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable)(CIImage *ciimage))filterHandler {
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width / midImage.extent.size.width, size.height / midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Start processing the image.
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}

This line triggers the crash:

CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];

b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
Posted
by peterKing.
Last updated
.