In my SwiftUI view, I try to load the image from data.
var body: some View {
    Group {
        if let data = model.detailImageData, let uiimage = UIImage(data: data) { // no memory issue
            Image(uiImage: uiimage)
                .resizable()
                .scaledToFit()
        }
    }
}
But I want to get the HDR rendering of my image, so I use:
if let data = model.detailImageData, let uiimage = UIImageReader.default.image(data: data) { // memory leak!!!
When I change the data, the memory of the previous image is never freed, which eventually causes my app to crash.
You can see it in the Instruments screenshot.
I use this code to show an HDR image in SwiftUI:
struct HDRImageView: UIViewRepresentable {
    // Set up a common reader for all UIImage read requests.
    static let reader: UIImageReader = {
        var config = UIImageReader.Configuration()
        config.prefersHighDynamicRange = true
        return UIImageReader(configuration: config)
    }()

    let data: Data?
    let enableHDR: Bool

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView()
        view.preferredImageDynamicRange = enableHDR ? .high : .standard
        update(view)
        // Set this view to fit itself to the parent view.
        view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
        view.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
        view.setContentHuggingPriority(.required, for: .horizontal)
        view.setContentHuggingPriority(.required, for: .vertical)
        return view
    }

    func updateUIView(_ view: UIImageView, context: Context) {
        update(view)
    }

    func update(_ view: UIImageView) {
        autoreleasepool { // not working
            if let data = data {
                view.image = nil // setting to nil first is not working
                view.image = HDRImageView.reader.image(data: data)
            } else {
                view.image = nil
            }
            view.preferredImageDynamicRange = enableHDR ? .high : .standard
        }
    }
}
But when I update the input data, it seems the old image data cannot be freed.
After several changes, the app takes up too much memory and crashes.
I found it's the VM: ImageIO_Surface_Data and VM: Image_IO allocations that take up the memory.
If I change the HDRImageView to a normal Image(uiImage: UIImage(data:)), it no longer has this issue.
Is this a memory leak, and how can I solve it?
Update: I then tried using Image(_:cgImage), and it appears to give the same result.
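For comparison, a minimal sketch of the pure-SwiftUI route on iOS 17 — whether UIImage(data:) preserves the HDR content here, and whether this avoids the IOSurface growth, are assumptions to verify:
import SwiftUI
import UIKit

struct HDRDataImage: View {
    let data: Data?

    var body: some View {
        if let data, let uiImage = UIImage(data: data) {
            Image(uiImage: uiImage)
                // iOS 17+: let this image display HDR headroom if present.
                .allowedDynamicRange(.high)
                .resizable()
                .scaledToFit()
        }
    }
}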
So, my app needs to find the dominant palette and the position in the image of the k most dominant colors. I followed the very useful sample project from the vImage documentation:
https://developer.apple.com/documentation/accelerate/bnns/calculating_the_dominant_colors_in_an_image
The algorithm works fine, although I can't wrap my head around how to go about linking those colors to a point in the image. Since the algorithm works by filling channel storages first, I also tried filling an array of CGPoints called locationStorage and working with that:
// Filling the array (row-major, to match how the channel storages are laid out)
for y in 0 ..< height {
    for x in 0 ..< width {
        locationStorage.append(CGPoint(x: x, y: y))
    }
}
.
.
.
// Working with the array
let randomIndex = Int.random(in: 0 ..< width * height)
centroids.append(Centroid(red: redStorage[randomIndex],
                          green: greenStorage[randomIndex],
                          blue: blueStorage[randomIndex],
                          position: locationStorage[randomIndex]))
struct Centroid {
    /// The red channel value.
    var red: Float
    /// The green channel value.
    var green: Float
    /// The blue channel value.
    var blue: Float
    /// The number of pixels assigned to this cluster center.
    var pixelCount: Int = 0
    var position: CGPoint = .zero

    init(red: Float, green: Float, blue: Float, position: CGPoint) {
        self.red = red
        self.green = green
        self.blue = blue
        self.position = position
    }
}
It's not accurate, though.
I also tried brute-forcing every pixel in the image to find the closest match to each color, but I think that's too slow.
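A sketch of the index-to-coordinate mapping this implies — assuming the channel storages are filled row-major, a flat index converts directly to a coordinate, so the separate locationStorage may not even be needed:
import CoreGraphics

/// Converts a flat, row-major pixel index back to an (x, y) coordinate.
func position(ofIndex index: Int, width: Int) -> CGPoint {
    CGPoint(x: index % width, y: index / width)
}

// Example: the random seed pixel above would sit at
// position(ofIndex: randomIndex, width: width).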
What do you think my approach should be?
Let me know if you need additional info
Please be kind I'm learning Swift.
I would like to save the depth map from ARDepthData as a .tiff, but I notice the distances in my output TIFF are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the TIFF:
import UIKit
import CoreImage

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)
I am reading the file like this in Python:
import tifffile
import matplotlib.pyplot as plt

depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()
plt.show()
which creates this image:
The farthest parts of the room should be around 4 meters, not 2. The dark blue spot on the lower right is closer than half a meter away.
Notably, the depth map contains distances from the camera plane to each region, not the distance from the camera sensor to the region. Even after correcting for this, though, the depth map remains about the same.
Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
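As a sanity check (a sketch, assuming the buffer format is kCVPixelFormatType_DepthFloat32), the raw values can be read straight off the CVPixelBuffer before any CIImage conversion, to separate a TIFF-writing problem from a source-data problem:
import CoreVideo

func depthValue(in buffer: CVPixelBuffer, x: Int, y: Int) -> Float32? {
    guard CVPixelBufferGetPixelFormatType(buffer) == kCVPixelFormatType_DepthFloat32 else { return nil }
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    // Rows may be padded, so step by bytesPerRow rather than width * 4.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: Float32.self)
    return row[x]
}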
Under Sonoma 14.4 the compression option doesn't work with PNG images. It works for JPG/HEIF. Preview can export a PNG file to HEIC with a compression option. What am I missing? This worked previously. I am trying 0.01 and 0.9 as the compression quality, and the file size is the same for PNG.
Is Preview using some trick to convert the image, such as ciContext.createCGImage?
PS: A compression option of 1.0 was broken under the 14.4 RC, and Preview created an empty file.
func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }
    let mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData, "public.heic" as CFString, 1, nil) else { return nil }
    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)
    let success = CGImageDestinationFinalize(imageDestination)
    if success {
        return mutableData as Data
    }
    return nil
}
func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
CIFormat static vars such as RGBA16 give concurrency warnings:
Reference to static property 'RGBA16' is not concurrency-safe because it involves shared mutable state; this is an error in Swift 6
Should all these formats be static let to suppress the warnings (future errors)?
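For reference, a minimal line that reproduces the diagnostic under strict concurrency checking:
import CoreImage

let format: CIFormat = .RGBA16
// warning: reference to static property 'RGBA16' is not concurrency-safe
// because it involves shared mutable state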
I haven't found any really thorough documentation or guidance on the use of CIRAWFilter.linearSpaceFilter. The API documentation calls it
An optional filter you can apply to the RAW image while it’s in linear space.
Can someone provide insight into what this means and what the linear space filter is useful for? When would we use this linear space filter instead of a filter on the output of CIRAWFilter?
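For concreteness, the usage in question looks roughly like this — a sketch, with CIExposureAdjust as an arbitrary example and rawFileURL as a placeholder:
import CoreImage

let rawFilter = CIRAWFilter(imageURL: rawFileURL)! // rawFileURL: URL of a RAW file (placeholder)

// Applied while the RAW image is in linear space, per the docs,
// rather than to the filter's final output.
let exposure = CIFilter(name: "CIExposureAdjust")!
exposure.setValue(0.5, forKey: kCIInputEVKey)
rawFilter.linearSpaceFilter = exposure

let outputImage = rawFilter.outputImage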
Thank you.
I can't get CoreImage to render an HDR image file with correct colors to a CAMetalLayer on macOS 14. I'm comparing the result with NSImageView and the SupportingHDRImagesInYourApp 'HDRDemo23' sample code, which use CVPixelBuffer. With CAMetalLayer, the images are displayed as HDR (definitely more highlights), but they're darker, with some kind of saturation increase and color shift.
Files I've tested include the sample ISO HDR files in the SupportingHDRImagesInYourApp sample code. Methods I've tried to render to CAMetalLayer include:
Modifying the GeneratingAnAnimationWithACoreImageRenderDestination sample code's ContentView so it uses HDRDemo23/example-ISO-HDR-images/image_01.heic, loaded using CIImage(contentsOf:)
Creating a test AppKit app that uses MTKView and CIRenderDestination the same way. I have NSImageView and the MTKView in the same window for comparison.
Using CIRAWFilter > CIRenderDestination > IOSurface > MTKView/CAMetalLayer
All these methods produce the image with the exact same appearance; a dark HDR image with some saturation/color shift.
The only clue I've found is that when using the Metal Debugger on the test AppKit app, the CAMetalLayer's 'Present' shows the 'input' thumbnail is HDR without the color shift, but the 'output' thumbnail looks like what I actually see. I tried changing the color profile on the layer to various things but nothing looked more correct.
I've tried this on two Macs, an M1 Mac Studio with an LG display, and a MacBook Air M2. The MacBook Air shows the same color shift, but since it has less dynamic range overall there isn't as much difference between NSImageView and MTKView.
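For reference, a sketch of the EDR-related layer configuration in question — whether extendedLinearDisplayP3 is the right target space here is exactly the kind of assumption that could explain the shift:
import QuartzCore
import Metal

let layer = CAMetalLayer()
layer.device = MTLCreateSystemDefaultDevice()
// Half-float framebuffer so values above 1.0 survive to the display.
layer.pixelFormat = .rgba16Float
layer.wantsExtendedDynamicRangeContent = true
// The working/target space; this choice is an assumption to verify.
layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)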
I found that the app reported a 'pure virtual function call' crash, which I could not reproduce.
A third-party library is referenced:
https://github.com/lincf0912/LFPhotoBrowser
It provides smearing, blurring, and mosaic processing of images.
Crash code:
if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}
- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size
                                   orientation:(CGImagePropertyOrientation)orientation
                                 filterHandler:(CIFilter *(^ _Nullable)(CIImage *ciimage))filterHandler
{
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width / midImage.extent.size.width,
                                                                             size.height / midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Begin processing the image.
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}
This line triggers the crash:
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
I have an iOS app that uses the camera video feed and applies CoreImage filters to simulate a specific real-world effect (for educational purposes).
Now I want to make a similar app for visionOS and apply the same CoreImage filters to the content (live view) users see while wearing the Apple Vision Pro headset.
Is there a way to do it with current APIs and what would you recommend?
I saw that we cannot get the video feed from the camera(s); is there a way to do this with ARKit and apply the filters using that somehow?
I know visionOS is a young/fresh platform but any help would be great!
Thank you!
It feels like this should be easy, but I'm having conceptual problems about how to do this. Any help would be appreciated.
I have a sample app below that works exactly as expected. I'm able to use the Slider and Stepper to generate inputs to a function that uses CoreImage filters to manipulate my input image. This all works as expected, but it's doing some O(n) CI work on the main thread, and I want to move it to a background thread. I'm pretty sure this can be done using Combine; here is the pseudocode I imagine would work for me:
func doStuff() {
    // subscribe to options changes
    // .receive(on:) a background thread
    // .debounce
    // .map { model.inputImage.combine(options: $0) }
    // return an object on the main thread
    // update an @Published property?
}
Below is the POC code for my project. Any guidance as to where I should use Combine for this would be greatly appreciated. (Also, please let me know if you think Combine is not the best way to tackle this. I'd be open to alternative implementations.)
struct ContentView: View {
    @State private var options = CombineOptions.basic
    @ObservedObject var model = Model()

    var body: some View {
        VStack {
            Image(uiImage: enhancedImage)
                .resizable()
                .aspectRatio(contentMode: .fit)
            Slider(value: $options.scale)
            Stepper(value: $options.numberOfImages, label: {
                Text("\(options.numberOfImages)")
            })
        }
    }

    private var enhancedImage: UIImage {
        return model.inputImage.combine(options: options)
    }
}

class Model: ObservableObject {
    let inputImage: UIImage = UIImage(named: "IMG_4097")!
}

struct CombineOptions: Codable, Equatable {
    static let basic: CombineOptions = .init(scale: 0.3, numberOfImages: 10)
    var scale: Double
    var numberOfImages: Int
}
This isn't just my observation: lots of people around me say the same, and you can find tonnes of feedback on the interwebs.
Images taken with the front-facing camera on the 15 (and I think the 14 before it) are so over-processed that I know of people jumping to other phones. And they're right; the 15 exacerbates it even more. You can turn off HDR (a viewing thing) and you can prioritise speed over processing, but you really cannot turn this off. If you take a Live Photo and then choose a different frame, the processing is lighter.
As a developer I look at that and think it's bonkers: it's just software, so why hasn't anyone produced a camera app that makes faces look good (without AI processing) from the front camera?
I could be all enthusiastic and say I will develop one, but it seems like a simple, obvious fix for Apple.
Having the settings so bad that I have friends returning their phones seems pretty bad. And as a photographer I would agree. There's a lot to love with Apple on the 15, including log and ProRes, but a simple selfie produces such ugly results. That's an actual problem.
So, throwing it out there: what does everyone think?
cheers
Paul
Animated AVIF files render slowly in Safari.
Tested with a MacBook Pro (16-inch, 2019) and Safari (Version 17.0, 19616.1.27.211.1), and also on several iPhone models (14, 15 Pro) via BrowserStack.
When using the same MacBook Pro with Chrome (Version 120.0.6099.129), it renders OK.
Example (720p @ 25 fps):
https://res.cloudinary.com/yaronshmueli/image/upload/cases/animated_AVIF_Apple/world_flight_fast_decode_tile_clmn_btiolg.avif
Customers are reporting that after the update to macOS Sonoma 14.2, bitmap images have a black background! When we run the same application on Sonoma 14.1.1, the bitmap images appear as expected.
As I said at the beginning, this started after upgrading to Sonoma 14.2, so the update introduced the issue.
Hello everyone,
I'm currently facing a challenging issue with my macOS application that involves HEIF image processing. The application uses an OperationQueue to handle HEIF compression tasks. However, I've observed a significant delay in processing when a screen recording is active. This delay doesn't occur under normal circumstances.
Here's a brief overview of the implementation:
The HEIF processing task is encapsulated within an Operation added to an OperationQueue.
The task involves using CIContext for image processing.
When screen recording is initiated, the operation's execution becomes unusually slow or gets delayed extensively.
After some research and community feedback, I learned that screen recording might be affecting the system's resource allocation, particularly impacting tasks that utilize GPU resources, like CIContext operations in my case.
To address this, I tried the following:
Switching to a custom dispatch queue with a .userInitiated QoS.
Using GCD instead of OperationQueue.
Despite these attempts, the issue persists during screen recording. It seems like the screen recording process is given higher priority by macOS, leading to resource reallocation and thus affecting my application's performance.
I'm looking for insights or suggestions on how to handle this scenario more effectively. Specifically, I am interested in:
Understanding how screen recording impacts resource allocation in macOS.
Exploring ways to ensure that my HEIF processing task is not severely impacted by other system processes like screen recording.
Any best practices or alternative approaches for handling image processing tasks that are sensitive to system resource availability.
Here's a snippet of the HEIF processing function for reference:
import CoreImage

struct CommandResult: CustomStringConvertible {
    let output: String
    let error: Process.TerminationReason
    let status: Int32

    var description: String {
        return "error:\(error.rawValue), output:\(output), status:\(status)"
    }
}

func heif(at sourceURL: URL, to destinationURL: URL, as quality: Int = 75) -> CommandResult {
    let compressionQuality = CGFloat(quality) / 100.0
    guard let ciImage = CIImage(contentsOf: sourceURL) else {
        return CommandResult(output: "Load heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    let context = CIContext(options: nil)
    let heifOptions = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    do {
        try context.writeHEIFRepresentation(of: ciImage,
                                            to: destinationURL,
                                            format: .RGBA8,
                                            colorSpace: ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                            options: heifOptions)
    } catch {
        return CommandResult(output: "Compress and write heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    return CommandResult(output: "Compress and write heic image successfully \(sourceURL)", error: .exit, status: 0)
}
Thank you for your time and any assistance you can provide!
I am fetching an image from the photo library and trying to read its GPS location data, but it's not working.
This needs to work on iOS 17 as well, so I used PHPicker. kCGImagePropertyGPSDictionary is always returning nil.
The code I tried:
import CoreLocation
import MobileCoreServices
import PhotosUI
import UIKit

class ViewController: UIViewController, PHPickerViewControllerDelegate {

    @IBOutlet weak var selectedImageView: UIImageView!

    @IBAction func selectTheImage() {
        self.pickImageFromLibrary_PH()
    }

    func pickImageFromLibrary_PH() {
        var configuration = PHPickerConfiguration(photoLibrary: PHPhotoLibrary.shared())
        configuration.filter = .images
        let picker = PHPickerViewController(configuration: configuration)
        picker.delegate = self
        present(picker, animated: true, completion: nil)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        if let itemProvider = results.first?.itemProvider, itemProvider.canLoadObject(ofClass: UIImage.self) {
            itemProvider.loadObject(ofClass: UIImage.self) { (image, error) in
                if let image = image as? UIImage {
                    self.fetchLocation(for: image)
                }
            }
        }
        picker.dismiss(animated: true, completion: nil)
    }

    func fetchLocation(for image: UIImage) {
        guard let imageData = image.jpegData(compressionQuality: 1.0) else {
            print("Unable to fetch image data.")
            return
        }
        guard let source = CGImageSourceCreateWithData(imageData as CFData, nil) else {
            print("Unable to create image source.")
            return
        }
        guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any] else {
            print("Unable to fetch image properties.")
            return
        }
        print(properties)
        if let gpsInfo = properties[kCGImagePropertyGPSDictionary as String] as? [String: Any],
           let latitude = gpsInfo[kCGImagePropertyGPSLatitude as String] as? CLLocationDegrees,
           let longitude = gpsInfo[kCGImagePropertyGPSLongitude as String] as? CLLocationDegrees {
            let location = CLLocation(latitude: latitude, longitude: longitude)
            print("Image was taken at \(location.coordinate.latitude), \(location.coordinate.longitude)")
        } else {
            print("PHPicker - Location information not found in the image.")
        }
    }
}
Properties available in that image (Exif metadata is present; I expected GPS location data too):
["{Exif}": {
    ColorSpace = 65535;
    PixelXDimension = 4032;
    PixelYDimension = 3024;
}, "DPIWidth": 72, "Depth": 8, "PixelHeight": 3024, "ColorModel": RGB, "DPIHeight": 72, "{TIFF}": {
    Orientation = 1;
    ResolutionUnit = 2;
    XResolution = 72;
    YResolution = 72;
}, "PixelWidth": 4032, "Orientation": 1, "{JFIF}": {
    DensityUnit = 0;
    JFIFVersion = (1, 0, 1);
    XDensity = 72;
    YDensity = 72;
}]
Note: I'm trying this in Xcode 15 on iOS 17.
In the Photos app the location data is available, but in code it returns nil.
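For what it's worth, a sketch of an alternative worth testing — loading the original image bytes from the item provider instead of a re-encoded UIImage, since jpegData(compressionQuality:) produces fresh JPEG data without the original EXIF/GPS block (the type identifier and picker behavior here are assumptions):
import PhotosUI
import UniformTypeIdentifiers

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)
    guard let itemProvider = results.first?.itemProvider else { return }
    // Load the original file data so the metadata has a chance to survive.
    itemProvider.loadDataRepresentation(forTypeIdentifier: UTType.image.identifier) { data, error in
        guard let data,
              let source = CGImageSourceCreateWithData(data as CFData, nil),
              let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any] else { return }
        print(properties[kCGImagePropertyGPSDictionary as String] ?? "No GPS dictionary")
    }
}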
I am trying to create a custom CGColorSpace in Swift on macOS but am not sure I really understand the concepts.
I want to use a custom color space called Spot1 and if I extract the spot color from a PDF I get the following:
"ColorSpace<Dictionary>" = {
"Cs2<Array>" = (
Separation,
Spot1,
DeviceCMYK,
{
"BitsPerSample<Integer>" = 8;
"Domain<Array>" = (
0,
1
);
"Filter<Name>" = FlateDecode;
"FunctionType<Integer>" = 0;
"Length<Integer>" = 526;
"Range<Array>" = (
0,
1,
0,
1,
0,
1,
0,
1
);
"Size<Array>" = (
1024
);
}
);
};
How can I create this same color space using the CGColorSpace(propertyListPlist: CFPropertyList) API?
func createSpot1() -> CGColorSpace? {
    let dict0: NSDictionary = [
        "BitsPerSample": 8,
        "Domain": [0, 1],
        "Filter": "FlateDecode",
        "FunctionType": 0,
        "Length": 526,
        "Range": [0, 1, 0, 1, 0, 1, 0, 1],
        "Size": [1024]
    ]
    let dict: NSDictionary = [
        "Cs2": ["Separation", "Spot1", "DeviceCMYK", dict0]
    ]
    let space = CGColorSpace(propertyListPlist: dict as CFPropertyList)
    if space == nil {
        DebugLog("Spot1 color space is nil!")
    }
    return space
}
Hi, I'm trying to find an explanation for the strange behaviour of the .clampedToExtent() method.
I'm doing a pretty straightforward thing: clamp the image, then crop it with some insets. As a result I expect the same image as the original, padded on every side by repeating the last pixel of each edge (in order to apply a CIPixellate filter afterwards). Here is the code:
originalImage
    .clampedToExtent()
    .cropped(to: originalImage.extent.insetBy(dx: -50, dy: -50))
The result is strange: the image has padding as specified, but only three sides (left, right, bottom) have content; the top side has transparent padding. Sometimes the right side has transparency instead of content.
So the question is: why does this happen, and how can I get all sides filled with the last pixel's data?
I tested on two different devices, with iOS 16 and 17.
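For comparison, the usual ordering is clamp, then filter, then crop at the very end, so the filter samples the clamped (infinite) image rather than a pre-cropped one — a sketch, with arbitrary filter parameters:
import CoreImage

let clamped = originalImage.clampedToExtent() // infinite extent, edges repeated

let pixellate = CIFilter(name: "CIPixellate")!
pixellate.setValue(clamped, forKey: kCIInputImageKey)
pixellate.setValue(20.0, forKey: kCIInputScaleKey)

// Crop back to the padded rect only at the end of the chain.
let result = pixellate.outputImage!
    .cropped(to: originalImage.extent.insetBy(dx: -50, dy: -50))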
What is the purpose of the new .memoryTarget option in CIContextOption, added in iOS 17? Interestingly, this option appears only in the Swift interface.
https://developer.apple.com/documentation/coreimage/cicontextoption/4172811-memorytarget
For better memory usage when working with MTLTextures (editing plus displaying in render passes, compute shaders, etc.): is it possible to save the texture to the app's Documents folder, and then use an UnsafeMutablePointer to access/modify its contents before displaying it in a render pass? And would this be performant (i.e. 60 fps)? That way the texture won't be directly in memory all the time, but its contents can still be edited and displayed when needed.
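A sketch of one way this could look — memory-mapping the file and wrapping the mapping in a Metal buffer with no copy, then viewing the buffer as a linear texture. The pixel format, the page alignment of the mapping length, and the bytesPerRow alignment rules are assumptions to verify, and whether paging keeps up at 60 fps would need profiling:
import Foundation
import Metal

func makeMappedTexture(device: MTLDevice, fileURL: URL,
                       width: Int, height: Int) -> MTLTexture? {
    let bytesPerRow = width * 4 // .bgra8Unorm; assumed to satisfy alignment rules
    let length = bytesPerRow * height // assumed to be a multiple of the page size

    // Map the file; pages are faulted in lazily by the VM system.
    let fd = open(fileURL.path, O_RDWR)
    guard fd >= 0 else { return nil }
    guard let ptr = mmap(nil, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0),
          ptr != MAP_FAILED else { close(fd); return nil }
    close(fd)

    // Wrap the mapping without copying; mmap returns page-aligned memory.
    guard let buffer = device.makeBuffer(bytesNoCopy: ptr,
                                         length: length,
                                         options: .storageModeShared,
                                         deallocator: { p, len in munmap(p, len) }) else { return nil }

    // View the buffer contents as a linear 2D texture.
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: width, height: height,
                                                              mipmapped: false)
    descriptor.storageMode = .shared
    descriptor.usage = [.shaderRead, .shaderWrite]
    return buffer.makeTexture(descriptor: descriptor, offset: 0, bytesPerRow: bytesPerRow)
}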