DownSampling / Scaling Image Quality. UIImage versus ImageIO

I have been doing some tests with the code for downsampling / scaling an image from WWDC 2018 Session 416, iOS Memory Deep Dive.


In this session they said that:


- "Memory use is related to the DIMENSIONS of the image, NOT the file size."

- UIImage is expensive for sizing and resizing (it decompresses the image into memory first, and its internal coordinate space transforms are expensive).

- Use ImageIO; it can work without dirty memory (and the API is faster).

For my tests I used high-resolution images with the following dimensions:


1920 × 1080, 2560 × 1600, 3840 × 2160, 2560 × 1600, 4712 × 3133, 2560 × 1600 and 3072 × 2048.
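To put the session's claim in numbers, here is a rough sketch of the decoded (in-memory) size of a bitmap, assuming the common 4 bytes per pixel (RGBA, 8 bits per channel). decodedSizeMB is a hypothetical helper of my own, not part of any Apple API:

```swift
// Rough estimate of the decoded (in-memory) size of an image:
// width * height * 4 bytes (RGBA, 8 bits per channel).
// `decodedSizeMB` is a hypothetical helper, for illustration only.
func decodedSizeMB(width: Int, height: Int) -> Double {
    Double(width * height * 4) / 1_048_576
}

print(decodedSizeMB(width: 4712, height: 3133)) // ≈ 56 MB decoded, regardless of the JPEG's file size
print(decodedSizeMB(width: 300, height: 300))   // ≈ 0.34 MB after downsampling to 300 × 300
```

This is why a few-megabyte JPEG can cost tens of megabytes once drawn: memory follows pixel dimensions, not file size.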


The following is the code that I used.


import UIKit

struct ImageConverter {

    static func resize(image: UIImage) -> UIImage {
        let size = CGSize(width: 300, height: 300)

        let renderer = UIGraphicsImageRenderer(size: size)
        let resizedImage = renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: size))
        }

        return resizedImage
    }
}



import UIKit
import ImageIO

struct ImageIOConverter {
    static func resize(url: URL) -> UIImage {

        guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else {
            fatalError("Can not get imageSource")
        }

        let options: [NSString: Any] = [
            kCGImageSourceThumbnailMaxPixelSize: 300,
            kCGImageSourceCreateThumbnailFromImageAlways: true
        ]

        guard let scaledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options as CFDictionary) else {
            fatalError("Can not get scaledImage")
        }

        return UIImage(cgImage: scaledImage)
    }
}



import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView! // Constraints Width: 300, Height: 300

    let resource = "1"
    let ext = ".jpg"

    var imageNamed : String {
        return resource + ext
    }
    
  
    @IBAction func uploadOriginalImage(_ sender: Any) {
        guard let image = UIImage(named: imageNamed) else {  
            fatalError("Can not get image")
        }
        imageView.image = image
    }
    
    
    @IBAction func uploadImageScaledUsingUIImage(_ sender: Any) {
        guard let image = UIImage(named: imageNamed) else {
            fatalError("Can not get image")
        }
        imageView.image = ImageConverter.resize(image: image)
    }
    
    @IBAction func uploadImageUsingImageIO(_ sender: Any) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: ext) else {
            fatalError("Can not get image url")
        }
        imageView.image = ImageIOConverter.resize(url: url)
    }
}


Effectively, I found that with ImageIO the amount of memory used is smaller, but I also noticed that the final image appears to have lower quality than the one produced with UIImage.


I mean, with UIImage the final image looks closer to the original than with ImageIO.


I wonder: is this the expected result (lower image quality, but also lower memory)?

It's been a while since you asked this question, but I'm going to try to answer it anyway:

I think this is because your image view is 300 points in size while your image is only 300 pixels wide, and points are not the same as pixels on iOS.
For example, on standard Retina displays 1 point == 2 pixels. To get the full resolution, you'd have to multiply the number of points you want by the displayScale property of your view's traitCollection.

That way you request the correct number of pixels, so your image does not look blurry on Retina displays.
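To make that concrete, here is a minimal sketch of the computation. It is pure Swift for illustration: pointSize and displayScale are assumed stand-ins for your imageView's width and traitCollection.displayScale, and targetMaxPixelSize is a hypothetical helper, not an Apple API:

```swift
// kCGImageSourceThumbnailMaxPixelSize is specified in *pixels*, while view
// sizes are in *points*, so multiply by the display scale first.
// `targetMaxPixelSize` is a hypothetical helper, for illustration only.
func targetMaxPixelSize(pointSize: Double, displayScale: Double) -> Int {
    Int((pointSize * displayScale).rounded(.up))
}

print(targetMaxPixelSize(pointSize: 300, displayScale: 2)) // 600 pixels on a 2x display
print(targetMaxPixelSize(pointSize: 300, displayScale: 3)) // 900 pixels on a 3x display
```

Passing 600 (or 900) instead of 300 for kCGImageSourceThumbnailMaxPixelSize should make the ImageIO result look as sharp as the UIImage version, since UIGraphicsImageRenderer already renders at the screen scale by default.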