HDR Image capture/conversion

Hello! After the recent WWDC 2023 talk on HDR support, and after finding the documentation page on applying the Apple HDR effect to photos, I became very interested in the HDR Gain Map format. From the documentation page it is clear how we can restore the original HDR from the SDR + Gain Map representation, but my question is: how do we convert back from HDR to the SDR + Gain Map representation? As I understand it right now, the conversion from HDR to SDR + Gain Map involves two steps:

  1. Tone mapping the HDR image to obtain a correct SDR rendition
  2. Once we have both HDR and SDR, calculating the Gain Map from the equation on the documentation page
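To make my mental model concrete, here is a toy sketch of the two steps for a single pixel. The tone mapper is a placeholder simple Reinhard operator (the real operator is exactly what I'm asking about), and the gain is a plain log2 ratio; Adobe's spec adds offsets, min/max normalization, and a gamma, which I omit here:

```swift
import Foundation

// Step 1 (placeholder): a trivial tone-mapping operator standing in for
// whatever Apple actually uses. Simple Reinhard: sdr = hdr / (1 + hdr).
func toneMap(_ hdr: Double) -> Double {
    hdr / (1.0 + hdr)
}

// Step 2: per-pixel gain as a log2 ratio of HDR to SDR. Adobe's spec
// adds offsets, min/max normalization, and a gamma, all omitted here.
func gain(hdr: Double, sdr: Double, eps: Double = 1e-6) -> Double {
    log2((hdr + eps) / (sdr + eps))
}

// Decoding reverses step 2: HDR ≈ SDR * 2^gain.
func reconstruct(sdr: Double, gain: Double) -> Double {
    sdr * pow(2.0, gain)
}

let hdr = 4.0                       // a linear pixel brighter than SDR white
let sdr = toneMap(hdr)              // 0.8
let g = gain(hdr: hdr, sdr: sdr)    // ~2.32 stops
let roundTrip = reconstruct(sdr: sdr, gain: g)  // ≈ 4.0
```

Decoding with the stored gain map recovers the HDR value regardless of which tone mapper produced the SDR, which is presumably why the format can leave the tone mapper unspecified.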

Am I correct? If so, what tone-mapping algorithm is used for the HDR -> SDR conversion right now? I can't find any information about this on the internet :(

Would be very grateful for your response!


Replies

I'm also investigating Gain Maps, and in addition to the documentation you are referring to, I also found this PDF from Adobe: https://helpx.adobe.com/content/dam/help/en/camera-raw/using/gain-map/jcr_content/root/content/flex/items/position/position-par/table/row-3u03dx0-column-4a63daf/download_section/download-1/Gain_Map_1_0d12.pdf

In it you will find pseudocode for both encoding and decoding.

However, the problem is that Adobe's encoding calculates some values that you are supposed to pass along as metadata and use during decoding. At this point I'm unsure whether Apple supports Adobe-style decoding using this metadata, or only its own Gain Maps and decoding.

Also, for both Apple and Adobe Gain Maps, the real problem is that we have no way to write Gain Maps 100% using public Apple APIs. I found this discussion: https://gist.github.com/kiding/fa4876ab4ddc797e3f18c71b3c2eeb3a It suggests using a non-public CIImageRepresentationOption key called kCIImageRepresentationHDRGainMapImage.

This gets us halfway there, but we are still missing some XMP-based metadata that seems to affect how Apple decodes an image containing a Gain Map. It appears in this form:

<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="XMP Core 6.0.0">
   <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
      <rdf:Description rdf:about=""
            xmlns:HDRGainMap="http://ns.apple.com/HDRGainMap/1.0/">
         <HDRGainMap:HDRGainMapVersion>65536</HDRGainMap:HDRGainMapVersion>
      </rdf:Description>
   </rdf:RDF>
</x:xmpmeta>
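For experimenting, that packet can be built in code. Everything below is my assumption from inspecting files Apple's camera writes, not documented behavior; on Apple platforms the resulting string could then be handed to CGImageMetadataCreateFromXMPData for use with a CGImageDestination:

```swift
import Foundation

// Build the XMP packet observed in Apple's gain-map images. Whether
// writing this yourself is sufficient (or supported) is unverified.
func makeGainMapXMP(version: Int) -> String {
    """
    <x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="XMP Core 6.0.0">
       <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
          <rdf:Description rdf:about=""
                xmlns:HDRGainMap="http://ns.apple.com/HDRGainMap/1.0/">
             <HDRGainMap:HDRGainMapVersion>\(version)</HDRGainMap:HDRGainMapVersion>
          </rdf:Description>
       </rdf:RDF>
    </x:xmpmeta>
    """
}

// 65536 == 0x00010000, presumably version 1.0 in 16.16 fixed point.
let xmp = makeGainMapXMP(version: 65536)
// On Apple platforms:
// let meta = CGImageMetadataCreateFromXMPData(xmp.data(using: .utf8)! as CFData)
```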

So, another solution is to use CGImage:

What I found is this: when you decode a file that contains a Gain Map, you get the following info:

guard let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as? [String: Any] else {
    fatalError("Image contains no HDR gain map")
}
let gainData = gainmap[kCGImageAuxiliaryDataInfoData as String] as? Data          // raw gain map pixels
let gainDescription = gainmap[kCGImageAuxiliaryDataInfoDataDescription as String] // Width/Height/BytesPerRow
let gainMeta = gainmap[kCGImageAuxiliaryDataInfoMetadata as String]               // CGImageMetadata

I believe it's somehow possible to create a CVPixelBuffer from this dictionary, and if so, you should be able to deduce how to create a fresh Gain Map dictionary. That done, the remaining problem is that I have found no way to save this as more than 8 bpc. Apple strongly pushes 10-bit usage in iOS 17, which seems to be impossible when using the CGImage method described.
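For the CVPixelBuffer part, here is a sketch of what I mean, assuming the aux data really is packed 8-bit grayscale. The made-up dimensions stand in for the real "Width"/"Height"/"BytesPerRow" values from the description dictionary:

```swift
import CoreVideo
import Foundation

// Stand-ins for the values from the gain map description dictionary.
let width = 4, height = 2, bytesPerRow = 4
let gainBytes = Data((0..<(bytesPerRow * height)).map { UInt8($0) })

// Create a single-channel 8-bit pixel buffer, then copy the bytes in
// row by row, since Core Video may pad rows beyond our bytesPerRow.
var pixelBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_OneComponent8, nil, &pixelBuffer)

if let pb = pixelBuffer {
    CVPixelBufferLockBaseAddress(pb, [])
    let dst = CVPixelBufferGetBaseAddress(pb)!
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(pb)
    gainBytes.withUnsafeBytes { src in
        for row in 0..<height {
            memcpy(dst + row * dstBytesPerRow,
                   src.baseAddress! + row * bytesPerRow,
                   bytesPerRow)
        }
    }
    CVPixelBufferUnlockBaseAddress(pb, [])
}
```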

So still work to do, but hopefully this can help you get some kind of progress, and please share your findings, since I'm also researching this.

Cheers Thomas

I also got attracted to this topic and found @rued's answer very inspiring. As a newbie in Swift, with ChatGPT's help, I was able to make sense of the gainData and save it to an image, which looks reasonable. Here is a sample code snippet that I hope helps others. This is an extremely interesting area, but the relevant docs and code are extremely hard to find.

import AppKit

let imageURL = URL(fileURLWithPath: "/path/to/input/image.dng")
let outputURL = URL(fileURLWithPath: "/path/to/output.png")

guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil) else {
    print("Error creating image source.")
    exit(1)
}
guard let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as? [String: Any],
      let gainData = gainmap[kCGImageAuxiliaryDataInfoData as String] as? Data,
      let gainDescription = gainmap[kCGImageAuxiliaryDataInfoDataDescription as String] as? [String: Any],
      let width = gainDescription["Width"] as? Int,
      let height = gainDescription["Height"] as? Int,
      let bytesPerRow = gainDescription["BytesPerRow"] as? Int else {
    print("No gain map found in image.")
    exit(1)
}

// The gain map is a single-channel 8-bit grayscale bitmap.
guard let bitmapRep = NSBitmapImageRep(
    bitmapDataPlanes: nil,
    pixelsWide: width,
    pixelsHigh: height,
    bitsPerSample: 8,
    samplesPerPixel: 1,
    hasAlpha: false,
    isPlanar: false,
    colorSpaceName: .deviceWhite,
    bytesPerRow: bytesPerRow,
    bitsPerPixel: 8
) else {
    print("Failed to create bitmap rep.")
    exit(1)
}

gainData.copyBytes(to: bitmapRep.bitmapData!, count: gainData.count)

if let pngData = bitmapRep.representation(using: .png, properties: [:]) {
    do {
        try pngData.write(to: outputURL)
        print("Image saved to \(outputURL.path)")
    } catch {
        print("Error saving image: \(error)")
    }
} else {
    print("Failed to create PNG data.")
}

Again, I'm a Swift newbie, so the code may not stick to recommended practice.

Actually, I think I've got an answer to the original question. I shared the code together with detailed docs here: https://github.com/grapeot/AppleJPEGGainMap. Hope it's helpful.