What does kCGBitmapByteOrderDefault actually mean?

The doc is fairly unhelpful and only states "The default byte order", which could mean a few things.


I've done some experimentation and it appears that kCGBitmapByteOrderDefault is kCGImageByteOrder32Big (even on my machine, where kCGBitmapByteOrder32Host == kCGImageByteOrder32Little). But... can/should I rely on that? My guess would be "no", and that "default" really means "whatever Apple (currently) deems the default in the API to be". Otherwise the doc would state that (or "network order", etc.).


Interestingly, the API "knows" that it is 32Big, because it successfully flips the byte order when you draw a kCGBitmapByteOrderDefault image into a kCGImageByteOrder32Little context. It would be nice if it would report back what the underlying ordering actually is, or if there were a call that would dynamically report the default byte order.


Without those capabilities, I think I have the following options from a consuming perspective:

  • Assume Default == 32Big, with potential breakage later (which can admittedly be mitigated somewhat by unit tests)
  • Write some one-time initialization code that examines the byte order when drawing to a 1-pixel context using kCGBitmapByteOrderDefault (this assumes that the default is consistent across the system, and not per image); see the sketch after this list
  • Redraw the image into a context with an explicitly specified byte order (safe, but has a performance trade-off)
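
A minimal Swift sketch of that second option (assuming a 1x1 RGBX context is representative and that the default really is system-wide):

    import CoreGraphics

    // Probe what byte order "default" actually produces: fill a 1x1
    // noneSkipLast context with pure red and see where the red byte lands.
    func defaultByteOrderIsBigEndian() -> Bool? {
        var pixel = [UInt8](repeating: 0, count: 4)
        return pixel.withUnsafeMutableBytes { buf -> Bool? in
            guard let ctx = CGContext(data: buf.baseAddress,
                                      width: 1, height: 1,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      // byte order bits left at "default" (0)
                                      bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
            else { return nil }
            ctx.setFillColor(red: 1, green: 0, blue: 0, alpha: 1)
            ctx.fill(CGRect(x: 0, y: 0, width: 1, height: 1))
            if buf[0] == 0xFF { return true }   // R at lowest address: 32Big layout
            if buf[3] == 0xFF { return false }  // R at highest address: 32Little layout
            return nil                          // inconclusive
        }
    }

If the "secret default" really is 32Big everywhere, this should return true even on little-endian hardware, but that consistency is exactly the assumption in question.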


For what it is worth, the reason I am looking at the raw underlying data is that I am loading the image data into an Eigen::TensorMap.

Are you saying that, for a new CGImage, the "CGImageCreate" function gives you big-endian data when you specify kCGBitmapByteOrderDefault?


Or are you saying that, for an existing CGImage, the "CGImageGetBitmapInfo" function returns 0 (== kCGBitmapByteOrderDefault) for the bits that represent the byte order?


I have a vague recollection of CGBitmapInfo being really annoying to use, for a reason something like what you're asking about. Like a lot of CG APIs, it's very old, so it may well be defective from a rational client's point of view.

Actually... both of those things.


#1 - If I create a CGContextRef with kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast, the data is big-endian (e.g. Red is in the lowest memory address and Alpha is in the highest).


#2 - If I load a PNG via CGImageCreateWithPNGDataProvider(), then the BitmapInfo reports kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast and the data is also big-endian. (FWIW, the same thing happens if you load a UIImage via imageNamed: and access the cgImage property.)
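
For completeness, this is roughly what that check looks like in Swift (the file path is just a placeholder):

    import Foundation
    import CoreGraphics

    // Load a PNG directly through CGImageCreateWithPNGDataProvider and
    // inspect the byte-order bits of its reported bitmapInfo.
    let url = URL(fileURLWithPath: "some.png") as CFURL   // placeholder path
    if let provider = CGDataProvider(url: url),
       let image = CGImage(pngDataProviderSource: provider,
                           decode: nil,
                           shouldInterpolate: false,
                           intent: .defaultIntent) {
        let order = image.bitmapInfo.intersection(.byteOrderMask)
        // order.rawValue == 0 means kCGBitmapByteOrderDefault: the actual
        // in-memory layout is not reported, which is the problem here.
        print("byte order bits:", order.rawValue,
              "alpha:", image.alphaInfo.rawValue)
    }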


I actually started down this rabbit hole on #2 and suspected maybe "default" meant that it was the default of the "provider" or something. And since PNG always uses big-endian, that would make sense. Then I tested that hypothesis with #1 and found that images I create were also big-endian on a little-endian machine (i.e. one where #define kCGBitmapByteOrder32Host kCGBitmapByteOrder32Little). And that seems less than ideal (mainly because it seems like, as a caller, you have to have CG redraw any "default"-ordered image to be sure of the byte order).


So... my current hypothesis is that there is a "secret CG default"... that is at least consistent... but also unknown at runtime.


Personally, I feel like kCGBitmapByteOrderDefault is a perfectly valid input to CG, but that it should never be returned from a call to CGImageGetBitmapInfo() (which should instead hand you back the specific byte order in memory). But I'm not sure how "rational" that is in the bigger picture (it's always easier to tell someone else how their API could work better for your own specific use case without concern for legacy code or potential side effects).

Well, there's this:


stackoverflow.com/questions/7300591/how-to-determine-and-interpret-the-pixel-format-of-a-cgimage


suggesting that the default is per-platform, not per-image. If that's correct, it has to be big endian on the Mac forever, so as to not break old apps.


This may not help, but I suggest you try creating an NSBitmapImageRep from the CGImage, and see what its "bitmapFormat" property reports. In some cases, the Cocoa APIs incorporate special knowledge of CG underpinnings, so the NS version may return an actual byte order. (The secondary question is whether creating the NSBitmapImageRep copies the image data, but I think in recent macOS versions it does not, so the check might be inexpensive, if it works.)
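
Something along these lines (a macOS-only sketch; bitmapFormat is where NSBitmapImageRep exposes explicit endianness flags, and cgImage is whatever image you're inspecting):

    import AppKit

    // Wrap the CGImage in an NSBitmapImageRep and report any explicit
    // 32-bit endianness flag in its bitmapFormat.
    func reportedEndianness(of cgImage: CGImage) -> String {
        let rep = NSBitmapImageRep(cgImage: cgImage)
        let format = rep.bitmapFormat
        if format.contains(.thirtyTwoBitBigEndian)    { return "32-bit big-endian" }
        if format.contains(.thirtyTwoBitLittleEndian) { return "32-bit little-endian" }
        return "no explicit 32-bit endianness flag (rawValue \(format.rawValue))"
    }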


If you don't get a definitive answer out of that, I suggest you use a TSI to get an official answer from an Apple engineer about how to code a safe check for your use case.

Yep... saw that yesterday. This answer https://stackoverflow.com/a/39311919 is what led me to checking kCGBitmapByteOrder32Host, which ended up not matching the byte order of the image. But I agree with you that it being "BigEndian forever" probably kept life sane on OS X, and that carrying that over to iOS would potentially make sense. Really, if it were just documented that way, it'd be good enough for me.


NSBitmapImageRep is macOS-only (and I'm using iOS -- technically the simulator). But I still might check that out if I find some time, to see what interesting differences may exist between iOS and macOS (and whether NSBitmapImageRep returns different results).


TSI is a good suggestion. I'll probably head there if no one chimes in with an answer here. Thanks for the suggestions!
