Hopefully someone can help me understand what is happening.
In short, at certain scales, drawing an NSImage that was created from a bitmap produces a stripe of black pixels near the right border of the image. This happens when the image is drawn at roughly half its actual size.
The artefact only occurs at certain scales; at smaller and larger scales it is absent, which suggests that the problem is not in the bitmap data itself.
For a given scale the artefact occurs consistently: every time the image is drawn, also when a newly created image is drawn, and for differently sized images (at similar relative drawing scales).
To further complicate things, the problem occurs in my (relatively large) application project. When I run the exact same code in a toy project, created specifically to reproduce the problem, the artefact is not there. The toy project is built and tested on the same machine (a Mac mini with macOS Monterey 12.0, using Xcode 12.4). I tried to keep the project settings similar (e.g. both projects target macOS 10.9), but evidently something differs that impacts how images are drawn.
The code with the problem is as follows:
- (NSImage *)createTestImageFromColor {
    NSSize size = NSMakeSize(640, 640);
    NSImage *image = [[[NSImage alloc] initWithSize: size] autorelease];

    [image lockFocus];
    [NSColor.blueColor drawSwatchInRect: NSMakeRect(0, 0, size.width, size.height)];
    [NSColor.whiteColor drawSwatchInRect: NSMakeRect(10, 10, size.width - 20, size.height - 20)];
    [image unlockFocus];

    return image;
}
- (NSImage *)createTestImageFromBitmap {
    int w = 640, h = 640;

    NSBitmapImageRep *bitmap =
        [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes: NULL
            pixelsWide: w pixelsHigh: h
            bitsPerSample: 8 samplesPerPixel: 3
            hasAlpha: NO isPlanar: NO
            colorSpaceName: NSDeviceRGBColorSpace
            bytesPerRow: 0 bitsPerPixel: 32];

    // Width of a row in 32-bit pixels, including any padding AppKit may have added.
    int bmw = (int)(bitmap.bytesPerRow / sizeof(UInt32));
    for (int y = 0; y < h; ++y) {
        UInt32 *pos = (UInt32 *)bitmap.bitmapData + y * bmw;
        for (int x = 0; x < w; ++x) {
            *pos++ = 0xFF; // On little-endian this sets the red sample to 0xFF.
        }
    }

    NSImage *image = [[NSImage alloc] initWithSize: NSMakeSize(w, h)];
    [image addRepresentation: bitmap];
    [bitmap release];

    return [image autorelease];
}
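One detail that may matter here (an assumption on my part, not something I have verified): with samplesPerPixel: 3 but bitsPerPixel: 32, every pixel carries an unused padding byte, and bytesPerRow: 0 lets AppKit choose the row stride. A small diagnostic helper can make the layout AppKit actually picked explicit, so it can be compared between the two projects:

```objc
#import <Cocoa/Cocoa.h>

// Diagnostic sketch: log the pixel layout AppKit actually chose for a rep.
// Call it right after creating the bitmap in createTestImageFromBitmap.
static void LogBitmapLayout(NSBitmapImageRep *rep) {
    NSLog(@"size=%ldx%ld bytesPerRow=%ld bitsPerPixel=%ld samplesPerPixel=%ld hasAlpha=%d format=%ld",
          (long)rep.pixelsWide, (long)rep.pixelsHigh,
          (long)rep.bytesPerRow, (long)rep.bitsPerPixel,
          (long)rep.samplesPerPixel, rep.hasAlpha,
          (long)rep.bitmapFormat);
}
```

If the logged values differ between the application and the toy project, that would point at a difference in how the rep is backed rather than in the drawing code.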
- (void)drawScaledImages {
    if (testImage == nil) {
        testImage = [[self createTestImageFromColor] retain];
    }
    if (testImage2 == nil) {
        testImage2 = [[self createTestImageFromBitmap] retain];
    }

    [NSColor.whiteColor setFill];
    NSRectFill(self.bounds);

    [testImage drawInRect: NSMakeRect(50, 50, 320, 320)
                 fromRect: NSZeroRect
                operation: NSCompositeCopy
                 fraction: 1.0];
    [testImage2 drawInRect: NSMakeRect(50, 100, 320, 320)
                  fromRect: NSZeroRect
                 operation: NSCompositeCopy
                  fraction: 1.0];
    [testImage drawInRect: NSMakeRect(450, 50, 480, 480)
                 fromRect: NSZeroRect
                operation: NSCompositeCopy
                 fraction: 1.0];
    [testImage2 drawInRect: NSMakeRect(450, 100, 480, 480)
                  fromRect: NSZeroRect
                 operation: NSCompositeCopy
                  fraction: 1.0];
}
It draws two images. Both have the same size, but they are created differently. The artefact only occurs for the image created from a bitmap (testImage2), and only when it is drawn at size 320x320; at size 480x480 the artefact is not there.
This results in the following view. The black pixels at the right edge of the left red square are the artefact.
It may be difficult to reproduce the problem, as the same code works fine in a minimal project. There it results in the following:
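One way I could take the on-screen compositing out of the equation (a sketch, assuming the artefact is produced during drawing rather than during display) would be to render the image into an offscreen NSBitmapImageRep at the problematic size and sample pixels near the right edge, where the stripe appears:

```objc
#import <Cocoa/Cocoa.h>

// Sketch: draw `image` offscreen at 320x320 and sample pixels near the
// right edge, where the black stripe shows up on screen.
static void InspectRightEdge(NSImage *image) {
    NSBitmapImageRep *target =
        [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes: NULL
            pixelsWide: 320 pixelsHigh: 320
            bitsPerSample: 8 samplesPerPixel: 4
            hasAlpha: YES isPlanar: NO
            colorSpaceName: NSDeviceRGBColorSpace
            bytesPerRow: 0 bitsPerPixel: 32];

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep: target]];
    [image drawInRect: NSMakeRect(0, 0, 320, 320)
             fromRect: NSZeroRect
            operation: NSCompositeCopy
             fraction: 1.0];
    [NSGraphicsContext restoreGraphicsState];

    // Log the last few columns of a row in the middle of the image.
    for (NSInteger x = 315; x < 320; x++) {
        NSLog(@"pixel (%ld, 160) = %@", (long)x, [target colorAtX: x y: 160]);
    }
    [target release];
}
```

If the black pixels show up in the offscreen render too, the problem can be debugged without a view, and the same function can be run in both projects for comparison.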
So does anyone have any pointers on how to troubleshoot this? I cannot step into the drawInRect: code, so I am unable to determine where the code paths diverge and what causes the difference.
Could it be that my application somehow links against a different (older) version of the framework that does the drawing, a version that contains a bug in scaled image drawing? If so, how can I prevent that?
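To rule that out (a quick check, not a fix), both targets could log at runtime which AppKit binary the process actually loaded; if the values below match between the full application and the toy project, a mismatched framework version seems unlikely:

```objc
#import <Cocoa/Cocoa.h>

// Sketch: log which AppKit the process loaded. Compare the output of the
// full application with that of the toy project.
static void LogAppKitInfo(void) {
    NSLog(@"NSAppKitVersionNumber = %.2f", NSAppKitVersionNumber);
    NSLog(@"AppKit loaded from: %@",
          [NSBundle bundleForClass: [NSView class]].bundlePath);
}
```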
Should anyone wish to see the project where the actual problem occurs, that is possible, as it is Open Source. The code is on the scaled-image-bug branch of the following git repository: https://git.code.sf.net/p/grandperspectiv/source