Programmatically draw NSImage into NSView

Hi everyone,


I have a bit of a tricky problem.

I am writing an application plugin. In this case it is mostly a C and C++ project, which produces a .bundle that is then loaded by the host application.

As part of the plugin I need to draw an image/bitmap into a view supplied by the host application.


The host application provides me with an NSView* that I am supposed to draw into... but HOW??


I have created an NSImage from the image data, and I am trying to draw it into the NSView, but I cannot for the life of me get it to display.


My code is as follows:

NSImage *pImage = [[NSImage alloc] initWithCGImage:iref size:NSMakeSize(width, height)];

if (pImage)
{
    NSSize imageSize = [pImage size];
    NSRect tFrame = [hostNSView frame];

    printf("Image Size is %d x %d.\n", (int)imageSize.width, (int)imageSize.height);
    printf("View tFrame Origin: %d x %d. Size: %d x %d\n", (int)tFrame.origin.x, (int)tFrame.origin.y, (int)tFrame.size.width, (int)tFrame.size.height);

    tFrame.size.width = MAX(tFrame.size.width, imageSize.width);
    tFrame.size.height = MAX(tFrame.size.height, imageSize.height);

    [hostNSView setFrame:tFrame];
    [hostNSView setNeedsDisplay:YES];

    [hostNSView display];
}



This code runs and does not produce any errors that I can see, but it also does not display the image in the region where I am expecting to see it.


Historically I am a C/C++ programmer, so I am unfamiliar with the design patterns of Cocoa and general OS X GUI programming,

so please excuse, but do highlight, any obvious errors or misunderstandings.


Cheers,

Answered by Bluefish444_dev in 35461022

Hi all,


So I finally found a solution that works.

In the end I needed to get the current context and draw into it via

CGContextDrawImage(cgcontext, [pView bounds], image);


However, I also needed to make sure to call

[[NSGraphicsContext currentContext] flushGraphics];


to ensure that the image would get drawn correctly.


Anyway thanks to everyone for their help!
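
Pieced together, the working code looks roughly like this. This is only a sketch; pView and iref are the names from my other posts in this thread, and I haven't verified it beyond my own plugin:

```objc
// Sketch only: pView is the host-supplied NSView*, iref the CGImage of the frame.
if ([pView lockFocusIfCanDraw])
{
    CGContextRef cgcontext = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGContextDrawImage(cgcontext, NSRectToCGRect([pView bounds]), iref);
    // Without this flush the drawing never became visible on screen.
    [[NSGraphicsContext currentContext] flushGraphics];
    [pView unlockFocus];
}
```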

Is the host calling you at the time when it wants you to draw into the view? Do you know if it's calling you from within the view's -drawRect: method?


That's usually how/when a custom view draws. It can be marked as needing display and then Cocoa, at a time it deems appropriate, calls the view's -drawRect: method. Before doing this, Cocoa will have set up the thread's implicit current drawing context to draw into the view with the appropriate coordinate system, etc. (It has called -lockFocus on the view.) At that point you just issue Cocoa or Core Graphics drawing operations and they draw to the view.


So, if your host has called you inside of the view's -drawRect: method, you can just do:

    [pImage drawAtPoint:NSMakePoint(whateverX, whateverY) fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1];


You can use different parameters than what I wrote or use one of the other drawing methods on NSImage to achieve different effects.


If the host is calling you inside -drawRect:, then you definitely should not set the view's frame, mark it as needing display, or call -display on it. Those sorts of things should not be done within -drawRect:.


If the host is not calling you from within -drawRect:, then that's not ideal. You can call -lockFocusIfCanDraw on the view and, if that returns TRUE, draw as above. It's generally not recommended to draw to views directly/immediately like that. Also, if the view does have a -drawRect: method, then it may draw over whatever you draw.


I would still think that you shouldn't be setting the view's frame, since the host should be laying it out in the surrounding view hierarchy. I would think that the host would query you separately for how big the view should be.



One final thought: you seem to be starting with a CGImage. Given that, you might want to use Core Graphics APIs to draw the image. Once focus has been locked on the view (either it was done for you if you're called during -drawRect:, or you do it yourself if not), you can obtain a CGContext by doing:

    CGContextRef cgcontext = [[NSGraphicsContext currentContext] CGContext];


Or, if you need to support 10.9 or earlier, you would use:

    CGContextRef cgcontext = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];


Once you have that, you can use Core Graphics APIs that may be more familiar to you to draw the image. You don't need to create an NSImage at all.
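
For example, a minimal sketch (iref being your existing CGImage, and assuming focus is already locked on the view as described above):

```objc
// Assumes focus is already locked on the view (e.g. inside -drawRect:).
CGContextRef cgcontext = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextDrawImage(cgcontext, NSRectToCGRect([hostNSView bounds]), iref);
```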

Hey Ken,


Thanks for the response,

I have tried all of your suggestions but I am still having no luck 😟

I have added some information that I forgot to include before and may be relevant, so perhaps you can provide some more insight into my issue.


I tried the first method you suggested, i.e.

[pImage drawAtPoint:NSMakePoint(10, 10) fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1];

but no luck; I got the same result as before.


OK, so I have tried the second thing you suggested by checking the result of the -lockFocusIfCanDraw call and then trying the above...

So -lockFocusIfCanDraw returns true; however, when I tried the code with this, i.e.

if (pImage)
{
    if ([pView lockFocusIfCanDraw])
    {
        printf("DrawIntoView lockFocusIfCanDraw returned True, gonna try draw At point. \n");
        [pImage drawAtPoint:NSMakePoint(10, 10) fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1];
        [pView unlockFocus];
    }
}


I still don't get any image displayed.

Perhaps I misunderstand here, but I cannot see how there is any connection between the NSView I am given by the host and what I am trying to display.

Surely I need to tell the NSView about what I am trying to draw?**


I have also tried drawing directly by getting the current context and using CGContextDrawImage to draw the image, but no luck 😟

(this actually causes a crash in the host application) e.g.

CGImageRef iref = CGImageCreate(width,
    height,
    bitsPerComponent,
    bitsPerPixel,
    bytesPerRow,
    colorSpaceRef,
    bitmapInfo,
    provider,          // data provider
    NULL,              // decode array
    YES,               // should interpolate
    renderingIntent);
if ([pView lockFocusIfCanDraw])
{
  if (iref)
  {
       CGContextRef cgcontext = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
       CGContextDrawImage(cgcontext, CGRectMake(10, 10, 100, 100), iref);
       CGImageRelease(iref);
  }
  [pView unlockFocus];
}



NEW INFO....

Perhaps I should have mentioned this before, but the plugin is updating a preview of an incoming video stream,

and is operating on a different thread to the host application, AND may have a fully independent update/drawing schedule.


I would expect this to be updated on a per-frame basis, i.e. anywhere from 23-60 times a second.

So the host application is not actually "calling" any specific function in my plugin; instead it makes a call that starts its own thread,

and that thread has a pointer to the NSView which describes the area I want to draw into.

Sorry, it's only now, after some more googling, that I realise the part about the different thread is significant.


Currently my code stands as follows:

NSImage *pImage = [[NSImage alloc] initWithCGImage:iref size:NSMakeSize(width, height)];
if (pView.isHidden)
{
    [pView setHidden:NO];
}
[pView setCanDrawConcurrently:YES];
if (pImage)
{
    if ([pView lockFocusIfCanDraw])
    {
        printf("DrawIntoView lockFocusIfCanDraw returned True, gonna try draw At point. \n");
        [pImage drawAtPoint:NSMakePoint(10, 10) fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1];
        [pView unlockFocus];
    }
    else
    {
        printf("DrawIntoView lockFocusIfCanDraw returned FALSE!!!!!. \n");
    }
}


Which I believe is the best/closest thing to a working solution that I have,

but of course it doesn't actually, you know, work...


When checking the canDrawConcurrently property, I found it was initially set to 0 / false.

I'm not sure if setting it to true in my plugin is going to work? Does this need to be set via the main thread?
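
One thing I am considering trying (just a guess from my googling, not something I've confirmed works): marshalling the drawing onto the main thread with GCD, since AppKit drawing from a background thread seems fragile. Something like:

```objc
// Hypothetical sketch: hop to the main thread before touching the view.
// pView and iref are the same names as in my code above.
CGImageRetain(iref); // keep the frame alive until the block runs
dispatch_async(dispatch_get_main_queue(), ^{
    if ([pView lockFocusIfCanDraw])
    {
        CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
        CGContextDrawImage(ctx, NSRectToCGRect([pView bounds]), iref);
        [pView unlockFocus];
    }
    CGImageRelease(iref);
});
```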


Anyway, thank you for your initial response, and I look forward to any more responses.

I will continue trying to get my head around this and hopefully find a solution.

(Any info regarding the starred ** section above would be very useful too.)


Cheers,

Could you tell us the name of the host app and the plug-in API, so that we can look it up and figure out what it wants you to do?


Without having a better idea of what the app expects, all we can do is make guesses.

The host application is Adobe Premiere Pro, using a (what I think is a) proprietary API/SDK published by Adobe.

The plugin is a capture plugin that captures incoming video frames from specialized hardware and writes these frames to a .mov file.

I also need to provide a continuous preview of the incoming video frames to the host application, i.e. Adobe Premiere.


I'm not sure how helpful this additional info will be, but thanks for your help already.

I have one or two more things to try, and if I have no more success I will try contacting the devs @ Adobe for some help.


I really think my poor understanding of Cocoa and drawing mechanisms is the culprit.


On a slightly positive note!!

Using my current setup, I have seen a part of the desired image drawn to part of the screen in question, but only when I grabbed and moved the window inside the GUI, and the portion of the image that was displayed did not update correctly. So I think I am close(ish). However, it is not repeatable...


Cheers again,

So this is using Premiere Pro CC release 2 SDK, and you're getting this info about the destination view from the recOpenParms call, right?

// recDisplayPos - Describes the display position for preview frames.
typedef struct {
     prWnd     wind;     // A Windows HWND or Mac OS NSView*
     int       originTop;
     int       originLeft;
     int       dispWidth;
     int       dispHeight;
     int       mustresize;
} recDisplayPos;


Is the NSImage coming from your hardware?

Yup, that's exactly the info/struct I have to use for drawing into.


A few notes:

Obviously, while I first get it from recOpenParms, I am actually assigning it in the recmod_Open call.

As I mentioned before, I spawn a separate thread that controls the capture, creation of the NSImage, and eventual display of the image to said location.


Yeah, the NSImage is coming from our hardware.


Cheers,

It's probably not optimal, but since the app isn't telling you when to draw I would start with trying to add an NSImageView subview that displays the NSImage, and then marking that new NSImageView as dirty each time you finish drawing to the preview image.


By default, the coordinate system of NSView has the origin at the bottom left corner, but it supports having the origin at the top left if the view is flipped.

var dispRect = CGRect(x: originLeft, y: originTop, width: dispWidth, height: dispHeight)
// Convert origin.y from top-left-relative to bottom-left-relative if the view is not flipped
if !prWnd.isFlipped {
    dispRect.origin.y = prWnd.bounds.height - dispRect.origin.y - dispRect.size.height
}


Create the image view, set it to display your image, and add it as a subview.

let imageView = NSImageView(frame: dispRect)
imageView.image = pImage
prWnd.addSubview(imageView)


Each time you draw a preview frame, mark the imageview as dirty.

// ... draw current preview frame image
imageView.needsDisplay = true


Hopefully that at least helps you get closer to getting it working.
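
Since the rest of your code is Objective-C, the same idea might be sketched like this. Take it as a rough translation, not tested against the Premiere SDK; hostView stands in for the NSView* from recDisplayPos, and pImage for your preview NSImage:

```objc
// Sketch: add an NSImageView subview once, then mark it dirty per frame.
NSRect dispRect = NSMakeRect(originLeft, originTop, dispWidth, dispHeight);
if (![hostView isFlipped])
{
    // Convert top-left-relative origin to the view's bottom-left origin.
    dispRect.origin.y = [hostView bounds].size.height - originTop - dispHeight;
}
NSImageView *imageView = [[NSImageView alloc] initWithFrame:dispRect];
[imageView setImage:pImage];
[hostView addSubview:imageView];

// Each time a new preview frame has been drawn into pImage:
[imageView setNeedsDisplay:YES];
```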

