10 votes

Mac OS X 10.7.4

I am drawing into an offscreen graphics context created via +[NSGraphicsContext graphicsContextWithBitmapImageRep:].

When I draw into this graphics context using the NSBezierPath class, everything works as expected.

However, when I draw into this graphics context using the CGContextRef C functions, I see no results of my drawing. Nothing works.

For reasons I won't get into, I really need to draw using the CGContextRef functions (rather than the Cocoa NSBezierPath class).

My code sample is listed below. I am attempting to draw a simple "X". One stroke using NSBezierPath, one stroke using CGContextRef C functions. The first stroke works, the second does not. What am I doing wrong?

NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;

NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
   initWithBitmapDataPlanes:NULL
   pixelsWide:imgSize.width
   pixelsHigh:imgSize.height
   bitsPerSample:8
   samplesPerPixel:4
   hasAlpha:YES
   isPlanar:NO
   colorSpaceName:NSDeviceRGBColorSpace
   bitmapFormat:NSAlphaFirstBitmapFormat
   bytesPerRow:0
   bitsPerPixel:0] autorelease];

// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext setCurrentContext:g];

NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];

CGContextRef ctx = [g graphicsPort];

// lock and draw
[img lockFocus];

// draw first stroke with Cocoa. this works!
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics. This doesn't work!
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);

[img unlockFocus];

5 Answers

31 votes

You don't specify how you are looking at the results. I assume you are looking at the NSImage img and not the NSBitmapImageRep offscreenRep.

When you call [img lockFocus], you are changing the current NSGraphicsContext to be a context to draw into img. So, the NSBezierPath drawing goes into img and that's what you see. The CG drawing goes into offscreenRep which you aren't looking at.
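
If you really wanted to keep the lockFocus approach, the other way to fix it would be to fetch the CGContextRef only after locking focus, so that it refers to the context that draws into img. A minimal sketch:

// Sketch: grab the CGContextRef *after* lockFocus so it targets img, not offscreenRep.
[img lockFocus];
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// ... NSBezierPath and CGContext drawing now both land in img ...
[img unlockFocus];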

Instead of locking focus onto an NSImage and drawing into it, create an NSImage and add the offscreenRep as one of its reps.

NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;

NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
   initWithBitmapDataPlanes:NULL
   pixelsWide:imgSize.width
   pixelsHigh:imgSize.height
   bitsPerSample:8
   samplesPerPixel:4
   hasAlpha:YES
   isPlanar:NO
   colorSpaceName:NSDeviceRGBColorSpace
   bitmapFormat:NSAlphaFirstBitmapFormat
   bytesPerRow:0
   bitsPerPixel:0] autorelease];

// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:g];

// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics
CGContextRef ctx = [g graphicsPort];    
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);

// done drawing, so set the current context back to what it was
[NSGraphicsContext restoreGraphicsState];

// create an NSImage and add the rep to it    
NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];
[img addRepresentation:offscreenRep];

// then go on to save or view the NSImage
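
For example, to save the result as a PNG you could ask the bitmap rep for encoded data (a sketch; the file path is just an example and error handling is omitted):

// Encode offscreenRep as PNG data and write it to disk.
NSData *pngData = [offscreenRep representationUsingType:NSPNGFileType properties:@{}];
[pngData writeToFile:@"/tmp/stroke-test.png" atomically:YES];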
6 votes

I wonder why everyone writes such complicated code for drawing into an image. Unless you care about the exact bitmap representation of an image (and usually you don't!), there is no need to create one. You can just create a blank image and draw into it directly; in that case the system will create an appropriate bitmap representation (or perhaps a PDF representation, or whatever it considers more suitable for drawing).

The documentation of the init method

- (instancetype)initWithSize:(NSSize)aSize

which has existed since Mac OS X 10.0 and still isn't deprecated, clearly says:

After using this method to initialize an image object, you are expected to provide the image contents before trying to draw the image. You might lock focus on the image and draw to the image or you might explicitly add an image representation that you created.

So here's how I would have written that code:

NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSImage * image = [[NSImage alloc] initWithSize:imgRect.size];

[image lockFocus];
// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];

// draw second stroke with Core Graphics
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgRect.size.width, imgRect.size.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);
[image unlockFocus];

That's all folks.

A note on graphicsPort: it is actually declared as void *:

@property (readonly) void * graphicsPort 

and documented as

The low-level, platform-specific graphics context represented by the graphic port.

That could be pretty much anything, but the final note says:

In OS X, this is the Core Graphics context, a CGContextRef object (opaque type).

This property has since been deprecated in favor of the new property

@property (readonly) CGContextRef CGContext

which is only available in 10.10 and later. If you have to support older systems, it's fine to still use graphicsPort.
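
If the same code has to run on both sides of that line, one option (a sketch, not the only way) is to check for the newer accessor at runtime and fall back to graphicsPort otherwise:

// Prefer the 10.10+ CGContext property when present; otherwise fall back to
// the older graphicsPort, which needs a cast because it is typed void *.
NSGraphicsContext *nsCtx = [NSGraphicsContext currentContext];
CGContextRef cg = [nsCtx respondsToSelector:@selector(CGContext)]
    ? [nsCtx CGContext]
    : (CGContextRef)[nsCtx graphicsPort];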

5 votes

The solution by @Robin Stewart worked well for me. I was able to condense it to an NSImage extension.

extension NSImage {
    convenience init(size: CGSize, actions: (CGContext) -> Void) {
        self.init(size: size)
        lockFocusFlipped(false)
        actions(NSGraphicsContext.current!.cgContext)
        unlockFocus()
    }
}

Usage:

let image = NSImage(size: CGSize(width: 100, height: 100), actions: { ctx in
    // Drawing commands here, for example:
    // ctx.setFillColor(NSColor.white.cgColor)
    // ctx.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
})
1 vote

Swift 4: I use this code, which replicates the convenient API from UIKit (but runs on macOS):

public class UIGraphicsImageRenderer {
    let size: CGSize

    init(size: CGSize) {
        self.size = size
    }

    func image(actions: (CGContext) -> Void) -> NSImage {
        let image = NSImage(size: size)
        image.lockFocusFlipped(true)
        actions(NSGraphicsContext.current!.cgContext)
        image.unlockFocus()
        return image
    }
}

Usage:

let renderer = UIGraphicsImageRenderer(size: imageSize)
let image = renderer.image { ctx in
    // Drawing commands here
}
1 vote

Here are 3 ways of drawing the same image (Swift 4).

The method suggested by @Mecki produces an image without blurring artefacts (such as blurred curves). The other two approaches do show scaling/antialiasing issues, though these can be mitigated by adjusting the CGContext settings (not included in this example).

public struct ImageFactory {

   public static func image(size: CGSize, fillColor: NSColor, rounded: Bool = false) -> NSImage? {
      let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
      return drawImage(size: size) { context in
         if rounded {
            let radius = min(size.height, size.width)
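            // Note: `cgPath` below assumes a bridging helper on NSBezierPath (or a
            // recent macOS SDK); older SDKs do not provide it out of the box.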
            let path = NSBezierPath(roundedRect: rect, xRadius: 0.5 * radius, yRadius: 0.5 * radius).cgPath
            context.addPath(path)
            context.clip()
         }
         context.setFillColor(fillColor.cgColor)
         context.fill(rect)
      }
   }

}

extension ImageFactory {

   private static func drawImage(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      return drawImageInLockedImageContext(size: size, drawingCalls: drawingCalls)
   }

   private static func drawImageInLockedImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      let image = NSImage(size: size)
      image.lockFocus()
      guard let context = NSGraphicsContext.current else {
         image.unlockFocus()
         return nil
      }
      drawingCalls(context.cgContext)
      image.unlockFocus()
      return image
   }

   // Has scaling or antialiasing issues, such as blurred curves.
   private static func drawImageInBitmapImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      guard let offscreenRep = NSBitmapImageRep(pixelsWide: Int(size.width), pixelsHigh: Int(size.height),
                                                bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true,
                                                isPlanar: false, colorSpaceName: .deviceRGB) else {
                                                   return nil
      }
      guard let context = NSGraphicsContext(bitmapImageRep: offscreenRep) else {
         return nil
      }
      NSGraphicsContext.saveGraphicsState()
      NSGraphicsContext.current = context
      drawingCalls(context.cgContext)
      NSGraphicsContext.restoreGraphicsState()
      let img = NSImage(size: size)
      img.addRepresentation(offscreenRep)
      return img
   }

   // Has scaling or antialiasing issues, such as blurred curves.
   private static func drawImageInCGContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
      let colorSpace = CGColorSpaceCreateDeviceRGB()
      let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
      guard let context = CGContext(data: nil, width: Int(size.width), height: Int(size.height), bitsPerComponent: 8,
                                    bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
         return nil
      }
      drawingCalls(context)
      guard let image = context.makeImage() else {
         return nil
      }
      return NSImage(cgImage: image, size: size)
   }
}