I'm attempting to create an image "loupe" for my application that can be used to inspect images at different magnification levels, and I've run into a roadblock. I'm using Quartz to create a `CGImageRef` snapshot of a selected portion of the display my app's window is on. The problem is that the nomenclature used by all of the different OS X technologies has me really confused.
The function I'm using is `CGDisplayCreateImageForRect(CGDirectDisplayID display, CGRect rect)`. The documentation describes the `rect` parameter as the "rectangle, specified in display space, for the portion of the display being copied into the image." The problem is that the output I'm getting in my console isn't what I was expecting. I've tried so many different conversions, transformations, coordinate flips, etc., that I'm 100% confused now.
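For reference, here's a simplified sketch of what I'm currently doing (`loupeView` is a stand-in name for my actual view, and I'm only handling the main display for now):

```objc
#import <Cocoa/Cocoa.h>

// Convert the view's bounds to window coordinates, then to Cocoa's
// global screen coordinates (both bottom-left origin).
NSRect rectInWindow = [loupeView convertRect:[loupeView bounds] toView:nil];
NSRect rectOnScreen = [[loupeView window] convertRectToScreen:rectInWindow];

// Passing the Cocoa screen rect (bottom-left origin) straight through,
// which is presumably where my coordinates go wrong.
CGImageRef snapshot =
    CGDisplayCreateImageForRect(CGMainDisplayID(),
                                NSRectToCGRect(rectOnScreen));
NSLog(@"requested rect: %@", NSStringFromRect(rectOnScreen));
// ... draw the magnified image, then:
CGImageRelease(snapshot);
```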
So, I guess my question boils down to this: does screen space == device space == display space? And how does one properly convert a view's frame coordinates (bottom-left origin; the view lives in a custom borderless window) to match the display's coordinates (top-left origin)?
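My best guess so far is something like the following. The y-flip against the primary display's height is based on my understanding of how the two coordinate systems relate, and the subtraction of the target display's own origin at the end is pure conjecture on my part:

```objc
// Cocoa's global screen space: origin at the bottom-left of the primary
// screen, y increasing upward. Quartz display space: origin at the
// top-left of the primary display, y increasing downward.
NSRect rectInWindow = [loupeView convertRect:[loupeView bounds] toView:nil];
NSRect rectOnScreen = [[loupeView window] convertRectToScreen:rectInWindow];

CGRect rect = NSRectToCGRect(rectOnScreen);
CGFloat primaryHeight = CGDisplayBounds(CGMainDisplayID()).size.height;
rect.origin.y = primaryHeight - rect.origin.y - rect.size.height; // flip y

// Look up the CGDirectDisplayID for the screen the window is on.
NSScreen *screen = [[loupeView window] screen];
CGDirectDisplayID displayID =
    [[[screen deviceDescription] objectForKey:@"NSScreenNumber"] unsignedIntValue];

// Assumption: the rect should be relative to the target display's own
// top-left corner, so subtract that display's global bounds. This is one
// of the things I'm not sure about.
CGRect displayBounds = CGDisplayBounds(displayID);
rect.origin.x -= displayBounds.origin.x;
rect.origin.y -= displayBounds.origin.y;

CGImageRef snapshot = CGDisplayCreateImageForRect(displayID, rect);
```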
If someone could set me straight, or point me in the right direction, I'd be forever appreciative. Thanks!