As a courtesy, this is a full free rendering of my book, Programming iOS 6, by Matt Neuburg. Copyright 2013 Matt Neuburg. Please note that this edition is outdated; the current books are iOS 9 Programming Fundamentals with Swift and Programming iOS 9. If my work has been of help to you, please consider purchasing one or both of them. Thank you!

Chapter 15. Drawing

Many UIView subclasses, such as a UIButton or a UITextField, know how to draw themselves; sooner or later, though, you’re going to want to do some drawing of your own. A class like UIImageView will display a static image; you can generate that image dynamically by drawing it in code. And a pure UIView does little or no drawing of its own; you can draw its appearance.

Drawing is not difficult, but it is a very large topic. This chapter will make you comfortable with the basic principles, so that you can consult and understand Apple’s documentation when you need further details.

UIImage and UIImageView

The basic UIKit image class is UIImage. UIImage can read a file from disk, so if an image does not need to be created dynamically, but has already been created before your app runs, then drawing may be as simple as providing an image file as a resource in your app’s bundle. The system knows how to work with many standard image file types, such as TIFF, JPEG, GIF, and PNG. You can also obtain image data in some other way, such as by downloading it, and transform this into a UIImage. Conversely, you can draw your own image for display in your interface or for saving to disk (image file output is discussed in Chapter 36).

In the very simplest case, an image file in your app’s bundle can be obtained through the UIImage class method imageNamed:. This method looks at the top level of your app’s bundle for an image file with the supplied name, including the file extension, and reads it as a UIImage instance. A nice thing about this approach is that memory management is handled for you: the image data may be cached in memory, and if you ask for the same image by calling imageNamed: again later, the cached data may be supplied immediately. You can also read an image file from anywhere in your app’s bundle using the class method imageWithContentsOfFile: or the instance method initWithContentsOfFile:, both of which expect a pathname string; you can get a reference to your app’s bundle with [NSBundle mainBundle], and NSBundle then provides instance methods for getting the pathname of a file within the bundle, such as pathForResource:ofType:.
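For instance, assuming an image file Mars.png at the top level of the app’s bundle, either approach yields a UIImage:

UIImage* mars = [UIImage imageNamed:@"Mars.png"]; // may be cached
// or, bypassing the cache, by explicit pathname:
NSString* path =
    [[NSBundle mainBundle] pathForResource:@"Mars" ofType:@"png"];
UIImage* mars2 = [UIImage imageWithContentsOfFile:path];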

Many built-in Cocoa interface objects will accept a UIImage as part of how they draw themselves; for example, a UIButton can display an image, and a UINavigationBar or a UITabBar can have a background image. I’ll discuss those in Chapter 25. But when you simply want an image to appear in your interface, you’ll probably hand it to a UIImageView, which has the most knowledge and flexibility with regard to displaying images and is intended for this purpose. If a UIImageView instance begins life in a nib and is to display a UIImage from a file in your app’s bundle, you won’t even need any code; the UIImageView can be set to that file directly in the nib. (This mechanism works most easily if the file will be at the top level of the app’s bundle.)

A UIImageView can actually have two images, one assigned to its image property and the other assigned to its highlightedImage property; the value of the UIImageView’s highlighted property dictates which of the two is displayed. A UIImageView does not automatically highlight itself, the way a button does, for example, merely because the user taps it. However, there are certain situations where a UIImageView will respond to the highlighting of its surroundings; for example, within a table view cell, a UIImageView will show its highlighted image when the cell is highlighted. You can, of course, also use the notion of UIImageView highlighting yourself however you like.
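Here’s a sketch of that mechanism (the second image name is hypothetical):

UIImageView* iv =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Mars.png"]
                      highlightedImage:[UIImage imageNamed:@"MarsGlow.png"]];
iv.highlighted = YES; // now the highlightedImage is displayed
iv.highlighted = NO;  // now the image is displayed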

When an image is obtained by name from the bundle, as with imageNamed: or the name you enter in the nib for a UIImageView’s image, a file with the same name extended by ~ipad will automatically be used if the app is running on an iPad. You can use this in a universal app to supply different images automatically depending on whether the app runs on an iPhone or iPod touch, on the one hand, or on an iPad, on the other. This is true not just for images but for any resource obtained by name from the bundle. See Apple’s Resource Programming Guide.

Similarly, on a device with a double-resolution screen, when an image is obtained by name from the bundle, a file with the same name extended by @2x, if there is one, will be used automatically, with the resulting UIImage marked as double-resolution by assigning it a scale property value of 2.0. In this way, your app can contain both a single-resolution and a double-resolution version of an image file; on the double-resolution display device, the double-resolution version of the image is used, and is drawn at the same size as the single-resolution image. Thus, on the double-resolution screen, your code continues to work without change, but your images look sharper.

A UIImageView is a UIView, so it can have a background color in addition to its image, it can have an alpha (transparency) value, and so forth (see Chapter 14). A UIImageView without a background color is invisible except for its image, so the image simply appears in the interface, without the user being aware that it resides in a rectangular host. An image may have areas that are transparent, and a UIImageView will respect this; thus an image of any shape can appear. A UIImageView without an image and without a background color is invisible, so you could start with an empty UIImageView in the place where you will later need an image and subsequently assign the image in code. You can assign a new image to substitute one image for another.

How a UIImageView draws its image depends upon the setting of its contentMode property. (The contentMode property is inherited from UIView; I’ll discuss its more general purpose later in this chapter.) For example, UIViewContentModeScaleToFill means the image’s width and height are set to the width and height of the view, thus filling the view completely even if this alters the image’s aspect ratio; UIViewContentModeCenter means the image is drawn centered in the view without altering its size. The best way to get a feel for the meanings of the various contentMode settings is to assign a UIImageView a small image in a nib and then, in the Attributes inspector, change the Mode pop-up menu, and see where and how the image draws itself.
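The same experiment can be performed in code; for example (assuming iv is an existing UIImageView whose bounds are larger than its image):

iv.contentMode = UIViewContentModeScaleToFill;    // fill the view, possibly distorting
iv.contentMode = UIViewContentModeScaleAspectFit; // fit within the view, preserving aspect ratio
iv.contentMode = UIViewContentModeCenter;         // natural size, centered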

When creating a UIImageView in code, you can take advantage of a convenience initializer, initWithImage: (or initWithImage:highlightedImage:). The default contentMode is UIViewContentModeScaleToFill, but the image is not initially scaled; rather, the view itself is sized to match the image. You will still probably need to position the UIImageView correctly in its superview. In this example, I’ll put a picture of the planet Mars in the center of the app’s interface (Figure 15.1):

UIImageView* iv =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Mars.png"]];
[self.window.rootViewController.view addSubview: iv];
iv.center = CGPointMake(CGRectGetMidX(iv.superview.bounds),
                        CGRectGetMidY(iv.superview.bounds));
iv.frame = CGRectIntegral(iv.frame);
figs/pios_1501.png

Figure 15.1. Mars appears in my interface


If we have a second image file called Mars@2x.png, it will be used on a double-resolution device.

Under autolayout (Chapter 14), the size of an image assigned to a UIImageView becomes that UIImageView’s intrinsicContentSize — even if the UIImageView already exists. This can lead to new behavior if your code adopts autolayout. Previously, assigning an image to an existing UIImageView (as opposed to creating the UIImageView with initWithImage:) had no effect on the UIImageView’s bounds; under autolayout, it calls setNeedsLayout and, at layout time, the UIImageView’s bounds are changed. Thus, this code, too, will display Mars in the center of the interface:

UIImageView* iv = [UIImageView new];
[self.window.rootViewController.view addSubview: iv];
iv.translatesAutoresizingMaskIntoConstraints = NO;
[iv.superview addConstraint:
 [NSLayoutConstraint
  constraintWithItem:iv attribute:NSLayoutAttributeCenterX
  relatedBy:NSLayoutRelationEqual
  toItem:iv.superview attribute:NSLayoutAttributeCenterX
  multiplier:1 constant:0]];
[iv.superview addConstraint:
 [NSLayoutConstraint
  constraintWithItem:iv attribute:NSLayoutAttributeCenterY
  relatedBy:NSLayoutRelationEqual
  toItem:iv.superview attribute:NSLayoutAttributeCenterY
  multiplier:1 constant:0]];
iv.image = [UIImage imageNamed:@"Mars.png"];

(If a UIImageView is assigned both an image and a highlightedImage, and if they are of different sizes, the view’s intrinsicContentSize adopts the size of the image, not the highlightedImage.)

A UIImage can be transformed into a resizable image, by sending it the resizableImageWithCapInsets:resizingMode: message. (This method is new in iOS 6, superseding the less flexible resizableImageWithCapInsets: introduced in iOS 5 — which itself superseded the notion of a stretchable image from previous system versions.) The capInsets: argument is a UIEdgeInsets, a struct consisting of four floats representing inset values starting at the top and moving counterclockwise — top, left, bottom, right. They represent distances inwards from the edges of the image. In a context (such as a UIImageView) larger than the image, a resizable image can behave in one of two ways, depending on the resizingMode: value:

UIImageResizingModeTile
The interior rectangle of the inset area is tiled (repeated) in the interior; each edge is formed by tiling the corresponding edge rectangle outside the inset area. The four corner rectangles outside the inset area are shown unchanged at the four corners.
UIImageResizingModeStretch
The interior rectangle of the inset area is stretched once to fill the interior; each edge is formed by stretching the corresponding edge rectangle outside the inset area once. The four corner rectangles outside the inset area are shown unchanged at the four corners.

Resizable images may seem like a curiosity at first, but in fact they are extremely useful. Some places in the interface require them; for example, a custom image that serves as the background of a slider or progress view (Chapter 25) must be resizable, so that it can fill a space of any length. And you’ll frequently want to fill a background by tiling a texture or by extending an existing image.

First, I’ll illustrate tiling:

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
UIImage* marsTiled = [mars resizableImageWithCapInsets:UIEdgeInsetsZero
                      resizingMode: UIImageResizingModeTile];
UIImageView* iv = [[UIImageView alloc] initWithFrame:
                   CGRectMake(20,5,mars.size.width*2,mars.size.height*4)];
iv.image = marsTiled;

The image view is eight times the size of the Mars image, and the inset area is the entire image, so we see eight complete copies of the Mars image (Figure 15.2).

figs/pios_1502.png

Figure 15.2. Tiling the entire image of Mars


Now we’ll tile the interior of the image, changing the capInsets: argument from the previous code (Figure 15.3):

UIImage* marsTiled = [mars resizableImageWithCapInsets:
                      UIEdgeInsetsMake(mars.size.height/4.0,
                                       mars.size.width/4.0,
                                       mars.size.height/4.0,
                                       mars.size.width/4.0)
                      resizingMode: UIImageResizingModeTile];
figs/pios_1503.png

Figure 15.3. Tiling the interior of Mars


Next, I’ll illustrate stretching. We’ll start by changing just the resizingMode: from the previous code (Figure 15.4):

UIImage* marsTiled = [mars resizableImageWithCapInsets:
                      UIEdgeInsetsMake(mars.size.height/4.0,
                                       mars.size.width/4.0,
                                       mars.size.height/4.0,
                                       mars.size.width/4.0)
                      resizingMode: UIImageResizingModeStretch];
figs/pios_1504.png

Figure 15.4. Stretching the interior of Mars


A common stretching strategy is to make almost half the original image serve as a cap inset, leaving just a pixel or two in the center to fill the entire interior of the resulting image (Figure 15.5); this generates a stretched border whose corners are the corners of the original image:

UIImage* marsTiled = [mars resizableImageWithCapInsets:
                      UIEdgeInsetsMake(mars.size.height/2.0 - 1,
                                       mars.size.width/2.0 - 1,
                                       mars.size.height/2.0 - 1,
                                       mars.size.width/2.0 - 1)
                      resizingMode: UIImageResizingModeStretch];
figs/pios_1505.png

Figure 15.5. Stretching a single pixel at the interior of Mars


Graphics Contexts

UIImageView draws an image for you and takes care of all the details; in many cases, it will be all you’ll need. Eventually, though, you may want to do some drawing yourself, directly, in code. To do so, you will always need a graphics context.

A graphics context is basically a place you can draw. Conversely, you can’t draw in code unless you’ve got a graphics context. There are several ways in which you might obtain a graphics context; in this chapter I will concentrate on two, which have proven in my experience to be far and away the most common:

You create an image context
The function UIGraphicsBeginImageContextWithOptions creates a graphics context suitable for use as an image. You then draw into this context to generate the image. When you’ve done that, you call UIGraphicsGetImageFromCurrentImageContext to turn the context into a UIImage, and then UIGraphicsEndImageContext to dismiss the context. Now you have a UIImage that you can display in your interface or draw into some other graphics context or save as a file.
Cocoa hands you a graphics context
You subclass UIView and implement drawRect:. At the time your drawRect: implementation is called, Cocoa has already created a graphics context and is asking you to draw into it, right now; whatever you draw is what the UIView will display. (A slight variant of this situation is that you subclass a CALayer and implement drawInContext:, or make some object the delegate of a layer and implement drawLayer:inContext:; layers are discussed in Chapter 16.)

Moreover, at any given moment there either is or is not a current graphics context:

  • UIGraphicsBeginImageContextWithOptions not only creates an image context, it also makes that context the current graphics context.
  • When drawRect: is called, the UIView’s drawing context is already the current graphics context.
  • When a callback hands you a context: argument, no context has been made the current graphics context; that argument is merely a reference to a graphics context.

What beginners find most confusing about drawing is that there are two separate sets of tools with which you can draw, and they take different attitudes towards the context in which they will draw:

UIKit

Various Objective-C classes know how to draw themselves; these include UIImage, NSString (for drawing text), UIBezierPath (for drawing shapes), and UIColor. Some of these classes provide convenience methods with limited abilities; others are extremely powerful. In many cases, UIKit will be all you’ll need.

With UIKit, you can draw only into the current context. So if you’re in a UIGraphicsBeginImageContextWithOptions or drawRect: situation, you can use the UIKit convenience methods directly; there is a current context and it’s the one you want to draw into. If you’ve been handed a context: argument, on the other hand, then if you want to use the UIKit convenience methods, you’ll have to make that context the current context; you do this by calling UIGraphicsPushContext (and be sure to restore things with UIGraphicsPopContext later).

Core Graphics

This is the full drawing API. Core Graphics, often referred to as Quartz, or Quartz 2D, is the drawing system that underlies all iOS drawing — UIKit drawing is built on top of it — so it is low-level and consists of C functions. There are a lot of them! This chapter will familiarize you with the fundamentals; for complete information, you’ll want to study Apple’s Quartz 2D Programming Guide.

With Core Graphics, you must specify a graphics context (a CGContextRef) to draw into, explicitly, in every function call. If you’ve been handed a context: argument, then, hey presto, you have a graphics context, and it’s probably the graphics context you want to draw into. But in a UIGraphicsBeginImageContextWithOptions or drawRect: situation, you have no reference to a context; to use Core Graphics, you need to get such a reference. Since the context you want to draw into is the current graphics context, you call UIGraphicsGetCurrentContext to get the needed reference.

So we have two sets of tools and three ways in which a context might be supplied; that makes six ways of drawing, and in case you’re confused, I’ll now demonstrate all six of them. Without worrying just yet about the actual drawing commands, focus your attention on how the context is specified and on whether we’re using UIKit or Core Graphics. First, I’ll draw a blue circle by implementing a UIView subclass’s drawRect:, using UIKit to draw into the current context, which Cocoa has already prepared for me:

- (void) drawRect: (CGRect) rect {
    UIBezierPath* p =
        [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0,0,100,100)];
    [[UIColor blueColor] setFill];
    [p fill];
}

Now I’ll do the same thing with Core Graphics; this will require that I first get a reference to the current context:

- (void) drawRect: (CGRect) rect {
    CGContextRef con = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(con, CGRectMake(0,0,100,100));
    CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
    CGContextFillPath(con);
}

Next, I’ll implement a UIView subclass’s drawLayer:inContext:. In this case, we’re handed a reference to a context, but it isn’t the current context. So I have to make it the current context in order to use UIKit:

- (void)drawLayer:(CALayer*)lay inContext:(CGContextRef)con {
    UIGraphicsPushContext(con);
    UIBezierPath* p =
        [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0,0,100,100)];
    [[UIColor blueColor] setFill];
    [p fill];
    UIGraphicsPopContext();
}

To use Core Graphics in drawLayer:inContext:, I simply keep referring to the context I was handed:

- (void)drawLayer:(CALayer*)lay inContext:(CGContextRef)con {
    CGContextAddEllipseInRect(con, CGRectMake(0,0,100,100));
    CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
    CGContextFillPath(con);
}

Finally, for the sake of completeness, let’s make a UIImage of a blue circle. We can do this at any time (we don’t need to wait for some particular method to be called) and in any class (we don’t need to be in a UIView subclass). First, I’ll use UIKit:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(100,100), NO, 0);
UIBezierPath* p =
    [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0,0,100,100)];
[[UIColor blueColor] setFill];
[p fill];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// im is the blue circle image, do something with it here ...

Here’s the same thing using Core Graphics:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(100,100), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextAddEllipseInRect(con, CGRectMake(0,0,100,100));
CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
CGContextFillPath(con);
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// im is the blue circle image, do something with it here ...

You may be wondering about the arguments to UIGraphicsBeginImageContextWithOptions. The first argument is obviously the size of the image to be created. The second argument declares whether the image should be opaque; if I had passed YES instead of NO here, my image would have a black background, which I don’t want. The third argument specifies the image scale, corresponding to the UIImage scale property I discussed earlier; by passing 0, I’m telling the system to set the scale for me in accordance with the main screen resolution, so my image will look good on both single-resolution and double-resolution devices.

You don’t have to use UIKit or Core Graphics exclusively; on the contrary, you can intermingle UIKit calls and Core Graphics calls to operate on the same graphics context. They merely represent two different ways of talking about the same graphics context.

UIImage Drawing

A UIImage provides methods for drawing itself into the current context. We now know how to obtain an image context and make it the current context, so we can experiment with these methods. Here, I’ll make a UIImage consisting of two pictures of Mars side by side:

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
CGSize sz = [mars size];
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*2, sz.height), NO, 0);
[mars drawAtPoint:CGPointMake(0,0)];
[mars drawAtPoint:CGPointMake(sz.width,0)];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The resulting UIImage im is suitable anywhere you would use a UIImage. For instance, you could hand it over to a visible UIImageView, thus causing the image to appear onscreen (Figure 15.6).

figs/pios_1506.png

Figure 15.6. Two images of Mars combined side by side


Additional UIImage methods let you scale an image into a desired rectangle as you draw, and specify the compositing (blend) mode whereby the image should combine with whatever is already present. To illustrate, I’ll create an image showing Mars centered in another image of Mars that’s twice as large, using the Multiply blend mode (Figure 15.7):

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
CGSize sz = [mars size];
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*2, sz.height*2), NO, 0);
[mars drawInRect:CGRectMake(0,0,sz.width*2,sz.height*2)];
[mars drawInRect:CGRectMake(sz.width/2.0, sz.height/2.0, sz.width, sz.height)
       blendMode:kCGBlendModeMultiply alpha:1.0];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
figs/pios_1507.png

Figure 15.7. Two images of Mars in different sizes, composited


There is no UIImage drawing method for specifying the source rectangle — that is, for specifying that you want to extract a smaller region of the original image. You can work around this by specifying a smaller graphics context and positioning the image drawing so that the desired region falls into it. For example, to obtain an image of the right half of Mars, you’d make a graphics context half the width of the mars image, and then draw mars shifted left, so that only its right half intersects the graphics context. There is no harm in doing this, and it’s a perfectly standard device; the left half of mars simply isn’t drawn (Figure 15.8):

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
CGSize sz = [mars size];
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width/2.0, sz.height), NO, 0);
[mars drawAtPoint:CGPointMake(-sz.width/2.0, 0)];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
figs/pios_1508.png

Figure 15.8. Half the original image of Mars


CGImage Drawing

The Core Graphics version of UIImage is CGImage (actually a CGImageRef). They are easily converted to one another: a UIImage has a CGImage property that accesses its Quartz image data, and you can make a UIImage from a CGImage using imageWithCGImage: or initWithCGImage: (in real life, you are likely to use their more configurable siblings, imageWithCGImage:scale:orientation: and initWithCGImage:scale:orientation:).

A CGImage lets you create a new image directly from a rectangular region of the original image, which you can’t do with UIImage. (A CGImage has other powers a UIImage doesn’t have; for example, you can apply an image mask to a CGImage.) I’ll demonstrate by splitting the image of Mars in half and drawing the two halves separately (Figure 15.9). Observe that we are now operating in the CFTypeRef world and must take care to manage memory manually; ARC won’t help us here (Chapter 12):

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
// extract each half as a CGImage
CGSize sz = [mars size];
CGImageRef marsLeft = CGImageCreateWithImageInRect([mars CGImage],
                       CGRectMake(0,0,sz.width/2.0,sz.height));
CGImageRef marsRight = CGImageCreateWithImageInRect([mars CGImage],
                        CGRectMake(sz.width/2.0,0,sz.width/2.0,sz.height));
// draw each CGImage into an image context
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*1.5, sz.height), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con,
                   CGRectMake(0,0,sz.width/2.0,sz.height), marsLeft);
CGContextDrawImage(con,
                   CGRectMake(sz.width,0,sz.width/2.0,sz.height), marsRight);
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(marsLeft); CGImageRelease(marsRight);
figs/pios_1509.png

Figure 15.9. Image of Mars split in half


But there’s a problem with that example: the drawing is upside-down! It isn’t rotated; it’s mirrored top to bottom, or, to use the technical term, flipped. This phenomenon can arise when you create a CGImage and then draw it with CGContextDrawImage, and is due to a mismatch in the native coordinate systems of the source and target contexts.

There are various ways of compensating for this mismatch between the coordinate systems. One is to draw the CGImage into an intermediate UIImage and extract another CGImage from that. Example 15.1 presents a utility function for doing this.

Example 15.1. Utility for flipping an image drawing

CGImageRef flip (CGImageRef im) {
    CGSize sz = CGSizeMake(CGImageGetWidth(im), CGImageGetHeight(im));
    UIGraphicsBeginImageContextWithOptions(sz, NO, 0);
    CGContextDrawImage(UIGraphicsGetCurrentContext(),
                       CGRectMake(0, 0, sz.width, sz.height), im);
    // the result is owned by an autoreleased UIImage, so it remains valid
    // only until the autorelease pool drains; call CGImageRetain if you
    // need it to live longer
    CGImageRef result = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
    UIGraphicsEndImageContext();
    return result;
}

Armed with the utility function from Example 15.1, we can now draw the halves of Mars the right way up in the previous example:

CGContextDrawImage(con, CGRectMake(0,0,sz.width/2.0,sz.height),
                   flip(marsLeft));
CGContextDrawImage(con, CGRectMake(sz.width,0,sz.width/2.0,sz.height),
                   flip(marsRight));

However, we’ve still got a problem: on a double-resolution device, if there is a high-resolution (@2x) version of our image file, the drawing comes out all wrong. The reason is that we are loading our starting Mars image using imageNamed:, which automatically substitutes the high-resolution version of the image on the high-resolution device. The UIImage compensates for the doubled size of the image by setting its own scale property to match. But a CGImage doesn’t have a scale property, and knows nothing of the fact that the image dimensions are doubled!

When you call a UIImage’s CGImage method, therefore, you can’t assume that the resulting CGImage is the same size as the original UIImage; a UIImage’s size property is the same for a single-resolution image and its double-resolution counterpart, because the scale tells it how to compensate, but the CGImage of a double-resolution UIImage is twice as large in both dimensions as the CGImage of the corresponding single-resolution image.

So, in extracting a desired piece of the CGImage, we must either multiply all appropriate values by the scale or express ourselves in terms of the CGImage’s dimensions. In this case, as we are extracting the left and right halves of the image, the latter is obviously the simpler course. So here’s a version of our original code that draws correctly on either a single-resolution or a double-resolution device, and compensates for flipping:

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
CGSize sz = [mars size];
// Derive CGImage and use its dimensions to extract its halves
CGImageRef marsCG = [mars CGImage];
CGSize szCG = CGSizeMake(CGImageGetWidth(marsCG), CGImageGetHeight(marsCG));
CGImageRef marsLeft =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(0,0,szCG.width/2.0,szCG.height));
CGImageRef marsRight =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(szCG.width/2.0,0,szCG.width/2.0,szCG.height));
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*1.5, sz.height), NO, 0);
// The rest is as before, calling flip() to compensate for flipping
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0,0,sz.width/2.0,sz.height),
                   flip(marsLeft));
CGContextDrawImage(con, CGRectMake(sz.width,0,sz.width/2.0,sz.height),
                   flip(marsRight));
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(marsLeft); CGImageRelease(marsRight);

If this is starting to look rather clumsy and involved, don’t worry; I have up my sleeve another flipping solution that simplifies things considerably. Instead of calling our flip utility, you can wrap your CGImage in a UIImage before drawing. This has two big advantages:

  • The UIImage compensates for flipping automatically as it draws.
  • The UIImage can be formed in such a way as to compensate for scale: call imageWithCGImage:scale:orientation: as you form the UIImage from the CGImage.

So here’s a self-contained approach that deals with both flipping and scale:

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
CGSize sz = [mars size];
// Derive CGImage and use its dimensions to extract its halves
CGImageRef marsCG = [mars CGImage];
CGSize szCG = CGSizeMake(CGImageGetWidth(marsCG), CGImageGetHeight(marsCG));
CGImageRef marsLeft =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(0,0,szCG.width/2.0,szCG.height));
CGImageRef marsRight =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(szCG.width/2.0,0,szCG.width/2.0,szCG.height));
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*1.5, sz.height), NO, 0);
[[UIImage imageWithCGImage:marsLeft
                     scale:[mars scale]
               orientation:UIImageOrientationUp]
 drawAtPoint:CGPointMake(0,0)];
[[UIImage imageWithCGImage:marsRight
                     scale:[mars scale]
               orientation:UIImageOrientationUp]
 drawAtPoint:CGPointMake(sz.width,0)];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(marsLeft); CGImageRelease(marsRight);

Yet another solution to flipping is to apply a transform to the graphics context before drawing the CGImage, effectively flipping the context’s internal coordinate system. This is elegant, but can be confusing if there are other transforms in play. I’ll talk more about graphics context transforms later in this chapter.
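For completeness, here’s a sketch of that transform approach, as it might be applied to the context con from the earlier example; the transform flips the context’s coordinate system vertically, so a CGImage drawn into it comes out the right way up:

CGContextSaveGState(con);
// flip the context vertically before drawing the CGImage
CGContextTranslateCTM(con, 0, sz.height);
CGContextScaleCTM(con, 1.0, -1.0);
CGContextDrawImage(con, CGRectMake(0,0,sz.width/2.0,sz.height), marsLeft);
CGContextRestoreGState(con);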

CIFilter and CIImage

The “CI” in CIFilter and CIImage stands for Core Image, a technology for transforming images through mathematical filters. Core Image started life on the desktop (Mac OS X) and first migrated to iOS in iOS 5, bringing only a limited subset of the desktop filters, as iOS devices are not suited to certain processor-intensive mathematical operations. In iOS 6, many more filters are provided; of about 140 filters available on the desktop, only about 40 are absent from iOS 6. To use Core Image, you’ll have to link your target to CoreImage.framework.

A filter is a CIFilter. The available filters fall naturally into several categories:

Patterns and gradients
These filters generate a CIImage from scratch, such as a single color, a checkerboard, stripes, or a gradient, which can then be combined with other CIImages.
Compositing
These filters combine one image with another, using compositing blend modes familiar from image processing programs such as Photoshop.
Color
These filters adjust or otherwise modify the colors of an image. Thus you can alter an image’s saturation, hue, brightness, contrast, gamma and white point, exposure, shadows and highlights, and so on.
Geometric
These filters perform basic geometric transformations on an image, such as scaling, rotation, and cropping.
Transformation
These filters distort, blur, or stylize an image. They are the most intensive filters, so relatively few of them are available on iOS.
Transition
These filters provide a frame of a transition between one image and another; by asking for frames in sequence, you can animate the full transition.

The basic use of a CIFilter is quite simple; a filter essentially works as a kind of dictionary consisting of keys and values. You create the filter by supplying the string name of a filter; to learn what these names are, consult Apple’s Core Image Filter Reference, or call the CIFilter class method filterNamesInCategories: with a nil argument. Each filter has a small number of keys and values that determine its behavior; for each key that you’re interested in, you supply a key–value pair, either by calling setValue:forKey: or by supplying all the keys and values as you specify the filter name (filterWithName:keysAndValues:). In supplying values, a number must be wrapped up as an NSNumber, and there are a few supporting classes such as CIVector (like CGPoint and CGRect combined) and CIColor, whose use is easy to grasp.
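For example, here’s a sketch of configuring a sepia filter both ways; CISepiaTone is a real filter name, but the input image (inputim, a CIImage) is assumed to exist already:

CIFilter* sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:inputim forKey:@"inputImage"];
[sepia setValue:@0.8 forKey:@"inputIntensity"]; // a number, wrapped as an NSNumber
// or, equivalently, all at once:
CIFilter* sepia2 = [CIFilter filterWithName:@"CISepiaTone"
                              keysAndValues:@"inputImage", inputim,
                                            @"inputIntensity", @0.8, nil];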

A CIFilter’s keys include any image or images on which the filter is to operate; such an image must be a CIImage. You can obtain a CIImage from a CGImage with initWithCGImage:; we already know how to obtain a CGImage from a UIImage. You can also obtain a CIImage as the output of a filter; thus filters can be chained together.

As you build a chain of filters, nothing actually happens. The only calculation-intensive move comes at the very end, when you produce the result of the entire chain as a CGImage. You do this by creating a CIContext (by calling contextWithOptions:) and then calling createCGImage:fromRect:. The only mildly tricky thing here is that a CIImage doesn’t have a frame or bounds; it has an extent. You will often use the image’s extent as the second argument to createCGImage:fromRect:. The final output CGImage is ready for any purpose, such as for display in your app, for transformation into a UIImage, or for use in further drawing.
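Thus a complete minimal pipeline, from UIImage to filtered UIImage, might be sketched like this (using CISepiaTone as a stand-in for any filter, and assuming a Mars.png resource as in the earlier examples):

UIImage* mars = [UIImage imageNamed:@"Mars.png"];
CIImage* input = [[CIImage alloc] initWithCGImage:mars.CGImage];
CIFilter* f = [CIFilter filterWithName:@"CISepiaTone"
                         keysAndValues:@"inputImage", input, nil];
CIImage* output = [f valueForKey:@"outputImage"];
// the calculation-intensive moment: render the chain's result
CGImageRef sepiaCG = [[CIContext contextWithOptions:nil]
                      createCGImage:output fromRect:output.extent];
UIImage* sepiaMars = [UIImage imageWithCGImage:sepiaCG];
CGImageRelease(sepiaCG);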

To illustrate, I’ll start with an ordinary photo of myself (it’s true I’m wearing a motorcycle helmet, but it’s still ordinary) and create a circular vignette effect. We start by generating a clear color. Then we make a radial gradient. Finally, we treat the radial gradient as a mask for blending between the photo of me and the clear color: where the radial gradient is white (everything inside the gradient’s inner radius) we see just me, and where the radial gradient is black (everything outside the gradient’s outer radius) we see just the clear color, with a gradation in between, so that the image fades away in the circular band between the gradient’s radii. The result is the UIImage moi4; displaying it in an image view (Figure 15.10), we see behind it the image view’s background color — or, if the image view’s background is clear, whatever is behind the image view:

CIFilter* col = [CIFilter filterWithName:@"CIConstantColorGenerator"];
CIColor* cicol = [[CIColor alloc] initWithColor:[UIColor clearColor]];
[col setValue:cicol forKey:@"inputColor"];
CIImage* colorimage = [col valueForKey: @"outputImage"];

CIFilter* grad = [CIFilter filterWithName:@"CIRadialGradient"];
CIVector* center = [CIVector vectorWithX:moi.size.width/2.0
                                       Y:moi.size.height/2.0];
[grad setValue:center forKey:@"inputCenter"];
[grad setValue:@85 forKey:@"inputRadius0"];
[grad setValue:@100 forKey:@"inputRadius1"];
CIImage *gradimage = [grad valueForKey: @"outputImage"];

CIFilter* blend = [CIFilter filterWithName:@"CIBlendWithMask"];
[blend setValue:moi2 forKey:@"inputImage"];
[blend setValue:colorimage forKey:@"inputBackgroundImage"];
[blend setValue:gradimage forKey:@"inputMaskImage"];

CGImageRef moi3 = [[CIContext contextWithOptions:nil]
                    createCGImage:blend.outputImage
                    fromRect:moi2.extent];
moi4 = [UIImage imageWithCGImage:moi3];
CGImageRelease(moi3);

Figure 15.10. A photo of me, vignetted


In this next example, we use the same image of me to generate a kaleidoscopic tile effect (Figure 15.11):

CIFilter* tile = [CIFilter filterWithName:@"CIFourfoldRotatedTile"];
[tile setValue:moi2 forKey:@"inputImage"];
CIVector* center = [CIVector vectorWithX:moi.size.width/2.0-60
                                       Y:moi.size.height/2.0-70];
[tile setValue:center forKey:@"inputCenter"];
[tile setValue:@50 forKey:@"inputWidth"];

CGImageRef moi3 = [[CIContext contextWithOptions:nil]
                    createCGImage:tile.outputImage
                    fromRect:moi2.extent];
moi4 = [UIImage imageWithCGImage:moi3];
CGImageRelease(moi3);

Figure 15.11. A photo of me, tiled


It is also possible to draw a filter’s output directly into an OpenGL context, but OpenGL is outside the scope of this book. Core Image can also perform automatic face detection in an image.

Drawing a UIView

The most flexible way to draw a UIView is to draw it yourself. As I’ve already said, you don’t actually draw a UIView; you subclass UIView and endow the subclass with the ability to draw itself. When a UIView needs drawing, its drawRect: method is called. Overriding that method is your chance to draw. At the time that drawRect: is called, the current graphics context has already been set to the view’s own graphics context. You can use Core Graphics functions or UIKit convenience methods to draw into that context. Thus, everything I did earlier generating a UIImage and displaying it somehow in the interface could have been done instead by putting into my interface a UIView subclass that knows how to display itself as desired.

Warning

You should never call drawRect: yourself! If a view needs updating and you want its drawRect: called, send the view the setNeedsDisplay message. This will cause drawRect: to be called at the next proper moment. Also, don’t override drawRect: unless you are assured that this is legal. For example, it is not legal to override drawRect: in a subclass of UIImageView; you cannot combine your drawing with that of the UIImageView.

So, for example, let’s say we have a UIView subclass called MyView. How this class gets instantiated, and how the instance gets into our view hierarchy, isn’t important. One possibility would be to drag a UIView into a view in the nib and set its class to MyView in the identity inspector; another would be to create the MyView instance and put it into the interface in code.

Let’s suppose that MyView’s job is to draw the two halves of Mars, one at each end of the view. We can readily adapt the earlier example of doing this. There is no need for an image context; we just draw directly into the current context, which is the view’s own graphics context:

- (void)drawRect:(CGRect)rect {
    CGRect b = self.bounds;
    UIImage* mars = [UIImage imageNamed:@"Mars.png"];
    CGSize sz = [mars size];
    CGImageRef marsCG = [mars CGImage];
    CGSize szCG =
        CGSizeMake(CGImageGetWidth(marsCG), CGImageGetHeight(marsCG));
    CGImageRef marsLeft =
        CGImageCreateWithImageInRect(
            marsCG, CGRectMake(0,0,szCG.width/2.0,szCG.height));
    CGImageRef marsRight =
        CGImageCreateWithImageInRect(
            marsCG, CGRectMake(szCG.width/2.0,0,szCG.width/2.0,szCG.height));
    [[UIImage imageWithCGImage:marsLeft
                         scale:[mars scale]
                   orientation:UIImageOrientationUp]
     drawAtPoint:CGPointMake(0,0)];
    [[UIImage imageWithCGImage:marsRight
                         scale:[mars scale]
                   orientation:UIImageOrientationUp]
     drawAtPoint:CGPointMake(b.size.width-sz.width/2.0,0)];
    CGImageRelease(marsLeft); CGImageRelease(marsRight);
}

There is no need to call super, because the superclass here is UIView, whose drawRect: does nothing.

The need to draw in real time, on demand, surprises some beginners, who worry that drawing may be a time-consuming operation. Where drawing is extensive and can be compartmentalized into sections, you may be able to gain some efficiency by paying attention to the rect parameter passed into drawRect:. It designates the region of the view’s bounds that needs refreshing. Normally, this is the view’s entire bounds; but if you called setNeedsDisplayInRect:, it will be the CGRect that you passed in as argument. You could respond by drawing only what goes into those bounds; but even if you don’t, your drawing will be clipped to those bounds, so, while you may not spend less time drawing, the system will draw more efficiently.
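For instance, a view that draws many separate pieces might confine itself to the dirty area along these lines (a sketch; the tiles array and its drawing method are hypothetical):

- (void)drawRect:(CGRect)rect {
    for (Tile* tile in self.tiles) { // hypothetical model objects
        // skip any piece that falls entirely outside the dirty area
        if (CGRectIntersectsRect(tile.frame, rect))
            [tile drawInContext:UIGraphicsGetCurrentContext()];
    }
}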

In general, however, you should not optimize prematurely. The code for a drawing operation may appear verbose and yet be extremely fast. Moreover, the iOS drawing system is efficient; it doesn’t call drawRect: unless it has to (or is told to, through a call to setNeedsDisplay), and once a view has drawn itself, the result is cached so that the cached drawing can be reused instead of repeating the drawing operation from scratch. (Apple refers to this cached drawing as the view’s bitmap backing store.) You can readily satisfy yourself of this fact with some caveman debugging, logging in your drawRect: implementation; you may be amazed to discover that your code is called only once in the entire lifetime of the app! In fact, moving code to drawRect: is a common way to increase efficiency. This is because it is more efficient for the drawing engine to render directly onto the screen than for it to render offscreen and then copy those pixels onto the screen.

When creating a custom UIView subclass instance in code, you may be surprised and annoyed to find that the view has a black background. This is a common source of confusion among beginners, especially when what you expected and wanted was a transparent background. The black background arises when two things are true:

  • The view’s backgroundColor is nil.
  • The view’s opaque property is YES.

Unfortunately, when creating a UIView in code, both those things are true by default! So if you don’t want the black background, you must do something about one or the other of them (or both). For example, you might eliminate the black background by setting the view’s backgroundColor to [UIColor clearColor]. But then you should still set its opaque to NO, because the view isn’t opaque, and it’s up to you to tell the drawing system this.

With a UIView created in the nib, on the other hand, the black background problem doesn’t arise. This is because such a UIView’s backgroundColor is not nil. The nib assigns it some actual background color, even if that color is [UIColor clearColor].

Of course, if a view fills its rectangle with opaque drawing or has an opaque background color, you can leave opaque set to YES and gain some drawing efficiency (see Chapter 14).
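So a custom view created in code that wants a transparent background might deal with both settings in its initializer, along these lines:

- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.backgroundColor = [UIColor clearColor]; // no black background
        self.opaque = NO; // tell the drawing system the truth
    }
    return self;
}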

Graphics Context Settings

When you draw in a graphics context, the drawing obeys the context’s current settings. Thus, the procedure is always to configure the context’s settings first, and then draw. For example, to draw a red line followed by a blue line, you would first set the context’s line color to red, and then draw the first line; then you’d set the context’s line color to blue, and then draw the second line. To the eye, it appears that the redness and blueness are properties of the individual lines, but in fact, at the time you draw each line, line color is a feature of the entire graphics context. This is true regardless of whether you use UIKit methods or Core Graphics functions.

A graphics context thus has, at every moment, a state, which is the sum total of all its settings; the way a piece of drawing looks is the result of what the graphics context’s state was at the moment that piece of drawing was performed. To help you manipulate entire states, the graphics context provides a stack for holding states. Every time you call CGContextSaveGState, the context pushes the entire current state onto the stack; every time you call CGContextRestoreGState, the context retrieves the state from the top of the stack (the state that was most recently pushed) and sets itself to that state.

Thus, a common pattern is: call CGContextSaveGState; manipulate the context’s settings, thus changing its state; draw; call CGContextRestoreGState to restore the state and the settings to what they were before you manipulated them. You do not have to do this before every manipulation of a context’s settings, however, because settings don’t necessarily conflict with one another or with past settings. You can set the context’s line color to red and then later to blue without any difficulty. But in certain situations you do want your manipulation of settings to be undoable, and I’ll point out several such situations later in this chapter.
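In its simplest form, the pattern looks like this (a sketch, to be performed where there is a current context, such as in drawRect:):

CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSaveGState(con); // push a copy of the current state
CGContextSetStrokeColorWithColor(con, [UIColor redColor].CGColor);
CGContextSetLineWidth(con, 10);
// ... draw something ...
CGContextRestoreGState(con); // stroke color and line width are as before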

Many of the settings that constitute a graphics context’s state, and that determine the behavior and appearance of drawing performed at that moment, are similar to those of any drawing application. Here are some of them, along with some of the commands that determine them; I provide Core Graphics functions here, but keep in mind that UIKit commands are actually calling these same functions and manipulating the context’s state in the same ways:

Line thickness and dash style
CGContextSetLineWidth, CGContextSetLineDash
Line end-cap style and join style
CGContextSetLineCap, CGContextSetLineJoin, CGContextSetMiterLimit
Line color or pattern
CGContextSetRGBStrokeColor, CGContextSetGrayStrokeColor, CGContextSetStrokeColorWithColor, CGContextSetStrokePattern
Fill color or pattern
CGContextSetRGBFillColor, CGContextSetGrayFillColor, CGContextSetFillColorWithColor, CGContextSetFillPattern
Shadow
CGContextSetShadow, CGContextSetShadowWithColor
Blend mode
CGContextSetBlendMode (this determines how drawing that you do now will be composited with drawing already present)
Overall transparency
CGContextSetAlpha (individual colors also have an alpha component)
Text features
CGContextSelectFont, CGContextSetFont, CGContextSetFontSize, CGContextSetTextDrawingMode, CGContextSetCharacterSpacing
Whether anti-aliasing and font smoothing are in effect
CGContextSetShouldAntialias, CGContextSetShouldSmoothFonts

Additional settings include:

Clipping area
Drawing outside the clipping area is not physically drawn.
Transform (or “CTM,” for “current transform matrix”)
Changes how points that you specify in subsequent drawing commands are mapped onto the physical space of the canvas.

Many (but not all) of these settings will be illustrated by examples later in this chapter.

Paths and Drawing

By issuing a series of instructions for moving an imaginary pen, you trace out a path. Such a path does not constitute drawing! First you provide a path; then you draw. Drawing can mean stroking the path or filling the path, or both. Again, this should be a familiar notion from certain drawing applications.

A path is constructed by tracing it out from point to point. Think of the drawing system as holding a pen. Then you must first tell that pen where to position itself, setting the current point; after that, you issue a series of commands telling it how to trace out each subsequent piece of the path. Each additional piece of the path starts at the current point; its end becomes the new current point.

Here are some path-drawing commands you’re likely to give:

Position the current point
CGContextMoveToPoint
Trace a line
CGContextAddLineToPoint, CGContextAddLines
Trace a rectangle
CGContextAddRect, CGContextAddRects
Trace an ellipse or circle
CGContextAddEllipseInRect
Trace an arc
CGContextAddArcToPoint, CGContextAddArc
Trace a Bezier curve with one or two control points
CGContextAddQuadCurveToPoint, CGContextAddCurveToPoint
Close the current path
CGContextClosePath. This appends a line from the last point of the path to the first point. There’s no need to do this if you’re about to fill the path, since it’s done for you.
Stroke or fill the current path

CGContextStrokePath, CGContextFillPath, CGContextEOFillPath, CGContextDrawPath. Stroking or filling the current path clears the path. Use CGContextDrawPath if you want both to fill and to stroke the path in a single command, because if you merely stroke it first with CGContextStrokePath, the path is cleared and you can no longer fill it.

There are also a lot of convenience functions that create a path and stroke or fill it all in a single move: CGContextStrokeLineSegments, CGContextStrokeRect, CGContextStrokeRectWithWidth, CGContextFillRect, CGContextFillRects, CGContextStrokeEllipseInRect, CGContextFillEllipseInRect.

A path can be compound, meaning that it consists of multiple independent pieces. For example, a single path might consist of two separate closed shapes: a rectangle and a circle. When you call CGContextMoveToPoint in the middle of constructing a path (that is, after tracing out a path and without clearing it by filling, stroking, or calling CGContextBeginPath), you pick up the imaginary pen and move it to a new location without tracing a segment, thus preparing to start an independent piece of the same path. If you’re worried, as you begin to trace out a path, that there might be an existing path and that your new path might be seen as a compound part of that existing path, you can call CGContextBeginPath to specify that this is a different path; many of Apple’s examples do this, but in practice I usually do not find it necessary.
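For example, a single compound path consisting of a rectangle and a circle might be constructed and stroked like this (a sketch):

CGContextRef con = UIGraphicsGetCurrentContext();
CGContextBeginPath(con); // make sure we're starting a fresh path
CGContextAddRect(con, CGRectMake(10, 10, 80, 80));
CGContextAddEllipseInRect(con, CGRectMake(120, 10, 80, 80)); // an independent piece
CGContextStrokePath(con); // strokes both shapes, then clears the path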

There is also a function for erasing an area: CGContextClearRect. This erases all existing drawing in a rectangle; combined with clipping, though, it can erase an area of any shape. The result can “punch a hole” through all existing drawing.

The behavior of CGContextClearRect depends on whether the context is transparent or opaque. This is particularly obvious and intuitive when drawing into an image context. If the image context is transparent — the second argument to UIGraphicsBeginImageContextWithOptions is NO — CGContextClearRect erases to transparent; otherwise it erases to black.

When drawing directly into a view (as with drawRect: or drawLayer:inContext:), if the view’s background color is nil or a color with even a tiny bit of transparency, the result of CGContextClearRect will appear to be transparent, punching a hole right through the view including its background color; if the background color is completely opaque, the result of CGContextClearRect will be black. This is because the view’s background color determines whether the view’s graphics context is transparent or opaque; thus, this is essentially the same behavior that I described in the preceding paragraph.

Figure 15.12 illustrates; the blue square on the left has been partly cut away to black, while the blue square on the right has been partly cut away to transparency. Yet these are instances of the same UIView subclass, drawn with exactly the same code! The difference between the views is that the backgroundColor of the first view is set in the nib to solid red with an alpha of 1, while the backgroundColor of the second view is set in the nib to solid red with an alpha of 0.99. This difference is utterly imperceptible to the eye (not to mention that the red color never appears, as it is covered with a blue fill), but it completely changes the effect of CGContextClearRect. The UIView subclass’s drawRect: looks like this:

CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
CGContextFillRect(con, rect);
CGContextClearRect(con, CGRectMake(0,0,30,30));

Figure 15.12. The very strange behavior of CGContextClearRect


To illustrate the typical use of path-drawing commands, I’ll generate the up-pointing arrow shown in Figure 15.13. This might not be the best way to create the arrow, and I’m deliberately avoiding use of the convenience functions, but it’s clear and shows a nice basic variety of typical commands:

// obtain the current graphics context
CGContextRef con = UIGraphicsGetCurrentContext();

// draw a black (by default) vertical line, the shaft of the arrow
CGContextMoveToPoint(con, 100, 100);
CGContextAddLineToPoint(con, 100, 19);
CGContextSetLineWidth(con, 20);
CGContextStrokePath(con);

// draw a red triangle, the point of the arrow
CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
CGContextMoveToPoint(con, 80, 25);
CGContextAddLineToPoint(con, 100, 0);
CGContextAddLineToPoint(con, 120, 25);
CGContextFillPath(con);

// snip a triangle out of the shaft by drawing in Clear blend mode
CGContextMoveToPoint(con, 90, 101);
CGContextAddLineToPoint(con, 100, 90);
CGContextAddLineToPoint(con, 110, 101);
CGContextSetBlendMode(con, kCGBlendModeClear);
CGContextFillPath(con);

Figure 15.13. A simple path drawing


Properly speaking, we should probably surround our drawing code with calls to CGContextSaveGState and CGContextRestoreGState, just in case. It probably wouldn’t make any difference in this particular example, as the context does not persist between calls to drawRect:, but it can’t hurt.

If a path needs to be reused or shared, you can encapsulate it as a CGPath, which is actually a CGPathRef. You can either create a new CGMutablePathRef and construct the path using various CGPath functions that parallel the graphics path-construction functions, or you can copy the graphics context’s current path using CGContextCopyPath. There are also a number of CGPath functions for creating a path based on simple geometry (CGPathCreateWithRect, CGPathCreateWithEllipseInRect) or based on an existing path (CGPathCreateCopyByStrokingPath, CGPathCreateCopyByDashingPath, CGPathCreateCopyByTransformingPath).
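For example, the arrowhead triangle from the earlier examples could be encapsulated for reuse like this (a sketch):

CGMutablePathRef triangle = CGPathCreateMutable();
CGPathMoveToPoint(triangle, NULL, 80, 25);
CGPathAddLineToPoint(triangle, NULL, 100, 0);
CGPathAddLineToPoint(triangle, NULL, 120, 25);
CGPathCloseSubpath(triangle);
// later, wherever there's a context con to draw into:
CGContextAddPath(con, triangle);
CGContextFillPath(con);
CGPathRelease(triangle); // CGPaths are not memory-managed by ARC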

A UIKit class, UIBezierPath, wraps CGPath. It provides methods for drawing certain path shapes, as well as for stroking, filling, and for accessing certain settings of the current graphics context state. Similarly, UIColor provides methods for setting the current graphics context’s stroke and fill colors. Thus we could rewrite our arrow-drawing routine like this:

UIBezierPath* p = [UIBezierPath bezierPath];
[p moveToPoint:CGPointMake(100,100)];
[p addLineToPoint:CGPointMake(100, 19)];
[p setLineWidth:20];
[p stroke];

[[UIColor redColor] set];
[p removeAllPoints];
[p moveToPoint:CGPointMake(80,25)];
[p addLineToPoint:CGPointMake(100, 0)];
[p addLineToPoint:CGPointMake(120, 25)];
[p fill];

[p removeAllPoints];
[p moveToPoint:CGPointMake(90,101)];
[p addLineToPoint:CGPointMake(100, 90)];
[p addLineToPoint:CGPointMake(110, 101)];
[p fillWithBlendMode:kCGBlendModeClear alpha:1.0];

There’s no savings of code in this particular case, but UIBezierPath still might be useful if you need object features, and it does offer one convenience method, bezierPathWithRoundedRect:cornerRadius:, that is particularly attractive; drawing a rectangle with rounded corners using only Core Graphics functions is rather tedious.
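For example, this sketch fills a rounded rectangle in just a few lines:

UIBezierPath* rr =
    [UIBezierPath bezierPathWithRoundedRect:CGRectMake(10, 10, 200, 100)
                               cornerRadius:20];
[[UIColor blueColor] setFill];
[rr fill];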

Clipping

Another use of a path is to mask out areas, protecting them from future drawing. This is called clipping. By default, a graphics context’s clipping region is the entire graphics context: you can draw anywhere within the context.

The clipping area is a feature of the context as a whole, and any new clipping area is applied by intersecting it with the existing clipping area; so if you apply your own clipping region, the way to remove it from the graphics context later is to plan ahead and wrap things with calls to CGContextSaveGState and CGContextRestoreGState.

To illustrate, I’ll rewrite the code that generated our original arrow (Figure 15.13) to use clipping instead of a blend mode to “punch out” the triangular notch in the tail of the arrow. This is a little tricky, because what we want to clip to is not the region inside the triangle but the region outside it. To express this, we’ll use a compound path consisting of more than one closed area — the triangle, and the drawing area as a whole (which we can obtain with CGContextGetClipBoundingBox).

Both when filling a compound path and when using it to express a clipping region, the system follows one of two rules:

Winding rule
The fill or clipping area is denoted by an alternation in the direction (clockwise or counterclockwise) of the path demarcating each region.
Even-odd rule (EO)
The fill or clipping area is denoted by a simple count of the paths demarcating each region.
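To see the even-odd rule in action, consider this sketch, which fills a compound path of two concentric squares; points between the squares lie inside an odd number (one) of subpaths and are filled, while points inside both squares lie inside an even number (two) and are not:

CGContextRef con = UIGraphicsGetCurrentContext();
CGContextAddRect(con, CGRectMake(0, 0, 100, 100));
CGContextAddRect(con, CGRectMake(25, 25, 50, 50));
CGContextEOFillPath(con); // paints a square frame with a square hole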

Our situation is extremely simple, so it’s easier to use the even-odd rule. So we set up the clipping area using CGContextEOClip and then draw the arrow:

// obtain the current graphics context
CGContextRef con = UIGraphicsGetCurrentContext();

// punch triangular hole in context clipping region
CGContextMoveToPoint(con, 90, 100);
CGContextAddLineToPoint(con, 100, 90);
CGContextAddLineToPoint(con, 110, 100);
CGContextClosePath(con);
CGContextAddRect(con, CGContextGetClipBoundingBox(con));
CGContextEOClip(con);

// draw the vertical line
CGContextMoveToPoint(con, 100, 100);
CGContextAddLineToPoint(con, 100, 19);
CGContextSetLineWidth(con, 20);
CGContextStrokePath(con);

// draw the red triangle, the point of the arrow
CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
CGContextMoveToPoint(con, 80, 25);
CGContextAddLineToPoint(con, 100, 0);
CGContextAddLineToPoint(con, 120, 25);
CGContextFillPath(con);

Gradients

Gradients can range from the simple to the complex. A simple gradient (which is all I’ll describe here) is determined by a color at one endpoint along with a color at the other endpoint, plus (optionally) colors at intermediate points; the gradient is then painted either linearly between two points in the context or radially between two circles in the context.

You can’t use a gradient as a path’s fill color, but you can restrict a gradient to a path’s shape by clipping, which amounts to the same thing.

To illustrate, I’ll redraw our arrow, using a linear gradient as the “shaft” of the arrow (Figure 15.14):

// obtain the current graphics context
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSaveGState(con);

// punch triangular hole in context clipping region
CGContextMoveToPoint(con, 90, 100);
CGContextAddLineToPoint(con, 100, 90);
CGContextAddLineToPoint(con, 110, 100);
CGContextClosePath(con);
CGContextAddRect(con, CGContextGetClipBoundingBox(con));
CGContextEOClip(con);

// draw the vertical line, add its shape to the clipping region
CGContextMoveToPoint(con, 100, 100);
CGContextAddLineToPoint(con, 100, 19);
CGContextSetLineWidth(con, 20);
CGContextReplacePathWithStrokedPath(con);
CGContextClip(con);

// draw the gradient
CGFloat locs[3] = { 0.0, 0.5, 1.0 };
CGFloat colors[12] = {
    0.3,0.3,0.3,0.8, // starting color, transparent gray
    0.0,0.0,0.0,1.0, // intermediate color, black
    0.3,0.3,0.3,0.8 // ending color, transparent gray
};
CGColorSpaceRef sp = CGColorSpaceCreateDeviceGray();
CGGradientRef grad =
    CGGradientCreateWithColorComponents (sp, colors, locs, 3);
CGContextDrawLinearGradient (
    con, grad, CGPointMake(89,0), CGPointMake(111,0), 0);
CGColorSpaceRelease(sp);
CGGradientRelease(grad);

CGContextRestoreGState(con); // done clipping

// draw the red triangle, the point of the arrow
CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
CGContextMoveToPoint(con, 80, 25);
CGContextAddLineToPoint(con, 100, 0);
CGContextAddLineToPoint(con, 120, 25);
CGContextFillPath(con);

Figure 15.14. Drawing with a gradient


The call to CGContextReplacePathWithStrokedPath pretends to stroke the current path, using the current line width and other line-related context state settings, but then creates a new path representing the outside of that stroked path. Thus, instead of a thick line we have a rectangular region that we can use as the clip region.

We then create the gradient and paint it. The procedure is verbose but simple; everything is boilerplate. We describe the gradient as a set of locations on the continuum between one endpoint (0.0) and the other endpoint (1.0), along with the colors corresponding to each location; in this case, I want the gradient to be lighter at the edges and darker in the middle, so I use three locations, with the dark one at 0.5. We must also supply a color space in order to create the gradient. Finally, we create the gradient, paint it into place, and release the color space and the gradient.

Colors and Patterns

A color is a CGColor (actually a CGColorRef). CGColor is not difficult to work with, and is bridged to UIColor through UIColor’s colorWithCGColor: and CGColor methods.
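For example:

CGColorRef redCG = [UIColor redColor].CGColor; // UIColor to CGColor
UIColor* red = [UIColor colorWithCGColor:redCG]; // and back again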

A pattern, on the other hand, is a CGPattern (actually a CGPatternRef). You can create a pattern and stroke or fill with it. The process is rather elaborate. As an extremely simple example, I’ll replace the red triangular arrowhead with a red-and-blue striped triangle (Figure 15.15). To do so, remove this line:


Figure 15.15. A patterned fill


CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);

In its place, put the following:

CGColorSpaceRef sp2 = CGColorSpaceCreatePattern(nil);
CGContextSetFillColorSpace (con, sp2);
CGColorSpaceRelease (sp2);
CGPatternCallbacks callback = {
    0, drawStripes, nil
};
CGAffineTransform tr = CGAffineTransformIdentity;
CGPatternRef patt = CGPatternCreate(nil,
                      CGRectMake(0,0,4,4),
                      tr,
                      4, 4,
                      kCGPatternTilingConstantSpacingMinimalDistortion,
                      true,
                      &callback);
CGFloat alph = 1.0;
CGContextSetFillPattern(con, patt, &alph);
CGPatternRelease(patt);

That code is verbose, but it is almost entirely boilerplate. To understand it, it almost helps to read it backward. What we’re leading up to is the call to CGContextSetFillPattern; instead of setting a fill color, we’re setting a fill pattern, to be used the next time we fill a path (in this case, the triangular arrowhead). The third parameter to CGContextSetFillPattern is a pointer to a CGFloat, so we have to set up the CGFloat itself beforehand. The second parameter to CGContextSetFillPattern is a CGPatternRef, so we have to create that CGPatternRef beforehand (and release it afterward).

So now let’s talk about the call to CGPatternCreate. A pattern is a drawing in a rectangular “cell”; we have to state both the size of the cell (the second argument) and the spacing between origin points of cells (the fourth and fifth arguments). In this case, the cell is 4×4, and every cell exactly touches its neighbors both horizontally and vertically. We have to supply a transform to be applied to the cell (the third argument); in this case, we’re not doing anything with this transform, so we supply the identity transform. We supply a tiling rule (the sixth argument). We have to state whether this is a color pattern or a stencil pattern; it’s a color pattern, so the seventh argument is true. And we have to supply a pointer to a callback function that actually draws the pattern into its cell (the eighth argument).

Except that that’s not quite what we have to supply as the eighth argument. To make matters more complicated, what we actually have to supply here is a pointer to a CGPatternCallbacks struct. This struct consists of a version number (0) and pointers to two functions: one called to draw the pattern into its cell, the other called when the pattern is released. We’re not specifying the second function, however; it is for memory management, and we don’t need it in this simple example.

We have almost worked our way backward to the start of the code. It turns out that before you can call CGContextSetFillPattern with a colored pattern, you have to set the context’s fill color space to a pattern color space. If you neglect to do this, you’ll get an error when you call CGContextSetFillPattern. So we create the color space, set it as the context’s fill color space, and release it.
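
Read forward rather than backward, the complete boilerplate might look like this (a sketch reassembling the calls just described; the variable names are assumptions):

```objc
// set the context's fill color space to a pattern color space
CGColorSpaceRef sp = CGColorSpaceCreatePattern(NULL);
CGContextSetFillColorSpace(con, sp);
CGColorSpaceRelease(sp);
// describe the pattern: a 4x4 cell, tiled edge to edge,
// drawn by the drawStripes function shown below
CGPatternCallbacks callback = { 0, &drawStripes, NULL };
CGPatternRef patt = CGPatternCreate(NULL,
    CGRectMake(0,0,4,4),
    CGAffineTransformIdentity,
    4, 4,
    kCGPatternTilingConstantSpacingMinimalDistortion,
    true,
    &callback);
// make it the fill pattern
CGFloat alph = 1.0;
CGContextSetFillPattern(con, patt, &alph);
CGPatternRelease(patt);
```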

But we are still not finished, because I haven’t shown you the function that actually draws the pattern cell! This is the function whose address is taken as drawStripes in our code. Here it is:

void drawStripes (void *info, CGContextRef con) {
    // assume 4 x 4 cell
    CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
    CGContextFillRect(con, CGRectMake(0,0,4,4));
    CGContextSetFillColorWithColor(con, [[UIColor blueColor] CGColor]);
    CGContextFillRect(con, CGRectMake(0,0,4,2));
}

As you can see, the actual pattern-drawing code is very simple. The only tricky issue is that the call to CGPatternCreate must be in agreement with the pattern-drawing function as to the size of a cell, or the pattern won’t come out the way you expect. We know in this case that the cell is 4×4. So we fill it with red, and then fill its lower half with blue. When these cells are tiled touching each other horizontally and vertically, we get the stripes that you see in Figure 15.15.

Note, finally, that the code as presented has left the graphics context in an undesirable state, with its fill color space set to a pattern color space. This would cause trouble if we were later to try to set the fill color to a normal color. The solution, as usual, is to wrap the code in calls to CGContextSaveGState and CGContextRestoreGState.

You may have observed in Figure 15.15 that the stripes do not fit neatly inside the triangle of the arrowhead: the bottommost stripe is something like half a blue stripe. This is because a pattern is positioned not with respect to the shape you are filling (or stroking), but with respect to the graphics context as a whole. We could shift the pattern position by calling CGContextSetPatternPhase before drawing.
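
For example, to nudge the pattern grid down so that a stripe boundary coincides with the bottom of the arrowhead, a call like this would precede the drawing (a sketch; the exact offset is an assumption depending on your geometry):

```objc
// shift the pattern grid two points down relative to the context
CGContextSetPatternPhase(con, CGSizeMake(0, 2));
```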

Graphics Context Transforms

Just as a UIView can have a transform, so can a graphics context. However, applying a transform to a graphics context has no effect on the drawing that’s already in it; it affects only the drawing that takes place after it is applied, altering the way the coordinates you provide are mapped onto the graphics context’s area. A graphics context’s transform is called its CTM, for “current transformation matrix.”

It is quite usual to take full advantage of a graphics context’s CTM to save yourself from performing even simple calculations. You can multiply the current transform by any CGAffineTransform using CGContextConcatCTM; there are also convenience functions for applying a translate, scale, or rotate transform to the current transform.
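
For instance, these two calls are interchangeable; the convenience function is merely shorthand for concatenating a ready-made CGAffineTransform:

```objc
// equivalent alternatives -- use one or the other, not both:
CGContextTranslateCTM(con, 80, 0);
CGContextConcatCTM(con, CGAffineTransformMakeTranslation(80, 0));
```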

The base transform for a graphics context is already set for you when you obtain the context; this is how the system is able to map context drawing coordinates onto screen coordinates. Whatever transforms you apply are applied to the current transform, so the base transform remains in effect and drawing continues to work. You can return to the base transform after applying your own transforms by wrapping your code in calls to CGContextSaveGState and CGContextRestoreGState.

For example, we have hitherto been drawing our upward-pointing arrow with code that knows how to place that arrow at only one location: the top left of its rectangle is hard-coded at {80,0}. This is silly. It makes the code hard to understand, as well as inflexible and difficult to reuse. Surely the sensible thing would be to draw the arrow at {0,0}, by subtracting 80 from all the x-values in our existing code. Now it is easy to draw the arrow at any position, simply by applying a translate transform beforehand, mapping {0,0} to the desired top-left corner of the arrow. So, to draw it at {80,0}, we would say:

CGContextTranslateCTM(con, 80, 0);
// now draw the arrow at (0,0)

A rotate transform is particularly useful, allowing you to draw in a rotated orientation without any nasty trigonometry. However, it’s a bit tricky because the point around which the rotation takes place is the origin. This is rarely what you want, so you have to apply a translate transform first, to map the origin to the point around which you really want to rotate. But then, after rotating, in order to figure out where to draw you will probably have to reverse your translate transform.

To illustrate, here’s code to draw our arrow repeatedly at several angles, pivoting around the end of its tail (Figure 15.16). First, we’ll encapsulate the drawing of the arrow as a UIImage. Then we simply draw that UIImage repeatedly:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(40,100), NO, 0.0);
CGContextRef con = UIGraphicsGetCurrentContext();

// draw the arrow into the image context
// draw it at (0,0)! adjust all x-values by subtracting 80
// ... actual code omitted ...

UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

con = UIGraphicsGetCurrentContext();

[im drawAtPoint:CGPointMake(0,0)];
for (int i=0; i<3; i++) {
    CGContextTranslateCTM(con, 20, 100);
    CGContextRotateCTM(con, 30 * M_PI/180.0);
    CGContextTranslateCTM(con, -20, -100);
    [im drawAtPoint:CGPointMake(0,0)];
}
figs/pios_1516.png

Figure 15.16. Drawing rotated with a CTM


A transform is also one more solution for the “flip” problem we encountered earlier with CGContextDrawImage. Instead of reversing the drawing, we can reverse the context into which we draw it. Essentially, we apply a “flip” transform to the context’s coordinate system: we move the context’s top downward, and then reverse the direction of the y-coordinate by applying a scale transform whose y-multiplier is -1:

CGContextTranslateCTM(con, 0, theHeight);
CGContextScaleCTM(con, 1.0, -1.0);

How far down you move the context’s top depends on how you intend to draw the image. So, for example, we could draw the two halves of Mars (from the example earlier in this chapter) without flipping, like this:

CGContextTranslateCTM(con, 0, sz.height); // sz is [mars size]
CGContextScaleCTM(con, 1.0, -1.0);
CGContextDrawImage(con,
    CGRectMake(0,0,sz.width/2.0,sz.height),
    marsLeft);
CGContextDrawImage(con,
    CGRectMake(b.size.width-sz.width/2.0, 0, sz.width/2.0, sz.height),
    marsRight);

Shadows

To add a shadow to a drawing, give the context a shadow value before drawing. The shadow position is expressed as a CGSize, where the positive direction for both values indicates down and to the right. The blur value is an open-ended positive number; Apple doesn’t explain how the scale works, but experimentation shows that 12 is nice and blurry, 99 is so blurry as to be shapeless, and higher values become problematic.
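
If you also want control over the shadow’s color, there’s a variant, CGContextSetShadowWithColor, which takes a CGColorRef as a fourth argument; for example (a sketch, with an arbitrary translucent gray as the color):

```objc
// a shadow offset 7 points down and to the right, blur 12, translucent gray
CGContextSetShadowWithColor(con, CGSizeMake(7, 7), 12,
    [[UIColor colorWithWhite:0.3 alpha:0.8] CGColor]);
```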

Figure 15.17 shows the result of the same code that generated Figure 15.16, except that before we start drawing the arrow repeatedly, we give the context a shadow:

con = UIGraphicsGetCurrentContext();
CGContextSetShadow(con, CGSizeMake(7, 7), 12);
[im drawAtPoint:CGPointMake(0,0)]; // ... and so on
figs/pios_1517.png

Figure 15.17. Drawing with a shadow


However, there’s a subtle cosmetic problem with this approach. It may not be evident from Figure 15.17, but we are adding a shadow each time we draw. Thus the arrows are able to cast shadows on one another. What we want, however, is for all the arrows to cast a single shadow collectively. The way to achieve this is with a transparency layer; this is basically a subcontext that accumulates all drawing and then adds the shadow. Our code for drawing the shadowed arrows would thus look like this:

CGContextSetShadow(con, CGSizeMake(7, 7), 12);
CGContextBeginTransparencyLayer(con, nil);
[im drawAtPoint:CGPointMake(0,0)];
for (int i=0; i<3; i++) {
    CGContextTranslateCTM(con, 20, 100);
    CGContextRotateCTM(con, 30 * M_PI/180.0);
    CGContextTranslateCTM(con, -20, -100);
    [im drawAtPoint:CGPointMake(0,0)];
}
CGContextEndTransparencyLayer(con);

Points and Pixels

A point is a dimensionless location described by an x-coordinate and a y-coordinate. When you draw in a graphics context, you specify the points at which to draw, and this works regardless of the device’s resolution, because Core Graphics maps your drawing nicely onto the physical output (using the base CTM, along with any anti-aliasing and smoothing). Therefore, throughout this chapter I’ve concerned myself with graphics context points, disregarding their relationship to screen pixels.

However, pixels do exist. A pixel is a physical, integral, dimensioned unit of display in the real world. Whole-numbered points effectively lie between pixels, and this can matter if you’re fussy, especially on a single-resolution device. For example, if a vertical path with whole-number coordinates is stroked with a line width of 1, half the line falls on each side of the path, and the drawn line on the screen of a single-resolution device will seem to be 2 pixels wide (because the device can’t illuminate half a pixel).

You will sometimes encounter advice suggesting that if this effect is objectionable, you should try shifting the line’s position by 0.5, to center it in its pixels. This advice may appear to work, but it tacitly assumes a single-resolution screen, where one point is one pixel. A more sophisticated approach is to obtain the UIView’s contentScaleFactor property. This value will be either 1.0 or 2.0, so you can divide by it to convert from pixels to points. Consider also that the most accurate way to draw a vertical or horizontal line is not to stroke a path but to fill a rectangle. So this UIView subclass code will draw a perfect 1-pixel-wide vertical line on any device:

CGContextFillRect(con, CGRectMake(100,0,1.0/self.contentScaleFactor,100));
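
In context, that line would live in the subclass’s drawRect: implementation, something like this (a sketch; the fill color is an assumption):

```objc
- (void) drawRect: (CGRect) rect {
    CGContextRef con = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(con, [[UIColor blackColor] CGColor]);
    // one pixel wide, not one point: divide by the screen scale
    CGContextFillRect(con,
        CGRectMake(100, 0, 1.0/self.contentScaleFactor, 100));
}
```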

Content Mode

A view that draws something within itself, as opposed to merely having a background color and subviews (as in the previous chapter), has content. This means that its contentMode property becomes important whenever the view is resized. As I mentioned earlier, the drawing system will avoid asking a view to redraw itself from scratch if possible; instead, it will use the cached result of the previous drawing operation (the bitmap backing store). So, if the view is resized, the system may simply stretch or shrink or reposition the cached drawing, if your contentMode setting instructs it to do so.

It’s a little tricky to illustrate this point, because I have to arrange for the view to be resized without also causing it to be redrawn (that is, without triggering a call to drawRect:). Here’s how I’ll do that. As the app starts up, I’ll create an instance of a UIView subclass that knows how to draw our arrow. Then I’ll use delayed performance to resize the instance after the window has shown and the interface has been initially displayed:

void (^resize) (void) = ^{
    CGRect f = mv.bounds; // mv is the MyView instance
    f.size.height *= 2;
    mv.bounds = f;
};
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC);
dispatch_after(popTime, dispatch_get_main_queue(), resize);

We double the height of the view without causing drawRect: to be called. The result is that the view’s drawing appears at double its correct height. For example, if our view’s drawRect: code is the same as the code that generated Figure 15.14, we get Figure 15.18.

figs/pios_1518.png

Figure 15.18. Automatic stretching of content


This, however, is almost certainly not what we want. Sooner or later drawRect: will be called, and the drawing will be refreshed in accordance with our code. Our code doesn’t say to draw the arrow at a height that is relative to the height of the view’s bounds; it draws the arrow at a fixed height. Thus, not only has the arrow stretched, but at some future time, it will snap back to its original size.

The moral is that our view’s contentMode property needs to be in agreement with how the view draws itself. For example, our drawRect: code dictates the size and position of the arrow relative to the view’s bounds origin, its top left. So we could set its contentMode to UIViewContentModeTopLeft. Alternatively, and more likely, we could set it to UIViewContentModeRedraw; this will cause automatic scaling and repositioning of the cached content to be turned off, and instead the view’s setNeedsDisplay method will be called, ultimately triggering drawRect: to redraw the content.
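
Making that choice is a one-liner, performed once, perhaps where the view is created (a sketch):

```objc
mv.contentMode = UIViewContentModeRedraw; // mv is the MyView instance
```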

On the other hand, if a view might be resized only momentarily — say, as part of an animation — then stretching behavior might be exactly what you want. Suppose we’re going to animate the view by making it get a little larger for a moment and then returning it to its original size, perhaps as a way of attracting the user’s attention. Then presumably we do want the view’s content to stretch and shrink as the view stretches and shrinks; that’s the whole point of the animation. This is precisely what the default contentMode value, UIViewContentModeScaleToFill, does for us. And remember, it does it efficiently; what’s being stretched and shrunk is just a cached image of our view’s content.