Adventures in Wide Color: An iOS Exploration

I used to think the reddest red around was 0xFF0000. Not much more to say.

And then a few weeks ago, I watched one of Apple’s videos about working with Wide Color. It drove home the point that many visible colors simply can’t be rendered on certain devices, and, by implication, that there was a whole world of reds (and oranges and greens) that I just hadn’t been seeing on my iPhone 6s.

A few days later, I got my iPhone X — and suddenly I could capture these formerly hidden colors, and see them rendered up close, on a gorgeous OLED display.

It was like a veil had been lifted on my perception and appreciation of color.

To help me understand wide color better, I decided to write an experimental iOS app to identify these colors around me, in real time. The basic idea, inspired by this sample code from Apple, was this: Make an app that streams live images from the camera and, for each frame, highlights all the colors outside the standard range for legacy displays. Colors inside the standard range would be converted to grayscale; colors outside would be allowed to pass through unchanged. (Skip to the end for example screenshots.)

First, some background: Until the release of the iPhone 7, iPhone screens used the standard Red Green Blue (sRGB) color space, which is more than 20 years old. Starting with iPhone 7, iPhones began supporting the Display P3 color space, a superset of sRGB that can display more of the visual color spectrum.

How much more? Here’s a 3-D rendering of how they compare:

P3’s color gamut is about 25% larger than sRGB’s

As this makes clear, while P3 and sRGB converge near the “poles” of white and black, P3 extends much further near the “equator,” where the brightest colors lie. (To be clear, both spaces only cover a portion of all colors visible to the human eye.)

While the “reddest” corner of the sRGB gamut (the lower left of the inner cube) would be represented in sRGB by the color coordinates (r: 1.0, g: 0.0, b: 0.0) — where 1.0 represents the maximum value of the space’s red channel — the same point converted into P3 space would be (r: 0.9175, g: 0.2003, b: 0.1387).

Conversely, the corresponding corner of the outer P3 gamut, described in that space as (1.0, 0.0, 0.0), lies outside of sRGB and cannot be expressed in that color space at all.
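You can check those numbers yourself with Core Graphics. Here’s a quick sketch (the exact components may vary slightly with rounding):

import UIKit

// Convert sRGB "pure red" into Display P3 and inspect its components.
let sRGBRed = UIColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0).cgColor
let p3Space = CGColorSpace(name: CGColorSpace.displayP3)!
if let converted = sRGBRed.converted(to: p3Space, intent: .defaultIntent, options: nil) {
    print(converted.components ?? [])   // roughly [0.9175, 0.2003, 0.1387, 1.0]
}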

But enough theory. Back to my project. Here’s a rough outline of what I did:

  • Set up an AVCaptureSession that streams pixel buffers from the camera in the P3 color space, if the device supports it.
  • Created a CIContext whose workingColorSpace is Apple’s extended sRGB color space. Using the extended sRGB format is crucial because “wide” color information will be both preserved and easily identifiable after converting from P3. Unlike sRGB, which clamps values to a range from 0.0 to 1.0 and thus discards any wide-color information, extended sRGB allows values outside of that range, which leaves open the possibility that wide-color-aware displays can use them.
  • Wrote a Metal fragment shader that allows wide colors to pass through unchanged but converts “narrow” colors to a shade of gray.
  • Used the CIContext and a custom CIFilter, built with the Metal shader, to take each pixel buffer in the stream, filter it, and render it to the screen.

Step 1: Creating the AVCaptureSession

Apple’s AVCam sample project is an excellent template for how to capture images from the camera, and I was able to adapt it for my project with few changes.

In my case, though, I needed more than what the sample code’s AVCaptureVideoPreviewLayer could provide: I needed access to the video capture itself, so I could process each pixel buffer in real time. At the same time, though, I needed to make sure I was preserving wide-color information.

This added a small complication, which forced me to understand how an AVCaptureSession decides whether or not to capture wide color by default.

Left to its own devices (pun intended), an AVCaptureSession will try to do the “right thing” with regard to wide color, thanks to a tongue-twisting property introduced in iOS 10 called automaticallyConfiguresCaptureDeviceForWideColor. When set to true (the default), the session automatically sets the device’s active color space to P3 if a) the device supports wide color and b) the session configuration suggests that wide color makes sense.

But when, according to the default behavior, does wide color “make sense”?

For starters, an AVCapturePhotoOutput must be attached to the AVCaptureSession. But if you also attach an AVCaptureVideoDataOutput — as I did, because I wanted to capture a live stream — you need to be careful. Because Display P3 is not well-supported in video, the automatic configuration will revert to sRGB if it thinks the destination is a movie file.

The trick for staying in the P3 color space, in this case, is to make your non-movie intentions clear by doing this:

session.sessionPreset = .photo

With that done, I confirmed the capture of wide color by checking that, once session.commitConfiguration() was called, device.activeColorSpace changed from sRGB to P3_D65.
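Condensed, the setup looked roughly like this. This is just a sketch (error handling and canAdd checks omitted, and the function name is mine); the delegate parameter is whatever object implements AVCaptureVideoDataOutputSampleBufferDelegate — in my case, the camera view controller.

import AVFoundation

func makeWideColorSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    session.beginConfiguration()

    // The .photo preset signals that the destination isn't a movie file,
    // so the automatic configuration won't fall back to sRGB.
    session.sessionPreset = .photo

    let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
    session.addInput(try AVCaptureDeviceInput(device: camera))

    // A photo output must be attached for wide color to be considered at all.
    session.addOutput(AVCapturePhotoOutput())

    // The video data output supplies the live pixel buffers to filter.
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "capture.queue"))
    session.addOutput(videoOutput)

    session.commitConfiguration()
    // On a wide-color-capable device, camera.activeColorSpace should now be .P3_D65.
    return session
}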

Step 2: Creating the CIContext

It’s easy to lose wide-color information when rendering an image. As Mike Krieger of Instagram points out in this great blog post, iOS 10 introduced a piece of wide-color-aware API called UIGraphicsImageRenderer to help with the rendering of wide-color images in Core Graphics.

With Core Image, on the other hand, you need to make sure your CIContext’s working color space and pixel format are configured correctly.

Here’s the setup that worked for me: the working color space had to support extended sRGB, as you’d expect (to handle values below 0.0 or above 1.0), and the pixel format had to use floats (for similar reasons).

private lazy var ciContext: CIContext = {
    // The extended sRGB working space keeps out-of-gamut values (below 0.0 or above 1.0) instead of clamping them.
    let space = CGColorSpace(name: CGColorSpace.extendedSRGB)!
    // Half-float (16-bit) pixels are needed to hold those extended values.
    let format = NSNumber(value: kCIFormatRGBAh)
    var options = [String: Any]()
    options[kCIContextWorkingColorSpace] = space
    options[kCIContextWorkingFormat] = format
    return CIContext(options: options)
}()

Set up in this way, a CIContext can preserve extended sRGB data when it renders an image.

Step 3: Creating the CIFilter

The next step was building a filter to convert “non-wide” pixels to shades of gray. I decided an interesting way to do this would be to create a custom CIFilter that was backed by a Metal shader. The basic steps were:

  1. Write the Metal shader
  2. Create a CIKernel from the shader
  3. Create a CIFilter subclass to apply the CIKernel

Steps 2 & 3 are pretty well covered in this WWDC 2017 video. As for creating the shader, I was able to borrow some code from Apple’s very cool Color Gamut Showcase sample app.

It’s wonderfully simple: If any channel of the inbound color is greater than 1.0 or below 0.0, leave the color alone. Otherwise, convert it to grayscale.

#include <metal_stdlib>
#include <CoreImage/CoreImage.h> // for the Core Image Metal kernel support

using namespace metal;

static bool isWideGamut(float value) {
    // Extended-sRGB values outside [0.0, 1.0] can't be expressed in plain sRGB.
    return value > 1.0 || value < 0.0;
}

extern "C" { namespace coreimage {
    float4 wide_color_kernel(sampler src) {
        float4 color = src.sample(src.coord());
        if (isWideGamut(color[0])
            || isWideGamut(color[1])
            || isWideGamut(color[2])) {
            return color;   // wide color: pass through untouched
        } else {
            // Standard luma weights for the grayscale conversion
            float3 weights = float3(0.3, 0.59, 0.11);
            float luminance = dot(weights, color.rgb);
            return float4(float3(luminance), 1.0);
        }
    }
}}
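For completeness, here’s roughly what steps 2 and 3 looked like in my project. This is a sketch that assumes the shader above has been compiled (with the extra Metal compiler and linker flags covered in the WWDC session) into a library I’ve named WideColorKernels.ci.metallib; the class name is mine, too.

import CoreImage

class WideColorFilter: CIFilter {
    var inputImage: CIImage?

    // Load the Metal kernel once, from the compiled Core Image Metal library.
    private static let kernel: CIKernel = {
        let url = Bundle.main.url(forResource: "WideColorKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "wide_color_kernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let input = inputImage else { return nil }
        // The kernel samples one pixel at a time, so the region of interest is just the destination rect.
        return WideColorFilter.kernel.apply(extent: input.extent,
                                            roiCallback: { _, rect in rect },
                                            arguments: [input])
    }
}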

Step 4: Putting It Together

With that working, the last step was to grab each pixel buffer as it arrives, apply the filter, and then display it to the screen. This involved implementing an AVCaptureVideoDataOutputSampleBufferDelegate callback method, which I set up to be called on a dedicated, serial background queue.

After turning the CMSampleBuffer into a CIImage, I moved to a dedicated rendering queue and used my CIContext to render the CIImage to a CGImage, which then became a UIImage and was displayed on the screen, thanks to a plain old UIImageView.
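Roughly, the delegate method looked like this. Another sketch: CameraViewController is the (hypothetical) owner of the wideColorFilter, ciContext, renderQueue and imageView properties mentioned above.

import AVFoundation
import CoreImage
import UIKit

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    // Called on the dedicated, serial capture queue for every frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let cameraImage = CIImage(cvImageBuffer: pixelBuffer)

        renderQueue.async { [weak self] in
            guard let self = self else { return }
            self.wideColorFilter.inputImage = cameraImage
            guard let filtered = self.wideColorFilter.outputImage,
                  let cgImage = self.ciContext.createCGImage(filtered, from: filtered.extent)
            else { return }

            // UIKit work goes back to the main queue.
            DispatchQueue.main.async {
                self.imageView.image = UIImage(cgImage: cgImage)
            }
        }
    }
}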

Some disclaimers on this last part: I didn’t spend much time worrying about performance here, and it’s quite possible that on slow devices, the render queue could fail to keep up and become swamped with rendering tasks. In the real world, there would need to be a way to slow down the capture frame rate if the renderer couldn’t keep up.

Also, there are surely more efficient ways to display each CMSampleBuffer than creating a UIImage and assigning it to a UIImageView. For one thing, a more performant implementation would resize the image to the exact size of the display view during the rendering pass. (This sample Apple code turned each pixel buffer into an OpenGL ES texture, which frankly seemed like a lot of work for this little experiment.) I’m interested to hear how others would have approached this!

Up and Running

In any event, the experiment app ran very smoothly on my iPhone X: Core Image seemed more than capable of handling the 30 camera frames per second it was being asked to render. Meanwhile, I was surprised how much wide color I found in the world — even on a gray day in downtown Manhattan.

You can see a few examples of screenshots at the top of this post.

And here’s a link to my WideColorViewer project.

(Cross-posted from “Adventures in Wide Color: An iOS Exploration” on my Medium blog.)

How to really slow down a Core Data fetch

Core Data is a bit of a mysterious thing. Sometimes, patterns that seem helpful in theory can be disastrous in practice. Consider, for example, the instinct not to save changes to disk, or to do so very infrequently, for fear of slowing down processing or blocking the main thread. This is something I’ve done, and seen others do, in projects using Core Data.

What’s easy to overlook is that unsaved changes in Core Data can make fetch requests slower, sometimes by orders of magnitude, which could undo all the benefits of deferred saves. Here’s a real-life example with some numbers.

The code below was used to retrieve all the Word entities in an object graph whose string attribute was equal to one of the strings in an array, stringsToFetch. There were 28,720 words in the graph, and the fetch matched 2,044 of them.

NSFetchRequest *fetch = [NSFetchRequest fetchRequestWithEntityName:@"Word"];
fetch.predicate = [NSPredicate predicateWithFormat:@"string IN %@", stringsToFetch];
NSArray *results = [context executeFetchRequest:fetch error:nil];

In one test, the objects had been inserted in the context but save: had not yet been called. In the other, the inserted objects had been saved to disk. (The code was run on an iPad Air.)

Saved objects    Unsaved objects    Fetch time
0                28,720             4.55 secs
28,720           0                  0.07 secs

The performance of the fetch with unsaved changes was, in a word, hideous. This is obviously an extreme case — nearly 30,000 unsaved insertions — but the effect was quite linear. Even 3,000 or so unsaved objects slowed the fetch down to a still-needlessly-long 0.5 seconds.

The reason for the slowdown is clear if you run the same code using the Time Profiler. When the unsaved objects are present, about 70% of the processor time is spent on string comparisons that descend from the call to executeFetchRequest:, in which our predicate is being evaluated. So in essence there are two fetches: One is a super-fast SQL query, and the other is a ponderously slow series of in-memory string comparisons.

(Time Profiler screenshot: string comparisons descending from executeFetchRequest: dominate the trace.)

Keep in mind: You won’t uncover this problem by using the “Fetch Duration” data from the Core Data Fetches tool in Instruments. That’s because this tool seems to return the duration of the SQL query only: It doesn’t account for the time that was spent evaluating in-memory objects as well. You need to put a timer around the actual call to executeFetchRequest: to see the true processing time.
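The timing itself doesn’t need to be fancy. Here’s a sketch of the idea in Swift (the example above is Objective-C; stringsToFetch and context are the same array and managed object context from that example):

import CoreData

// Time the full fetch, including any in-memory evaluation of unsaved objects.
let request = NSFetchRequest<NSManagedObject>(entityName: "Word")
request.predicate = NSPredicate(format: "string IN %@", stringsToFetch)

let start = CFAbsoluteTimeGetCurrent()
let results = try? context.fetch(request)
print("Fetched \(results?.count ?? 0) objects in \(CFAbsoluteTimeGetCurrent() - start) seconds")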

Every scenario is different, but I strongly suspect that for some people who complain that “Core Data fetching is slow”, the problem isn’t a trip to disk, but the opposite: too many inserted, updated and/or deleted objects in memory.

Additive animations: animateWithDuration in iOS 8

There’s new behavior involving animateWithDuration: in iOS 8 that can help make certain “interruptible” animations a lot smoother.

The classic use case is a togglable animation that can be reversed mid-flight, like a drawer that opens or closes when a button is tapped. The gist of the change is that in iOS 8, when calls to animateWithDuration: overlap, any previously scheduled, in-flight animations on the same properties will no longer be yanked out of the view’s layer, but instead be allowed to finish even as the new animation takes effect and is blended with the old one(s). (For properties that adopt this additive animation behavior, it will happen whether or not you use the UIViewAnimationOptionBeginFromCurrentState option.)

Consider the example of a view whose center.y is being animated from 0 to 100 over 1 second using animateWithDuration:. Halfway through, at the 0.5-second mark, a second animateWithDuration: block, also with a 1-second duration, sends the view back to 0.
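In rough code form, the sequence looks like this (a Swift sketch, with animatingView standing in for the view being moved; the discussion below uses the Objective-C API names):

// Kick off the first animation: move center.y from 0 to 100 over 1 second.
UIView.animate(withDuration: 1.0, delay: 0, options: [.beginFromCurrentState], animations: {
    self.animatingView.center.y = 100
}, completion: nil)

// Half a second later, send the view back to 0 with a second 1-second animation.
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    UIView.animate(withDuration: 1.0, delay: 0, options: [.beginFromCurrentState], animations: {
        self.animatingView.center.y = 0
    }, completion: nil)
}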

In iOS 7 and earlier, using the UIViewAnimationOptionBeginFromCurrentState option and the default animation curve (UIViewAnimationOptionCurveEaseInOut), the complete animation would look like this:

Non-Additive Animation Curve

At 0.5 seconds, when the second animation block is called, a new CABasicAnimation gets added to the animating view’s CALayer with the key “position” and the keypath “position”, replacing the previous one still in flight. The starting position for the new animation is the animating view’s current position — that is, the position of its layer’s presentationLayer.

The resulting 1.5-second animation is continuous, in the sense that the view does not jump to a new position. But the speed changes abruptly in both magnitude and direction at 0.5 seconds. Not so pretty.

In iOS 8, however, the same sequence produces a very different animation — see the dotted blue line below:

Additive Animation Curve (Ease In, Ease Out)

At 0.5 seconds, a second CABasicAnimation is added, but with a different key than the first one — the system happens to use “position-2” — and both animations are allowed to run their course. Because both animations have the additive property set to YES, the position changes are added together. (The red and yellow lines don’t add up to the blue line because the animation values are relative — to the model position — and not absolute; the actual math involves positive and negative values that offset each other.)

The result is a smooth curve that, in this example, peaks at 0.75 seconds, as the animating view overshoots and then reverses itself.

You can continue to add animations in rapid succession using animateWithDuration:, and the layer will accumulate additive animations with keys like position-3, position-4, etc. The visual effect is generally quite smooth and natural.

This new behavior isn’t so pretty, however, for animations using a linear timing function. In this simple example, if the UIViewAnimationOptionCurveLinear option were used instead of the default ease-in-ease-out, the additive animations would cancel each other out, resulting in the view being “frozen” until the previous animation ended. This definitely looks weird. See the 0.5-second plateau in the blue curve:

Additive Animation Curve, Linear

Since you apparently can’t opt out of additive animations in iOS 8, you’d need to do a bit of extra work to restore the old, non-additive behavior. In the simplest case, you could simply rip out any in-flight animations yourself before the new call to animateWithDuration:, making sure to manually reset the layer’s position to sync up with the presentation layer. Something like this, right before the new animation block, seems to work:

CALayer *presLayer = (CALayer *)self.animatingView.layer.presentationLayer;
self.animatingView.layer.position = [presLayer position];
[self.animatingView.layer removeAllAnimations];

In most cases, though, I assume the additive animations will be welcome as an easy way to smooth out overlapping transitions.

Check out this WWDC 2014 video for more on additive animations in iOS 8.