Adventures in Wide Color: An iOS Exploration

I used to think the reddest red around was 0xFF0000. Not much more to say.

And then a few weeks ago, I watched one of Apple’s videos about working with Wide Color. It drove home the point that many visible colors simply can’t be rendered on certain devices, and, by implication, that there was a whole world of reds (and oranges and greens) that I just hadn’t been seeing on my iPhone 6s.

A few days later, I got my iPhone X — and suddenly I could capture these formerly hidden colors, and see them rendered up close, on a gorgeous OLED display.

It was like a veil had been lifted on my perception and appreciation of color.

To help me understand wide color better, I decided to write an experimental iOS app to identify these colors around me, in real time. The basic idea, inspired by this sample code from Apple, was this: Make an app that streams live images from the camera and, for each frame, highlights all the colors outside the standard range for legacy displays. Colors inside the standard range would be converted to grayscale; colors outside would be allowed to pass through unchanged. (Skip to the end for example screenshots.)

First, some background: Until the release of the iPhone 7, iPhone screens used the standard Red Green Blue (sRGB) color space, which is more than 20 years old. Starting with the iPhone 7, iPhones began supporting the Display P3 color space, a superset of sRGB that covers more of the visible color spectrum.

How much more? Here’s a 3-D rendering of how they compare:

P3’s color gamut is about 25% larger than sRGB’s

As this makes clear, while P3 and sRGB converge near the “poles” of white and black, P3 extends much further near the “equator,” where the brightest colors lie. (To be clear, both spaces only cover a portion of all colors visible to the human eye.)

While the “reddest” corner of the sRGB gamut (the lower left of the inner cube) would be represented in sRGB by the color coordinates (r: 1.0, g: 0.0, b: 0.0) — where 1.0 represents the maximum value of the space’s red channel — the same point converted into P3 space would be (r: 0.9175, g: 0.2003, b: 0.1387).

Conversely, the corresponding corner of the outer P3 gamut, described in that space as (1.0, 0.0, 0.0), lies outside of sRGB and cannot be expressed in that color space at all.
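If you want to check those numbers yourself, here’s a minimal sketch, assuming iOS 10 or later and a playground (the variable names are mine, and the printed values will be approximate):

import UIKit

// Convert sRGB's "reddest red" into the Display P3 color space.
let sRGBRed = UIColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0).cgColor
if let p3Space = CGColorSpace(name: CGColorSpace.displayP3),
   let converted = sRGBRed.converted(to: p3Space, intent: .defaultIntent, options: nil) {
    // Prints something close to [0.9175, 0.2003, 0.1387, 1.0]
    print(converted.components ?? [])
}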

But enough theory. Back to my project. Here’s a rough outline of what I did:

  • Set up an AVCaptureSession that streams pixel buffers from the camera, in the P3 color space, if it’s supported.
  • Created a CIContext whose workingColorSpace is Apple’s extended sRGB color space. Using the extended sRGB format is crucial because “wide” color information will be both preserved and easily identifiable after converting from P3. Unlike sRGB, which clamps values to a range from 0.0 to 1.0 and thus discards any wide-color information, extended sRGB allows values outside of that range, which leaves open the possibility that wide-color-aware displays can use them.
  • Wrote a Metal shader that allows wide colors to pass through unchanged, but converts “narrow” colors to a shade of gray.
  • Used the CIContext and a custom CIFilter, built with the Metal shader, to take each pixel buffer in the stream, filter it, and render it to the screen.

Step 1: Creating the AVCaptureSession

Apple’s AVCam sample project is an excellent template for how to capture images from the camera, and I was able to adapt it for my project with few changes.

In my case, though, I needed more than what the sample code’s AVCaptureVideoPreviewLayer could provide: I needed access to the video capture itself, so I could process each pixel buffer in real time. At the same time, though, I needed to make sure I was preserving wide-color information.

This added a small complication, which forced me to understand how an AVCaptureSession decides whether or not to capture wide color by default.

Left to its own devices (pun intended), an AVCaptureSession will try to do the “right thing” as it relates to wide color, thanks to a tongue-twisting property introduced in iOS 10 called automaticallyConfiguresCaptureDeviceForWideColor. When set to true (the default), the session automatically sets the device’s active color space to P3 if a) the device supports wide color and b) the session configuration suggests that wide color makes sense.

But when, according to the default behavior, does wide color “make sense”?

For starters, an AVCapturePhotoOutput must be attached to the AVCaptureSession. But if you also attach an AVCaptureVideoDataOutput — as I did, because I wanted to capture a live stream — you need to be careful. Because Display P3 is not well supported in video, the automatic configuration will revert to sRGB if it thinks the destination is a movie file.

The trick for staying in the P3 color space, in this case, is to make your non-movie intentions clear by doing this:

session.sessionPreset = .photo

With that done, I confirmed the capture of wide color by checking that, once session.commitConfiguration was called, device.activeColorSpace changed from sRGB to P3_D65.
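Here’s a rough sketch of that configuration, condensed and with error handling stripped out. Assume it lives in a controller that imports AVFoundation and conforms to AVCaptureVideoDataOutputSampleBufferDelegate; the queue label, force-unwraps, and try! are just to keep the sketch short.

private let session = AVCaptureSession()

private func configureSession() {
    session.beginConfiguration()

    // .photo tells the session this isn't a movie-recording pipeline,
    // which keeps the automatic wide-color configuration from falling back to sRGB.
    session.sessionPreset = .photo

    let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
    session.addInput(try! AVCaptureDeviceInput(device: device))

    // A photo output needs to be attached for wide color to be considered at all...
    session.addOutput(AVCapturePhotoOutput())

    // ...and the video data output is what feeds the live filtering.
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "capture-queue"))
    session.addOutput(videoOutput)

    session.commitConfiguration()

    // Sanity check: on a wide-color-capable device this should now be .P3_D65.
    print(device.activeColorSpace == .P3_D65 ? "Capturing P3" : "Capturing sRGB")
}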

Step 2: Creating the CIContext

It’s easy to lose wide-color information when rendering an image. As Mike Krieger of Instagram points out in this great blog post, iOS 10 introduced a piece of wide-color-aware API called UIGraphicsImageRenderer to help with the rendering of wide-color images in Core Graphics.

With Core Image, on the other hand, you need to make sure your CIContext’s working color space and pixel format are configured correctly.

Here’s the setup that worked for me: the working color space had to support extended sRGB, as you’d expect (to handle values below 0.0 or above 1.0), and the pixel format had to use floats (for similar reasons).

private lazy var ciContext: CIContext = {
    // Extended sRGB keeps out-of-range ("wide") values instead of clamping them...
    let space = CGColorSpace(name: CGColorSpace.extendedSRGB)!
    // ...and a half-float pixel format is needed to actually store those values.
    let format = NSNumber(value: kCIFormatRGBAh)
    var options = [String: Any]()
    options[kCIContextWorkingColorSpace] = space
    options[kCIContextWorkingFormat] = format
    return CIContext(options: options)
}()

Set up in this way, a CIContext can preserve extended sRGB data when it renders an image.

Step 3: Creating the CIFilter

The next step was building a filter to convert “non-wide” pixels to shades of gray. I decided an interesting way to do this would be to create a custom CIFilter that was backed by a Metal shader. The basic steps were:

  1. Write the Metal shader
  2. Create a CIKernel from the shader
  3. Create a CIFilter subclass to apply the CIKernel

Steps 2 & 3 are pretty well covered in this WWDC 2017 video. As for creating the shader, I was able to borrow some code from Apple’s very cool Color Gamut Showcase sample app.

It’s wonderfully simple: If any channel of the inbound color is greater than 1.0 or less than 0.0, leave the color alone. Otherwise, convert it to grayscale.

#include <metal_stdlib>
#include <CoreImage/CoreImage.h> // for the coreimage:: kernel types
using namespace metal;

// A channel is "wide" if it falls outside sRGB's 0.0–1.0 range.
static bool isWideGamut(float value) {
    return value > 1.0 || value < 0.0;
}

extern "C" {
    namespace coreimage {
        float4 wide_color_kernel(sampler src) {
            float4 color = src.sample(src.coord());
            if (isWideGamut(color[0])
                || isWideGamut(color[1])
                || isWideGamut(color[2])) {
                // Wide color: pass it through untouched.
                return color;
            } else {
                // In-gamut color: replace it with its luminance (Rec. 601 weights).
                float3 weights = float3(0.3, 0.59, 0.11);
                float luminance = dot(weights, color.rgb);
                return float4(float3(luminance), 1.0);
            }
        }
    }
}
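For completeness, here’s a rough sketch of steps 2 and 3: loading the kernel from the compiled Metal library and wrapping it in a CIFilter subclass. The class name is mine, and it assumes the shader above has been compiled into the app’s default.metallib, which requires the extra Core Image Metal build flags covered in the WWDC video.

class WideColorFilter: CIFilter {
    var inputImage: CIImage?

    // Load the kernel once from the app's compiled Metal library.
    private static let kernel: CIKernel = {
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "wide_color_kernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let input = inputImage else { return nil }
        // The kernel only reads the pixel it writes, so the ROI is the output rect itself.
        return WideColorFilter.kernel.apply(extent: input.extent,
                                            roiCallback: { _, rect in rect },
                                            arguments: [input])
    }
}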

Step 4: Putting It Together

With that working, the last step was to grab each pixel buffer as it arrives, apply the filter, and then display it on the screen. This involved implementing an AVCaptureVideoDataOutputSampleBufferDelegate callback method, which I set up to be called on a dedicated, serial background queue.

After turning the CMSampleBuffer into a CIImage, I moved to a dedicated rendering queue and used my CIContext to render the CIImage to a CGImage, which then became a UIImage and was displayed on the screen, thanks to a plain old UIImageView.
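In sketch form, with error handling trimmed and my property names standing in (filter is the WideColorFilter sketched above, renderQueue a serial DispatchQueue, imageView a plain UIImageView), the callback looked something like this:

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Called on the capture queue for every frame.
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let image = CIImage(cvPixelBuffer: pixelBuffer)

    renderQueue.async {
        self.filter.inputImage = image
        guard let filtered = self.filter.outputImage,
              let cgImage = self.ciContext.createCGImage(filtered, from: filtered.extent)
        else { return }

        // UIKit work goes back to the main queue.
        DispatchQueue.main.async {
            self.imageView.image = UIImage(cgImage: cgImage)
        }
    }
}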

Some disclaimers on this last part: I didn’t spend much time worrying about performance here, and it’s quite possible that on slow devices, the render queue could fail to keep up and become swamped with rendering tasks. In the real world, there would need to be a way to slow down the capture frame rate if the renderer couldn’t keep up.
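For what it’s worth, one way to do that, sketched below with an arbitrary 15 fps cap and the same device and videoOutput objects from the capture setup, would be to lengthen the minimum frame duration and make sure the video output keeps discarding late frames:

// Cap the capture rate at roughly 15 fps (the number is arbitrary).
if (try? device.lockForConfiguration()) != nil {
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 15)
    device.unlockForConfiguration()
}

// AVCaptureVideoDataOutput drops late frames by default; leave that on.
videoOutput.alwaysDiscardsLateVideoFrames = true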

Also, there are surely more efficient ways to display each CMSampleBuffer than creating a UIImage and assigning it to a UIImageView. For one thing, a more performant implementation would resize the image to the exact size of the display view during the rendering pass. (This sample Apple code turned each pixel buffer into an OpenGL ES texture, which frankly seemed like a lot of work for this little experiment.) I’m interested to hear how others would have approached this!

Up and Running

In any event, the experiment app ran very smoothly on my iPhone X: Core Image seemed more than capable of handling the 30 camera frames per second it was being asked to render. Meanwhile, I was surprised how much wide color I found in the world — even on a gray day in downtown Manhattan.

You can see a few examples of screenshots at the top of this post.

And here’s a link to my WideColorViewer project.

(Cross posted from “Adventures in Wide Color: An iOS Exploration” on my Medium blog.)

How to really slow down a Core Data fetch

Core Data is a bit of a mysterious thing. Sometimes, patterns that seem helpful in theory can be disastrous in practice. Consider, for example, the instinct not to save changes to disk, or to do so very infrequently, for fear of slowing down processing or blocking the main thread. This is something I’ve done, and seen others do, in projects using Core Data.

What’s easy to overlook is that unsaved changes in Core Data can make fetch requests slower. Sometimes, orders of magnitude slower, which can undo all the benefits of deferred saves. Here’s a real-life example with some numbers.

The code below was used to retrieve all the Word entities in an object graph whose string attribute was equal to one of the strings in an array, stringsToFetch. There were 28,720 words in the graph, and the fetch matched 2,044 of them.

NSFetchRequest *fetch = [NSFetchRequest fetchRequestWithEntityName:@"Word"];
fetch.predicate = [NSPredicate predicateWithFormat:@"string IN %@", stringsToFetch];
NSArray *results = [context executeFetchRequest:fetch error:nil];

In one test, the objects had been inserted in the context but save: had not yet been called. In the other, the inserted objects had been saved to disk. (The code was run on an iPad Air.)

Saved objects    Unsaved objects    Fetch time
0                28,720             4.55 secs
28,720           0                  0.07 secs

The performance of the fetch with unsaved changes was, in a word, hideous. This is obviously an extreme case — nearly 30,000 unsaved insertions — but the effect was quite linear. Even 3,000 or so unsaved objects slowed the fetch down to a still-needlessly-long 0.5 seconds.

The reason for the slowdown is clear if you run the same code using the Time Profiler. When the unsaved objects are present, about 70% of the processor time is spent on string comparisons that descend from the call to executeFetchRequest:, in which our predicate is being evaluated. So in essence there are two fetches: One is a super-fast SQL query, and the other is a ponderously slow series of in-memory string comparisons.

(Screenshot: the Time Profiler trace for the fetch with unsaved objects.)

Keep in mind: You won’t uncover this problem by using the “Fetch Duration” data from the Core Data Fetches tool in Instruments. That’s because this tool seems to return the duration of the SQL query only: It doesn’t account for the time that was spent evaluating in-memory objects as well. You need to put a timer around the actual call to executeFetchRequest: to see the true processing time.
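Something as simple as wrapping the call in a wall-clock timer will do; for example:

NSDate *start = [NSDate date];
NSArray *results = [context executeFetchRequest:fetch error:nil];
NSLog(@"Fetch took %.2f secs (%lu objects)",
      -[start timeIntervalSinceNow], (unsigned long)results.count);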

Every scenario is different, but I strongly suspect that for some people who complain that “Core Data fetching is slow”, the problem isn’t a trip to disk, but the opposite: too many inserted, updated and/or deleted objects in memory.