Swift’s Codable and Stringly-Typed JSON Objects

So. Let’s say you’re in charge of making an iPhone app and a wearable device that work together to track your workouts and share them on social media.

And let’s say you expect the app and the device to send and receive, respectively, a fixed set of JSON commands with very different parameters in their payloads.

Each command will have a special key that lets us know which command type we’re dealing with. (This key is commonly something like type.) But beyond that, the structure of these various command types will be quite unrelated.

For example, here’s a hypothetical “Start Workout” command, which, in addition to command_type, has three additional fields:

    {
        "command_type": "start_workout",
        "location": "Gary's Gym",
        "date": "2020-03-16 19:45:13 +0000",
        "intensityLevel": 5
    }

And here’s an “End Workout” command, which has no extra info:

    {
        "command_type": "end_workout"
    }

And here’s a “Share Workout” command, which has one additional field:

    {
        "command_type": "share_workout",
        "service": "twitter"
    }

The challenge here is that you don’t know the type of command to parse from the JSON until you’ve read a string from a previously agreed-upon key. (In this example, that key is command_type.) This string completely determines which other fields (if any) to expect–and, more broadly, what type of command you are dealing with.

It’s not such an uncommon scenario. You might also imagine, say, a push notification whose payload contains a key describing the event that triggered the push (e.g. "push_type": "account_updated") and several other key-value pairs that are totally specific to that push trigger.

How can we use Swift to simplify the task of encoding and decoding these “stringly-typed” JSON commands in a type-safe way?

Obviously, the Codable protocol is a handy choice here. Used with the JSONEncoder and JSONDecoder types, we’ll get a lot of the encoding and decoding implementation for free.

But in this case, because the object we’re trying to represent — let’s call it a Command — takes many heterogeneous forms, there’s some additional complexity.

Of course, we could always just create a single type, conforming to Codable, that includes all of the properties of all of the command types. For example:

struct Command: Codable {
    let commandType: CommandType
    let location: String?
    let date: String?
    let intensityLevel: Int?
    let service: Service?
}

This doesn’t feel so great, though, if only because we’d be forced to make all of these properties Optional, since any given command type might only use a small subset.

If, instead, we made a totally separate type, conforming to Codable, for every command type, this solves the problem of unused properties. But in this arrangement, we’d need to look into each JSON object in advance, inspecting the command_type key, before deciding which of these unrelated types to pass into JSONDecoder.decode(_:from:).
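To make that two-step inspection concrete, here’s a minimal sketch of the “peek” approach, using a hypothetical CommandEnvelope type (the name is mine) that decodes only the command_type key:

```swift
import Foundation

// Hypothetical "envelope" type that decodes nothing but the type key.
struct CommandEnvelope: Decodable {
    let commandType: String

    enum CodingKeys: String, CodingKey {
        case commandType = "command_type"
    }
}

let json = Data(#"{"command_type": "share_workout", "service": "twitter"}"#.utf8)
let envelope = try! JSONDecoder().decode(CommandEnvelope.self, from: json)

// We could now switch on envelope.commandType to decide which of the
// unrelated Codable types to pass to JSONDecoder.decode(_:from:).
print(envelope.commandType) // "share_workout"
```

This works, but every parse site ends up making two decoding passes and carrying the dispatch logic itself.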

Alternatively, we could make several classes that descend from a common Codable ancestor — and I’ve seen some good implementations of this inheritance-based setup, including one here. This makes a lot of sense if the various types share certain properties in common.

That approach has one disadvantage, though: we wouldn’t be able to exhaustively switch over the resulting subclasses, which means we might forget to handle new command types as they are added. (Unlike Kotlin, Swift doesn’t have a concept of “sealed classes,” so the compiler can’t check that we’ve exhaustively handled every possible subclass.)

For this exercise, we’d really like command parsing to look like this:

    do {
        let command = try JSONDecoder().decode(Command.self, from: data)
        switch command {
        case .startWorkout(let workout):
            print("Starting workout at \(workout.location)")
        case .endWorkout:
            print("Ending workout")
        case .shareWorkout(let service):
            print("Sharing workout to \(service)")
        }
    } catch {
        // Handle the error
    }

In this approach, we’d like to make a single call to JSONDecoder.decode(_:from:), and then switch on all of the possible cases to extract the specific, fully-typed payload for each case. (There’s no need for a default branch here; if the command type is unrecognized, we can handle that in the catch block.)

We can make this possible by declaring Command to be an enum whose cases have associated values, each of which (if it exists) conforms to Codable.

So the overarching type becomes something like this:

enum Command {
    case startWorkout(Workout)
    case endWorkout
    case shareWorkout(to: ShareService)
}

With the associated value types looking like this:

struct Workout: Codable {
    let location: String
    let date: String
    let intensityLevel: Int
}

struct ShareService: Codable {
    enum Service: String, Codable {
        case facebook, instagram, twitter
    }

    let service: Service
}
Now, we just need to write Command’s encode(to:) and init(from:) to make this happen.

Let’s start with the decoding.

First off, we’ll create a single new type conforming to CodingKey — called CommandKeys — that specifies the all-important key used to determine which kind of command we are parsing.

The second type we’ll create is CommandType, which specifies all the allowable values this key can have.

extension Command {
    enum CommandKeys: String, CodingKey {
        case commandType = "command_type"
    }

    enum CommandType: String, Codable {
        case start = "start_workout"
        case end = "end_workout"
        case share = "share_workout"
    }
}

With that preparation, all we need to do is implement init(from:), which does the actual parsing. Here’s the whole thing:

extension Command: Decodable {
    enum CommandKeys: String, CodingKey {
        case commandType = "command_type"
    }

    enum CommandType: String, Codable {
        case start = "start_workout"
        case end = "end_workout"
        case share = "share_workout"
    }

    init(from decoder: Decoder) throws {
        let values = try decoder.container(keyedBy: CommandKeys.self)
        let commandType = try values.decode(CommandType.self,
                                            forKey: .commandType)
        switch commandType {
        case .start:
            self = .startWorkout(try Workout(from: decoder))
        case .end:
            self = .endWorkout
        case .share:
            self = .shareWorkout(to: try ShareService(from: decoder))
        }
    }
}

The first two lines are standard for any custom implementation of Decodable’s init(from:): Get a keyed container, and start decoding values for keys — in this case, the commandType key.

At that point, we’re almost done. We just switch over the resulting enum and decode the object we need as an associated value. For example, the associated value type for the start command is a Workout — which itself is fully decodable, so we just need to call Workout(from: decoder).

Encoding is equally easy. We start by encoding the all-important commandType key, and finish by encoding the entire associated value (if there is one).

extension Command: Encodable {
    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CommandKeys.self)
        switch self {
        case .startWorkout(let workoutInfo):
            try container.encode(CommandType.start, forKey: .commandType)
            try workoutInfo.encode(to: encoder)
        case .endWorkout:
            try container.encode(CommandType.end, forKey: .commandType)
        case .shareWorkout(let shareInfo):
            try container.encode(CommandType.share, forKey: .commandType)
            try shareInfo.encode(to: encoder)
        }
    }
}
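To sanity-check the whole thing, here’s a condensed, self-contained version of the types above, followed by a round trip from JSON to Command and back:

```swift
import Foundation

// Condensed versions of the post's types, gathered for a round-trip check.
struct Workout: Codable {
    let location: String
    let date: String
    let intensityLevel: Int
}

struct ShareService: Codable {
    enum Service: String, Codable { case facebook, instagram, twitter }
    let service: Service
}

enum Command {
    case startWorkout(Workout)
    case endWorkout
    case shareWorkout(to: ShareService)
}

extension Command: Codable {
    enum CommandKeys: String, CodingKey { case commandType = "command_type" }
    enum CommandType: String, Codable {
        case start = "start_workout", end = "end_workout", share = "share_workout"
    }

    init(from decoder: Decoder) throws {
        let values = try decoder.container(keyedBy: CommandKeys.self)
        switch try values.decode(CommandType.self, forKey: .commandType) {
        case .start: self = .startWorkout(try Workout(from: decoder))
        case .end:   self = .endWorkout
        case .share: self = .shareWorkout(to: try ShareService(from: decoder))
        }
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CommandKeys.self)
        switch self {
        case .startWorkout(let info):
            try container.encode(CommandType.start, forKey: .commandType)
            try info.encode(to: encoder)
        case .endWorkout:
            try container.encode(CommandType.end, forKey: .commandType)
        case .shareWorkout(let info):
            try container.encode(CommandType.share, forKey: .commandType)
            try info.encode(to: encoder)
        }
    }
}

// Round trip: JSON -> Command -> JSON.
let json = Data(#"{"command_type": "share_workout", "service": "twitter"}"#.utf8)
let command = try! JSONDecoder().decode(Command.self, from: json)
let reencoded = try! JSONEncoder().encode(command)
let dict = try! JSONSerialization.jsonObject(with: reencoded) as! [String: Any]
print(dict["command_type"] as! String) // "share_workout"
```

Both the type key and the associated value survive the round trip, which is a nice property to pin down in a unit test.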

(Note: After writing this up, I found this blog post that beautifully explains this same concept of coding heterogeneous JSON. The author’s example assumes each object type’s properties are gathered under an attributes property — this example shows what you might do if these properties were instead at the top level.)

Hello Triangle, Meet Swift! (And Wide Color)

Two triangles rendered with Metal

The colors at left were gamma encoded after interpolation; those on the right were not.

For an iOS developer wanting to get their feet wet with Metal, a natural place to start is Apple’s Hello Triangle demo.

It is truly the “Hello World” of Metal. All it does is render a two-dimensional triangle, whose corners are red, green and blue, into an MTKView. The vertex and fragment shaders are about as simple as you can get. Even so, it’s a great way to start figuring out how the pieces of the pipeline fit together.

The only thing is—it’s written in Objective-C.

As a Swift developer, I found myself wishing I could see a version of Hello Triangle in that language. So I decided to convert it to Swift. (The conversion itself was pretty straightforward: You can see the code in this repo.)

To spice things up a little, I also updated the demo to support wide color, which in Apple’s ecosystem means using the Display P3 color space. (Wide color refers to the ability to display colors outside of the traditional gamut, known as sRGB; it’s something I explored in this earlier post.)

Supporting wide color in Hello Triangle is conceptually simple: Instead of setting the vertices to pure red, green and blue as defined in sRGB, set them to the pure red, green and blue as defined in Display P3. On devices that support it, the corners of the triangle will appear brighter and more vivid.

But as a Metal novice, I found it a bit tricky. In macOS, the MTKView class has a settable colorspace property, which presumably makes things fairly simple—but in iOS, that property isn’t available.

For that reason, it wasn’t immediately clear to me where in the Metal pipeline to make adjustments for wide color support.

I found an answer in this excellent Stack Overflow reply and related blog post. The author explains how to convert Display P3 color values (which range from 0.0 to 1.0, but actually refer to a wider-than-normal color space) to extended sRGB values (which are comparable to normal sRGB values except that they can be negative or greater than 1.0) with the help of a matrix transform. The exact math depends on the colorPixelFormat of the MTKView, which determines where the gamma gets applied.

OK, so about gamma: the gist of gamma correction is that color intensities are often passed through a non-linear function before saving an image. Because most images have only 256 luminance levels, and the human eye is very sensitive to changes in dark colors, the gamma function helps store more darks, sacrificing bright intensities. The values are then passed through an inverse function when presenting on a display.

Because gamma encoding is not linear, values that are evenly spaced before encoding (also known as “compression”) won’t be evenly spaced after the encoding. (This blog post has a superb explanation of gamma correction for those who aren’t familiar.)

There’s a lot of implicit gamma encoding and decoding that can happen in the Metal pipeline, and if you manipulate values without knowing which state you’re in, things can get screwed up fast.

As I learned from those earlier blog posts, there are a couple of options for handling gamma when rendering in wide color to a MTKView:

  1. convert your Display P3 color values to their linear (non-encoded) counterpart in sRGB, and allow the MTKView to apply the gamma encoding for you (by choosing pixel format .bgra10_xr_srgb), or
  2. convert the P3 values to linear sRGB and then pre-apply the gamma encoding yourself mathematically, choosing the pixel format .bgra10_xr.

In this demo, this is the difference between converting the left corner’s “extended” color to 1.2249, -0.04203, -0.0196 (which is P3’s reddest red, converted to linear sRGB) and converting it to 1.0930, -0.2267, -0.1501 (P3’s reddest red as sRGB with gamma encoding applied; these are the numbers you would get if you used Apple’s ColorSync utility to convert to sRGB).

While these conversions are probably best done in a shader, I only had three vertices to handle, so I did it in Swift code using matrix math (see below).

After trying options 1 and 2 above, I noticed an interesting difference in the visual results: when I let the MTKView apply gamma compression to my vertex colors (option 1, pictured above at left), the interior of the triangle was much lighter than when I used the technique in option 2 (right).

The issue was this: In option 1, not only were my triangle’s corners being assigned gamma-compressed values, but so were all of the pixels in between.

The way GPUs work is that values in between the defined vertices are computed automatically using a linear interpolation (or, strictly speaking, a barycentric interpolation) before being passed to the fragment shader.

After the interpolation (which occurred in linear space), the gamma encoding moved all of the pixels toward lighter intensities (higher numbers, closer to 1.0).

But when I applied gamma encoding to the converted vertex colors “by hand” (option 2) and set the MTKView to the colorPixelFormat of .bgra10_xr, only the corners were gamma encoded, and the interpolation was effectively done in gamma space. The result was a triangle whose corners were the same color as in option 1, but whose interior values were biased toward the dark end, because of the nature of the gamma function described above.

While neither option is necessarily wrong, you might argue that option 1 (interpolating in linear space) seems more natural, because light is additive in linear space.
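A toy calculation illustrates the gap. Consider a single color channel running from 0.0 at one vertex to 1.0 at another, and compare the midpoint under the two options. (This is a sketch using the standard sRGB transfer function, not the GPU’s exact pipeline.)

```swift
import Foundation

// Standard sRGB gamma-encoding (compression) function.
func gammaEncode(_ c: Float) -> Float {
    if abs(c) <= 0.0031308 { return c * 12.92 }
    let s: Float = c < 0 ? -1 : 1
    return s * (powf(abs(c), 1 / 2.4) * 1.055 - 0.055)
}

let a: Float = 0.0, b: Float = 1.0

// Option 1: interpolate in linear space, then gamma-encode.
let option1 = gammaEncode((a + b) / 2)               // ≈ 0.735 (lighter)

// Option 2: gamma-encode the endpoints, then interpolate.
let option2 = (gammaEncode(a) + gammaEncode(b)) / 2  // 0.5 (darker)

print(option1, option2)
```

The endpoints agree either way, but every in-between value is brighter under option 1 — exactly the difference visible in the triangle’s interior.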

Some specifics below:

Using this matrix and conversion functions from endavid

private static let linearP3ToLinearSRGBMatrix: matrix_float3x3 = {
    let col1 = float3([ 1.2249, -0.2247, 0])
    let col2 = float3([-0.0420,  1.0419, 0])
    let col3 = float3([-0.0197, -0.0786, 1.0979])
    return matrix_float3x3([col1, col2, col3])
}()

extension float3 {
    var gammaDecoded: float3 {
        let f = { (c: Float) -> Float in
            if abs(c) <= 0.04045 {
                return c / 12.92
            }
            return sign(c) * powf((abs(c) + 0.055) / 1.055, 2.4)
        }
        return float3(f(x), f(y), f(z))
    }

    var gammaEncoded: float3 {
        let f = { (c: Float) -> Float in
            if abs(c) <= 0.0031308 {
                return c * 12.92
            }
            return sign(c) * (powf(abs(c), 1/2.4) * 1.055 - 0.055)
        }
        return float3(f(x), f(y), f(z))
    }
}

…and a conversion function like this…

func toSRGB(_ p3: float3) -> float4 {
    // Note: gamma decoding not strictly necessary in this demo
    // because 0 and 1 always decode to 0 and 1
    let linearSrgb = p3.gammaDecoded * linearP3ToLinearSRGBMatrix
    let srgb = linearSrgb.gammaEncoded
    return float4(x: srgb.x, y: srgb.y, z: srgb.z, w: 1.0)
}

…the color adjustment went like this:

let p3red = float3([1.0, 0.0, 0.0])
let p3green = float3([0.0, 1.0, 0.0])
let p3blue = float3([0.0, 0.0, 1.0])

let vertex1 = Vertex(position: leftCorner, color: toSRGB(p3red))
let vertex2 = Vertex(position: top, color: toSRGB(p3green))
let vertex3 = Vertex(position: rightCorner, color: toSRGB(p3blue))

let myWideColorVertices = [vertex1, vertex2, vertex3]

I hope this port helps someone out there. And huge thanks to David Gavilan for his informative blog posts and for his incredibly helpful feedback on this post.

Hello Triangle Swift

Adventures in Wide Color: An iOS Exploration

I used to think the reddest red around was 0xFF0000. Not much more to say.

And then a few weeks ago, I watched one of Apple’s videos about working with Wide Color. It drove home the point that many visible colors simply can’t be rendered on certain devices, and, by implication, that there was a whole world of reds (and oranges and greens) that I just hadn’t been seeing on my iPhone 6s.

A few days later, I got my iPhone X — and suddenly I could capture these formerly hidden colors, and see them rendered up close, on a gorgeous OLED display.

It was like a veil had been lifted on my perception and appreciation of color.

To help me understand wide color better, I decided to write an experimental iOS app to identify these colors around me, in real time. The basic idea, inspired by this sample code from Apple, was this: Make an app that streams live images from the camera and, for each frame, highlights all the colors outside the standard range for legacy displays. Colors inside the standard range would be converted to grayscale; colors outside would be allowed to pass through unchanged. (Skip to the end for example screenshots.)

First, some background: Until the release of the iPhone 7, iPhone screens used the standard Red Green Blue (sRGB) color space, which is more than 20 years old. Starting with iPhone 7, iPhones began supporting the Display P3 color space, a superset of sRGB that can display more of the visual color spectrum.

How much more? Here’s a 3-D rendering of how they compare:

P3’s color gamut is about 25% larger than sRGB’s

As this makes clear, while P3 and sRGB converge near the “poles” of white and black, P3 extends much further near the “equator,” where the brightest colors lie. (To be clear, both spaces only cover a portion of all colors visible to the human eye.)

While the “reddest” corner of the sRGB gamut (the lower left of the inner cube) would be represented in sRGB by the color coordinates (r: 1.0, g: 0.0, b: 0.0) — where 1.0 represents the maximum value of the space’s red channel — the same point converted into P3 space would be (r: 0.9175, g: 0.2003, b: 0.1387).

Conversely, the corresponding corner of the outer P3 gamut, described in that space as (1.0, 0.0, 0.0), lies outside of sRGB and cannot be expressed in that color space at all.

But enough theory. Back to my project. Here’s a rough outline of what I did:

  • Set up an AVCaptureSession that streams pixel buffers from the camera, in the P3 color space, if it’s supported.
  • Created a CIContext whose workingColorSpace is Apple’s extended sRGB color space. Using the extended sRGB format is crucial because “wide” color information will be both preserved and easily identifiable after converting from P3. Unlike sRGB, which clamps values to a range from 0.0 to 1.0 and thus discards any wide-color information, extended sRGB allows values outside of that range, which leaves open the possibility that wide-color-aware displays can use them.
  • Wrote a Metal fragment shader that allows wide colors to pass through unchanged, but converts “narrow” colors to a shade of gray.
  • Used the CIContext and a custom CIFilter, built with the Metal shader, to take each pixel buffer in the stream, filter it, and render it to the screen.

Step 1: Creating the AVCaptureSession

Apple’s AVCam sample project is an excellent template for how to capture images from the camera, and I was able to adapt it for my project with few changes.

In my case, though, I needed more than what the sample code’s AVCaptureVideoPreviewLayer could provide: I needed access to the video capture itself, so I could process each pixel buffer in real time. At the same time, though, I needed to make sure I was preserving wide-color information.

This added a small complication, which forced me to understand how an AVCaptureSession decides whether or not to capture wide color by default.

Left to its own devices (pun intended), an AVCaptureSession will try to do the “right thing” as relates to wide color, thanks to a tongue-twisting property introduced in iOS 10 called automaticallyConfiguresCaptureDeviceForWideColor. When set to true (the default), the session automatically sets the device’s active color space to P3 if a) the device supports wide color and b) the session configuration suggests that wide color makes sense.

But when, according to the default behavior, does wide color “make sense”?

For starters, an AVCapturePhotoOutput must be attached to the AVCaptureSession. But if you also attach AVCaptureVideoDataOutput — as I did, because I wanted to capture a live stream — you need to be careful. Because Display P3 is not well-supported in video, the automatic configuration will revert to sRGB if it thinks the destination is a movie file.

The trick for staying in the P3 color space, in this case, is to make your non-movie intentions clear by doing this:

session.sessionPreset = .photo

With that done, I confirmed the capture of wide color by checking that, once session.commitConfiguration was called, device.activeColorSpace changed from sRGB to P3_D65.

Step 2: Creating the CIContext

It’s easy to lose wide-color information when rendering an image. As Mike Krieger of Instagram points out in this great blog post, iOS 10 introduced a piece of wide-color-aware API called UIGraphicsImageRenderer to help with the rendering of wide-color images in Core Graphics.

With Core Image, on the other hand, you need to make sure your CIContext’s working color space and pixel format are configured correctly.

Here’s the setup that worked for me: the working color space had to support extended sRGB, as you’d expect (to handle values below 0.0 or above 1.0), and the pixel format had to use floats (for similar reasons).

private lazy var ciContext: CIContext = {
    let space = CGColorSpace(name: CGColorSpace.extendedSRGB)
    let format = NSNumber(value: kCIFormatRGBAh) // full-float pixels
    var options = [String: Any]()
    options[kCIContextWorkingColorSpace] = space
    options[kCIContextWorkingFormat] = format
    return CIContext(options: options)
}()

Set up in this way, a CIContext can preserve extended sRGB data when it renders an image.

Step 3: Creating the CIFilter

The next step was building a filter to convert “non-wide” pixels to shades of gray. I decided an interesting way to do this would be to create a custom CIFilter that was backed by a Metal shader. The basic steps were:

  1. Write the Metal shader
  2. Create a CIKernel from the shader
  3. Create a CIFilter subclass to apply the CIKernel

Steps 2 & 3 are pretty well covered in this WWDC 2017 video. As for creating the shader, I was able to borrow some code from Apple’s very cool Color Gamut Showcase sample app.

It’s wonderfully simple: If the inbound color is greater than 1.0 or below 0.0, leave it alone. Otherwise, convert it to grayscale.

static bool isWideGamut(float value) {
    return value > 1.0 || value < 0.0;
}

namespace coreimage {
    float4 wide_color_kernel(sampler src) {
        float4 color = src.sample(src.coord());
        if (isWideGamut(color[0])
            || isWideGamut(color[1])
            || isWideGamut(color[2])) {
            return color;
        } else {
            float3 weights = float3(0.3, 0.59, 0.11);
            float luminance = dot(weights, color.rgb);
            return float4(float3(luminance), 1.0);
        }
    }
}
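The grayscale conversion in the else branch is just a weighted sum of the channels (the familiar Rec. 601 luma weights). In plain Swift, the same computation looks like this — an illustrative sketch of the shader’s math, not code from the app:

```swift
// Rec. 601 luma: a weighted sum of the red, green, and blue channels.
func luminance(r: Float, g: Float, b: Float) -> Float {
    return 0.3 * r + 0.59 * g + 0.11 * b
}

// Pure red contributes only its 0.3 weight...
let redLuma = luminance(r: 1, g: 0, b: 0)
// ...so a "narrow" red renders as a fairly dark gray.
print(redLuma) // 0.3
```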

Step 4: Putting It Together

With that working, the last step was to grab each pixel buffer as it arrives, apply the filter, and then display it to the screen. This involved implementing an AVCaptureVideoDataOutputSampleBufferDelegate callback method, which I set up to be called on a dedicated, serial background queue.

After turning the CMSampleBuffer into a CIImage, I moved to a dedicated rendering queue and used my CIContext to render the CIImage to a CGImage, which then became a UIImage and was displayed on the screen, thanks to a plain old UIImageView.

Some disclaimers on this last part: I didn’t spend much time worrying about performance here, and it’s quite possible that on slow devices, the render queue could fail to keep up and become swamped with rendering tasks. In the real world, there would need to be a way to slow down the capture frame rate if the renderer couldn’t keep up.

Also, there are surely more efficient ways to display each CMSampleBuffer than creating a UIImage and assigning it to a UIImageView. For one thing, a more performant implementation would resize the image to the exact size of the display view during the rendering pass. (This sample Apple code turned each pixel buffer into an OpenGL ES texture, which frankly seemed like a lot of work for this little experiment.) I’m interested to hear how others would have approached this!

Up and Running

In any event, the experiment app ran very smoothly on my iPhone X: Core Image seemed more than capable of handling the 30 camera frames per second it was being asked to render. Meanwhile, I was surprised how much wide color I found in the world — even on a gray day in downtown Manhattan.

You can see a few examples of screenshots below.

And here’s a link to my WideColorViewer project.

(Cross posted from “Adventures in Wide Color: An iOS Exploration” on my Medium blog.)

How to really slow down a Core Data fetch

Core Data is a bit of a mysterious thing. Sometimes, patterns that seem helpful in theory can be disastrous in practice. Consider, for example, the instinct not to save changes to disk, or to do so very infrequently, for fear of slowing down processing or blocking the main thread. This is something I’ve done, and seen others do, in projects using Core Data.

What’s easy to overlook is that unsaved changes in Core Data can make fetch requests slower. Sometimes, orders of magnitude slower. Which could undo all the benefits of deferred saves. Here’s a real-life example with some numbers.

The code below was used to retrieve all the Word entities in an object graph whose string attribute was equal to one of the strings in an array, stringsToFetch. There were 28,720 words in the graph, and the fetch matched 2,044 of them.

NSFetchRequest *fetch = [NSFetchRequest fetchRequestWithEntityName:@"Word"];
fetch.predicate = [NSPredicate predicateWithFormat:@"string IN %@", stringsToFetch];
NSArray *results = [context executeFetchRequest:fetch error:nil];

In one test, the objects had been inserted in the context but save: had not yet been called. In the other, the inserted objects had been saved to disk. (The code was run on an iPad Air.)

    Saved objects    Unsaved objects    Fetch time
    0                28,720             4.55 secs
    28,720           0                  0.07 secs

The performance of the fetch with unsaved changes was, in a word, hideous. This is obviously an extreme case — nearly 30,000 unsaved insertions — but the effect was quite linear. Even 3,000 or so unsaved objects slowed the fetch down to a still-needlessly-long 0.5 seconds.

The reason for the slowdown is clear if you run the same code using the Time Profiler. When the unsaved objects are present, about 70% of the processor time is spent on string comparisons that descend from the call to executeFetchRequest:, in which our predicate is being evaluated. So in essence there are two fetches: One is a super-fast SQL query, and the other is a ponderously slow series of in-memory string comparisons.

(Screenshot: a Time Profiler trace of the fetch, dominated by string comparisons.)

Keep in mind: You won’t uncover this problem by using the “Fetch Duration” data from the Core Data Fetches tool in Instruments. That’s because this tool seems to return the duration of the SQL query only: It doesn’t account for the time that was spent evaluating in-memory objects as well. You need to put a timer around the actual call to executeFetchRequest: to see the true processing time.
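A minimal wrapper does the job. (This is my own sketch in Swift, with a stand-in workload where the actual executeFetchRequest: call would go.)

```swift
import Foundation

// Times any block with a wall-clock timer and reports the duration.
func measuringFetch<T>(_ label: String, _ block: () throws -> T) rethrows -> T {
    let start = Date()
    let result = try block()
    let elapsed = Date().timeIntervalSince(start)
    print("\(label): \(String(format: "%.2f", elapsed)) secs")
    return result
}

// Usage — the Core Data fetch would go inside the block:
let results = measuringFetch("fetch") {
    (0..<1_000).filter { $0 % 2 == 0 }  // stand-in workload
}
```

Because the timer brackets the whole call, it captures both the SQL query and any in-memory predicate evaluation.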

Every scenario is different, but I highly suspect that for some people who complain that “Core Data fetching is slow”, the problem isn’t a trip to disk, but the opposite: too many inserted, updated and/or deleted objects in memory.

Additive animations: animateWithDuration in iOS 8

There’s new behavior involving animateWithDuration: in iOS 8 that can help make certain “interruptible” animations a lot smoother.

The classic use case is a togglable animation that can be reversed mid-flight, like a drawer that opens or closes when a button is tapped. The gist of the change is that in iOS 8, when calls to animateWithDuration: overlap, any previously scheduled, in-flight animations on the same properties will no longer be yanked out of the view’s layer, but instead be allowed to finish even as the new animation takes effect and is blended with the old one(s). (For properties that adopt this additive animation behavior, it will happen whether or not you use the UIViewAnimationOptionBeginFromCurrentState option.)

Consider the example of a view whose center.y is being animated from 0 to 100 over 1 second using animateWithDuration:. Halfway though, at the 0.5-second mark, a second animateWithDuration: block, also with a 1-second duration, sends the view back to 0.

In iOS 7 and earlier, using the UIViewAnimationOptionBeginFromCurrentState option and the default animation curve (UIViewAnimationOptionCurveEaseInOut), the complete animation would look like this:

Non-Additive Animation Curve

At 0.5 seconds, when the second animation block is called, a new CABasicAnimation gets added to the animating view’s CALayer with the key position and the keypath position, replacing the previous one still in flight. The starting position for the new animation is the animating view’s current position — that is, the position of its layer’s presentationLayer.

The resulting 1.5-second animation is continuous, in the sense that the view does not jump to a new position. But the speed changes abruptly in both magnitude and direction at 0.5 seconds. Not so pretty.

In iOS 8, however, the same sequence produces a very different animation — see the dotted blue line below:

Additive Animation Curve (Ease In, Ease Out)

At 0.5 seconds, a second CABasicAnimation is added, but with a different key than the first one — the system happens to use position-2 — and both animations are allowed to run their course. Because both animations have the additive property set to YES, the position changes are added together. (The red and yellow lines don’t add up to the blue line because the animation values are relative — to the model position — and not absolute; the actual math involves positive and negative values that offset each other.)

The result is a smooth curve that, in this example, peaks at 0.75 seconds, as the animating view overshoots and then reverses itself.

You can continue to add animations in rapid succession using animateWithDuration:, and the layer will accumulate additive animations with keys like position-3, position-4, etc. The visual effect is generally quite smooth and natural.

This new behavior isn’t so pretty, however, for animations using a linear timing function. In this simple example, if the UIViewAnimationOptionCurveLinear option were used instead of the default ease-in-ease-out, the additive animations would cancel each other out, resulting in the view being “frozen” until the previous animation ended. This definitely looks weird. See the 0.5-second plateau in the blue curve:

Additive Animation Curve, Linear
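The plateau falls out of a little arithmetic. In a simplified model of additive animation (a sketch, not Core Animation’s exact implementation), each animation contributes an offset that decays linearly from (fromValue − toValue) to zero, added to the layer’s model value — which, after the second call, is the final target of 0. During the overlap, the two offsets sum to a constant:

```swift
// Simplified additive model: each animation's contribution decays
// linearly from (from - to) to zero over its duration.
func contribution(from: Double, to: Double,
                  start: Double, duration: Double, at t: Double) -> Double {
    if t <= start { return from - to }
    if t >= start + duration { return 0 }
    return (from - to) * (1 - (t - start) / duration)
}

let model = 0.0  // the layer's model value after the second call
for t in [0.5, 0.6, 0.7, 0.8, 0.9, 1.0] {
    let a1 = contribution(from: 0, to: 100, start: 0, duration: 1, at: t)
    let a2 = contribution(from: 100, to: 0, start: 0.5, duration: 1, at: t)
    print(t, model + a1 + a2)  // stays at 50.0 throughout the overlap
}
```

Once the first animation finishes at the 1.0-second mark, only the second contribution remains, and the view finally moves from 50 back toward 0.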

Since you apparently can’t opt out of additive animations in iOS 8, you’d need to do a bit of extra work to restore the old, non-additive behavior. In the simplest case, you could simply rip out any in-flight animations yourself before the new call to animateWithDuration:, making sure to manually reset the layer’s position to sync up with the presentation layer. Something like this, right before the new animation block, seems to work:

CALayer *presLayer = (CALayer *)self.animatingView.layer.presentationLayer;
self.animatingView.layer.position = [presLayer position];
[self.animatingView.layer removeAllAnimations];

In most cases, though, I assume the additive animations will be welcome as an easy way to smooth out overlapping transitions.

Check out this WWDC 2014 video for more on additive animations in iOS 8.