Maps on iOS: Design Explosions #1


Hullo! Design Explosions is a series started in 2014 by me, Jon Bell. This essay is 10,000 words long and uses Google Maps and Apple Maps as a design lesson. It was originally written in 2014, so many of the UI/UX details have changed since then, but it’s still one of the most popular things I’ve ever written. Enjoy!

Google Maps on the left, Apple Maps on the right

Guidelines

Design Explosions are written with three things in mind while we analyze the work of other teams:

  1. The teams that built these products are smart and talented.
  2. There are many things we can’t know without joining the team.
  3. Design Explosions is here to teach, not to judge.

So you’re not going to see us rolling our eyes or shouting fail. We’re going to assume that every flow and feature came into the world after a lot of debate and careful tradeoffs. In short, we’re going to give the designers the benefit of the doubt and focus on pulling lessons from the resulting design.


Hold Up, Hold Up—

Before We Get Started

If you’re reading this to see a head-to-head comparison where we crown a winner, you’re going to be disappointed. That’s not what this is about. Let’s use Medium’s handy pull-quote feature to make sure no one misses this:

This is not a contest, it’s a detailed design lesson.

Also, we’re not going to be spending any time on the accuracy of Apple’s map data, which is considered less accurate than Google’s. We’ll be focusing on the flows in the apps themselves, because those are details the designers can actually control.

Onward!




Canvas Layout

To kick things off, let’s take a look at the main screen for Google Maps, both in high fidelity and as a low fidelity silhouette.

And here’s Apple Maps with the same treatment:

Right off the bat we have an interesting difference to explore: how the canvas is being used. Let’s compare the low fidelity versions side by side:

Google Maps (l) and Apple Maps (r) as silhouettes

The screen size of an iPhone 5 is 640 pixels wide and 1136 pixels tall. In that space, both Apple and Google have a map with several buttons and options. But it’s interesting to look at how differently they approach the canvas.

Apple has a very conservative style (four right angles, no appreciable transparency) whereas Google is cutting everything back as much as possible (floating buttons and transparency). Let’s dive into the math for a moment to see exactly how many pixels they’re using.

The math for the Apple design is easy: 640 horizontal pixels times 920 vertical pixels is 588,800. Google’s is a bit more tricky because they’re using circles and some transparency. But our calculations come to 620,814 pixels. So the Google design has roughly 5% more real estate for the map.
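(If you want to sanity-check that arithmetic, here’s a quick sketch in Swift. The Google figure is our own estimate, since the circles and transparency make an exact pixel count fuzzy.)

    // Back-of-the-envelope math for map real estate on an iPhone 5 (640x1136).
    let appleMap = 640.0 * 920.0   // 588,800 pixels of visible map
    let googleMap = 620_814.0      // our estimated count for Google's design
    let gain = (googleMap - appleMap) / appleMap * 100
    print("Google shows \(Int(gain))% more map")   // prints "Google shows 5% more map"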

Case closed, right? Google has more space for the map, the map is the primary element, so Google is better. Right? Well, it’s not that simple.

Kudos to Google for opening up more pixels for the maps canvas. But there’s also a visual simplicity to the Apple model. It’s not that one or the other is better, it’s that each team is optimizing in different directions. Which leads us to one of the biggest lessons of platform-level UX design, and something many people don’t fully understand: 1st party app design.

1st Party App Design, Y’all

Millions of people have done 3rd party application design. If you expand the pool to people who have sketched out app ideas, the number is probably closer to tens of millions. But how many people have designed 1st party apps? Ones that come on the device rather than being a download? I’d wager the number is less than a thousand.

To arrive at that number, count Google, Apple, Microsoft, plus companies like Blackberry and Samsung, each with their own design teams. And even at large companies, the design teams aren’t very large. Nowhere near as large as the engineering corps. There just aren’t that many of us. And as someone who’s been in this tiny club, let me tell you, it’s absolutely nuts.

See, as a first party designer, you don’t get to do whatever you want. Because everything you do is imitated (whether explicitly or implicitly) by potentially millions of other designers and developers. And you need apps on your platform to be coherent and follow certain rules of the road.

I love using metaphors, but it’s hard to find one for how it feels to be a first party designer. One is the concept of democracy. Democracy may be a fine system, but compared to communism or a monarchy it’s sometimes described as “fighting with one hand tied behind your back”. That’s what being a 1st party app designer is like. You actually have less control than you might think.

Another metaphor is becoming a parent. When you have children, it’s humbling to see them imitate everything about you. Because they’re not choosy. They don’t zero in on the good stuff, at least not exclusively. Your kids see and imitate all of you. Once you realize this, it’s a good motivator to become a better person. Even if that new you is a lot more boring.

When you’re a first party app developer, there’s no saying “do as I say, not as I do”. If you create some crazy interaction for an app, it will be seen as Officially Sanctioned By Your Company Forever And Ever Amen. Other people will imitate anything you do. So you have to be extremely careful about the example you’re setting. And trust me, this makes your designs a lot more boring than when you’re in a small 3rd party startup.

Until you’ve been in the room with a team agonizing over issues like this, it’s hard to fully understand how hard it can be. Imagine coming up with a great design. Something everyone on the team is proud of. But you soon realize it’s using an interaction that’s alien to the system your company has worked to standardize on. So you consider another design, one that’s more standard. Everyone agrees it’s watered down, suboptimal, and nothing like what a third party could get away with. No one loves it. So what do you do?

The safe, standard, example-setting, long-term, big-picture one. Every time.

And it’s painful, because you want to yell “but if we broke with the conventions, just this once, we could make things so much better for our app!” But it doesn’t matter. You’re a first party designer. You have an example to set. So the question is not “how much can you optimize for your single app”, but rather “how well can you champion the principles of an entire OS?” Which is why it’s fun to be a 1st party app designer, but it’s also tough. And it goes a long way towards explaining why all the sex appeal for apps tends to live in 3rd party software design, not 1st party apps.


Let’s look back to what Google and Apple have done with the canvas. On the Google side, you see a perfect alignment with their new Material design language. On the iOS side, you see perfect alignment with iOS 8's design principles. This is not a coincidence.

Simply put, both sides are “right”, because both app teams need to uphold the principles set forth by their platform teams. Apple Maps would look wrong if it embraced Material, despite the 5% gain in real estate, and Google Maps wouldn’t feel “Google-like” if it embraced a pure iOS design. They’re both doing exactly the right thing here.

So that’s the canvas. Let’s move on to the buttons and the features.


Affordances

In design, affordance just means “a thing you can take action on”. A button is an affordance. A link is an affordance. A scrolling list is an affordance. There’s a ton of science and art behind figuring out what to place where. It reminds me a lot of ikebana, the Japanese art of flower arrangement.

Let’s take a close look at the arrangements Google and Apple have presented us with. There’s a lot to learn here.

Before we get into visuals or placement, let’s just list what each design is letting you do in this screen.

Similarities

  • A search field for typing an address or location
  • A way to center the map on a user’s current location
  • A “directions” button

Google Only

  • The “hamburger” (the icon with three horizontal lines)
  • The microphone icon for voice input

Apple Only

  • The standard iOS share button (the box with an arrow pointing up)
  • An options menu (the lowercase “i” in a circle)

There are great reasons for every one of these differences. And even the items in the similarities bucket have different executions. Let’s go through them one by one, shall we?

Google’s Microphone

If two things are related, they should be near each other. This sounds obvious at first, but lack of grouping is a surprisingly common mistake. But not for Google! Here, and in many other apps, they put the microphone directly to the right of the text field. This clearly communicates that a user can tap the text field or the microphone, and either approach will affect the same text field.

Why have a microphone at all? Is it really used 80% of the time? Or 50%? Maybe it’s not even used 20% of the time. Does it really need to be front and center? Yes, but not because of frequency of use; that’s only one metric amongst many to consider. In this case we need to consider an important environmental context: driving a car.

There are a ton of scenarios where talking into your phone is awkward. Transcribing a long email. Having a private conversation on a bus. Tweeting from the bathroom. But while driving, voice input is huge. Huge! Which (partially) explains why Google makes it so prominent. It may be a minority use case, but when you need it, you need it to be great.

But wait — Apple doesn’t put it anywhere on the front screen. They tuck it a screen away as part of the keyboard. Why? Platform conventions. Remember our discussion before about 1st party apps? Google has decided, in many places in Android and their iOS apps, to feature search prominently. And that prominent placement puts the microphone to the right of the text field. Apple, on the other hand, puts their microphone in the keyboard itself.

We could go deeper into the analysis of these choices and the resulting tradeoffs, but for now let’s just say both Google and Apple are staying true to the precedent that their design teams have established. Next let’s talk about color.

In the Google image above, the microphone and hamburger are dark but the Google text is light grey. Why? Because if the word Google were dark, it would look like a logo, so people wouldn’t think to tap it. Let’s compare it to what Apple does:

This design is far more obvious, pixel-wise. There’s a surrounding grey box that looks like a button, and they’re using the same light grey text that’s standard for text fields. And even better, the text literally says “Search or enter an address”, which doesn’t leave a lot to the imagination.

But while Apple and Google agree on a few things like making the side buttons vivid while dimming the text field, they’re doing something pretty different in the default state of the text field. Why?

The Search Field

Google is search. It’s how the company began, and it’s the first thing people associate with the brand. It makes sense that Google doesn’t need to write “Click here to search for things” since that’s pretty much what the logo means to people. So as a Google designer, you can consider putting a Google logo there and call it a day. (I’d still test it, of course, but it’s a reasonable thing to attempt even though it’s gutsy for most companies)

But Apple is not in the same privileged situation. Their maps aren’t Google-powered, and the Apple services that power them don’t have a brand such as “Apple Search”. But even if the product did have a name, it’d be risky to put the logo up there in place of descriptive text. A company known for search can consider it, but I doubt it’d work for anyone else.

So Apple is left treating the search field like any other search field in the OS. You see it in Maps, in the App Store, in Mail, in SMS, and it’s always the same: grey text clearly explaining how to use the field. This is not an accident, because the more places an iOS user sees search used the same way, the more the user cognitively gets “for free”. They learn to assume that all search on iOS will work the same way, and that reduces friction.
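That consistency is largely baked into the platform’s stock controls, which is part of why 3rd party apps end up matching it too. Here’s a minimal sketch in Swift (the placeholder string is Apple’s; the variable name is ours):

    // A minimal sketch of the stock iOS search control. The rounded grey
    // field and dimmed placeholder text come for free from UIKit, which is
    // a big part of why search looks the same all over the platform.
    import UIKit

    let searchBar = UISearchBar()
    searchBar.placeholder = "Search or enter an address"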

But Google’s not in the same situation. They don’t want to use Apple’s standard search control. They’re rightfully proud of their search technology, and by extension their brand, so they put their logo front and center. It’s a reasonable tradeoff, though it doesn’t look as predictable or actionable:

Now compare it to the Apple version:

In Apple’s version they use that rounded grey box, a standard control on iOS. It’s very purposely designed to look like a (flat) button. Whereas the Google logo is floating there. Why? Because they can … as long as the tap targets are generous enough. Let’s double-check just to make sure they did their tap target homework.

On the original iPhone, Apple recommended that tappable areas be no smaller than 44 pixels by 44 pixels. But remember, that’s the smallest they recommend — bigger is better. Let’s overlay 44x44 measurements on Google’s design to see what’s going on here:

Interesting! The microphone and hamburger fit snugly within 44x44. Not that I’m surprised — Google has great designers and this is a common consideration when designing for touch screens. But the news is even better when you consider that the tappable area is probably far wider on both icons, stretching to the edges of the canvas. Good for them! Hurray for generous tap targets!
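(As an aside, this kind of generosity is cheap to build. Here’s one common iOS technique for extending a hit area past the visible icon; it’s a sketch of the general idea, not Google’s actual code, and the class name is ours.)

    // One common iOS technique for generous tap targets: accept touches in a
    // rectangle larger than the visible icon. A sketch, not Google's code.
    import UIKit

    class GenerousButton: UIButton {
        // Accept touches up to 22 points outside the button's own bounds.
        override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
            return bounds.insetBy(dx: -22, dy: -22).contains(point)
        }
    }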

But. Wait. This is interesting. The second “g” in the Google logo is hanging below the baseline thanks to its descender. Take a look above. See it? That’s proof that the text area is taller than 44 pixels. (Or that the whole area is positioned about 9 pixels lower than the other two items.)

What happens when a user taps the text field? Here’s the next screen:

Now let’s draw some guides to see how things line up when I overlay both screens on each other.

That looks pretty messy because I’m putting a line underneath everything. Let’s just concentrate on the Google logo that converts to the “Search” helper text:

Ok, let’s unpack this. The blue bar on the left is the standard iOS cursor, so we can anchor our size analysis there. The word “search” is vertically aligned to the middle of the cursor. Makes sense. But then the word Google is larger, rendered in a different typeface, and has that pesky descender.

Should they have abandoned the logo to make things line up more precisely? Should they have made the word “search” larger? Made the logo smaller? Repositioned the search text to hit the exact middle of the logo? Nope. I think the way they landed makes sense. How it looks mathematically doesn’t matter as much as how it feels, and it feels right.

(But even if it didn’t feel right to me, remember rules #1 and #2 of Design Explosions: assume the design team is smart, and that we can’t know all the tradeoffs that went into any one decision.)

So that’s the search box. Shall we move on?

The Hamburger

You’ve probably seen the hamburger everywhere. Here it is again, the far left icon on Google’s bar but nowhere to be found on Apple Maps.

There’s a lot of discussion about the hamburger, but I’ll just say Apple recommends against it (as does Microsoft, last I checked) and Google uses it a lot. What does it do? Here’s what happens when you tap it in Google Maps, which is similar to how it’s used in most Google apps:

This is the kind of interaction that inspires lots of passion. Some people think it’s brilliant, others think of it as a junk drawer and A Very Bad Idea That’s Currently Trendy Despite Being A Very Bad Idea PS It’s Bad. Let’s imagine we’re a Google designer and the team is split 50/50 on whether or not to use the hamburger. Let’s reverse engineer how they got here. We’ll start by recording everything it’s doing:

  • Switch Google profile
  • See “Your Places”
  • Explore nearby
  • Toggle traffic on/off
  • Switch to the transit map
  • Switch to the biking map
  • Switch to satellite view
  • Switch to terrain view
  • Launch Google Earth (or go to the App Store if not installed)
  • Settings (for the app, not the current trip you’re trying to take)
  • Help & Feedback
  • Tips and Tricks

Now let’s try bucketing the features into related options:

  • Personalization stuff
  • A bunch of presentation toggles
  • The ability to explore around you
  • A link to the Google Earth app
  • Miscellaneous

Speaking of toggles, several of them can be enabled at the same time. For example, here I’ve turned on traffic and the satellite map simultaneously:

So, back to our role playing. We’re pretending we’re a Google Designer assigned to think about this. To hamburger or not to hamburger?

First off, let’s assume that Google isn’t interested in losing core functionality. For example, Google Maps without transit or biking directions is a non-starter. It’s somewhere they’re differentiated and strong, so there’s no reason to remove it from the app.

We also need to keep all the toggles like terrain and traffic because that’s expected in any mapping app. In fact, let’s peek over at Apple for a second to compare. When you tap the information icon, you see this:

Ah ha! There’s the traffic toggle, there’s the terrain/satellite toggle, plus some other stuff that Google doesn’t have (drop a pin, a 3D map toggle, and a way to report a problem). So both mapping apps handled the important stuff with a drawer one tap away. Apple deploys theirs from the bottom; Google deploys theirs from the side (which means it can also be triggered with a left-to-right swipe gesture). So these are similar needs, addressed by similar features, displayed in similar ways. None of these features should be dropped, obviously.

So what’s left on the Google side? What does it have that Apple doesn’t? Some pretty powerful stuff, actually. Personalization features like profiles and saved places, plus the “Explore Near You”, which competes directly with folks like Yelp and Foursquare. And don’t forget the Google Earth link.

Could a designer waltz onto the Google design team and say “Hey, I think we should remove personalization from our maps?” They could try, but on what grounds? The features may not be used by 80% of people, but the people that use those features really love them.

What about “Explore Near You”? Could that be removed? Sure, I guess. But again, why? I’m as suspicious of feature bloat as anyone, but I don’t think it’s a stretch to imagine that when someone is in a maps app they may be looking to explore the area around them to find a good place for coffee. It makes total sense that it’d live in the app.

(Interesting aside: the Android version of the app puts “Explore Near You” right on the mapping canvas rather than tucking it away like on iOS.)

Which brings us to our last item in the list, the link to Google Earth. If this were an upsell to something unrelated (for example the Google+ app), it’d clearly be out of place. But going from Google Maps to Google Earth seems obvious. It doesn’t need to be there but it makes sense that it would be. It’d be hard to argue it away.

So Google is trying to do more, and Apple is trying to do less. I wouldn’t argue that Apple should try to pack in more features, and I wouldn’t argue that Google Maps should try to drop functionality we’ve come to expect. I think they’re playing each of their hands exactly right.

Onward!


By now we’ve thoroughly critiqued Google’s top bar. How many pixels it takes, the colors it uses, where it places things, all of it. Are we done? Did we miss anything in Apple’s version of the top bar? Indeed we did. Apple has two buttons we haven’t talked about yet, one for directions and one for share. Here they are:

Directions

We’re going to talk more about directions in a minute. But one detail I’d like to discuss is the “tappability” of it. A lot of people don’t even notice the icon, or think to tap it. Let’s think back to guideline #1, and assume the designers are smart, they’re aware that buttons should look like buttons, and they’re aware that people often miss this functionality as it’s currently designed. Where does that leave us?

Platform conventions, as we discussed earlier. Apple Maps could have made that icon into a big round circle, but it would clash with the rest of iOS 8. And clashing with the rest of the platform is unacceptable when you’re a built-in app that other software designers will look to for inspiration.

The share icon can get away with being more subtle than the directions icon, because it’s used all throughout the system. Over time people start to understand what it means, they understand that it’s tappable, and they know to look for it and what to expect from it.

But the directions button, despite being the same color, used in the same way, and aligned with the rest of the system, is only used in one place. Right there in Apple Maps. Which means people don’t acclimate to it as quickly.

I think they made the right call in aligning with the rest of the system, though they do end up with a less obvious directions button than they’d probably like. A natural tradeoff. Perfectly understandable, and better than if they had broken the system just to amp up a single icon.

(On product teams, PMs will often talk about the need to “celebrate” a feature to make it more obvious. It’s understandable, they’re judged on their work’s impact to the overall product. It’s hard to have restraint when you’re assigned vertically, which is why designers play a vital horizontal role. I often say “if we celebrate everything, nothing will be”.)

Share

Sharing holds a special place on mobile. Because copy and paste is so much harder than on a PC, mobile apps have had to build in simple ways to share images, links, and in this case, directions. Sure, you could make everyone drop a cursor in a text field, select all, tap copy, open a new app, find a new text field, drop the cursor, and tap paste, but using a standard share icon is more pleasant, consistent, and predictable.
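On iOS, that standard share behavior mostly comes for free, which is part of why the convention is so strong. A minimal sketch (the function and its parameters are ours, just for illustration):

    // A minimal sketch of the standard iOS share sheet. Handing items to
    // UIActivityViewController gets you Message, Mail, copy, print, and
    // third-party share extensions without any custom UI.
    import UIKit

    func share(locationName: String, from viewController: UIViewController) {
        let sheet = UIActivityViewController(activityItems: [locationName],
                                             applicationActivities: nil)
        viewController.present(sheet, animated: true)
    }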

So even if sharing a location is only used 5% of the time, or even less, there’s a strong rationale for putting it next to the text field. Doubly so since 1st party app designers need to set a good example. I could imagine the Apple Maps designers arguing for putting share in a sub-menu (or a hamburger!), but then realizing that they’re a first party app, so they should align with other 1st party apps like Safari and Photos.

So there it is on Apple Maps. But where did it go on Google Maps? I went looking and I found it several screens away. First you load a location, then you drag from the bottom of the screen to see the location page, then you click the vertical dots overflow menu to load a half-sheet that has “Share” on it, then the half-sheet gives you additional options like “Message” and “Mail”. Personally, I struggle to find it each time.

Once again, this is easy to respond to unfairly. A lot of people count the number of taps in a flow, and if there are too many, they think it’s bad design. The reality is often more nuanced. (in fact, there are a ton of designs that are needlessly complex because the designer was so committed to reducing taps — perhaps the most frequent example of cargo cult science I’ve seen in design)

Of course it shouldn’t take 100 taps to get something done. But it’s just as bad to have 100 items on screen at all times. Again, if everything is celebrated, nothing is. So the key is finding the right balance. Personally, I like to say “Two easy taps are better than one hard one”. It’s ok to hide lesser-used things one screen away. So where does that leave share?

To answer that question, we need to consider how important share is. It’s important, sure, but I wouldn’t call it vital. Then why does Apple put it right at the top, front and center? And Google doesn’t just hide it behind one screen, they offer less functionality in a much less discoverable location. Why?

I’d start by considering all the Apple hardware they’re trying to design for, from Macs to iPhones to iPads to Apple Watches. On most of them, “Share” is the way to do things like printing or interacting with third party apps. It’s almost like a right-click on Windows in terms of its ubiquity and utility.

On the other hand, Google can think about sharing in a fairly scoped way:
“Hm, someone might want to share a destination. Maybe we should put a share option in the location screen”. Whereas Apple may well be thinking in a far-reaching way: “We need to come up with a single affordance, one that can be understood across all screen sizes, multiple form factors, and scenarios. And it has to work across two major platforms. And it has to do a lot of heavy lifting.” Whew!

So not only does Apple want to set a good example by making share highly visible and highly predictable across an array of situations, I’d go further. I’d argue they can’t hide it. Because again, sharing (and printing, and working with other apps) might not be the most common thing you do in Maps, but it’s still really important when you consider it at a multi-device, multi-platform level. Important enough to take space on the main screen.

So back to Google, then. Let’s say Google’s design research team discovered that their implementation of “Share” was too hard for people to find. If you were the designer assigned to make it more prominent, where would you put it? Here’s the UI:

The top of the UI already has three tappable targets. The hamburger, the Google search field, and the microphone. It’s hard to see the benefit of cramming it in there.

After all, hamburgers are usually flush with the left side of the screen and microphones are flush with the right. Could we put it between the text field and the microphone? It’s best not to — as we discussed earlier, the text field and the microphone are paired. They belong together visually.

Then we have our two buttons on the bottom. One for centering the map, the other for directions. Could we put a third item there for share? Well, we could… but I wouldn’t recommend it.

More options means people slow down, especially when the buttons command this much attention. And is a share button really that important? For Apple, yes. For Google’s third party app on iOS, less so.

What about putting a share icon in the hamburger somewhere? It could work. After all, people often refer to it as a “junk drawer”, so why not put it there? Let’s take another look at the deployed panel.

This is a great example of where hamburgers do well versus where they do poorly. They’re pretty great as a launching-off point for other places. After all, if you’re about to load Google Earth, it doesn’t matter if most of the screen is covered up.

But sharing location, by definition, is a contextual action. You’re picking a noun (Shake Shack in Manhattan) and trying to perform a verb on it (Share via SMS). So if context matters, it’s not great that the hamburger menu covers the whole screen.

Sure, you know you just typed “Shake Shack”, but still, it’d be a bit clumsy to just say “Share” here on a mostly covered screen. Better would be to put share one screen away, on the actual location page itself.

And wouldn’t you know it … that’s exactly what Google did. Maybe they could pull it forward a screen or two, but it just doesn’t make sense for it to be on the first screen, or to be a more prominent action than things like the phone number, the website, or a way to kick off directions.


Affordance Placement

Ok! We discussed all the various features that any mapping application needs to account for, plus called out places where Apple and Google designed differently based on their goals and direction.

But not every affordance is equally easy to reach or to use. Let’s analyze where they put everything. And for some context, let’s talk about which areas of the screen are easiest to reach. For a long time, people thought of “reach” on mobile devices very simply, like a game of Hungry Hungry Hippos. Things on the bottom are easy to reach. Things on the top are hard, as shown in the left diagram:

The visual on the right approximates the “thumb zone” for a right-handed person

But the reality is more tricky. We don’t grasp our phones uniformly across the bottom of the unit. We grasp them, usually with our dominant hand, across the middle. And our thumb works in a bit of a windshield wiper motion, which creates a pattern more like what’s shown on the right.

At first glance, it looks like the diagram is for a left-handed person, since there’s more dark blue on the left. But your right thumb has a harder time hitting the bottom right of your phone than the bottom left. The bottom right forces more of a scrunching motion because it’s more under your palm.

So how did Apple and Google do? What did they place where?

It goes without saying that this “thumb zone” is different for everyone. If you have a larger phone, or smaller hands, or grip your phone differently, or are left handed, your reach will be affected. But these graphics represent a reasonable rule of thumb. (Har har.)

Notice the bottom right corner. I consider this a magical spot on the phone because users won’t accidentally tap it, but it’s highly discoverable. And just because it’s harder to tap relative to the center of the phone doesn’t mean it’s actually hard in absolute terms. This means this location is easy to tap, harder to tap by accident, and hard to miss. So what do Google and Apple place there?

Google uses this spot for their “Load directions” icon and Apple puts their Information/Settings icon there. Pretty different approaches! But let’s look closer. If we look to the top left (another magic place that’s a reach but highly visible) we see Google’s hamburger and Apple’s “Load directions” icon. They’ve mirrored each other almost exactly. Let’s take a look:

Information/Settings

As we discussed earlier, this is where Apple lets you turn traffic on and off, change to a satellite view, and so on. It’s accessible by tapping the little i icon and it loads a menu from the bottom of the screen like this.







And of course here’s Google’s hamburger. It does a lot more by design, so it benefits from taking up almost the entire page.

But the key takeaway here is that these screens cater to what I call “vital edge cases”: you probably won’t use them very often, but when you do want them, you want to find them with as little friction as possible.





Though I wonder about that “Explore Nearby” option. It’s practically an app within an app, and I could imagine some people getting a lot of value from it. But burying it in the hamburger like this probably hurts its discoverability. Earlier we mentioned how Android puts it right on the map canvas, which is harder to miss. Maybe there’s a third place for it?

Oh, here we go. When you go to type in a destination, it shows things near you. That way they can feature helpful stuff without taking up chrome on the main mapping canvas. This is clever, and obvious in hindsight:

Plus, the keyboard doesn’t appear right away; it animates in from the bottom. This means there’s a split second where you can see more of the list before it gets covered by the keyboard. A great example of how a screenshot only captures one glimpse of the story, whereas smart use of animation can help make a design more clear.
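(That coordination is a standard UIKit pattern: the system announces the keyboard’s animation, and the app animates its own layout alongside it. A rough sketch of the general technique, not Google’s actual code.)

    // A sketch of the standard way iOS apps animate alongside the keyboard:
    // observe keyboardWillShow and reuse the duration the system provides.
    import UIKit

    let keyboardObserver = NotificationCenter.default.addObserver(
        forName: UIResponder.keyboardWillShowNotification,
        object: nil,
        queue: .main
    ) { note in
        guard let duration = note.userInfo?[UIResponder.keyboardAnimationDurationUserInfoKey] as? Double
        else { return }
        UIView.animate(withDuration: duration) {
            // Shrink or reposition the suggestion list here so it moves in
            // lockstep with the keyboard instead of being abruptly covered.
        }
    }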

Ok, what next? Let’s talk about another very important feature: centering the map on your current location.

Centering the map on your current location

Mapping is a pretty complex experience, and it’s easy for the map to be focused on the wrong place. So having the ability to center the map in one tap is vitally important. Let’s see what Google and Apple do here:

Google has placed it as its second “FAB” (floating action button) at the bottom right. Hard to miss, easy to tap.

Apple has placed it at the bottom left of their screen. Icons at the bottom left are often seen first (in languages that read from left to right) and Apple has used their standard location icon to make it as clear as possible. You see this visual everywhere — in settings, in apps, and if you look at the top right of this screenshot, you see it next to the Bluetooth and rotation lock icons. By using it this way, users are more easily able to understand what it does and what to expect from it.
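This is also a spot where the platform does most of the work. MapKit ships both the standard icon and the behavior; here’s a minimal sketch (the function names are ours):

    // A minimal MapKit sketch. MKUserTrackingBarButtonItem renders Apple's
    // standard location icon and wires up the tracking behavior for you;
    // setUserTrackingMode does the same thing programmatically.
    import UIKit
    import MapKit

    func addLocationButton(to navigationItem: UINavigationItem, for mapView: MKMapView) {
        navigationItem.rightBarButtonItem = MKUserTrackingBarButtonItem(mapView: mapView)
    }

    func centerOnUser(_ mapView: MKMapView) {
        mapView.setUserTrackingMode(.follow, animated: true)
    }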

But wait, what about Google’s design? They’re not using the standard location icon. Why not?

Again, let’s do some role playing. Let’s say you’re new to the Google design team. And to make things even more interesting, let’s say you were hired away from the design team at Apple that worked on iOS Maps. You might say “Hey team! I was wondering, why not use the Apple-recommended location icon?” It’s a good question — but it’s worth remembering that there’s no single “right answer” here, only tradeoffs.

Apple has standardized horizontally across all their products, from iPods to iPhones to iPads to Macs. But Google has a much more complicated alignment story, because their products don’t just span hardware, they span different operating systems. Windows, iOS, Android, Chrome, Web, Blackberry, and Windows Phone to name a few. So that means they’re dealt very different hands, ones that play out very differently.

If Google aligns with Apple’s location icon on iOS, that’s one exception to their visual language. If they align with Windows Phone’s, that’s a second exception. But also remember you can access the web on iOS and Windows Phone. So should they change their web products to sniff out the OS they’re on and change the assets on the fly? They could, but it sounds like a lot of work.

Another issue is that Google’s “center the map on my location” graphic, the crosshairs icon, is pretty good. People know it well. I’m not sure when it first appeared, but I’m pretty sure it’s been around much longer than Apple’s location icon.

So there are a lot of reasons not to align with Apple’s location icon, and only a few reasons to align. I’m not surprised they landed where they did.


Entering Directions

Entering directions! This is the core functionality for the app, the 80% use case. An app that doesn’t get this part right might as well not exist, and everything we’ve talked about before (all six thousand words or so) is just setting the stage for this flow. Let’s dive in.

(Again, to reiterate, I’m not going to get into the quality of the mapping data stored in the cloud. By all accounts, Google Maps has the edge here, but we’re going to focus on the things that a design team can control themselves.)

It turns out this section is surprisingly complicated to analyze. I can’t just say “screen X goes to screen Y”, because both Google and Apple have provided two different entry points for directions! Here I’ve called out the text field option with a (1) and the button with a (2):

We’ve already explained the differences in visuals and placement. But actually, when you click through both entry points, Google and Apple are doing pretty much the same flow for both (1) and (2). But, as always, the slight differences contain a bunch of interesting things to learn.

Golden Path #1 (the text field)

Let’s draw some boxes and arrows to understand what happens when you click on the text field. First Google, then Apple.

Google Maps flow: searching for a location to the en route screen (low fidelity)
Apple Maps flow: searching for a location to the en route screen (low fidelity)

Ok, it looks pretty abstract like that. But the big takeaway is that they’re doing the same thing, in the same order, except Apple uses five screens and Google uses six. But, again, fewer steps doesn’t necessarily mean better! We’ll analyze those screens in a moment. For now, let’s show everything at full fidelity before we dive in.

Google Maps flow: searching for a location to the en route screen
Apple Maps flow: searching for a location to the en route screen

Ok, let’s take it screen by screen and look for big differences:

Google left, Apple right

Screen 1: They start pretty similar. Google has a much greater emphasis on your favorite places and Yelp-like suggestions, but otherwise it’s a text field that’s showing suggestions based on your search history. What you’d expect.

Google left, Apple right

Screen 2: Here, the auto-suggest region no longer has to guess; now it can just do a search based on what you’ve typed so far. A standard typeahead pattern, found in lots of apps and on the web.

[Note: I accidentally dismissed the keyboard on the Google example but not the Apple example.]

Google left, Apple right

Screen 3: Functionally, these screens are very similar. They’re showing your chosen destination and waiting for you to signal that you’re ready to route this trip.

Google left, Apple right

Screen 4: The plot thickens. This screen is unique to Google. Apple skips right to the next screen, whereas Google lets you choose from a range of options.

[Note: sometimes Google lets you kick off the directions process from here, sometimes it doesn’t. I haven’t been able to figure out the difference. But when that happens, the user jumps to screen six rather than having to click through screen five.]

Google left, Apple right

Screen 5: Once again, these screens are functionally similar. The path has been routed and now all that’s left is committing to the route. But notice that Apple’s screen is showing multiple variations. It turns out Google Maps does the same thing with those grey lines, they just don’t call them out with labels.

Google left, Apple right

And then the en route screen, which is its own ball of wax. That’ll get its own section later in the issue.


What have we learned? Well, both applications have the same general flow. But what about those screens in the middle? Let’s analyze them in isolation. Here’s Google Maps, followed by Apple Maps:

Google is very consistent with the Material guidelines here. On screens 1 and 3, there’s a giant blue button leading you forward in the flow.

Of course screen 2 is a list of options, so a single call to action wouldn’t make sense.

Now look at Apple’s screen 3. Their call to action is the word “Start” at the bottom of the screen. While a lot more subtle, this also is a faithful execution of platform guidelines.

But what about that big blank step? What’s that about?


Explaining Apple’s Missing Step

In Google Maps, that second screen does a lot for the user. The focal point of the screen is on the list, particularly on what Google is predicting as the best option. But there’s also a mode switcher across the top so you can choose between car, public transit, walking, and biking. And a way to enter your starting and stopping point. And a way to reverse the order. Plus an “Options” button, and if you’re taking public transit there are yet more options for departure and arrival time. Whew! That’s a lot of options.

Apple offers none of these things.

But like the other screens we’ve looked at, I wouldn’t be so quick to say Apple is too feature-limited or Google is too feature-rich. I’d argue that both companies are playing to their strengths, and these screens reflect that.

Apple doesn’t have public transit or biking information. Maybe they’ll add it one day. But for today, that means their flow can be a lot simpler. The single feature they can match with Google is “Choose alternate paths, including walking”. So Apple featured it on their third screen with a label rather than using more complex and heavyweight controls.

The “Dropped Pin” Screen

With screens 2 and 3 addressed, I’d like to turn your attention to the first screen, especially to focus on the call to action.

On the Google side, we have their FAB, or “Floating Action Button”. Easy to know what to press. On the Apple side, we have a more vague call to action. I’ve spoken to people who get confused here. Some don’t know to click on the little car icon. Others don’t know you can click the “directions” icon on the top left. I think a fair number of people tap into the place page itself, then click “Directions to Here”.

Rule #1 reminds us that these designers are smart. They’ve probably seen people struggle with this screen. So what’s going on here? I think it’s an issue of enforced consistency with the iOS guidelines, and how they probably didn’t want to make a custom control for this screen.

Look at the bottom and top chrome elements. Totally standard. Now look at what Google did — they placed the name of the place at the bottom of the screen, underneath their FAB. It’s a look that may match Google’s other products, but on iOS it’s out of place. And that’s perfectly fine for Google, but it’s not fine for 1st party Apple designers.

So Apple went with a contextual menu with two buttons. Click the car to get directions, or click the name of the location to read reviews, see photos, and a bunch of other options. I’m not sure if it’s a truly common control, but it is very reminiscent of the standard “long press” present in iOS since 2007. This is not an accident.

Regarding Common Controls


Designers use the term “common controls” for the standard, OS-provided options. Think dropdowns, the contextual menu shown here, buttons, grids of images, and so on.

Whereas “custom controls” are so much more fun to design. You get to make something! From scratch! And it can do exactly what you want, with no compromises! But it’s often a mistake.

You can compare it to code, actually. It’s a rookie mistake for an engineer to write a bunch of code by themselves, whereas a veteran developer has learned how to be lazy and get code “for free”.

This is done by using other people’s code and knowing how to tie it together effectively. It turns out design is pretty similar. New designers often want to reinvent the wheel because it’s fun to try designing new things. But veteran designers know how to use the common controls in clever ways to achieve their goals without taking on too much custom work.

Take the color wheel. On the Mac, there’s a standard color wheel that most apps use because it has a lot of powerful features. Windows also has a standard color wheel, but it’s far less powerful. Which means developers keep designing their own color wheels, Microsoft Office especially. Which means customers have to learn how to use lots of different color wheels, which slows down learning across the system.

The same thing happened with printing. Most people want to hit a standard keyboard shortcut and have everything work. It’s pretty standardized on Mac OS X, but also includes the ability for developers to add functionality. Whereas on the Windows side it works in some places, not in others, Office does its own thing entirely, and so printing on Windows requires more effort. It’s not uncommon to re-learn behaviors as you move from app to app, and that’s not a great place to be.

As a UX designer, you will often write custom controls. But it should never be your first inclination. Your design and your product need to truly earn the right to be different. If a custom control will make your app 15% better, that’s probably not good enough. You should be holding out for ~50%.

So that’s my best guess about what happened with Apple Maps. Their call to action isn’t as clear, but I wouldn’t say it’s 50% worse. I wouldn’t say it’s worth writing a whole new custom control to maintain and for users to have to learn. I often like to say “it’s not very discoverable, but it is learnable”. I think Apple Maps is in a similar boat here — once you realize you can click the car, things make a lot more sense.

But this is a scenario where Material’s super-obvious calls to action (FABs) really shine. All without being custom to a single app. That’s no small feat, and kudos to their design team for working towards something more standardized and predictable.

Golden Path #2 (the button)


It turns out that there’s another way to trigger directions in both apps. Wow. We just analyzed what happens when you click the text field. But what happens when you click the big FAB on Google Maps or the subtle directions button on the top left corner of Apple Maps?

Let’s compare. Here’s Google Maps, followed by Apple Maps:

Google Maps
Apple Maps

The last two screens are the same, and we already talked about why Apple is missing the third one. So we’ll be focusing on the first two screens.

But there’s another thing to discuss first. Why did Apple and Google both make two ways to do the same thing? Typically it’s a bad idea to let users do the same action in two different ways. Users don’t logically memorize how your app works. If something is complicated, they just keep trying until something works. And if you provide two paths to the same thing, you’ve thrown a big roadblock in the way of the user remembering how to use your app. So what are Google and Apple doing?

Let’s take a look by comparing what happens when you tap the “directions” button (shown in the first screen) versus tapping on the text field (shown in the second screen) in Google Maps:

You can spot the difference even while squinting. See that Google blue? It appears right away in the first example but at the end of the flow in the second example. And what does that screen let you do? Everything. You can change the starting location and the destination, reverse their order, and choose different forms of transit. Google passes you through this screen in both flows, but in different orders. Huh! Puzzling.

What did Apple do? The same thing, just different.

Once again, tapping the “directions” button changes the flow from three screens to two. Once again, they drop the user directly into a screen that assumes your starting location is where you’re standing. Once again the suggestions switch from general guesses to a search-while-you-type pattern.

Whereas tapping the text field doesn’t bother letting you monkey with the starting location right away. Just like Google’s example. Though in the Apple example you have to dig further to complete that task, whereas Google uses that screen with the blue header every time.

So why the differences? What’d we learn?

All of my design training has taught me to remove things like this. When you’re doing essentially the same thing from two entry points, you should do everything in your power to combine them. And I’m sure they tried and found it just wasn’t possible.

Recall, yet again, rule #1 of Design Explosions: assume the design team is smart. So let’s reverse engineer how they got here with that as our starting assumption.

First, if they combined the text field and the button, which affordance should they drop? You could get rid of the text field and rely on the button. But in Apple Maps it’d be far too subtle. Maybe Google Maps could get away with it? Ok, what about the other way around — maybe you get rid of the button but leave the text field? Apple could, probably, but Google’s got their FAB pattern to worry about. Even if the designers on Google Maps wanted to go in that direction, it wouldn’t align well enough with Material design.

So it’d probably be possible, in theory. But let’s go a bit further. Golden Path #1, the one that goes through the text field, is best at looking for stuff without turning everything into directions. Golden Path #2, the one that goes through the button, is better at getting you directions as soon as possible. To use a Google metaphor, the text field is like using Google’s search results pages, whereas the button is a bit more like the “I’m Feeling Lucky” button.

If we were to combine the two, we’d need to somehow serve both the searching and the directions scenarios equally well. But as a screen tries to do more, it gets harder to use and less focused. I’m sure it’d be possible, and it could probably even be done well. But when two strong design teams come to the exact same conclusion, take notice.


En Route

Here’s where you spend most of your time. And it’s not just about the total time spent here, it’s also the fact that you’re using this part of the app in a distracted way. Most of us just want to get to our destination with the absolute minimum amount of crashing. So it’s important to design the screen in a “slippy” way, as opposed to sticky, as my friend Jake Zukowski says. The visual design of this screen matters a whole lot, so let’s see them both at high resolution:

Let’s strip them down to silhouettes to see how each screen looks at low resolution, starting with how much canvas each one devotes to the map. First Google, then Apple:

Google Maps

Pretty different silhouettes this time! Let’s look at them side-by-side:

Clearly Apple is optimizing its real estate here. All right angles, no OS bar across the top for cell reception, wifi, time, battery, etc. But how much can Apple really fit into such a small amount of chrome? Let’s count the number of affordances in each. Google on left, Apple on right:

Whoa. What?

In Apple Maps, no matter where you tap on the screen, the same thing happens. It’s one giant tappable target. Whereas Google Maps has a more traditional design that features seven different touch targets. And that’s not including the mapping canvas itself — on both apps, dragging the map lets you “peek”. (In Apple Maps, it’s temporary and the map resets when you complete the drag. In Google Maps, you can drag as far as you want and you’re brought back with a “RESUME” button.)
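(Apple’s “one giant target” pattern takes remarkably little code, which hints at why it’s tempting. Here’s a sketch of the general shape of it, not Apple’s actual implementation; the class and property names are ours.)

    // A sketch of the "tap anywhere" pattern: one gesture recognizer on the
    // whole view toggles the auxiliary controls. Not Apple's implementation,
    // just the general shape of it.
    import UIKit

    class EnRouteViewController: UIViewController {
        let controlsOverlay = UIView()   // end button, overview, volume, etc.

        override func viewDidLoad() {
            super.viewDidLoad()
            controlsOverlay.isHidden = true
            let tap = UITapGestureRecognizer(target: self, action: #selector(toggleControls))
            view.addGestureRecognizer(tap)
        }

        @objc func toggleControls() {
            controlsOverlay.isHidden.toggle()
        }
    }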

That’s a pretty big difference. What’s going on here? Either Google has been overrun with feature creep or Apple dropped core functionality. What other option could there be when you have such a huge differential? Well, Apple’s doing something a bit tricky. Tapping anywhere goes to this secondary screen. That’s where they’ve put all their functionality.

See, now it’s a lot more like Google. The top bar is back so you can check battery life and the time, plus there’s a button to cancel the trip, buttons for the overview and text directions, a toggle for 3D mapping, and volume controls. Let’s compare Google’s single screen with Apple’s two so we can see what we’re dealing with:

Left: Google Maps. Center: Apple Maps default en route screen. Right: Apple Maps after a tap.

There’s a ton to learn here, but first I’d like to tell a story about how I used to play football video games. I’d blitz. Every time. In every situation. 1st down. 3rd down. Long distance, short distance, blitz blitz blitz!

No one ever explained the tradeoffs to me. See, blitzing optimizes for one thing (hurrying the quarterback) by de-emphasizing another (having enough people on the rest of the field to handle a pass effectively).

You don’t need to understand sports to grasp the concept of a tradeoff. Put three men here and that means you can’t put them in this other place. Buy a cheap item and you’ll replace it more frequently. Buy an expensive item and you’ll run out of money faster. Everything is a tradeoff. If you only ever do one thing, you’re missing all the nuance, like 10-year-old me blitzing on every play.

So let’s look back to the en route screen with tradeoffs fresh in our mind.

I love this comparison. Google is optimizing for driving because everything is one tap away. Want to cancel the trip because you’re looking for parking? One tap. Want to figure out how to turn on the traffic map? One tap. Want to re-orient the map to the direction you’re facing? One tap. It’s a very flat system where everything is right there, even things like seeing what time it is or checking to make sure your battery is ok.

Apple is optimizing for driving because it’s tucking everything away. There’s far more canvas available to show the map. When you drag the screen with your finger, it snaps back into place rather than putting you in another mode. It doesn’t show the current time, but it does tell you how many more minutes you’ll be driving, and your estimated time of arrival. Apple is doing what Apple does, for better or worse. They’re cutting as close to the bone as they possibly can. Nothing is assumed to be necessary on this screen.

In this case, I’d argue Apple is blitzing. It’s not “right” or “wrong”, but I would call it high risk with high reward. We’d need more data to know how successful it is in the real world. Maybe it’s worth it! Maybe not!

But even without testing, we can think through the tradeoffs. In that split second while driving when you need to get to the cancel button, or see the actual time, or change the volume, Apple’s design takes one more tap. And sometimes that can be frustrating.

On the other hand, Google Maps is running up against Hick’s Law which can be summed up as “options slow people down”. Sure, their stuff is all a single tap away. But there are a lot of them.
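(For the curious, Hick’s Law is usually written as T = a + b · log₂(n + 1): the time T to make a choice grows with the logarithm of the number of options n, where a and b are empirically measured constants. Seven always-visible targets won’t paralyze anyone, but the cost isn’t zero either.)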

It’s a wonderful example of tradeoffs. They’re both right. You can prefer one, you can argue why one is worse, but I’d prefer that you simply admire two solid answers to a very complex design problem.

(Personally, I like to say “two easy taps are better than one hard one”. So I’d probably be inclined to come up with something closer to what Apple has done. But again, that doesn’t make it “right”. I’ve had these debates at my job dozens of times. People I work with sometimes don’t understand why I like to tuck things away into a second screen. And it’s a great discussion to have! But I could just as easily argue the other side. It just depends on the goals for your app.)

The En Route Lock Screen


We could get into a bunch of little side discussions about the En Route screen, but I’d like to focus on a big one, Apple’s customized lock screen. Here it is on the left, with the standard en route screen on the right for comparison:

There are just a few differences here. The lock screen shows the full OS status bar across the top, a simplified map canvas with no labels, and the standard bottom area with “slide to unlock”, access to the control center, and a camera entry point.

This lock screen is a bit risky. While I’m driving, I’m not thinking clearly, I’m not tracking what screen I’m in or the status of the lock screen timeout. But the lock screen puts a password between me and what I’m trying to do. (My iPhone doesn’t have Touch ID) So that’s a bummer.

Let’s put ourselves in the designers’ shoes and review their options.

  1. Don’t use the lock screen at all. This means you see your default lock screen wallpaper (a polar bear, let’s say), then after a swipe you type your password, then once you get through all that, you finally see where you are on the map. That’s no better than putting the map on the lock screen itself. It’s more clear that you need to swipe to unlock, but it leaves you in the dark while you’re driving and you just want to see the next turn.
  2. Ignore the lock screen timeout. I’ve been on a design team working on lock screen/CTO corporate policy politics, and it’s intense. Enterprise customers want to know that their phones will auto-lock if they’re company property. Meaning they’re not interested in exceptions to their strict lock screen policy. You can’t just ignore CTOs when you’re courting their business and trying to prove that your product will work with their proprietary data. Maybe someone in the company can lobby them, but that’s beyond the scope of a designer.* So this option isn’t really on the table.
  3. Use the lock screen as it exists today. Maybe tweak some of the visuals, but essentially keep the design flow the same.

The way I read it, #1 makes the product harder to use while driving and #2 breaks contractual obligations. So I’m not surprised they landed on #3, even if it’s imperfect.

*This was the beauty and power of someone like Steve Jobs — I get the feeling he’d call up some key CTOs personally to explain this somewhat esoteric but tangible user pain, and say “we’d like to make an exception for when the user is en route. If the car is moving, and a destination has been entered, and we know it’s 20 minutes away, fuck the lock screen for 20 minutes. What do you say?”

The en route lock screen, while I do struggle with it sometimes, is a great example of the limitations of design. #2 might be the best overall option, but shipping it has nothing to do with pixels. It’s all about relationships, and politics, and legal teams, and so forth. This is why it’s important to have design courage at an organizational level. Otherwise you can only get so far with your designs before the best ideas are overruled by legal, executives, and your partners. Effective designers have effective relationships.

Summary

If you’re like me, you look at a massively long post like this, especially one comparing two products, and you scroll to the end. You want the summary. If I came across this post, I’d say “Dude, I have 907 tabs open and no time to spare. Can’t you just give me the answer? Which product is better? It’s Google Maps, right?”

But this is a different kind of post. There’s no winner this time, just a bunch of design lessons inspired from studying two similar products. It’s fun to compare and contrast the approaches of two highly skilled teams working on a pretty unique design challenge. That’s huge, if you’re looking to learn.

Design is simply “deciding how a thing should be”, as William says in his book. But every decision has a tradeoff. Everything you make easier can make another thing harder. There is no such thing as “the right design”. But we can learn a lot from seeing how experts in their field weighed the pros and cons of different approaches.

And with that, we’re done! We hope you enjoyed the first issue of Design Explosions. Also, we hope you have a very nice day.
