The release of the Google Maps App for iPhone has many users excited and relieved that they will no longer accidentally be directed to the far reaches of the Outback of Australia.
But one of the most interesting things for me, as a smart phone app designer, is the rhetoric in the news around how it’s designed – and how it compares to Google’s maps app for Android.
The first article I read repeatedly stated that the iPhone app was “intuitive” and “clearly designed for the iPhone.” Curious, I checked out the app, and I didn’t see any UI elements that seemed to specifically call out “iPhone!” — no abundance of chevrons, basic iPhone “list” elements, rounded corners, or other clearly native iPhone elements. So what do people mean when they say that it’s clearly an iPhone app? Some journalists seem to be falling back on vague generalizations, like “iPhone apps are more intuitive” and “Android apps are all about raw functionality.” At least Adrian Covert claims this is true of the iPhone vs. Android Google Maps apps.
I would argue that, in the near future, even these generalizations will start to fall by the wayside, and the “styles” of these two platforms will merge (aside from obvious differences like Android continuing to take advantage of its hardware menu and back keys). The Google Maps app for iPhone is confirmation that this “one style to rule them all” is emerging in the smart phone app world.
Not so long ago, Android and iOS apps had clearly different styles. You could glance at an app and tell immediately which platform it was for (if the designers knew their stuff, anyway). iOS apps had more rounded elements, rolling “pickers,” and back buttons all over the place; Android apps had more “edges,” were more “web-like,” with dropdown selectors and more things hidden in menus.

But it has become clear that it’s terribly inefficient for every company to develop and deploy a different native smart phone app for each platform, and as HTML5 becomes more robust, more people are turning to responsive design and web apps. When designing web apps, we can’t use the original iOS “style” or an obvious Android “style,” since the apps will be viewed cross-platform. So, instead, an integrated “general touch” style seems to be forming that looks good and natural across all platforms. Then, if these companies decide they *do* want a native app as well, they’ll be able to use this more platform-agnostic design as a starting point (and then build deeper integration) without jarring potential users. This will also allow users to switch platforms with more ease, and will probably be good for the competitive market in general.
As a designer, I welcome this trend – being able to focus on what the absolute best touch experience can be instead of designing a handful of different experiences according to platform will be refreshing.
Lately I’ve become a big fan of the series of books by A Book Apart. So far I’ve read two of them — Luke Wroblewski’s Mobile First and Aarron Walter’s Designing for Emotion — and I bought Mike Monteiro’s Design Is a Job and gave it to my boss to read. I’m also excited about Karen McGrane’s Content Strategy for Mobile.
What I love about these books is that they skip all the obligatory “what else can we put in here to make this ‘book sized’” content that continually haunts tech and design books. For example, O’Reilly’s Mobile Design & Development spends its first 41 pages discussing “A Brief History of Mobile” and “Why Mobile”… I don’t know for sure, but I’d think that if you’re someone buying and reading this book, you probably already know something about the history of mobile, and you certainly don’t need to be convinced that you should be thinking about designing for mobile. Probably 4 out of 5 design or technology books I read involve skipping over a big chunk of the beginning. And these books are advertised *for* design professionals, not newbies or dabblers. What’s going on here?
I think there are a couple of things going on. One, as discussed, the publishing industry is so used to a certain “size” of physical book that authors and editors try to force a bunch of extra content into their writing in order to prove that it really is a “book.” This issue will probably subside as ebooks become more popular.
The other issue that might be going on, however, is a bit more nuanced. I think that book authors often don’t go through the UX exercise of really thinking about and researching their audience and who they’re writing for. It’s more like “hey, I have a bunch of knowledge about this topic. I’ll put it into a book!” Recently I read (admittedly, only the first part of) Gerald Zaltman’s How Customers Think. In this book, Zaltman seems to be speaking to an audience of people who are not only largely ignorant of his topic, but actively disagree with his theories. I found myself wondering, would this “persona” that Zaltman is speaking to really buy this book and read it? Perhaps so – perhaps there are business folks out there who are potentially antagonistic toward his theories but interested enough to read the book. But I got the feeling that Zaltman had not really thought it through – he was just arguing his point with an imaginary user (reader) who epitomized all the annoying people he’d had to convince about his theories over his lifetime. This didn’t work for me, because I didn’t identify at all with his imaginary reader – I already agreed with a lot of what he had to say.
The publishers at A Book Apart, however, obviously have thought quite a bit about their users. Their tagline exactly describes who they want to speak to – “Brief books for people who make websites.” They obviously have a persona in their minds of a web professional who wants to get the most out of their reading experience and to be efficient. On their “About” page they say “The goal of every title in our catalog is to shed clear light on a tricky subject, and do it fast, so you can get back to work.”
I think UX strategies like user experience research, persona creation, and testing could be extremely useful tools for authors. First, identify the type(s) of readers you’re trying to target. Then, send out surveys or conduct interviews to find out what these folks already know and what they’d like to learn more about in the space you’re writing about. From this research, create personas describing their knowledge level, their goals, and anything else about their personality or lifestyle that could help you hone your writing. Finally, send your drafts out to these “target users” and conduct testing. How does the writing make them feel? Are they annoyed? Happy? Are they learning enough? There’s probably a lot more one could do here, but that’s a start.
In the end, if someone’s writing as a design and UX expert, I’ll trust them more if they’ve done some UX research themselves.
I just read an article on TechCrunch titled “Siri, Why Are You So Underwhelming.” In this article, Jordan Crook talks about some of the disappointments she’s had with Siri, and how her initial jaw-dropping reaction to the concept has taken some hits.
This article was completely unsurprising to me. In fact, the whole time the tech media was slavering over Siri, I just didn’t get it. What were people SO EXCITED about? After all, the application already existed before Apple bought it. Voice input was already integrated into almost every native Android action. Yes, I understand that Siri is more integrated and context-sensitive, and “gets what you mean” better than Android’s straight-up text input. And I get the excitement about the step toward better NLP. But why were all these tech-savvy folks so amazed, making such huge claims about usage? Is it really so much easier to “talk at” your phone than to tap a few times? Are people really going to want to do this?
I think a part of my reaction boiled down to my doubt that the *user experience* of “talking at” one’s phone was the experience many people actually wanted and would use. This lack of usage had already shown up in other applications – for example, in the Android operating system, how many people actually use the built-in speech input instead of the keyboard? Yes, it’s cool, and yes, it’s handy sometimes, but I’m doubtful that a large percentage of users use speech input regularly. How many people at a dinner party want to say “switch songs” loudly to their stereo system instead of just pressing a button or tapping their phone while their compatriot is chatting with them? Has Apple done any user research (perhaps diary studies?) to try to find out “is this something people will actually use? When will they use this?” Let me know if so, because I’d love to see that kind of thing.
I’m not meaning for this to be a rant – I’m just trying to get at the core of people’s excitement.
My coworker Kimra mentioned that maybe this whole “telling your machines what to do” thing could be not based on user research, but on a kind of “tech fantasy.” This makes sense to me, especially based on all the articles that came out about Siri, with titles like “A voice-controlled future is finally upon us”, “Apple’s Siri points to an exciting future”, and “Apple’s Siri is the fulfillment of a dream from 1987.” These articles frequently reference science fiction visions, with sections that read: “This is all very Jetsons-type stuff, the kind of ‘they promised us jetpacks’ future we’ve all been waiting decades for” and “Ever since HAL refused to open the pod bay doors for Dave we’ve all wondered when we’d get to talk to our computers. Well now thanks to Apple and Siri we can.”
People talking to their machines (and the machines responding intelligently) has been a staple of science fiction for years. Maybe people aren’t actually going to use Siri, but they’re in love with the “vision of the future” – in other words, people are excited by the fact that technology is getting more like science fiction’s “vision of technology.” But we all know that science fiction books and movies aren’t based on user research – they’re based on the authors’ fantasies and visions of what they *think* people will want and use in the future. It will be interesting to see whether these science fiction visions actually translate into adoption and usage.
Maybe in the end, Siri *is* something people want and will use. Maybe it *is* an amazing step toward the future. But if so, it will have to work nearly perfectly. People will have to be able to rely on their voice commands working with just as much regularity as their touch commands, or else there’s not much point in using them. I think part of the reason people have been disappointed is that Apple and various articles promised this near-perfect fantasy. When we actually get to that point technology-wise, I’ll be excited to see what people do.