One size fits none

Who doesn’t love easy answers? Don’t we all feel good whenever the way forward comes quickly to hand?

With less effort, pain and outlay involved, the lure of easy answers is such that we tend to place undue emphasis on initial conclusions. This cognitive bias is known as ‘anchoring’, or focalism: we rely too heavily on the first piece of information we encounter. When that information happens to line up with an existing belief or preconception, confirmation bias simply compounds the issue.

Psychology aside, often the easy answers we seek are expediently classified as ‘best practice’. Indeed, when we ask “what is best practice?”, it can often be a thinly-veiled substitute for “what’s the easiest answer?”. The same question can similarly mask sentiments such as “if it’s good enough for company X, it’s good enough for us.”

Ten or twelve years ago, whenever a tricky decision point was reached on a website project, it wasn’t uncommon to hear someone chip in with: “Well, what do Microsoft do?”. The implication was a) that Microsoft had all the right answers (it didn’t) and b) that the context was the same (it almost certainly wasn’t).

‘Best practice’ can be an excuse for all manner of evils. Less than two years ago, the use of infinite scrolling – where a webpage would continuously load new content every time you reached the bottom of the page – was running rampant. It was a trend, and one that became de rigueur for any new website that wanted to appear ‘with it’. No doubt somewhere at some point it was referred to as best practice. But like any trend it is increasingly being left behind as we discover lots of sound reasons why it is bad practice. In the case of infinite scrolling, it meant that the website footer – use of which is definitely best practice – was forever just out of reach of the user.

No sooner has one best practice established itself than extrinsic factors change, and something else (albeit not necessarily better) comes along to take its place. What we call ‘best practice’ evolves in parallel with technology, user behaviour and established norms across industries. It should never follow trends blindly. Sometimes trends are completely in harmony with firmly established principles. Sometimes they are not. And so it is with best practice.

There’s an old meme in the design industry that goes something like this: Good. Quick. Cheap. Pick any two. Meaning: good, quick design won’t be cheap, and so on. To paraphrase this for user research and customer insights: “Easy. Quick. Right. Pick any two.”

Whenever best practice is referenced as rationale for decision-making, the immediate follow-up should always be: “Okay. But is it best practice for our customers?”. Context is everything. Easy answers might be cheap, but they won’t necessarily be right.

Learning from others through benchmarking, both within and outside your sector, is an important element in establishing a direction of travel. However, research and validation with your own customers are the true path to success.

Don’t let a quick and easy path to answers under the guise of ‘best practice’ either stifle your organisation’s chance to innovate and excel, or inspire outcomes which simply mirror your organisation’s in-house desires. The path to the dark side this is.

Every app is competing with Facebook

Few app producers or start-ups recognise it, but far from offering a unique experience to users, their apps are vying for attention with the giants of digital – Facebook, Instagram, even Candy Crush.

Any app that manages to make its way on to a user’s device automatically becomes a direct competitor to dozens of other apps, all within a thumb’s reach and all with the potential to use up those few minutes that the user has to spare at that point in time.

Combine this with research telling us that apps have an overall abandonment rate of around 95%, and that 1 in 5 apps is used only once, and it’s chilling stuff for any app producer. So while the user’s decision to download an app remains a landmark moment, the battle for engagement has only just begun.

Some months ago Vibhu Norby, founder of the start-up EveryMe, wore his heart on his sleeve in a blog post, letting the world in on adoption rates for an app that had, to date, secured $3.6m in funding. Those of a nervous disposition may want to sit down for the next part:

• From over 300,000 downloads of the EveryMe app, 200,000 people signed up to use the service.
• A requirement to provide a phone number or email address saw 25% abandon the app.
• An additional (optional) step to sign in via a social network saw a further 25% leave the process.
• Just under 25% didn’t create a social group within the app.
• Finally, another 25% failed to add anyone to the group they created.

The net result was that EveryMe retained around 5% of users through the entire onboarding process, by all accounts a common story even with apps that have firmly established themselves over time.
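As a back-of-the-envelope sketch of how this happens, consider the compounding effect of per-step losses. The step names and drop-off rates below are deliberately not the figures quoted above – they are placeholder assumptions used only to show how a handful of individually survivable losses multiply down to a sliver of the original audience:

```python
# Hypothetical onboarding funnel: steps and drop-off rates are illustrative
# assumptions, not EveryMe's reported numbers. The point is that per-step
# losses compound multiplicatively.

funnel = [
    ("Download to sign-up",      0.33),  # fraction of remaining users lost
    ("Provide phone or email",   0.50),
    ("Optional social sign-in",  0.50),
    ("Create a social group",    0.50),
    ("Add someone to the group", 0.50),
]

remaining = 1.0
for step, drop_off in funnel:
    remaining *= 1.0 - drop_off
    print(f"{step:<26} {remaining:6.1%} of the original audience left")

# Five seemingly survivable drop-offs leave roughly 4% of users fully
# on-boarded -- the same order of magnitude EveryMe reported.
```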

The challenge for EveryMe, indeed for every other app on the market, is simply to build something that people want to use. Before app producers get close to design of any sort, the answers to the following questions should be clear: firstly, “What problem are we solving?”, and subsequently, “Are we building the right thing?”.

Solid research and user experience strategy go a long way to answering those questions, and that work should include getting a firm grasp on user motivation and intent. A lot has been made in the past of the idea of gamification and of making apps and services more pleasurable to use. These principles tap directly into motivation theory, where ‘rewards’ (badges and user levels, for example) are offered as extrinsic motivation for user interaction.

Stronger still is intrinsic motivation, where the drive to act (and interact) comes from within the individual. Take as an example the popularity of the ‘Couch to 5K’ system. The method of incrementally lengthening periods of running until a non-runner can comfortably cover a five-kilometre distance has spawned dozens if not hundreds of apps, which consistently top the app download charts. Leveraging intrinsic motivation is one of the most effective ways to encourage engagement over the long term.

The secret to establishing motivation – extrinsic or intrinsic – and knowing which one to use is to know your audience well enough to answer the fundamental questions: what are users trying to achieve, and how are you facilitating those goals? A user’s time spent interacting with an app or a website is an investment on their part; they expect a return on that investment, whether in the form of something tangible, say an item bought online, or through realised ambitions and goals achieved.

What motivations are you able to tap into for your app or website? What are users really trying to achieve, and how are you making it easier for them?

If these questions are not credibly answered, those thumbs will almost certainly drift towards the Facebook icon once again.

This post first appeared on the Fathom_ blog.

A little UX knowledge is a dangerous thing

Picture the scene: a meeting room. A board meeting perhaps, or a presentation. Certainly something to do with commercials. Before too long the acronyms begin to fly thick and fast. Those coming out with the acronyms seem confident enough saying them, so everyone else nods along sagely, never daring to stop the flow to ask what anything means.

For most of the time, you keep up.

“‘CRM’… yep, got that one. ‘ROI’? Schoolboy stuff. ‘C-suite’? Hang on, hang on, I know this one… nope, I’ll just have to nod like the others and get past it.”

If that doesn’t sound familiar in any way, let’s (ahem) say it’s just me then.

You’ll forgive my sensitivity about something so close to my own heart, but ‘UX’ has now joined the buzzword bingo list. In many ways it’s understandable – it has an X in it after all, so it sounds edgy. User experience is ‘hot’ at the moment, and anything hot will inevitably get misappropriated. Some will be quick to pursue the credibility they assume will come from adopting a faux interest in customer needs.

Last year, one of the most blundering commentaries on user experience I’ve had the misfortune to read appeared on a popular professional networking platform, written by the MD of a prominent advertising agency. With a promise to explain UX to the reader, this individual went on to stumble their way through an incoherent, rambling essay in which UX was thrown haphazardly into – and I quote – “lots of different stuff” that delivers the power to steal customers from competitors.

The kind of zero-sum thinking reflected in the piece smacked of the marketing and advertising of 20 or 30 years ago, where brand was the dominant force pulling the strings, and a ‘by-any-means-necessary’ approach to customer acquisition ruled the day. Any means, that is, except focusing on the goals of the customer.

I often re-read the piece, partly because each reading leaves me more convinced it was authored on the wrong side of a bottle of wine, but also because it typifies the greatest distortion of user experience thinking: that it is somehow a natural extension of traditional advertising or marketing, and pertains to “the experience that we will impose on them”. User experience is in fact anathema to that worldview.

The trouble is that simply talking about UX, dropping it into the meeting room game of buzzword bingo, suggests a fait accompli: that merely referring to it ticks the box. Beauty pageants are an archaic concept nowadays, but one version of them pops up regularly in the business world: the presentation of graphics when not one question has been asked of the end user. There is no ‘U’ in UX if the user hasn’t been represented in the outputs.

UX is primarily about solving problems, not merely the amplification of a marketing message or delivery of a brand style guide. User-centred design is not a fatuous term. It requires a process, one that starts with the user.

UX doesn’t exist within a set of graphics, or a piece of content, or navigation labels, or any single component. A user experience exists in the mind and memory of those people that have engaged with a product or service. This must be understood before it can begin to be addressed.

Of course, such simple facts have never got in the way of some people just opening their mouths (or taking to their keyboards) and letting the buzzwords flow.

There’s nothing new about innovation

If you are familiar with phrases such as ‘early adopter’ or the bell curve model of adoption, then – whether you know it or not – you are also familiar with the work of Everett Rogers.

In 1962 (53 years ago at the time of writing), Rogers published Diffusion of Innovations, which contained not only enduring ideas like the bell curve, but also a wealth of material that continues to be relevant in a world hungry for the silver bullets of success.

Still in print, and now in its fifth edition, Diffusion of Innovations remains a central text when it comes to assessing the potential of innovations in the marketplace. Building on research gleaned from over 1500 field studies, Rogers identified that an innovation could be rejected, and therefore fail, based on one or more of the following characteristics:

Relative Advantage – the degree to which an innovation is perceived to be better than something comparable

Compatibility – the degree to which an innovation is compatible with existing habits or behaviours

Complexity – the level of complexity associated with adopting the innovation

Trialability – the level of opportunity to test out or trial the innovation

Observability – the extent to which the results of an innovation are visible to others (particularly peers)

Many producers or technologists believe – indeed, need to believe – that they will be the next Steve Jobs or Henry Ford. Quite a number will claim that Rogers’ criteria don’t apply to their product, in the same way that Jobs or Ford would never have been tethered by rules of any kind.

However, the vast majority of everyday innovations tend to build on and improve something already familiar. Take the iPod as an example: the idea of a portable music player was well established – the Walkman was first introduced in 1979, after all. Indeed, the Walkman’s innovation had been to miniaturise the music player; the opportunity was already manifest. Its inventor, Akio Morita, had observed young people in New York carrying boom boxes around on their shoulders and recognised the desire for portability.

Further, the iPod was not even the first of its kind; that honour belonged to the Audible Player, which arrived on the market almost four years before the launch of the iPod. By the time Apple’s response appeared, there were over half a dozen MP3 players available on the market.

So the ‘innovation’ of the iPod lay in building on already established behaviours and needs. Where Jobs’ vision triumphed was in the exquisite execution of the concept, something that other companies didn’t come close to. Viewed through the lens of Rogers’ criteria, the iPod matched many of the requirements:

Relative Advantage – it had a much greater capacity than the Walkman

Compatibility – the idea of personal stereos was ingrained in the consumer mindset (although the iPod brought with it the iTunes ecosystem)

Complexity – the iconic scroll wheel made the task of navigating huge amounts of content not only easy, but pleasurable

Trialability – many high street stores stocked the iPod

Observability – the white earbuds were instantly identifiable, and formed the centrepiece of advertising campaigns of the time
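For those who want to make that kind of assessment explicit, the five characteristics lend themselves to a simple checklist. The sketch below is one hypothetical way to encode them – the dataclass, the 1–5 scale and the example scores are assumptions of this illustration, not anything Rogers (or Apple) prescribed:

```python
# A hypothetical scorecard that treats Rogers' five characteristics as a
# checklist. The dataclass, 1-5 scale and example scores are assumptions
# made for this sketch; they are not defined by Rogers or by Apple.

from dataclasses import dataclass, asdict

@dataclass
class InnovationAssessment:
    relative_advantage: int  # perceived improvement over comparable options
    compatibility: int       # fit with existing habits and behaviours
    simplicity: int          # ease of adoption (the inverse of complexity)
    trialability: int        # opportunity to try before committing
    observability: int       # visibility of results to others, especially peers

    def rejection_risks(self, threshold: int = 3) -> list[str]:
        """Characteristics scoring below the threshold mark likely grounds for rejection."""
        return [name for name, score in asdict(self).items() if score < threshold]

# Illustrative scores (1 = weak, 5 = strong) for the iPod example above.
ipod = InnovationAssessment(
    relative_advantage=5,  # far greater capacity than a Walkman
    compatibility=4,       # personal stereos already ingrained in the mindset
    simplicity=5,          # the scroll wheel made navigation easy, even pleasurable
    trialability=4,        # stocked and demonstrable in high street stores
    observability=5,       # white earbuds instantly identifiable to peers
)

print(ipod.rejection_risks())  # [] -- no obvious grounds for rejection
```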

Assessing the potential for innovations to succeed is prime territory for user experience work. UX is often mistaken for user interface work alone, or for a trendy nom de plume for what we should simply call ‘design’. But in 1962, Rogers was already speaking a new language:

Determining felt needs is not a simple matter, however. Change agents must have a high degree of empathy and rapport with their clients in order to assess their needs accurately. Informal probing in interpersonal contacts with individual clients, client advisory committees to change agencies, and surveys of clients are sometimes used to determine needs for innovations.

This was new thinking in 1962, and it remains a challenge for businesses and organisations today. But it is evergreen advice, and words that we in Fathom adhere to day and daily.

In the scramble to innovate, don’t overlook the fundamentals.

This post first appeared on the Fathom_ blog.

Designing for Dignity

User experience professionals are vocal about the benefits of a user-focused approach, the need to remember that users are people, and the importance of dealing in human-centred design. The rationale tends to link the outcomes of a user-centred methodology with conversions and completions, funnels and fulfilment. Less common, however, is any mention of the human value of great design.

In 2007, Dr. Richard Buchanan published a seminal essay reflecting on the ability of design to play a greater role in society. In the essay, he wrote:

“Human-centered design is fundamentally an affirmation of human dignity. It is an ongoing search for what can be done to support and strengthen the dignity of human beings as they act out their lives in varied social, economic, political, and cultural circumstances.”

With those words, all stakeholder, designer and developer egos should collectively and rightly crumble.

Simple support for human dignity doesn’t make it into the marketing discussion. It’s unlikely to be talked about during the brainstorming sessions; it almost certainly won’t make it into the design brief, and the technical specification won’t address it either. So where do we fit in designing for dignity and the basic ability for people to achieve their goals and get on with their lives?

Respect for people’s time means allowing them to complete what should be simple tasks and letting them get on with what they really want to do – something unlikely to include continuing to use your app or website. Linking design with usability is a vital step. But that further step, of linking usability to dignity and respect, is under-represented.

Unconditional positive regard is a term used in psychology relating to the acceptance and support of a person no matter what they say or do. Applied to the world of UX, we might say ‘there is no such thing as user error’. Design luminary Don Norman puts it this way:

“What we call ‘human error’ is a human action that … flags a deficit in our technology. It should not be thought of as an error.”

And yet it is technology that so often lets humans down. It often appears that our desire for impact and ‘cool’ has overtaken the need to design products and services that meet basic needs.

As an example, I’ve witnessed first-hand someone aged over 90 get to grips first with a PC and subsequently a tablet. I’ve been left feeling ashamed for the software industry as a whole as that same person tried to adapt to a new operating system that installed itself, just after they had managed to come to terms with the previous one. I have seen them struggle with the iPad version of a shopping app, only to be forced to run the scaled-up iPhone app on the tablet to make the system accessible to them. Unsurprisingly, they blame themselves.

The inherent simplicity of touchscreen devices offers a potential lifeline for those who have been left behind, or left out of the internet revolution of the last 20 years. Badly-designed apps and online services immediately waste that potential.

The goal that people should be able to use what we design with ease, free from stress or friction, is not mutually exclusive with the business objectives of most projects. Absent from too many project briefs, the principle of designing for dignity should be a prerequisite: a foundational element in any design discussion.

This post first appeared on the Fathom_ blog.