Broken Windows

The Broken Windows theory of criminology was popularised in Malcolm Gladwell’s 2002 book, The Tipping Point. The theory says that urban environments where vandalism and dereliction are present redefine social norms, reducing the pride people take in their communities and leading to greater crime.

In his book, Gladwell highlights the effect Giuliani’s zero-tolerance policy on minor crime had in reversing New York City’s years-long reputation for being dirty and dangerous.

I think the same magnified effect of small issues applies just as well to another kind of Windows.

When Microsoft launched Windows Phone in 2010, they achieved something similar in a change of attitude. Screw the number of total features, apps or whatever: the Windows Phone team reversed a decade-long (Pocket PC first came out in 2000, Windows Mobile in 2003) trend of releasing software with lots of little bugs in it. And in doing that, they gave many of us hope that Microsoft could really rival Android and iOS in the mobile phone OS market.

Anyone who lived with a Windows Mobile device (Windows Mobile 2003, Windows Mobile 5, 6, 6.5) will remember these little bastards. The overwhelming feeling when one thing or another just failed to work wasn’t anger; it was resignation.

I’m not talking about UI failures – although there were certainly plenty of those. My favourite bit of non-user-centred thinking must be the snooze menu for the built-in alarm clock, which required navigating a pop-up submenu – this for a user you can guarantee is half asleep. In later versions, SMS messages were threaded, but new messages appeared at the bottom of a list which would open at the top – often taking several minutes to scroll down to.

No, I’m talking about full-on bugs. In our office at the time, where these phones were standard issue, we gave up asking why people had failed to return calls, had hung up mid-sentence (I still believe the phone would drop the call if it received an email with attachment), or sent garbled and incoherent emails and texts.

I remember a conversation with the ‘mobile expert’ from our firm back in 2008 when he told me that the way to keep your WinMo phone working well was to completely wipe it and re-install everything each month.

You just learned that, every so often, the phone would let you down, and the only thing to do was to suck it up.

It was a disaster. Ballmer even admitted as much in public.

Despite all this, the platform was pretty successful, commanding up to 20% of the marketplace – because it was only competing with BlackBerry (which was a bit more expensive and required a server for enterprise customers to get their mail) and Symbian, which was late to make any kind of leap to the enterprise.

I’m sure Microsoft would kill to have the same share with Windows Phone today that they once had with Windows Mobile. And the fact is that Windows Phone – a completely re-designed mobile platform – deserves to be a serious competitor in the marketplace. It’s really good.

But the best thing about WP7 when it came out was that it wasn’t buggy. It didn’t have multitasking (WM did), it didn’t have copy and paste (ditto) or all sorts of other features. But at least it didn’t have any bugs. Things would straightforwardly work. Calls could be made. The screen wouldn’t stop responding or go all laggy. The UI was consistent. In fact, the UI was excellent and intuitive. So good, in fact, that it’s ended up on Windows 8, but that’s another story.

For once, it felt like the Microsoft team behind the product really understood the need for quality in what they released. Better quality, not more features. When Microsoft updated the OS to 7.5 (codenamed Mango) they brought a host of new features and capabilities to the platform and, once again, maintained the quality. Of course it was and is an uphill struggle for the OS. Clearly it’s been slow to grow. But the people that have it like it, and that’s a great starting point.

So now it’s two years later, and Microsoft has recently launched WP8 for new WP hardware and a final update, WP7.8, for older hardware.

Whilst it brings a couple of new features and a new start screen, WP8 is really an engineering-led change for Microsoft, building on a long story which dates back to before the somewhat calamitous release of Windows Vista.

Vista had been intended to improve the overall user experience of Windows, making a big step forward from Windows XP. As it happens, the user experience of the Windows Vista interface was very compelling. Unfortunately the performance – the most important element of any user experience – was not up to scratch, frustrating many with the new OS.

By contrast, Windows 7 went on to be Microsoft’s best and most successful OS, and it did this by making the heart of the operating system as small and efficient as possible, dramatically improving the actual user experience. Project lead Steven Sinofsky did this by taking advantage of the ‘MinWin’ project, which had been running at Redmond for many years to cut down the core of Windows NT.

With Windows Phone 8, Microsoft has replatformed – almost invisibly – their phones from Windows CE (a somewhat dated and clunky core) to a version of Windows NT (a long-standing but highly efficient system), just like Windows 7 and Windows 8.

There is no doubt that this is an amazing engineering achievement, even though it has come at the cost of WP8 moving along very little from WP7 in terms of what the user sees. But it also seems to have come at the cost of quality in delivery – and not just the delivery of WP8, but of WP7.8 too.

Nokia was kind enough to send me a Lumia 820 device early on. Aside from using the highlight colour for the button actions as well as the tiles, the device can only really be told apart from the Lumia 800 by the removable back cover and the size (for my money, a bit too big). The screen’s actually the same resolution (but bigger, so it drains the battery faster). It’s got NFC and wireless charging, both of which are cool. But it crashes. About every six hours, meaning I’ve got pretty good at taking the removable cover off. And the music player hangs the system. And you know what I thought straight away? This is like having Windows Mobile back. Broken windows.

Because it’s careless. As I said earlier, I’m sure it’s a major engineering triumph, but from a user’s point of view it’s taken a year to make a phone that’s bigger, has worse battery life, crashes (often at night, making its use as an alarm clock somewhat questionable), hangs, doesn’t have Gorilla Glass (the 800 does), has a much worse desktop sync client and doesn’t look as nice.

And the 7.8 update, a sort of parting shot to keep a Microsoft promise about upgrade cycles, is full of bugs. So now my 800 is broken too. The live tiles don’t work, mine at least is crashing regularly and there are small careless errors dotted here and there. Take for example my Music tile which has recently renamed itself (somewhat accurately) ‘Crowded House’!

Forgive me for saying that it doesn’t feel like a year well spent. A year in an industry which (Android at least) is moving ahead very quickly. Yes, we want new features but what I personally want more than anything is quality. Each new product should have fewer bugs than its predecessor, not more. And every time I find a ‘little bug’, it shakes the faith I have in Microsoft to win in phones.

Surely a successful phone is the most important key to Microsoft’s long-term consumer strategy. So why isn’t it their top priority to get it right?

Decision time

It takes a very cold heart indeed not to love a user-experience concept which can be illustrated with a mathematical formula. Look at Fitts’s law:

T = a + b log2(1 + D/W)

This formula tells us that the time it takes to point at something on a screen (or in real life) depends on the size of the thing in question and its distance from where you’re currently pointing (D is the distance, W the width of the target, and T the time it will take).
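To make the relationship concrete, here’s a minimal sketch in Python. The constants a and b are illustrative assumptions only – in practice they are fitted empirically to a particular device and user:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target of a given width
    at a given distance, per Fitts's law: T = a + b * log2(1 + D/W).
    The constants a and b here are made-up, for illustration only."""
    return a + b * math.log2(1 + distance / width)

# Doubling a far-away button's width cuts the predicted time to hit it:
small_target = fitts_time(distance=800, width=20)
big_target = fitts_time(distance=800, width=40)
assert big_target < small_target
```

The practical upshot, which any good mobile interface shows: make frequently used targets bigger, or put them closer to where the pointer (or thumb) already is.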

Why do such formulae exist? Because we’re dealing with a fundamentally limited but predictable set of capabilities in a fundamentally mechanical end-user. They have real-life results, visible in any good mobile phone interface design, and no amount of jiggery-pokery will change them.

Well it was in this spirit that I stumbled across Hick’s law.

The law is a formula to help show how humans make a choice from a set of available options. Most famously, I suspect, this has been spun off to show that navigation systems should have about 7 options in them.

The idea here is that humans have certain coping strategies for making decisions. If a long list is presented, for example, they will try to create patterns to help them (roughly) bisect the list (pick half and reject half). It has also been shown that decision speed is related to IQ.
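Hick’s law itself can be sketched the same way – a hypothetical illustration in Python, where the constant b is a made-up value rather than anything measured:

```python
import math

def hick_time(n_options, b=0.2):
    """Predicted decision time (seconds) for n equally likely options,
    per Hick's law: T = b * log2(n + 1). The constant b is an
    illustrative assumption, not a measured value."""
    return b * math.log2(n_options + 1)

# Doubling a menu from 7 to 14 items does not double decision time;
# the cost grows logarithmically, like bisecting the list:
assert hick_time(14) < 2 * hick_time(7)
```

This logarithmic growth is exactly the ‘pick half, reject half’ strategy: with 7 options a user needs roughly three bisections, and doubling the list adds only one more.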

So – whilst we cling to the nice idea that any navigation system will be OK so long as we’ve got no more than 7 items in it, in fact there are several other dynamics at play:

* Stimulus/response capability. It will take a lot longer to click on the right link if we break the intuitive link with layout (e.g. “bottom | top | left” is very hard to scan)

* Elements of mixed sorts shown together require the user to read all the labels and think about them together, imposing enormous cognitive overhead (“Carbon neutral products / Contact us / Back / About“)

* Users can use well-known patterns to ignore irrelevant options, significantly reducing the thought required.

But the key thing to take away here – which may be very counter-intuitive for you advertising johnnies – is that it is positively in your interests to help your users quickly ignore options which are not relevant to them. Support your user in ignoring messages.

7 reasons

So Windows 7 is now winding its way onto the PCs of average users. It’s virtually impossible to know how they’ll like it.


Certainly the development community has been very impressed. A lot of my colleagues have been using it as their primary OS for many months. I’ve had it running since the first public beta. It was very good then, and it’s got better through the releases.

But, as everyone knows, developers aren’t normal people! Their impressions of the software could prove to be the exception. Developers can see why things are screwy, and their conceptual models are far more developed, enabling them to understand much more quickly how something works functionally.

More importantly, developers are much more rarely out of their comfort zone. And, in my experience of user tests, most consumers are out of their comfort zone most of the time.

I recently spent a couple of hours watching a user who was setting up their (XP) machine. I asked him how he accessed the internet. He showed me that he would log into Messenger (because it was set to pop up automatically), click on the ‘unread messages’ icon which would open Hotmail, and then enter ‘Google’ into the (Bing) search panel (in IE). He referred to the browser (IE) as ‘Google’. He was very annoyed that he had to keep seeing the (optional) Windows Live Today. He was absolutely delighted when I set it up so he could launch ‘Google’ directly from a link on the desktop (with a change of homepage). The only ‘advanced’ feature he seemed to be aware of was ‘clear browsing history’ 🙂

These tales are not exceptional. They are the norm.

So it remains a big question how Win7 will fare in the ‘my mum’ test.

I think it’s got several things going for it. All of which are testament to a strong focus by Microsoft on what actually matters to consumers, and – as I’ve said before – some very sophisticated research and evaluation techniques to back that focus up. For no reason at all, I’ve picked out 7 of them.

1. The number 1 element of user experience is performance

I sat in a Microsoft presentation a few weeks ago. The guy presenting was flipping between a number of decks. I remember thinking, ‘Windows 7 certainly looks better than its predecessors.’ Then I realised he was using Vista. On the surface of it, Vista looked pretty good and had loads more features. But it was slow at some key things, and it was sort of randomly sluggish.

Everyone knows those pop-stats like ‘the average person spends three years of their life at traffic lights’. How long did we spend watching the blue hoop of maybe-death in Vista? Far too much.

Windows 7 is simply a lot faster than Vista (and even XP) at most common tasks.

But this is not just about how long you actually wait.

I think the neatest bit of design I’ve seen in the last two years is the favourites screen (new tab page) in the Google Chrome browser. By loading instantaneously, it makes you feel like the app is instantly ready to go, even though it could be another 30 seconds until you’ve got your first page loaded.

We know that the Windows 7 team spent a lot of time on this sort of thing, for example making sure the start menu would pop up quickly no matter what. I’m sure it’s faked some of the time. But it works. It makes you feel like you’ve got a snappier computer.

And then they’ve picked out the key performance things – that people actually notice – like sleeping and waking up, and they’ve made sure Windows 7 is way ahead of the pack.

2. Less clutter works towards a ‘feeling of mastery’

For all UI issues, I think rule 1 is that users don’t customise. Remember that over-inflated services bar in XP (the one on the bottom right)? How many people (outside of developers) did you ever meet who had managed to reduce it down to a manageable set?

So quite a lot of the UI in Windows 7 is about reducing clutter. And making things nice, big and clickable when appropriate. Simples!

There is a chance now that an average user might be able to look at their desktop, and scan the start menu, and have some reasonable idea what everything does. Again, this sounds obvious, but it’s been lacking in any other OS I’ve come across.

3. Copy

Windows has always had a fondness for incomprehensible text and excessive dialogue boxes. It’s a small tweak, but the UI text in Windows 7 is easier to read and understand, and the improved UAC gives you less to deal with.

All of these little ‘exceptions’, the times when the user leaves the ‘happy path’, are often paid too little attention in the design process. They are in fact the times of maximum stress for the user, as they stray the furthest from their comfort zone. How on earth do we expect a normal user to deal with ‘host process for windows has terminated unexpectedly’? I don’t even know what to do with that one, and I know what it means.

4. A bit of taste

Obviously this one’s subjective, but certainly the backgrounds, themes and login screen in the release candidate are significantly more interesting and creative than anything we’ve seen before from Microsoft. It’s not trying to be too funky, but it feels for the first time in a long time that someone with a bit of taste has been involved in the visual (and interaction) design of the interface.

5. Dealing with third parties

Clearly integration with everyone and their aunt has long been both the key weakness and the key strength of PCs and Windows. The horror show of Vista driver compatibility was arguably its single biggest problem. Win 7 won’t repeat that, since we’ve been through it already (and there’s been a lot of advance prep with partners). And a real effort has also been made on the ‘device stage’ functionality to try to make the whole thing feel like one computer experience.

6. Integration of Office apps

I hope the European Commission aren’t listening, but some of the neatest features of Win7 are its integration with the UI and features of the apps that run on top of it, principally the Office suite. The ability to peek into subwindows with Aero Peek is brilliant. Part toy, part useful function, it is very compelling, again building the sense of mastery over everything running on the machine. For the first time, the search (from the start bar) really works, indexing everything and presenting categorised lists (by source). There are about ten small features like this that add up to a real feeling of integration and control. Very neat.

7. Never mind the hype

Arguably the most important distinction between the Vista launch and the Windows 7 launch has been the approach to hype.

What’s worse than suffering at the hands of a dysfunctional operating system? Being told how great it is while you do it.

The defining moments of any OS are not the big numbers, the length of the feature lists or the coolness of the loading animations (important though they are). They are the moments when the user feels ease or unease.

There are lots of parallels in the real world for this. But when a user can’t see how to do something, they feel stressed and they blame themselves. They feel stupid. Not being certain is as bad as not knowing at all. Will what I’m doing really erase all my files? Am I actually in the ladies’ section, because I quite like those trainers? When users are agitated or nervous, they are not building happy memories.

Incidentally, the most misjudged result of this misunderstanding came from a Microsoft marketing campaign, the Mojave Experiment, where Microsoft suggested that users were ‘wrong’ about Vista – which is essentially like telling your four-year-old that he’s NOT afraid of the dentist.

Instead what we see in Win7 is an absolute acknowledgement that the consumer has the right to misunderstand and make snap and shallow judgements, a fact that many other industries have known for a while.

The story goes that Fiat has a design shop looking at door handles, steering wheels and ignitions, because these are the only bits most punters will come into contact with in the showroom or on a test drive.

It appears that the Win 7 designers and engineers are thinking the same way. They have smoothed the edges, picked out the occasions when performance matters most, and tuned it up just a bit. Bugger what’s going to make the OS appeal more to devs – what does the consumer care about? A bit more gloss here, a bit less gloss there, a bit faster here, a bit less cluttered there, and these 100 things will make the user feel confident and in control. It may be on an industrial level, but it is experience design of the highest order.

Certainly we won’t know until it’s had its mass-market mauling at the hands of my mum and millions like her who don’t do this for a living. But if I had to bet, I’d say the ‘average user’ will like it more than the beta audience did. And even better, they won’t know why.

Doors and language

I talked a few weeks ago about how toilets and planes are bastions of usability. Of course, I missed out the number one usability battleground. As Don Norman covers in incredible detail in The Design of Everyday Things, doors are the simplest opportunity for poor and inconsiderate design.

And, although the world remains full of terrible doors, I found a great exception in a most surprising location. And for added marks, it was a toilet again. The loo in question was in a Starbucks, and had an ingenious solution to an often-mangled problem. To lock the door, you lift the handle up.


Like all good ergonomics, the solution is elegantly simple: it provides visual feedback, prevents any attempt to open the door without unlocking it, and reduces the total number of controls.

Unfortunate then, that the same smallest room also offered this feat of mangling of the English language:


When I was at Bristol University, our marvellous professor of logic, Professor Mayberry, once spent 10 minutes showing how many distinct meanings the phrase ‘every nice girl loves a sailor’ could have – mostly concerned with how many girls there are, how many sailors there are, and who loves whom, in reality or in theory.

Well, without getting all ‘That’s Life’, this sign suffers a similar – and frankly filthy – ambiguity: surely things other than paper are going to go down the toilet, and surely you’re allowed to do more with toilet paper than just flush it down the toilet.

Now I think I know what they intend the sign to mean, but surely a little effort could have been put into the language, just as there was into the handle.

Creating the ribbon


I’ve talked here a few times (here and here) about how Microsoft doesn’t seem to be able to catch a break. Google or Apple get gushing reviews for living ‘in beta’, Microsoft gets slammed for getting stuff out too soon. Apple’s security is questionable, but we never hear about that. Nor it seems are we ever reminded of the potentially dangerous level of detail Google extracts from customers. Ballmer’s an egotistical wild man, while Jobs is a quirky eccentric genius. Making huge profits turns Microsoft into the evil empire, but is seen as a validation of Google’s all round wonderfulness.

This year’s Mix event, which finished yesterday, has been a strong reminder that there is in fact a good deal of great stuff going on at the software giant, and that developers in particular are delighted with much of the company’s output.

Friday’s presentation on the design of Office 2007 provides a fascinating insight into the sheer scale of the software and interface engineering challenge the team faced, their tenacity in dealing with it, and the powerful emphasis placed on the needs of the end user.

Taking in early prototypes that show the hugely varied ideas the team went through to get to the released version, the presentation is rich with insights into the internal battles that had to be fought throughout the process, plus some amusing asides about previous mistakes. At 75 minutes, it is well worth a watch.

Jensen Harris looks all the way back to Office 1, documenting the slow descent into the chaos of Office 2003, which boasted 31 menus and 19 taskpanes. The impetus to redesign the interface from the ground up for Office 2007, rather than adding more menus, wizards and taskpanes, was an understanding that the user must feel in control of their document and that, while all the features should stay, the ‘perception of bloatedness’ had to be removed.

We see some of the stats from the customer improvement programme (collecting millions of anonymous customer usage patterns). This information was a key part of understanding the sequence of actions that real customers actually take, and reveals – perhaps unsurprisingly – how erratic their actions actually are. There is also some amusing eye tracking against the 2003 suite, some interesting insights into the challenges of creating a taxonomy of the 1,500 functions, and some more unkind words about the demise of Clippy, the automated assistant which was just one way to get around the almost impossible interface that existed until recently.

During the Q&A at the end of the session, Harris is asked about the extent to which customisation was considered. Whilst not against customisation per se, Harris argues that it mustn’t be used as a ‘crutch’, avoiding usability problems by allowing the user to remove them, and explains that only 2% of users ever used the customisation features of 2003, and then only for one or two buttons.

Model citizen

Thought bubble

Watching people in usability tests is fascinating. Anyone who has done this will know what I mean. Months of planning a system, hours spent building in impeccable logic, are dashed irrefutably against the rocks of reality when user after user simply fails to see it the way the designer does.

The concept of mental models was first put forward by Scottish psychologist Kenneth Craik in 1943. The idea is that humans are frantic interpreters and, to aid in the speed of interpretation, will create small scale pictures in their mind of what is going on. While these models continue to perform users will hold on to them and use them. But they are expendable. If the user hits a brick wall and their model fails to predict what happens in the real world, it will be abandoned for a new one. Philip Johnson-Laird extended this concept through studying how readers understood novels, saying that some authors would force the reader – through ambiguity – into holding several mental models in mind concurrently – each vying for selection.

In designing computer interfaces, we often have conceptual models (to a certain extent, the designer’s mental model, or the shared “mental model” of the design team), and of course there is also a functional model – what actually happens, how it actually works. Something that doesn’t get mentioned in HCI discussions is that there are very often business rules which also apply throughout the function, and which are essentially part of the functional model. We need to work hard to get these often complex functional models to deliver simple, understandable conceptual models.

So take a new site where items can be added to a basket by drag-and-drop. There are a number of models being combined here. The user is being asked to co-opt an understanding taken from the classic operating system GUIs (dragging and dropping). There is an underlying co-opting of the supermarket experience of baskets. I, for one, am not convinced that this latter abstraction was a natural one for users to learn, although most users do now understand the concept of an electronic basket almost as well as they know how to shop in stores. Of course the functional model will be completely different and much more complicated.

It is suggested in this fascinating summary that conceptual models shouldn’t obfuscate what is really going on. Certainly in terms of HCI, I find that view insupportable. The user doesn’t want to know that their product going into the basket is just a new entry in a database join table, having passed through a set of business rules – although we do see sites regularly forcing customers into this level of mental gymnastics.

Sometimes, resembling other mental models is helpful (drag-and-drop in the example above). Often, too, it is confusing. Picking only parts of a conceptual framework, or attempting to abstract it too far from its original purpose, leads to a cognitive dissonance that leaves the user unconfident, often taking them back to square one.

Humans don’t scale

Talking Heads album cover – an oil painting of a monkey

In the Spring ’07 Market Leader (the Marketing Society publication from WARC), Y&R’s Simon Silvester talks about how it is the limitations on our ability to learn and adapt to new technologies which will actually restrict their spread; that innovation is useless without usability.

He points out that the “geek” audience of super-early-adopters have a very different (and dichotomous) set of needs from later adoption groups and certainly from the mass market and the laggards. Most people don’t use most buttons on their remote controls, most people use a small fraction of the functions available in software packages, and even most teenagers can’t keep up – Silvester’s own research could not find one teenager who knows how to use every button on their phone.

Refreshingly Silvester calls for a more human-centred approach to design, debunking some powerful myths:

  1. That consumers want convergence – actually the most successful products often do one thing well
  2. That later adopters will not just have different needs – they will have an entirely different framework (the example given is that the first round of mobile-phone users saw the phone as a tool for urgent calls if – for example – arrangements changed or went wrong; the second, youth, generation has in contrast re-orientated their entire lives around phones).
  3. Once technology works, consumers forget it exists
  4. Female audiences are increasingly key drivers of communications technology
  5. Changes may take a generation to take hold
  6. People simply don’t read manuals – don’t even hope

This is all grist to the mill for those of us who are passionate about the user-centred (or human-centred) design approach, but it also ties in rather well with a Gaping Void post that “Human Beings don’t follow Moore’s law”, or in Hugh’s own words, “Humans don’t scale”. There’s all this new technology, but it’s being used by the same over-developed apes. So we’ve got to work really hard to make it immediately understandable and usable.

If Web 2.0 is Web 1.0 done better and adopted more widely, and the truth is that a lot of the technology was around in 1999 – it’s just that people couldn’t or didn’t want to use it – then we need to keep up the good work. Let’s hope the customer’s voice just keeps on getting louder and louder.

No logo?

What’s missing from every page of YourSpace except the home page?

Lily Allen – MySpace

Give up?

It’s the logo, stupid. Aside from the URL and a couple of sub-branding elements (like the player), there is no MySpace branding. The site hands ownership properly to its users but has pulled off a very neat trick: it is recognisable just through its (ugly, illegible) UGC design patterns.

(Incidentally, what is all this nonsense about Lily Allen (who I used for the grab above) being fat? If we let Girls Aloud people criticise proper musicians for not being anorexic, we really are in trouble – 2338 responses to that post!).


Paris Hilton with a Blackberry

I’ve never really understood Twitter. I regard this as a weakness. All the coolest people seem to love it, and I can see how it’s a neat concept. I just wonder what I’d put: “Doing sudoku on tube”, “buggering up a lasagne”, “In meeting”, “reading in bed”. I’d bore myself.

Well, I’m delighted to see that I’m not 100% alone in my luditeitude (I hereby create a new word!). This brilliant ‘Creating Passionate Users’ post by Kathy Sierra goes well beyond that initial suspicion that there’s something a bit freaky in it, putting a (very cool) name to a phenomenon I’d been quietly aware of for some time.

In the quite brilliant Perfect Pitch, Jon Steel talks about how constantly receiving and checking of messages can (temporarily) lower your IQ by 10 points.

We now know what it’s called: “intermittent variable reward”. Or, in other words, behaviour which is rewarded or reinforced intermittently, rather than consistently, is the most difficult to extinguish. Or, to reduce it to really simple terms, the addiction to email and BlackBerries is similar to slot machines. As Patricia Wallace put it in Time magazine: “You are not sure you are going to get a reward every time or how often you will, so you keep pulling that handle.”

Not content with revealing the real reason for email addiction, Sierra goes on to explain the emotional dissonance that arises out of “virtual” interactions – although this is not necessarily a Twitter phenomenon; it applies equally well to TV. The brain feels like it’s experiencing social interaction but is missing an element – body language and so on – leaving the subject feeling disappointed and dejected.

Finally, Sierra brings in the concept of “continual partial attention”. Thinking-wise, what we as humans enjoy most is deep thought and processing. But what we do now is the opposite: we constantly pay partial attention to a huge range of inputs. We care more about not missing anything than about actually focusing on and achieving anything.