Evil

[Image: Google ‘don’t be evil’]

If I were to set up a company whose whole brand promise was ‘no snack foods’ and then released a range of delicious healthy biscuits, you would rightly think I had some branding issues. I’d certainly have to reconsider that ‘no snacks’ banner on my corporate headquarters, and perhaps rethink those 30-second TV ads talking about my anti-snacking commitment.

Well, what’s up with Google then? This company has long traded on the idea that its founding principle is ‘Don’t be evil’, yet it has recently been found guilty, again, on appeal of illegal wiretapping in the US and Europe. For a business that is basically about communication, it’s hard to think of a more pertinent type of evil than stealing personal data from customers. And this is far from being the first time they’ve been found guilty in this way.

Think that’s bad? That’s nothing compared to what they’ve done which is ostensibly legal – even before we think about their relationship with the Chinese authorities or the NSA. In particular, look at their attitude to the rights of others over information. I’m not talking about Rupert Murdoch’s information – though he is reasonably peeved that Google has profited so handsomely from it. No, Murdoch is hard to frame as a victim. Think instead of all the millions of photographers who try to eke out a living from their craft. Or just those of us who would rather keep ownership of the pictures we’ve taken of our kids.

So convinced is Google of its right to access other people’s data for free that it has been lobbying politicians at the highest level to try and push through orphan works legislation.

Andrew Orlowski seems to be the one journalist covering this issue with any clear idea of the implications. Google is, in essence, lobbying for the position that any image on the internet which has lost its metadata (even if it’s really obvious who owns it) is fair game for them or anyone else to commercialise.

And, of course, images which have ‘lost metadata’ include every image on Facebook and Instagram (also owned by Facebook), the largest photo-sharing sites on the web, as well as many others.

Given the outcry that every Facebook privacy policy change attracts, why are people not more concerned that Google is directly influencing the highest levels of our governments in order to get their hands on our content?

Out of our minds

[Image: Cardigan Bay (image stolen from Russell Davies)]

Four weeks ago I went to Wales, to the beautiful area around Cardigan Bay, to join Russell Davies’ day on How to be Interesting.

And now I want my money back etc etc. Boom boom.

But seriously folks, seven hours on Parc y Pratt Farm (home of the Do Lectures), with one of the most fascinating thinkers of this here internet, was a rare privilege and one I thoroughly enjoyed. And that’s before you count the amazing food and hospitality of the event’s hosts. Just brilliant.

It’s also the reason that I’ve been back writing almost weekly on this blog of late – something I haven’t done since I was a young lad, back when the internet was just starting out. The theory being: if the main part of being interesting is being interested (here is the original inspiration for the day in convenient readable format), then we need to practise the habits which help us to collect experiences and think about them. And so that is what I’ve been trying to do.

I’ve also started concentrating much more closely on how other people work in groups. Partly because that’s interesting for me. But mainly because, like interesting-ness itself, the effect of people working together seems, to me, to be a kind of magic.

Faced with a problem you are trying to solve, you can often spend hours alone and find no solution. Add a companion and a bit of energy and you’ll have solutions in minutes. Why is this?

Smarter groups seem to produce better ideas, but willingness to take part seems almost as important as cleverness in this context.

At the heart of both things – interestingness and using groups to solve problems and create things – there seems to be a similar dynamic. A new thing (a useful solution, a creative idea, an interesting notion) will most often come from taking ideas, breaking them up, and then putting them back together. Sometimes something as simple as forcing together two apparently unrelated domains will generate something of interest, or of use.

Like this:

[Image: floor mops]

Or this

[Image: Post-its]

Why does that work?

Our own brains seem so set on getting reliably from A to B (and we train them to be better over the course of our lives) that even a little bit of a curve ball thrown into the process is refreshing, or surprising, or can move us away from producing the same old answers.

As a species, we’re not made – as it were – for surprises. But we can suppress our instincts, and get there anyway. And the trigger for doing just that can be the dynamic effect of multiple people working together, or – I suppose – the mind-altering chemicals used by several generations of writers, or perhaps the writing processes many authors follow of free-association and simply creating huge volumes of stuff. There are lots of ways (see Arden and Bernbach), but often the simplest and most powerful is a struggle between two, three or four people, each building on and refining the others’ thinking. Getting out of their own minds, by letting go of their own ideas.

Part of this leaves me wondering just how vast the possibilities of our creative minds really are. And if Russell’s course taught me anything, it is that they are without limit.

How will we evolve these skills? It’s not as if advertising creatives will be killed off by their inability to think laterally (even if digital media has slowed them down considerably).

Rather, it will be whole societies, cultures and companies who will live or die, perish or persevere, through their ability to get people out of their day-to-day minds and to acknowledge the incredible power of hacking our synapses to produce things which are new.

Every nice girl

[Image: nice girls, sailors]

I’ve talked before about my amazing maths professor at Bristol University, Dr Mayberry, and in particular about his dissection of the phrase ‘every nice girl loves a sailor’. Is it: “For each nice girl, for each sailor, the nice girl loves the sailor”, or perhaps “For each nice girl, there exists a sailor, such that the nice girl loves the sailor”? Each of these possibilities, and I recall there being many more, was written out in logical notation. The point was, I suppose, that language is sloppy, logic is not, and… you know… be careful.
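
For the record, the two readings can be written out something like this (a quick sketch in standard predicate notation; the letters N, S and L for ‘is a nice girl’, ‘is a sailor’ and ‘loves’ are my shorthand, not Dr Mayberry’s):

\[
\forall x\,\bigl(N(x) \rightarrow \forall y\,(S(y) \rightarrow L(x,y))\bigr)
\qquad\text{versus}\qquad
\forall x\,\bigl(N(x) \rightarrow \exists y\,(S(y) \wedge L(x,y))\bigr)
\]

The first says every nice girl loves every sailor; the second says only that each nice girl loves at least one sailor, not necessarily the same one.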

Perhaps the most infuriating lack of care is when a marketing person gets hold of a ‘unifying idea’. And the best example of this was a former employer (I was an adopted child in a marriage of the shotgun variety), who shall be known as XYZ Corp.

Two sorts of super-smart people worked at XYZ’s massive head office, it seemed to me at the time. Type A went and did complex acquisitions. Type B went around behind them explaining why such and such an acquisition was ‘strategic’. This is all well and good, of course: you can’t possibly announce you’ve just bought companies for the sake of getting bigger, or to conceal and distract from some other failed corporate activity. Protocol dictates that for a period of at least one year, all involved will pretend they were genuinely in love and not remotely drunk. After a year, it’s time to start arguing over who owns the CD collection and what you’ll tell the kids.

Anyhow. The rightness or wrongness of corporate mergers is not the point. The point is the mangling of logic which often ensues. For XYZ, faced with integrating a digital design consultancy with a company that made things with flashing lights, the story was that we were both ‘information’ businesses.

I suppose we could have just said we were both in the sales business, or the bullshit business.

Aside from the pure bravado of this manoeuvre, which is breathtaking, the most amazing thing about it is that it sometimes worked. People would nod along, half asleep in meetings. The air would be punched at sales conferences as we discussed how ‘information’ was at the heart of our growth strategy. The fact that not a single employee understood a word of it was not discussed.

And XYZ Corp were certainly not alone in this madness. I’ve seen all sorts of companies stitched together on the thin understanding that they are both about ‘results’, about ‘communication’, or share a ‘passion for customers’. Someone once tried to tell me our digital agency (a different one) should merge with a bill-stuffing company because we were both ‘about customer data’. This last one must have taken a supreme effort of will to deliver with a straight face. Equally brazen, and potentially more incoherent, I once heard that a music retail company was ‘already a social network’ because people used to socialise ‘in their stores’.

If your objective is to make two things that aren’t equal sound equal by positioning each at the end of a similar-sounding definition, then you have only served to slightly weaken our ability to communicate. And if it’s your job as a branding agency (who I believe have to take at least most of the blame for this sort of behaviour) to do this, then I fear you’ve not responded candidly to the brief. Go back and tell your client it doesn’t all fit neatly together, and that’s not the end of the world, so long as you can create customer value. There are other branding strategies than ‘one big brand’, and there are worse things than being diverse – namely being incoherent and self-obsessed. And if your client doesn’t want to hear that, then let them hire someone else.

Optimism

I remember heading up the stairs at the Ace Hotel in New York City (one of the finer things about that fine city) and stumbling – almost literally – over this:

[Image: sign at the Ace Hotel]

I think I took a photo at the time but I can’t find it now. And this one is better anyhow. Turns out the staff at the Ace really like this idea, so much so that they also print it on the back of their keycards:

[Image: the back of an Ace Hotel keycard]

And they’re not alone, as tumblr proves. Artist Martin Creed has made a habit of constructing various signs saying just the same thing, everywhere from Edinburgh to just a few blocks north of the Ace in Times Square.

And the affirmation reminds us that things now are as they should be, and the future will work out just fine.

What do we call this? In fact the definition of the word ‘optimism’ is a belief that the current state of affairs is the best it could be, and that the future will be too. And yet sometimes we fail to see this truth about now until it is later. It’s when we look back at our past that we can see what we should perhaps have valued more at the time.

Do our modern lifestyles make it easier or harder to be optimistic? I’m not necessarily contrasting optimism with pessimism (the belief that things are not as they should be and may not be in the future). What I’m contrasting it with is distraction. This dream that Steve Jobs et al have had – to put a computer in the hands of every man, woman and child – seems to rob us of our ability to understand where we are now. Every day as I travel into work and back I see people staring endlessly at tiny iPhone screens, as disengaged from the real world as we could be. And most of the time, I’m one of them. Perhaps on occasion, what’s on the screen of the anonymous commuter is photos of family and friends from a million miles away, via the miracle of Facebook and the internet.

But a lot of the time it’s work email or Angry Birds. Why do we do this to ourselves? Why must we constantly stare at these alternative universes?

I suppose the answer is where we started out. That sometimes we do not wish to contemplate now, and prefer neither the past nor the future but some other timeless realm. It’s escapism, but often escaping to somewhere as uninspiring as a work email.

Still, the reassurance remains, everything will work out as it should, even if you’re reading this on the 5.27 to Sevenoaks.

The worst form of government

[Image: the Churchill quote]

Anyone who is a bit of a smart arse, like me, will recognise the quote above. It’s from Winston Churchill and it’s about democracy.

You would be forgiven for thinking the full quote is ‘Democracy is the worst form of government, except for all the others’, as that is how the punchline is usually delivered. In fact the quote is:

Democracy is the worst form of government except for all those others that have been tried from time to time

I remember learning about democracy at school. The ancient Athenians, I was told, had the purest form of democracy because literally all the people were called to vote on the major issues. It was an entirely participatory democracy, not a representative one. The voice of the people spoke directly on all the issues (so long as ‘the people’ only meant rich Greek men, of course).

We don’t do this any more because it would be impractical… but of course, now, with the internet and our clever phones, we probably could do it again if we really wanted to. We could also, when it comes to it, do any number of other things to make decisions which would likely be far more representative than what we do now. Why couldn’t we use sampling to decide? Why couldn’t we analyse Google’s search logs? We might not trust Google, but we don’t trust MPs either.

When I talk about polling or sampling, I don’t mean the sort of polling where Rupert at Piers, Rupert and Tristan PR asks two mates what they think about designer handbags and then sells it to a newspaper as a story (“66% of women like handbags”). I mean honest-to-goodness polling. Ask a scientifically balanced sample their opinion and then do that. It’d probably still end up being fairer than what we do now, and a damn sight more practical, and cheaper. After all, that’s how they decide what’s on the telly.
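
As a rough aside (this is textbook sampling theory, not anything from the original post): for a simple random sample of n people answering a yes/no question, the 95% margin of error is approximately

\[
1.96\sqrt{\frac{p(1-p)}{n}} \;\le\; \frac{1}{\sqrt{n}},
\]

so a properly drawn sample of around 1,000 people pins down national opinion to within roughly three percentage points – which is why TV audience measurement gets away with panels of only a few thousand households.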

The obvious concern is that we wouldn’t have any safeguards against tampering. Well, why not? We don’t think people tamper with general elections now, do we?

The truth is that we do already govern by polling. Not officially, perhaps, but political parties often use polling and sampling to set policies and decide agendas. This is how they figure out which laws they’ll pass and how to spend your money. But most people would shy away from it being a formal mechanism (polling suggests).

Why? Because it’s not what we’ve done in the past. And we must always do what we’ve done in the past.

As I see it, the most fascinating example of this very conservative bias towards the past is US politics. Americans agree on nothing. They are bitterly divided on almost everything. Unless it was said more than 100 years ago, or by a Kennedy or by Dr King. In which case, everyone agrees on it.

This reverence for the past is amazing. Especially for a country so wedded to the future. The American Dream is about tomorrow but the American consensus is firmly about the past.

Of course, revolutionary America was gifted with many fine men, many great public speakers and many heroic soldiers. If only the leaders of today who match these qualities could gain the same respect.

And as we look at the hideous turmoil in Egypt, can we ever imagine that its statesmen, soldiers and common people will be revered as such decisive change-makers? I certainly hope so, but they should not be forging that future based on the model of dead presidents or prime ministers from years past. Where the army can remove elected officials in the name of democracy, perhaps we need to think more originally about what it means to be a democracy.

Now that we can all talk to each other all the time, couldn’t there be, and shouldn’t there be, a more challenging re-evaluation of what it means to operate a society? Why cling so fervently to ruling through putting folded pieces of paper in metal boxes?

Let’s look for one of these forms of government that is less bad than democracy to try from time to time, or rather let’s look at a version of democracy which delivers fairness, but also the progress needed by countries growing up in an era of big data, of mass communication, of mass participation and of political despair.

Two tribes

[Image: Guinness ‘Surfer’]

A side effect of the digital revolution has been the closing of the perceived gap between product thinking and communications thinking. Not always with desirable results.

Watching these worlds collide has long been a fascination for me, as they are such distinctly different approaches, requiring such different skills and temperaments, and typically bought by different clients.

Yet, they have much in common too.

Whilst the staff are rarely transferable between the disciplines, they are very much the same breed: recognisable by their dress, their attitude, their enthusiasm, their intelligence and creativity. And the technical tools and techniques used are often similar too: creative briefs, Photoshop, brainstorms and so on.

Many other worlds use these tools and techniques. You might find Post-it notes and Adobe in packaging, PR, management consulting and other parts of the professional services world, but you’re unlikely to find an agency that does packaging, consulting and PR. Or, at least, one that does them all well.

One reason why we see so many try to combine the art of the digital communicator and the art of the digital product developer could be that a single company that could do both would be able to create, launch and grow digital products without the need for external support. That’s a powerful dream, although I don’t know how often it has been fulfilled in reality.

By treating these two disciplines as if they should be accomplished in a single place, do we run the risk of losing the best of each, and the opportunity to properly assess how elements of each would enrich the other?

Let’s look at the two things and see what, if anything, each can contribute to the other.

And this is the hard bit. How do we capture the heart of each discipline succinctly, without jargon, and in a way that practitioners can both agree with for themselves and understand for the other?

Skill One: experience of the thing
Product designers (and that’s a pretty broad church) are responsible for developing the thing itself. The more obvious (if not easy) part of this is in developing things that the end consumer will love, that they will find intuitive, rewarding and so on; a thing that users will continue to want to use. In this case the user’s relationship to the thing is more or less obvious. The user experiences it directly. Of course, that doesn’t mean that each user has the same experience, even if they all see the same thing. Experience depends on the user’s skill, the user’s context and the task the user is trying to achieve. As Lou Carbone puts it, the experience is not about what the user thinks of the thing, but rather how the user feels about themselves once they’ve interacted with it.

The science of trying to design for users despite their differing skills, contexts and so on is quite well developed, using research to understand and group people, and tools like user journeys and personas to codify them. Whether it’s a bottle opener or an app, we also need to push product developers to consider the aesthetic as much as the function of the product. The user can be affected just as much by the emotional triggers in the product as by how successfully it can be used to achieve a task.

Meaning is – of course – socially constructed. And so understanding how objects will be interpreted and understood requires an understanding of this social context: norms, reference points and so on. I’m sure owning the first gas lamps was a sign of being cutting edge; now it would be a statement about being traditional. How do we compare the effect of owning an iPhone in New Jersey with owning one in Shanghai, and so on? When done well, a measure of this social context of use, and of the meaning derived from objects, is included in the definition of user experience.

Skill Two: communication of the thing
Modern communications skills are no less important or complex than the skills required to design a product in the first place.

By definition, this is not a design of a product which is directly experienced.

At the heart of it, the communicator is trying to precondition the audience to have a different reaction to the product (or service) and is doing so in the absence of a direct experience of the product. In fact, it is odd for communications about the thing and the thing itself to be present at the same time (think of those awful brand posters in Barclays branches or adverts for Rank Screen Advertising).

How can external stimuli change how we react to other stimuli, real or imagined? Fundamentally, it must help the receiver to create links amongst their understandings of meaning. It is a process which results in new associations. What is it that makes Prada posh or Pepsi Max precocious? These are truths which have been created by marketing.

In order to do this, a communications designer must have some grip on the inexact art of stimulus and response. Why is it inexact? Because meaning is ever mutating and audiences are not amorphous. If I talk about ‘pretty little colleens’, it is a phrase which some will recognise exactly (it has been a lyric in a pop song), others will recognise generically (Colleen taken as a stereotypically Irish name), and others (those called Colleen) will identify with specifically. Your distance from the various references will affect the extent to which you understand it at all.

So to a real degree, the communications planner must understand how embedded and related these social norms are and construct their message to match this.

So arguably meaning/context is even more important for communications than it is for product.

At the same time, the communicator is most often working to do a lot in a very short time frame, whether the turn of a page or the ten watched seconds of a 30-second TV commercial. This has led to lots of techniques focussed on gaining maximum traction with minimal transaction: big ideas, single-minded propositions, visual identities and so on.

In contrast, product designers are often working to reduce the amount of time consumers spend with their offspring, while making each moment delightful and free of interruption.

I think all of us who have seen the inner workings of the development of a truly phenomenal advertising campaign are in awe of the planners and creatives who can translate such a disparate context, in such a challenging medium, into so much meaning. Guinness’s ‘Surfer’, VCCP’s years of work for O2, the great Levi’s ads.

But why would the people that do one of these things be good at the other, and for what reason (other than commercial) would you want both under the same roof?

What I’ve seen is people from each discipline trying their hand at the other. This has rarely ended well. And, worse, I’ve seen managers from one discipline try and manage the other, but without changing their approach. This never works, except in the pitch room where all is roses, and the only possible outcome of crushing together yin and yang, of fusing these atoms, is a joyful integration and 5% off the cheque.

I’ll wait for a braver, cleverer agency person to show me the way. Perhaps even in the comments. Go on, you know you want to.

Lean blog post

I’ve always been a fan of Lean. Most recently Lean Startup has been very influential on me, as it has been on many others. On my desk sits an unread copy of Lean Analytics, and now, I see, we also have ‘lean strategy’ – meaning not the strategy of using lean techniques, but rather the use of lean techniques to develop strategy.

In this post, Andy Whitlock of Made by Many describes the change in mindset required of strategists getting involved in new lean techniques.

Is this still strategy? Not, perhaps, in its traditional sense. Many would say lean is almost an ‘anti-strategy’ movement, in the sense of favouring action over long planning cycles. This would make ‘lean strategy’ an aggressive contradiction in terms, a self-destructive oxymoron. And that sounds like something we should avoid.

But forgetting all that, what is it Andy is really saying? He seems to have three themes:

1/ The ideas can come from anywhere – strategists need to come to believe that their team is just as capable of generating useful insight as they are, and need to learn the language of their colleagues so they can take part more

2/ Don’t cook up big intellectual theories to confuse people – provide information that can actually be used to do things.

3/ Propose intermediary hypotheses and watch them adapt rather than trying to solve the whole problem upfront. Very lean.

Perhaps it was always a good idea for planners to be less precious (1), less overly intellectual (2) and less hell-bent on grand unifying theories (3). But if the lean movement underscores those needs, so much the better.

For my money, the job of the strategist / planner remains the same: make complicated things seem simpler and figure out how to solve the problem. Doing that in rapid iterations and tests is certainly unusual for the average planner, but should be welcomed by most once they’ve got over the shock of the whole thing, since it gives us a chance to break down and learn about problems in pieces, as humans really do, removing the often unrealistic expectation of a sudden, blinding flash of inspirational light.

Broken Windows


The Broken Windows theory of criminology was popularised in Malcolm Gladwell’s 2000 book, The Tipping Point. The theory says that urban environments where vandalism and dereliction are present redefine social norms (reducing the pride people take in their communities) and lead to greater crime.

In Gladwell’s book, he highlights the effect Giuliani’s zero-tolerance policy on minor crime had in reversing New York City’s years-long reputation for being dirty and dangerous.

I think the same kind of magnified effect of small issues applies just as well to another kind of Windows.

When Microsoft launched Windows Phone in 2010, they achieved something similar: a change of attitude. Screw the total number of features, apps or whatever; the Windows Phone team reversed a decade-long (Pocket PC first came out in 2000, Windows Mobile in 2003) trend of releasing software with lots of little bugs in it. And in doing that, they gave many of us hope that Microsoft could really rival Android and iOS in the mobile phone OS market.

Anyone who lived with a Windows Mobile device (Windows Mobile 2003, Windows Mobile 5, 6, 6.5) will remember those little bastards: the overwhelming feeling when one thing or another just failed to work wasn’t anger, it was resignation.

I’m not talking about UI failures – although there were certainly plenty of those. My favourite bit of non-user-centred thinking must be the snooze menu for the built-in alarm clock, which required the navigation of a pop-up submenu – this for a user that you can guarantee is half asleep. In later versions, SMS messages were threaded, but with new messages appearing at the bottom of a list which would open at the top – often taking several minutes to scroll down to.

No, I’m talking about full-on bugs. In our office at the time, where these phones were standard issue, we gave up asking why people had failed to return calls, had hung up mid-sentence (I still believe the phone would drop the call if it received an email with an attachment), or sent garbled and incoherent emails and texts.

I remember a conversation with the ‘mobile expert’ from our firm back in 2008 when he told me that the way to keep your WinMo phone working well was to completely wipe it and re-install everything each month.

You just learned that every so often, the phone would let you down and the only thing to do would be to suck it up.

It was a disaster. Ballmer even admitted as much in public.

Despite all this, the platform was pretty successful, commanding up to 20% of the marketplace, because it was only competing with BlackBerry (which was a bit more expensive and required a server for enterprise customers to get their mail) and Symbian, which was late to make any kind of leap to the enterprise.

I’m sure Microsoft would kill to have the same share with Windows Phone today that they once had with Windows Mobile. And the fact is that Windows Phone – a completely re-designed mobile platform – deserves to be a serious competitor in the marketplace. It’s really good.

But the best thing about WP7 when it came out was that it wasn’t buggy. It didn’t have multi-tasking (WM did), it didn’t have copy and paste (ditto) or all sorts of other features. But at least it didn’t have any bugs. Things would straightforwardly work. Calls could be made. The screen wouldn’t stop responding or go all laggy. The UI was consistent. In fact, the UI was excellent and intuitive. So good, in fact, that it’s ended up on Windows 8, but that’s another story.

For once, it felt like the Microsoft team behind the product really understood the need for quality in what they released. Better quality, not more features. When Microsoft updated the OS to 7.5 (codenamed Mango) they brought a host of new features and capabilities to the platform and, once again, maintained the quality. Of course it was and is an uphill struggle for the OS. Clearly it’s been slow to grow. But the people that have it like it, and that’s a great starting point.

So now it’s two years later, and Microsoft has recently launched WP8 for new hardware and a final update, WP7.8, for older hardware.

Whilst it brings a couple of new features and a new start screen, WP8 is really an engineering-led change for Microsoft, building on a long story which dates back to before the somewhat calamitous release of Windows Vista.

Vista had been intended to improve the overall user experience of Windows, making a big step forward from Windows XP. As it happens, the user experience of the Windows Vista interface was very compelling. Unfortunately the performance – the most important element of any user experience – was not up to scratch, frustrating many who moved to the new OS.

By contrast, Windows 7 went on to be Microsoft’s best and most successful OS, and it did this by making the heart of the operating system as small and efficient as possible, thereby dramatically improving the actual user experience. Project lead Steven Sinofsky did this by taking advantage of the ‘MinWin’ project, which had been running at Redmond for many years to cut down the core of Windows NT.

With Windows Phone 8, Microsoft has replatformed – almost invisibly – their phones from Windows CE (a somewhat dated and clunky core) to a version of Windows NT (a long-standing but highly efficient system), just like Windows 7 and Windows 8.

There is no doubt that this is an amazing engineering achievement, even though it has come at the cost of WP8 moving along very little from WP7 in terms of what the user sees. But it also seems to have come at the cost of quality in delivery – and not just the delivery of WP8, but of WP7.8 too.

Nokia was kind enough to send me a Lumia 820 device early on. Aside from using the highlight colour for the button actions as well as the tiles, the device can only really be told apart from the Lumia 800 by the removable back cover and the size (for my money, a bit too big). The screen’s actually the same resolution (but bigger, so it drains the battery faster). It’s got NFC and wireless charging, both of which are cool. But it crashes. About every six hours, meaning I’ve got pretty good at taking the removable cover off. And the music player hangs the system. And you know what I thought straight away? This is like having Windows Mobile back. Broken windows.

Because it’s careless. As I said earlier, I’m sure it’s a major engineering triumph, but from a user’s point of view it’s taken a year to make a phone that’s bigger, has worse battery life, crashes (often at night, making its use as an alarm clock somewhat questionable), hangs, doesn’t have Gorilla Glass (the 800 does), has a much worse desktop sync client and doesn’t look as nice.

And the 7.8 update, a sort of parting shot to keep a Microsoft promise about upgrade cycles, is full of bugs. So now my 800 is broken too. The live tiles don’t work, mine at least is crashing regularly, and there are small, careless errors dotted here and there. Take, for example, my Music tile, which has recently renamed itself (somewhat accurately) ‘Crowded House’!


Forgive me for saying that it doesn’t feel like a year well spent – a year in an industry which (Android at least) is moving ahead very quickly. Yes, we want new features, but what I personally want more than anything is quality. Each new product should have fewer bugs than its predecessor, not more. And every time I find a ‘little bug’, it shakes the faith I have in Microsoft to win in phones.

Surely a successful phone is the most important key to Microsoft’s long-term consumer strategy. So why isn’t it their top priority to get it right?

Meaning

[Image: Obama seated on the Rosa Parks bus]

Why is this image so powerful?

Even before the caption is read and the context explained, it has a very strong visual resonance.

Obama is obviously an historical figure, a phenomenal orator, a symbol of humanity and intellect. With his image comes a huge amount of that recollection and meaning: the victory speech after the Iowa caucuses in January 2008 (“They said this day would never come”), the victory speeches after the two elections (2012: “The role of citizens in our democracy does not end with your vote”), the basketball, the humour of the Correspondents’ Dinner, the battle with Donald Trump and so on.

The pose is somewhat humble and inquiring and Obama is alone, looking confident but curious.

But that is only the start of the meaning. That Obama is sitting on the bus where Rosa Parks once made her historic protest changes the picture altogether. Parks’ refusal to give up her seat to a white passenger became a powerful weapon in the civil rights movement’s campaign, which would eventually change laws and end segregation; her act is a landmark in the continuing struggle for racial equality in the US.

The bus is in a museum now, of course, and Rosa Parks herself died in 2005. But the act lives on as a strong symbol. Fifty-three years and a few weeks after Parks’ original ‘disobedience’ led to her arrest, Obama was sworn in as US president, the ultimate proof that – whatever racism remains in America – it is not ingrained in the institutions through which the country is run.

Race at times seems to be the least part of the Obama presidency. I’m sure that’s how it should be. However, this image reminds us that it is no small achievement for a country that – within living memory – built racism into its laws to elect a president who might have been the victim of such segregation.

Needs

[Image: Oliver Twist]

‘Needs’ is a brilliant word. Five letters long, and yet it means so many different things to so many different people; it can encompass a huge range of planning challenges and can, perhaps, lead us to some interesting thinking about how to make things people really love, and how to communicate things in a way which will really captivate.

The starting point for this discussion must surely be Maslow. In creating a classification of the natural order of human needs, he helped the expansion of the concept of need beyond the purely physical and into a somewhat grey area between needs and wants. Of course, he is also responsible for the concept of a hierarchy among needs with some being of a higher order than others, whether that has pejorative implications or not.

Humans are physical, but also emotional, social, ambitious and so on. And if ‘needs’ is to include all of these factors (which surely it must), then it will encompass every aspect that could be important to us in creating a product or communication which has some resonance with our customer. And of course, needs are personal, so we will necessarily have to try and understand which needs are commonly shared.

In user experience, we may use ‘needs’ to codify user requirements. But this can be layered. If we’re designing an interface, we need to know which tasks or functions a user needs to be able to carry out – e.g. I need to be able to update my address details. All users also have straightforward usability needs. However, in thinking about the broader concept of the product, we need to consider the emotional needs of the user in addition to the functional ones. How can the product resonate on an emotional level, as well as a functional one?

Here there are two concepts which may well be very useful. The first is the idea that emotional response can be classified and codified. Here the work of Plutchik seems very relevant. In classifying the emotions exhaustively, Plutchik raises an interesting question: what is the emotional reaction that we are looking to achieve, and how does the product foster that emotion or suppress its opposite? Does the product in question look to surprise or reassure, to reduce frustration or drive trust?
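
To make the ‘classified and codified’ idea concrete, here is a minimal sketch (my own illustration, not taken from Plutchik or anyone quoted here): his model pairs eight primary emotions into opposites, which gives a very simple vocabulary for stating the reaction a product should foster and the one it should suppress.

# Plutchik's eight primary emotions, stored as opposing pairs (illustrative only).
PLUTCHIK_OPPOSITES = {
    "joy": "sadness",
    "trust": "disgust",
    "fear": "anger",
    "surprise": "anticipation",
}

def opposite(emotion: str) -> str:
    """Return the opposing emotion on Plutchik's wheel."""
    reverse = {v: k for k, v in PLUTCHIK_OPPOSITES.items()}
    return PLUTCHIK_OPPOSITES.get(emotion) or reverse[emotion]

# A product that sets out to drive trust is also, implicitly, working to suppress disgust:
assert opposite("trust") == "disgust"
assert opposite("anger") == "fear"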

However, it seems reasonable to suggest that such an emotional resonance can only be properly described in context: What is surprising or fear-inducing in the light-snack market is presumably different from the mobile phone market.

And can we extend this to include the ability of a product (or communication) to offset an emotional reaction that already exists to some other object – i.e. where the emotional response is in the problem the product or communication looks to resolve?

The second framework which is quite powerful here is the concept of the derivation of emotional response. So, choosing the right shirt for a night out is related to the need to impress the opposite sex, which derives from the need to continue the species and ultimately (perhaps in all cases!) from the fear of death. I’ve tried to discuss this briefly before. I’m still convinced there is value in this. If we can understand how an emotional response is derived, we can better understand how to respond to it. After all, if your product can’t be traced back to a real underlying human need (or multiple needs) of this sort (and here perhaps we’d be less keen on a Maslow-style hierarchy), then what use is it?

In this post, Northern Planner discusses his fundamental planning beliefs. One of them is that:

Behind every business problem is a very human behavioural problem you need to change. The art of strategy is making people care enough to behave differently

When they don’t want to be sold to anymore, if they ever did, we need to start with what they’re interested in and work back from there. Real problems and tensions in real lives

Good quote (and the reason I started writing this post). I agree entirely, apart perhaps from describing the behaviour in question as a behavioural ‘problem’. It seems to me, it’s only a ‘problem’ from the point of view of the business being discussed. From the user’s point of view, it’s just a behaviour.

Perhaps there’s little new in directing our strategies to meeting needs, but I suspect we can all benefit from being more curious in how we dissect that need in the first place.

UPDATE. I had originally meant to kick off this post with the following video. Genius, speaks for itself etc. etc.
