Mr Garvey and his newspaper clippings at Sprowston High School in the mid-90s

Mr Garvey was a standout character at Sprowston High School in the mid-1990s.

He arrived in classrooms late, moving quickly, as if he had learned of his assignment seconds before beaming from Supply Teacher Alphacenturi and materialising on to the melamine floor just outside the door. 

“Settle down. SETTLE DOWN.”                                                                     

Slim and upright, he had thick, short salt-and-pepper hair atop a short white beard, with a florid complexion and closely set, rheumy eyes.

When reading he wore small, silver wire glasses. He was immaculately dressed. I picture him in a light-blue cotton suit, with a navy woollen tank top, pale yellow shirt and a blue and yellow striped knitted tie knotted into a large full Windsor. He found his sartorial niche in the early 80s and stayed there. He was like a cast member of Rainbows at a wedding. 

At all times Mr Garvey was carrying or attending to a Manila folder in which he kept newspaper clippings and whole newspapers yet to be clipped. 

His teaching style was thermostatic. He took the register and asked us to work from our exercise books while he cut the newspapers. Children do what children do when unattended: they tend towards disorder; whispering becomes talking, notes are passed, and banter and high jinks are had, until a critical decibel level is reached.

“SILENCE!”

The cycle repeats: the class is quiet at first but slowly gets louder before Mr Garvey erupts once more, each eruption more severe than the last.

A pupil looking over at Mr Garvey, sat behind the teacher’s desk, would tend to see one of three unfolding scenes. Mr Garvey would be

  1. cutting an article from the Eastern Daily Press or Evening News with oversized orange-handled scissors, or
  2. scratching his beard with the backs of his fingernails, chin in the air, eyes squinted, or
  3. lifting his glasses with one hand and dabbing his eyes with a chequered handkerchief with the other, like a pair of waterfowl tending to their precious eggs. 

Of course, all this made him a source of amusement for the pupils. 

Yet I liked Mr Garvey and his classes. They were easy and I had a chance to get ahead. 

And we sometimes laughed with him. 

In one D&T class he made the mistake of leaving the class register on a shared bank of pupils’ desks. One of the bolshier kids noticed that Adrian Tipple’s name was written in pencil, so duly amended the surname to Nipple. 

The anticipation built as Mr Garvey read down the register.

“Paul Smith”

– “yes, sir”

“Uh, Adrian, Adrian Nipple? Is that right?”

Uproar. The white-hot hilarity was unsurpassed in my life until that point. 

I’m not sure what Mr Garvey did with his newspaper clippings. I recall he collected articles about the local history of Norwich and Norfolk; anything he found “interesting.” I hope he did something with those clippings. I hope he wrote a book. 

I’ve been thinking about Mr Garvey as I’ve thought about the information I’ve accumulated and hoarded. My storage and retrieval methods are haphazard. They include: writing in Apple Notes; writing notes in Word; an aborted attempt to write regularly on a WordPress blog; sending emails to myself; bookmarking in X, LinkedIn and the FT app; printing out papers and adding them to a drawer in my study; saving audio snippets in Audible; adding web links to my iPhone Home Screen; and folding over the corner of interesting pages in books I’m reading. 

Despite all this, I still find myself regretting not having made a note of something I’ve read or seen. An anecdote is at least 10X flimsier if the source can’t be located: “I read somewhere that [some diminished point]”. This is why I would welcome a wearable AI device that could track and store everything you read or listen to. 

How useful would it be to run a search on all the information you had ever consumed? 

There would be no need for newspaper clippings and Manila folders. 

Don’t get fooled by randomness (or mis- or disinformation)

The mind tricks us into seeing patterns when there are none. This causes all kinds of problems.

According to Nassim Taleb, people routinely mistake luck for skill, randomness for determinism, and misinformation for fact. These are some of the ways we process information incorrectly and distort our view of the world:

All-or-nothing view

We tend to see events as all-or-nothing, rather than shades of probability. If you see a successful business executive, they’ll tell you that the reason for their success is some combination of hard work and talent. But what if you have thousands of successful business executives? Probability tells us that some of them must be successful only through sheer luck. But none of them will admit to that.
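To make this concrete, here is a minimal sketch in Python (the numbers are invented for illustration, not taken from Taleb): give a large population of executives a pure coin flip each year, and some of them will rack up flawless track records through luck alone.

```python
import random

random.seed(42)

N_EXECUTIVES = 10_000   # hypothetical population of executives
N_YEARS = 5             # consecutive good years needed to look "successful"
P_GOOD_YEAR = 0.5       # pure chance: each year is a coin flip

# Count executives whose every year happened to be a good one.
lucky_streaks = sum(
    all(random.random() < P_GOOD_YEAR for _ in range(N_YEARS))
    for _ in range(N_EXECUTIVES)
)

print(f"{lucky_streaks} of {N_EXECUTIVES} executives look successful "
      f"after {N_YEARS} years of pure coin flips")
```

With these invented numbers, roughly 300 executives (10,000 × 0.5^5 ≈ 312) end up with a perfect five-year record despite having no skill at all, and every one of them will credit hard work and talent.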

Affirming the consequent

Our brains slip up in trying to find patterns to explain things. Studies show that most successful business leaders are risk-takers. Does this mean that most risk-takers become successful? Ask those whose risks failed.

Hindsight bias

We also ignore probability when it comes to past events. When we look back at things that have happened, it seems like they were inevitable. Just listen to any political pundit explain why a shock election result was obviously going to happen. This means we ignore the effect of random chance, instead preferring to rationalise why something happened.

Survivorship bias

If we only see the survivors of an event, our brains assume everyone survived, or at least that the survival rate is good. Take the view that stock markets just go up in the long run — imagine how rich you’d be today if you’d invested in the stock market in 1900.

But this view is based on the stock markets and companies that have survived to this day. If you really did invest in stocks in 1900, you probably would’ve bought into the developed markets of Imperial Russia or Argentina rather than the emerging United States, and most of the companies you bought into would have failed.
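A similar sketch, again with invented numbers rather than historical data, shows how survivorship bias flatters the record: simulate a universe of markets where some are wiped out entirely, then compare the average outcome across everything you could have bought with the average across the survivors alone.

```python
import random

random.seed(1900)

N_MARKETS = 100     # hypothetical markets an investor could have bought into in 1900
P_WIPEOUT = 0.3     # fraction wiped out entirely (revolution, default, closure)

# Each market ends at some multiple of the initial stake; wiped-out markets end at zero.
outcomes = [
    0.0 if random.random() < P_WIPEOUT else random.lognormvariate(2.0, 1.0)
    for _ in range(N_MARKETS)
]
survivors = [x for x in outcomes if x > 0]

print(f"Average multiple across everything you could have bought: "
      f"{sum(outcomes) / len(outcomes):.1f}x")
print(f"Average multiple across the survivors only: "
      f"{sum(survivors) / len(survivors):.1f}x")
```

The survivor-only average is always the rosier figure, because the zeros never make it into the history books.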

Mistaking a one-off for the whole

In a similar vein to the all-or-nothing view, Taleb warns us against conflating details with the ensemble. This is particularly acute when you’re facing nefarious agents spreading disinformation (information that’s deliberately spread to deceive) or unwitting redistributors of misinformation (incorrect or misleading information presented as fact). For instance, take the claim that there are Neo-Nazis in Ukraine: how relevant is it to the whole? What is the proportion? There are unfortunately Neo-Nazis in most large countries, including Russia.

Okay the problem is you take something called a ‘detail’, say an anecdote, single random event, and by emphasizing it, making you think that that detail represents the ensemble. It does not, and we are much more vulnerable to details, the salient details. Something psychologists like Danny Kahneman called the representativeness heuristic.

Nassim Nicholas Taleb, Disinformation and Fooled by Randomness

In short: an anecdote falls far short of conclusive evidence.

See also Brandolini’s Law, or the bullshit asymmetry principle, which states that the effort needed to refute bullshit is an order of magnitude larger than that needed to produce it.

Watch out for

All of these combine to make us ignore the possibility of rare, disastrous events like a huge market crash. It’s hard for us to comprehend something that’s never happened before. So we discard the possibility, in the same way statisticians might discard outlying results to avoid skewing an average. This is a mistake and leads to paralysis when rare events do happen.

History teaches us that things that never happened before do happen.

Nassim Nicholas Taleb, Fooled by Randomness

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

Gladwell, Malcolm (April 2002), Blowing Up, The New Yorker

Taleb, Nassim Nicholas (2001), Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Penguin

Parallel Thinking and De Bono’s Six Hats

This is how a group of people can solve a problem without arguments.

Think about all the times you’ve been in a team meeting, dealing with some issue. Everyone goes in with the best intentions, but the team members quickly form their own ideas of what needs to be done, argue about why everyone else is wrong, then eventually go with whoever won over the most people (or shouted the loudest).

The six thinking hats are a more efficient way of solving problems than arguing. In an argument, each side picks a conclusion, finds evidence to support it, and ignores or discredits any evidence to the contrary. Emotions take hold as each side aims for the glory of being right and the thrill of defeating an adversary. It sounds like a terrible way to solve problems constructively. Yet our entire political and legal systems are based on it.

The alternative to arguments is ‘parallel thinking’. Instead of each individual taking a different side, all individuals take the same side and look in the same direction at any one moment.

That’s where the imaginary hats come in. Each hat is a way of looking at an issue. They come in pairs, but you can only wear one at a time. And in a group discussion, everyone wears the same hat at the same time.

White Hat and Red Hat

The white hat is where the team establishes what information is known and what information is needed. This is about facts – not interpretations, judgements or opinions.

“Market research shows demand for coffee flavour biscuits is growing.”

The red hat is where intuition, feeling, opinions and emotion come in. They can be based on experience or just a hunch.

“I feel our current range of biscuits is boring and old-fashioned.”

Black Hat and Yellow Hat

The black hat is about caution and critical thinking. Everyone should be looking for danger signs, something that could go wrong.

“If we introduce a coffee flavour biscuit, our rival might copy us.”

The yellow hat is about sunny optimism. Finding possibilities for putting a plan into practice, and searching for the benefits.

“If our rival copies us, that could help grow the whole market so we’ll still benefit.”

Green Hat and Blue Hat

The green hat is about being creative. Coming up with new ideas, options and ways of looking at things.

“How about adding chocolate chips to the biscuits?”

The blue hat is about control and discipline. It ensures the meeting remains structured rather than free-flowing (which will probably deteriorate into an argument), and as a result the leader of the meeting wears the blue hat at all times. The discussion may start and end with everyone wearing the blue hat, first to define the problem and lastly to make a decision.

“We’ve agreed the next step is to develop a coffee flavour chocolate chip biscuit.”

Watch out for

While some tests and teamwork models assign people to different categories based on their strengths, De Bono says this restricts them rather than getting the best out of them. The advantage of the six hats is everyone tries a bit of everything, and comes up with ideas no-one else would’ve thought of.

“There is a huge temptation to use the hats to describe and categorize people, such as ‘she is black hat’ or ‘he is a green-hat person’. That temptation must be resisted.”

Edward de Bono, Six Thinking Hats, p. 6

Did you know? De Bono coined the phrase “lateral thinking.”

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

De Bono, Edward (1985), Six Thinking Hats, 2016 edition, Penguin Life

Arguments Don’t Add Up. They Average Out.

What’s the best way to win an argument? If you think it’s by coming up with as many reasons why you’re right as you can, you’re wrong.

This seems counterintuitive at first. Surely the way to make a case stronger is by adding more arguments in favour? But the person you’re trying to convince doesn’t see your arguments adding up — they see them averaging out. If you have a strong argument and then add some more supporting points that aren’t as convincing, you’re weakening your case.

In an experiment by Christopher Hsee, one group of people was asked how much they would pay for a brand-new 24-piece dinnerware set, while another was asked how much they would pay for a 40-piece set, which had more brand-new items than the smaller set but also included a few broken items. The result was that the smaller set was judged to be worth more than the bigger set. By adding a few inferior items, the seller weakened the overall proposition for buying the product.

Niro Sivanathan found that people react in the same way when dealing with arguments against doing something. Drugs that are advertised in the US as having side effects of “heart disease and stroke” are judged to be more risky than those advertised as having side effects of “heart disease, stroke, headache and dry mouth”. The major and minor risks are averaged out, so the drug with more side effects is seen as less risky.
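A toy illustration of the difference (the scoring model here is a deliberate over-simplification, not Sivanathan’s analysis): if the listener’s overall impression behaves like an average rather than a sum, every weak point you add drags the strong ones down.

```python
from statistics import mean

# Hypothetical persuasiveness scores out of 10 for individual points in a pitch.
strong_points = [9, 8]
weak_points = [3, 2]

# If impressions added up, more points would always help...
print("Sum, strong only:    ", sum(strong_points))                 # 17
print("Sum, strong + weak:  ", sum(strong_points + weak_points))   # 22

# ...but if impressions average out, weak points dilute the strong ones.
print("Mean, strong only:   ", mean(strong_points))                # 8.5
print("Mean, strong + weak: ", mean(strong_points + weak_points))  # 5.5
```

On the additive view the fuller pitch wins; on the averaging view it loses.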

This matters not just for advertisers, but for anyone who is trying to persuade someone. A CV with a few strong arguments for why you should get the job looks better than one with all the arguments. And a three-word slogan is more effective at winning an election than 105 pages of policies. The quality of arguments, and how they’re delivered, are more important than the quantity of arguments.

Watch out for

This dilution effect not only shows how to make arguments more convincing, but also affects how people make judgements in a variety of situations. People given a mix of relevant and irrelevant information about an individual come to less extreme conclusions than people given only the relevant information, helping to reduce the effect of stereotypes.

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

Hsee, Christopher K. (1998), Less is Better: When Low-Value Options are Valued More Highly than High-Value Options, Journal of Behavioral Decision Making, Vol. 11, pages 107-121
Nisbett, Richard E.; Zukier, Henry & Lemley, Ronald E. (1981), The Dilution Effect: Nondiagnostic Information Weakens the Implications of Diagnostic Information, Cognitive Psychology, Vol. 13, pages 248-277
Sivanathan, Niro (May 2019), The counterintuitive way to be more persuasive, TED
Sivanathan, Niro & Kakkar, Hemant (February 2019), How Drug Company Ads Downplay Risks, Scientific American

Mimetic Desire: a philosophy for asset bubbles and FOMO

Tulip Mania. The Roaring Twenties. The Dotcom Bubble. Bitcoin.

Why do we like the things we do? René Girard may have the answer.

Everyone knows what an asset-price bubble looks like after it crashes. In fact, everyone looks back and says the crash was inevitable, though no-one thinks to mention that before it happens. But the causes of bubbles are still disputed. How can the perceived value of an asset rise and fall so rapidly — and why does it keep happening?

The value of an asset fluctuates because the desire for the asset fluctuates. René Girard’s mimetic theory gives us an idea about why this happens. Girard said that, beyond the basic needs for survival, there is no such thing as true, authentic desire. No-one wants a new car or a new haircut because they’re just following their heart. Desires are motivated solely by what other people want — we mimic the people we admire and decide we want the same things as them.

This gets us into bubble-like situations because the desires become part of a self-perpetuating cycle. It goes something like this:

  1. Mimesis
    I admire some guy getting rich from bitcoin and I want to be like him. He is my model. 
  2. Mimetic desire
    To be like him, I desire the same things he desires. If he buys bitcoin, I buy bitcoin as well.
  3. Mimetic rivalry
    He sees other people copying him, proving he was right to buy bitcoin. He desires bitcoin even more, meaning my desire also intensifies. We’re in competition for something in limited supply, so the price goes up and up. Other people see what’s happening and start admiring the model as well.
  4. Mimetic violence
    The rivalry and the emotion become more important than the original desire. The competition becomes so intense that everyone forgets why they wanted bitcoin in the first place. All they know is they have to have it — at any cost.
  5. Scapegoat mechanism
    In Girard’s view, this rivalry stops short of total escalation because of a collectively agreed-upon scapegoat, an other that can be blamed for the chaos and strife. Usually it’s an external actor, like a central bank threatening new regulations or raising interest rates. This scapegoat helps dissipate and direct the anger when the bubble pops and fortunes are lost.

The main lesson? Don’t buy something just because everyone else is buying it. If you didn’t want bitcoin at $5,000, why would you want it at $50,000? Remember, there’s no such thing as ‘just following your heart’.
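Purely as an illustration of the cycle above (a toy model, not a description of any real market), here is a sketch in which the crowd’s desire feeds on itself each round until a scapegoat event breaks the spell.

```python
# Toy mimetic feedback loop; all numbers are invented for illustration.
desire = 1.0           # how much the crowd wants the asset, in arbitrary units
imitation = 0.15       # each round, desire grows by copying others' visible desire
scapegoat_round = 12   # the round at which blame is assigned and the mania breaks

for round_no in range(1, 21):
    if round_no < scapegoat_round:
        desire *= 1 + imitation   # mimetic rivalry: wanting begets more wanting
    else:
        desire *= 0.6             # the scapegoat absorbs the anger, desire collapses
    price = 100.0 * desire        # here the price simply tracks aggregate desire
    print(f"round {round_no:2d}: desire {desire:5.2f}, price {price:7.2f}")
```

Desire, and therefore the price, climbs for as long as everyone is copying everyone else, then collapses once the blame has somewhere to go.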

Watch out for

Mimetic theory has some similarities with Maslow’s hierarchy of needs. Both recognise the differences between basic and higher needs, and both show us how we always desire that which is just out of reach — and that these desires can never be truly satisfied. But while Maslow presented his idea as an individual pathway to achievement, Girard saw cycles of people copying and fighting each other. Maybe it’s time for a new theory of motivation combining the two.

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

Bowyer, Jerry (November 2015), René Girard, ‘The Einstein of the Social Sciences’, Forbes
Girard, René (1972), Violence and the Sacred, 2005 edition translated by Patrick Gregory, Continuum Books
Legler, Janis (August 2020), Bitcoin is a game of musical chairs – and the music is stopping, City A.M.
McDonald, Jamie (February 2021), The Anatomy of Financial Bubbles, Yahoo Finance

Nudge Theory: how a little psychology can go a long way

This is how you can change people’s behaviour without anyone realising.

In a perfect economy, people faced with a decision would choose the best, most rational option for them, every time. What’s more, the more choices you give to people, the better their decisions will be.

We all know the world doesn’t work like that. If it did, I would buy an apple when I’m hungry instead of half price crisps at the checkout. So perhaps the solution is to ban the crisps, forcing everyone to make better choices. Right?

Nudge theory rejects both of these extremes. Firstly, it says that the way choices are presented affects the decisions people make. Secondly, the best way of helping people make good decisions is not by restricting their choices, but by changing how they are presented — nudging them.

To understand how to present choices, we need to understand how people make decisions. Richard Thaler and Cass Sunstein, the creators of nudge theory, gave a few examples of what sways us when faced with complex decisions:

  • Loss aversion
    As prospect theory shows us, people hate losses more than they like gains. Telling someone they’ll lose hundreds of pounds if they don’t switch their car insurance is more effective than saying they’ll gain hundreds of pounds if they do.
  • Status quo bias
    People tend to stick with default options, because that’s easier and it’s assumed the default is the best. When workplace pensions changed from ‘opt-in’ to ‘opt-out’, millions more people started saving for retirement. The choices are exactly the same, but the decisions have changed.
  • Following the herd
    If you can convince someone that everyone else is doing something, they’re more likely to do it. It’s why adverts claiming a product is the most popular in the country are so effective. And why voter-turnout campaigners should stop loudly complaining that lots of people don’t vote.

Advertisers and the food industry have known how to influence people for decades — that’s why the supermarket puts half price crisps at the checkout instead of apples. How can nudging be used for good? Thaler and Sunstein call for nudges in situations that are “most likely to help and least likely to inflict harm.”

People will need nudges for decisions that are difficult and rare, for which they do not get prompt feedback, and when they have trouble translating aspects of the situation into terms that they can easily understand.

Richard Thaler & Cass Sunstein, Nudge, p. 72

Watch out for

Nudge theory became influential with policymakers around the world, especially in the administrations of Barack Obama (for which Sunstein worked) and David Cameron. So unsurprisingly, it’s controversial.

One of the major criticisms is that nudges may be used as a cheap and ineffective substitute for policies that are more ambitious or costly. The UK government was criticized for relying on nudges and behavioral science at the start of the Covid outbreak while other countries were locking down. Sunstein himself said in 2014 that “nudges are not a sufficient approach to some of our most serious problems”.

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

O’Brien, Hetty (May 2019), Cass Sunstein and the rise and fall of nudge theory, New Statesman
Thaler, Richard H. & Sunstein, Cass R. (2008), Nudge: Improving Decisions About Health, Wealth, and Happiness, Yale University Press
Selinger, Evan (July 2013), When Nudge Comes to Shove, Slate
Sunstein, Cass (April 2014), There’s a backlash against nudging — but it was never meant to solve every problem, The Guardian
Yates, Tony (March 2020), Why is the government relying on nudge theory to fight coronavirus?, The Guardian

Goodhart’s Law and the ‘tyranny of metrics’

“When a measure becomes a target, it ceases to be a good measure.”

This, in the words of Marilyn Strathern, is the most succinct summary of Goodhart’s Law. It has implications for how organizations make decisions and evaluate performance.

Over-reliance on measurements and targets causes problems for organizations, at both the top and the bottom of the hierarchy. Executives who are paid based on a stock price will try to pump it up as quickly as possible, with little thought for long-term planning, investing or taking risks. Meanwhile, employees given high-stakes targets will try to game the system to reach them.

Governments are just as guilty as businesses. Ask any teacher getting their students ready for yet another test, or a police officer not booking an incident because of crime reduction targets.

Jerry Z. Muller called this the ‘tyranny of metrics’ — the replacement of judgement (acquired through experience and talent) with standardized numerical indicators, making the data public and attaching rewards and penalties.

Here are some of the problems it causes:

  • Conformity
    When the same targets are used across different parts of large organizations, they all have to act the same way. There can be no innovation as the target is based on doing things in the established way.
  • Narrow-mindedness
    Most organizations and jobs have multiple purposes. Targets narrow these down, creating incentives to ignore purposes that aren’t measured by the target.
  • Focus on the simple and the short-term
    It’s easy to set a target for how many sales you make. It’s not so easy to measure how well you work as a team, how influential your ideas are, or how much you inspire a young colleague.
  • Focus on measuring itself
    Rather than improving performance, the focus turns to the measuring itself. A new layer of managers and administrators is created, coming up with new things to measure and diverting resources away from the front-line. Or the front-line staff get stressed over bureaucracy, rather than doing what they do best.

Watch out for

This doesn’t mean measurements and targets should be abolished. Muller argues that measurement and judgement are complementary, that measurement demands judgement of what to measure, how to do it and how to interpret the information. This means ending the myth that standardized data is a perfect, neutral arbiter.

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

Goodhart, C.A.E. (1984), Monetary Theory and Practice: The UK Experience, Palgrave
Muller, Jerry Z. (2018), The Tyranny of Metrics, Princeton University Press
Strathern, Marilyn (1997), ‘Improving Ratings’: Audit in the British University System, European Review Vol. 5 No. 3

User-centered design: the art of making things easy to use

Fewer buttons, more thought.

The experience of the user should be the most important consideration when designing a product.

Sound obvious? You’d be surprised. Look around the room you’re sitting in now and you’ll probably find something that’s been badly designed. Maybe a remote control with lots of buttons whose purpose is a complete mystery. Or a website that makes it impossible to find the information you need. (Please tell us if it’s this website.)

Don Norman set out the problems with how things are designed in The Design of Everyday Things, first published in 1988. Since then, an entire industry of user-experience (UX) designers has cropped up.

He argued that, even though most designers think they’re making things better for the people who will use their products, in reality, they have no idea how people use them. So they design things how they want them to be, perhaps adding lots of clever features, or making them look like a beautiful work of art that will win them a prize.

But users shouldn’t have to adapt themselves to work out how to use a product. Instead, the product should be adapted to the user, so they don’t even have to think about how to use it. When you walk up to a door, you should just know whether it’s a push or a pull door. If you have to read a little sign saying ‘pull’, that’s bad design. If you guess wrong and push, that’s the designer’s fault, not yours.

To design things well, we have to understand how people interact with them. Here are a few things to consider when designing something – whether that’s a new product, a website, or a way of making something work.

  • Mapping
    The user should be able to work out in their mind what something does. A switch next to a light should turn on that light.
  • Affordances
    How the user perceives what an item is for. A switch is for pressing, a knob is for turning.
  • Constraints
    Make something easy to use by limiting what you can do with it. If one button does one thing, it’s hard to get that wrong. If one button does lots of things depending on how you press it, the user will make mistakes.
  • Feedback
    When the user does an action, they should get a signal that the action had a result. If you press a button and nothing happens, you’ll probably keep pressing it. If you press a button and a light comes on, you’ll know it works.

Watch out for

Even after you think you’ve created a great user-centered design, the only way to find out is through research and testing. Create a prototype and observe how a sample of people use it. Or if you’re making a big purchase, try it out as much as possible first. You may be surprised by the problems that come up.

User-centered design thinking dovetails with some of the core characteristics of great innovators: understanding from first principles the “job to be done”, always questioning conventional wisdom and the status quo, and observing the behaviour of customers to figure out new ways of doing things. See the Innovator’s DNA model.

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

Don Norman on the term “UX” (July 2016), Nielsen Norman Group
Norman, Don (1988), The Design of Everyday Things, 2002 edition, Basic Books
Stevens, Emily (July 2019), The Fascinating History of UX Design: A Definitive Timeline, careerfoundry.com

The expectancy vs surprise continuum: music, art, suicide, and the good life

A rule of thumb about how to live a good life and what makes good content.

There’s a thread that connects sounds in nature, music, suicide and what makes a good film: the expectancy vs surprise continuum. Understanding, or at least being aware of, this continuum can help us see life through a different lens and make sense of why we enjoy the media we consume, the things we produce and the work we do.

Suicide

Let’s start at the bleakest so we can move into the light. Suicide. Often regarded as the forefather of sociology, Emile Durkheim wrote about suicide in his treatise Le Suicide, published in 1897. Durkheim described four types of suicide using two sociological variables: integration and regulation. He argued that too little or too much of either creates conditions which make suicide more likely.

Along the regulation axis sits fatalistic suicide, caused by excessive structure and control, such as that of slaves unable to influence the rules under which they must live. At the other end of this axis sits anomic suicide, where there are no rules or clear conventions on how to act and behave, which can lead to a breakdown of social equilibrium: suicide following bankruptcy, the loss of a job or the loss of close family members, for example. (The integration axis works the same way, running from egoistic suicide, where there is too little integration, to altruistic suicide, where there is too much.)

The parallels between Durkheim’s work and the expectancy vs surprise continuum are obvious. If too much or too little structure increases the likelihood of suicide, we can surmise that the conditions for the good life lie somewhere in a sweet spot between those two extremes.

Pink noise

Halfway between the entirely uncorrelated random notes of white noise and the entirely correlated drunkard’s walk of brown noise sits pink noise (or 1/f noise or flicker noise). Tunes based on pink noise are moderately correlated over short and long runs. Benoit Mandelbrot was the first to recognise that pink noise and 1/f fluctuations are everywhere in nature, from the annual flood levels of the Nile and variations in sunspots to the wobble of the Earth’s axis and currents in the nervous system of animals and ourselves.
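To make the three kinds of noise concrete, here is a minimal sketch in Python: white noise as independent random values, brown noise as a drunkard’s walk, and pink noise via the multi-rate summing trick that Gardner attributes to Richard Voss. Treat it as an illustration of the idea rather than a faithful reproduction of the article’s algorithm.

```python
import random

random.seed(0)
N_NOTES = 16

def white(n):
    """White noise: every value independent of every other."""
    return [random.uniform(-1, 1) for _ in range(n)]

def brown(n):
    """Brown noise: a drunkard's walk, each value a small step from the last."""
    values, level = [], 0.0
    for _ in range(n):
        level += random.uniform(-0.1, 0.1)
        values.append(level)
    return values

def pink(n, n_sources=4):
    """Pink (1/f) noise: keep several random sources, re-roll source k only every
    2**k steps, and sum them. Slow sources supply long-run correlation, fast ones
    supply short-run surprise."""
    sources = [random.uniform(-1, 1) for _ in range(n_sources)]
    values = []
    for step in range(n):
        for k in range(n_sources):
            if step % (2 ** k) == 0:
                sources[k] = random.uniform(-1, 1)
        values.append(sum(sources) / n_sources)
    return values

for name, seq in [("white", white(N_NOTES)), ("brown", brown(N_NOTES)), ("pink", pink(N_NOTES))]:
    print(name.ljust(5), " ".join(f"{x:+.2f}" for x in seq))
```

Mapping those values on to the notes of a scale gives the three kinds of ‘tune’ Gardner describes: white melodies sound aimless, brown melodies wander, and pink melodies sit in the moderately correlated middle.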

Our perception of the world seems to cluster around pink noise.

From the cradle to the grave our brain is processing the fluctuating data that come to it from its sensors. If we measure this noise at the peripheries of the nervous system under the skin of the fingers it tends, Mandelbrot says, to be white. The closer one gets to the brain, however, the closer the electrical fluctuations approach 1/f. The nervous system seems to act like a complex filtering device, screening out irrelevant elements and processing only the patterns of change that are useful for intelligent behaviour.

Martin Gardner, p. 24, Mathematical Games (1978)

The thread connecting pink noise, music and the expectancy vs surprise continuum is clear.

It is commonplace in musical criticism to say that we enjoy good music because it offers a mixture of order and surprise. How could it be otherwise? Surprise would not be surprise if there were not sufficient order for us to anticipate what is likely to come next. If we guessed too accurately, say in listening to a tune that is no more than walking up and down the keyboard in one step intervals, there is no surprise at all. Good music, like a person’s life or the pageant of history, is a wondrous mixture of expectation and unanticipated turns.

Martin Gardner, p. 28, Mathematical Games (1978)

The same applies to all the media we consume, from films and novels to newspaper articles. While the format is similar, the content must be sufficiently different to pique and retain our interest. Hollywood can’t be sustained on Marvel sequels and prequels alone. As consumers get bored, new forms will replace the old, much like the new schools of art that have emerged over the past 400 years.

Bore out versus burn out

What are the conditions that encourage the state of flow, when work comes easily, minutes drift into hours and we become super productive? According to Steven Kotler, a key psychological trigger is to find the balance between the challenge of the task in hand and our skills and ability to perform that task. We need to find tasks that stretch our abilities enough to force us into the present, but not so much that we snap.

This bore-out-vs-burn-out dichotomy maps directly on to the expectancy vs surprise continuum.

Kotler suggests, as a rough heuristic for finding flow, that the task should be around 4% greater than the skills one brings to it. This varies from person to person. High achievers may blow way past this threshold without any of the motivational reward of flow and risk burnout, while underachievers need to get comfortable with feeling uncomfortable and stretching themselves.

Watch out for

Everyone’s tolerance for structure or surprise varies. And for each of us it waxes and wanes over time.  

You may boost overall happiness and fulfilment by compensating when one area of your life feels overly restrictive, or, on the other side of the ledger, disorderly. For example, a natural creative might feel stifled working for a large, bureaucratic company, but find solace in creative writing or abstract art or, indeed, writing an irreverent blog.

Resources

Gardner, Martin (April 1978), Mathematical Games, Scientific American, Vol. 238, No. 4, pp. 16-33
Jones, Robert Alun (1986), Emile Durkheim: An Introduction to Four Major Works, Beverly Hills, CA: Sage Publications, pp. 82-114
Kotler, Steven (May 2014), Create a Work Environment That Fosters Flow, Harvard Business Review

John Kotter’s 8 stages of change management

Most major change programmes fail because of a lack of proper planning, according to John Kotter. He proposed eight steps to overcome this.

Kotter’s 8-step model of change management was first published in a 1995 Harvard Business Review article and followed up in his book Leading Change. His steps comprise:

  1. Create a sense of urgency
    Find what’s going wrong or what needs to change and make it dramatic. A new competitor is a crisis, a new technology is a once-in-a-lifetime opportunity. Make a statement that communicates the importance of acting immediately.
  2. Build a large, powerful coalition
    One executive isn’t enough. Get the chairman, division managers and more on board. Build a coalition outside the normal business hierarchy — after all, if the corporate structure was working well, there would be no need for change.
  3. Develop a vision for change
    Go beyond the numbers and talk about a direction, a picture of what the future looks like and how it will be different from the past. It doesn’t have to be fully formed. Something that customers, investors and employees will understand and throw their weight behind.
  4. Communicate the vision
    This is about rallying the troops. One meeting isn’t going to do it. Make the vision part of everyday activities, appraisals and reviews, and make it exciting. Broadcast it through every channel. Leaders should walk the walk in everything they do.
  5. Remove obstacles
    These may come in the form of narrow job categories, a stubborn boss, or an incentive system that goes against the vision.
  6. Generate short-term wins
    This is not the same as hoping things go well. Plan what goals you’re going to reach and when, make them happen and celebrate them. Keep morale high to keep people on board.
  7. Build on wins
    Declaring victory prematurely kills momentum. Instead, use those short-term wins to tackle even bigger problems. Changes to corporate culture take at least five years to set in.
  8. Embed changes into culture
    Once you’ve transformed the corporate culture and values, make those changes stick. Show employees that performance improved because of the changes. Promote those who personify the new way of doing things.

Watch out for

Kotter said the two most important lessons from successful transformations are that they take a considerable length of time, and that failure in any one of these steps can be catastrophic.

He also argued that change requires leaders, not managers. (The book is called Leading Change, not Managing Change after all.) Leadership is “the engine that drives change”, taking risks and entering uncharted territory. Without disparaging the skill of management, the mindset of simply making things work as they are is doomed to failure when it comes to the transformation of a company.

Support from senior management is essential to any big change so make sure you have identified the stakeholders whose support you need, and make sure you dedicate the time and resources to persuade them to be strong advocates for your project.

With thanks to Ivan Edwards who wrote most of this post. Thanks Ivan!

Resources

Kotter, John P. (May 1995), Leading Change: Why Transformation Efforts Fail, Harvard Business Review
Kotter, John P. (November 2012), Leading Change, Harvard Business Review Press
Hamel, Gary & Zanini, Michele (October 2014), Build a change platform, not a change program, McKinsey & Company