Good evening Ladies and Gentlemen. Tonight we explore managing our 'elders and betters', or sometimes our 'youngers and betters', well perhaps just our betters. As Gresham regulars know, it wouldn't be a Commerce lecture without a commercial so I'm pleased to announce that the next Commerce lecture will be 'Local Or Global? Network Economics And The New Economy' here at Barnard's Inn Hall at 18:00 on Monday, 26 January 2009.
An aside to Securities and Investment Institute, Association of Chartered Certified Accountants and other Continuing Professional Development attendees, please be sure to see Geoff or Dawn at the end of the lecture to record your CPD points or obtain a Certificate of Attendance from Gresham College.
In a 2004 essay in Scientific American, Michael Shermer noted that Francis Bacon identified four barriers to clear judgement: 'idols of the cave (individual peculiarities), idols of the marketplace (limits of language), idols of the theatre (preexisting beliefs) and idols of the tribe (inherited foibles of human thought).' Shermer pointed out that 'in one College Entrance Examination Board survey of 829,000 high school seniors, less than 1 percent rated themselves below average in 'ability to get along with others', and 60 percent put themselves in the top 10 percent.' [Shermer, 2004] When we rate ourselves, even when we are told about our self-serving biases, we are friendlier, smarter, more selfless, sexier and more modest than average.
As I noted in a 2005 lecture on measurement, Garrison Keillor welcomes you to Lake Wobegon, 'where all the women are strong, all the men are good-looking, and all the children are above average.' In a 2006 lecture on lotteries, I echoed J R Dobbs 'You know how dumb the average guy is? Well, by definition, half of them are even dumber than that.'
Tonight I'd like to explore the contrast between these two quotes. Further, I'd like to dwell on managing the above average, taking J Robert Oppenheimer's quote about Albert Einstein seriously, 'Any man whose errors take ten years to correct is quite a man.'
Many people have pointed out the absurdity of more than half of us being 'above average'. Does it matter? I contend that yes it does. We spend a lot of our time trying to manage people who have more knowledge than we do, more skills, more connections, more power, more money, more computers, more status. Carl Fürstenberg (1850-1933), a German banker, reputedly said, effectively, 'Shareholders are stupid and impertinent: stupid because they give their money to somebody else without any effective control over what this person is doing with it; impertinent because they ask for a dividend as a reward for their stupidity'. ['Die Aktionäre sind dumm und frech; dumm, weil sie Aktien kaufen, und frech, weil sie auch noch Dividende erwarten.'] These people can be your doctor, your investment advisor, your broker, your lawyer, your computer consultant, your political representative. They can also be your staff or your gifted children or a bright relation. If you're a mid-sized nation, it may be about the way you try to influence a superpower. Finally, as a citizen, it can be about regulating government officials and policy makers who are spending other people's money, yours that is. I think it's important that we think about how we manage the above average. When can't we manage? What tools and techniques might help us?
Games entertain. At Christmas time, we spend a lot of time playing various games, board games, card games, dice games, guessing games, strategy games, trivia games and puzzles. Games are amazing things. People deliberately sit down to play games that can involve a lot of different skills, such as guessing people's reactions, empathy, numerical calculation, pattern recognition or general knowledge. Of course, I'm particularly intrigued by 'The Game'. Its rules are simple. The first rule is, 'you are always playing The Game, but if you think about The Game, you 'Lose The Game''. So to all game players awake just now, you just lost 'The Game' (refer to previous rule). Second, if you lose The Game, you don't lose again until you have forgotten and then remembered The Game again. The Game is truly for the Aristotelian above average person, as Aristotle maintained that 'It is the mark of an educated mind to be able to entertain a thought without accepting it.'
Games also frighten. There is nothing worse than playing the wrong game or realising too late that others have stepped out of the game or finding you're playing against the above average. The 1996 Michael Douglas film, 'The Game', or the horror films of Hannibal Lecter, starring Anthony Hopkins, show how afraid we are of getting into situations out of our depth, above our head. Though we can also have a good laugh at bumbling naivety winning a game, such as the 1997 Bill Murray film, 'The Man Who Knew Too Little', or the 1959 film 'The Mouse That Roared', featuring the success of the Duchy of Grand Fenwick in its bloodless conquest of the United States.
I often use the device of a science fiction story about hyper-intelligent aliens to show people that we do assume, within some bounds, we can best the above average. Everyone has some superiority in a field, or combination of fields, such that if they choose the right battleground they can win. Our opponents can be above average in something, or many things, but not everything. Thus the above average must remain vulnerable somewhere. Perhaps that's why Albert Camus noted that, 'Nobody realizes that some people expend tremendous energy merely to be normal', i.e. not to be perceived to be vulnerable.
A good example of besting someone intellectually above average comes from a 1973 children's story, The Princess Bride, by William Goldman, made into a Rob Reiner film in 1987. In the story, the Sicilian criminal genius Vizzini, played by Wallace Shawn, has abducted Princess Buttercup, but is thwarted by a hero, the Masked Man in Black. At each thwarting Vizzini exclaims, 'Inconceivable!'. The Man in Black challenges Vizzini to a duel of wits in order to free Princess Buttercup. The Man in Black sets out two goblets of wine into which he claims to have put some deadly iocane powder. The game is simple, Vizzini has to choose a goblet and then both will drink. The Man in Black starts, 'Where is the poison? The battle of wits has begun. It ends when you decide and we both drink, and find out who is right ... and who is dead.' After some rather funny dialogue involving why Vizzini cannot choose the goblet before the Man in Black, nor the goblet in front of himself, Vizzini distracts the Man In Black, swaps goblets, and then both drink. Vizzini explains to the Man in Black, 'Ha ha! You fool! You fell victim to one of the classic blunders! The most famous is never get involved in a land war in Asia, but only slightly less well-known is this: never go in against a Sicilian when death is on the line!', before collapsing dead. Freed, the Princess says to the Man In Black, 'And to think, all that time it was your cup that was poisoned.' To which the Man in Black replies, 'They were both poisoned. I spent the last few years building up an immunity to iocane powder.'
This little tale is typical of tales where superior intellects are bested by someone extending the rules of the game, or moving outside the rules of the game. Moreover, it calls into question what it means to be a superior intellect. Was Vizzini really so smart, or was the Man In Black just smarter? Elbert Hubbard quipped, 'Genius may have its limitations, but stupidity is not thus handicapped.' So we need to set out the rules of winning the game.
One first step in managing superior people must be to set out how to evaluate their performance. Now I'm one of the world's worst appraisers. Many years ago, one of my staff asked how much time she should schedule for her appraisal. My answer lacked some empathy, 'I don't know, how long does it take to fill out a P45? [the UK leaving work form]'. Touchingly, she stuck with me for eight more years. I once had the interesting experience of watching an Italian company evaluate two finance directors, one running the UK and the other running part of the Italian business. Let's call them George and Fabio. The evaluation consisted of reviewing performance, then assessing personal strengths and weaknesses, finally concluding with a pay decision. George was a heavy drinker, married yet a womaniser, with a penchant for rude stories and gambling. Fabio was a dapper, almost delicate, man with a lovely family and a deep, religious concern for his community. George was a hard-hitting accountant with a nose for numbers and an aptitude for tough tax negotiations, possibly the best tax person I've known. Fabio was, to put it mildly, a bit disorganised and his staff were constantly picking up the pieces and, sometimes, even covering for him.
They both received about the same message from their pay packet, good performance. In George's case the firm evaluated his performance in the role. The firm ignored his errant personal life and focused on his service to the firm. In Fabio's case the firm looked at his virtues as a person, 'he's a good man, with a lovely wife and children whom he dotes upon'. In the UK, the reaction to Fabio's pay rise was incredulity. 'How could they? Fabio's useless without his team.' In Italy, the reaction to George's pay rise was incredulity. 'How could they? How do they even trust George with the firm's money?' There are clearly some cultural nuggets to be mined, but the basic point here is that firms conflate performance and the person.
One performance management approach is to inculcate some set of basic values in every person. Long-term problems such as running a business sustainably are too complex to be specified in a set of auditable standards. Instead, we should imbue everyone with a set of virtues that permits them to make the right decisions. Professional bodies and their principles of ethical conduct are a good example. The summary of a Tomorrow's Company event on 24 September 2008 at Mansion House in London is typical:
'Of particular interest was the emerging focus of discussions at the event on the cumulative impact of environmental, social and governance factors on securing sustainable, long-term returns - rather than being about a simplistic notion of 'doing good'. The discussion kept on returning to the importance of a clear sense of purpose, and of being deeply rooted in values, in charting a course during these uncertain and turbulent times.'
The more we focus on linking performance to values, the more we move from the objective to the subjective. We move from George towards Fabio - 'shared values' approaches. While not denying the importance of culture [Howitt, Mainelli & Taylor, 2004] - 'would one rather have a bunch of honest people in a loose system or a bunch of crooks in a tight system?' - and its crucial role as the starting point for organisational and cultural change, shared values are hard to formalise. At one extreme, one can parody culturally-based programmes as 'rah! rah!' cheerleading - 'every day in every way let's get better', yet people in organisations do need to share values on risk awareness, assessment and action. Shared values are essential, but insufficient for regulating the above average.
The more we focus on linking performance to pay, the more we move away from Fabio and towards George. To paraphrase Gandhi, 'there are people in the world so greedy that God cannot appear to them except in the form of money'. Further, moving well towards George, even Joseph Juran, the great statistical guru of the quality movement, averred that 'fear can bring out the best in people'. [Donkin, Richard, 'The Man Who Helped Japan's Quality Revolution', Financial Times, 6 March 2008, page 7]
A common approach to performance management is 'control structures'. Often denigrated as 'tick-bashing', control structure approaches are particularly common in regulated industries. The difficulties with control structures are legion, e.g. tough to design, often full of contradictions ('Catch-22's'), difficult to roll-back, expensive to change. Control structures often result in a command-and-control organisation, rather than a commercial one, with costs frequently exceeding not just the potential benefits but also the available time [Mainelli, 2003]. While organisational control dashboards are culturally suited to banks or government organisations (bureaucratic 'tick-bashing' and form-filling with which they are familiar), excessive control structures undermine and contradict shared values.
The positive view of undermining is 'working the system', but the negative view is lying. A simple example: managers inculcate a lying culture among subordinates to avoid chain-of-command pressures on targets - 'I know you can't lock the computer door on our African computer centre because you've been awaiting air conditioner repair for the past five days, but could you just tick the box so my boss stops asking about it on his summary risk report?'. Another example: people repeatedly answer questions with the desired answer, e.g. 'does this deal have any legal issues' - strongly-suggested-answer-for-an-easy-life 'no', thus penalising honest thinking. Finally, the resulting RAG (red-amber-green) reports cannot be readily summarised or contrasted - five open computer room door incidents may be rated more important than a single total power outage.
A top-down, taxonomic, checklist approach conflicts with a bottom-up, values and virtues approach encouraging people throughout the system to make responsible decisions. Unfortunately, the trite response, 'we want both', is not so trite. What happens when the top-down taxonomy meets a virtuous person who believes the correct decision doesn't accord with the checklist?
We are left to consider the types of measure that can be applied to performance, 'it is the job of the scientist to discover appropriate measures.' [Beer, 1966, page 537]. But when it comes to performance, 'All principles, all truths, are relative, they [early humanists] said. 'Man is the measure of all things'' [Pirsig, 1974, page 337]. So over-simplistically, all that remains is to collate the responses about performance from other people. Clearly, a large part of the problem is that people are selecting the measures: 'The track of Quality preselects what data we're going to be conscious of, and it makes this selection in such a way as to best harmonize what we are with what we are becoming.' [Pirsig, 1974, page 280]
A lot of people say we shouldn't look at the role or the person, we should be looking at their performance. Before we get started on performance, I have a little performance test I'd like us all to take. You should have had on your seat an envelope. Please go ahead and open it. It's a little Christmas present from my firm, Z/Yen [one penny], to each of you. I often find that a coin is one of the best decision-making tools. When I can't make up my mind I frequently assign the two options to heads and tails, flip a coin, and keep it covered under my hand. Without looking at the result, I think hard about whether I want to see heads or tails. Then I put the coin in my pocket without looking. Try it sometime. It drives friends and family nuts.
We scoured the world to find the most insignificant present and are delighted to present this superlatively tiny gift to you; we couldn't find a smaller one. Now before you spend it all at once, we shall use your pennies for an experiment. First, we must have a little health & safety briefing. Would you please realise that this penny is not for eating, nor for sticking in electrical sockets, and if you find yourself using this penny to plug an erupting volcano, please be sure to use appropriate thermal safety equipment. Second, and more importantly, you shall be flipping this coin so please make sure not to take out your neighbour's eye. Finally, the coin has two sides, heads - more formally called the obverse - and tails - the reverse. In the UK, only the reverse seems consistent, so you will have to be flexible about the heads you have on your experimental device and what you see on the screen.
Meanwhile, have a practice spin now and show your neighbours the result. In preparation for this evening, over the weekend I visited a random number generator on the web that flipped coins for me. I recorded 10 coin tosses in all. Now would you all please stand? I'd like you to get ready for up to 10 coin tosses. After each toss I'm going to ask you to show your neighbours the result. If your toss accords with the random number generator, then please keep standing. If your toss contradicts the random number generator, then please sit down.
Please keep standing if you have [heads/tails].
Please sit down if you have [tails/heads].
[on the night, the rough numbers from 100 people were as would be predicted, 50 sat down in Round 1, 25 in Round 2, 12 in Round 3, then 7 in Round 4. However, the remaining 5 stayed standing after Round 5, then 3 sat down in Round 6 - but one was a friend, Ian Harris, so Helen won the book!]
Let's have a look at the maths. The first column here shows the coin toss, the second the results of the flips I recorded this weekend, the third the odds (always 50:50) of you being right, the fourth the probability on a scale from 0 to 1, and the fifth the probability expressed as odds, i.e. 1 in 2 chances of getting the first toss correct. As you can see, you have a 1 in 1,024 chance of getting 10 tosses correct in a row.
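For readers following along at home, the halving arithmetic in this table can be sketched in a few lines of Python (the audience of 100 is taken from the lecture; the function name and layout are my own):

```python
# Each correct call halves the probability of having matched every flip
# so far; after n tosses the chance is (1/2)^n, i.e. 1 in 2^n.

def survival_table(tosses: int, audience: int = 100):
    """Return (toss, probability, odds, expected survivors) rows."""
    rows = []
    for n in range(1, tosses + 1):
        p = 0.5 ** n                      # probability of n correct calls in a row
        rows.append((n, p, round(1 / p), audience * p))
    return rows

for n, p, odds, left in survival_table(10):
    print(f"toss {n:2d}: p = {p:.6f}  (1 in {odds})  ~{left:.1f} of 100 still standing")
```

The final row reproduces the 1 in 1,024 chance of ten correct tosses in a row.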
Tonight out of 100 people in Barnard's Inn (I'm afraid I can't see the overflow area), two made it through the 6th toss, i.e. 1 in 64 chances of getting 6 tosses correct in a row. Pretty much what you'd expect.
Now many of you will feel cheated. This was hardly a complicated game and there was no skill involved, just dumb luck (no offence intended!). When you look at games such as coin tossing and lotteries, remember Groucho Marx, 'He may look like an idiot and talk like an idiot but don't let that fool you. He really is an idiot.'
When you look at popular games it's interesting to notice that games of dumb luck are rarely played by consenting adults. Games that just count on dice throws, for example to go round a board with markers, or purely random card games, are uninteresting. Young children may be interested in some games that are close to pure chance, but rapidly progress to games that mix luck and choice or strategy. At the other extreme, games without chance are merely puzzles. I wonder whether the demonstration that computers can consistently win at chess - despite chess's complexity it is ultimately calculable - has contributed to a decline in chess's popularity. Popular adult games mix chance and skill, luck and strategy. How can we tell in what proportions? A game of complete chance means that over numerous plays, no player seems to get ahead - or they've all failed to grasp the game. As the amount of skill or strategy rises, then we should see skilled players truly move ahead, though they could just be having a long lucky streak. We talk about the truly skilled making their own luck, which can be true in a strategic game, but as the degree of chance rises we need to run the game more and more times to discern true skill.
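This last point - that more chance demands more plays - can be sketched with a simulation. The 55 per cent win rate standing in for 'skill' is my own invented figure, not from the lecture:

```python
# A player wins each game with probability 0.55 against a fair 0.50
# benchmark. How often is that edge visible, i.e. how often is the
# skilled player ahead after n games?
import random

def edge_visible(skill_p: float, games: int, trials: int = 2000, seed: int = 1) -> float:
    """Fraction of simulated trials in which the skilled player wins more than half the games."""
    rng = random.Random(seed)
    ahead = 0
    for _ in range(trials):
        wins = sum(rng.random() < skill_p for _ in range(games))
        if wins > games / 2:
            ahead += 1
    return ahead / trials

for n in (10, 100, 1000):
    print(n, edge_visible(0.55, n))
```

Over 10 games the 55 per cent player looks much like anyone else; only over hundreds of games does skill reliably separate from luck.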
'Success measures' suffer from the complexity of measuring not what level of success was achieved, but what level of success should have been achieved. Further, if there are differences between actual and potential, are they attributable to luck or skill? For example, in military affairs, inaction or the initial disposition of forces are believed to be decisive factors in many victories. Sun Tzu indicated that there is a strong element of quiet victory in strategic success and that true success is the victory won before an obvious battle, 'and therefore the victories won by a master of war gain him neither reputation for wisdom nor merit for valour' [Sun Tzu, 1963]. It can be difficult to distinguish positive inaction from luck, or negative action from misfortune. When objective measures of success are attempted, the results are disappointing, particularly as detailed by Sherden [1998] or Mintzberg [1994, pages 91-152].
Still, wouldn't it be great if you could be paid for managing the random - a thousand monkeys trying to write Shakespeare or dogs playing poker? Of course the great unspoken unease arises from the slight doubt that things might not be random. Perhaps the monkeys of metaphor know more about language than we grant. Perhaps the dogs could become skilled poker players. There are many cases where we reward people for managing situations that might be random, but we're not sure. Before we proceed further, just consider politicians and the economy, a spiritual advisor, or a fitness trainer. These are all good examples of where you want to believe that there is a cause and an effect, that these people do control the outcomes, but you realise that there are many factors involved which can thwart success, not least of which may be you. It's almost religious, you want someone else to be responsible for the wrath, and beneficence, of the gods, while realising that they truly don't control things. It reminds me of a Woody Allen remark about religious belief, 'To you I'm an atheist; to God, I'm the Loyal Opposition.'
One of the great debates about chance and control and professionalism looks at how we pay lawyers. Richard Susskind tells a story about his daughter that is identical to a story about mine. Both of us wanted to have our CDs 'ripped', that is put onto a digital medium so we could play them on our iPods. Both of us asked our young daughters if they wouldn't mind doing this job for us. Both daughters asked how much they'd get paid. Both fathers had no idea what was fair or how many CDs could be ripped in an hour, so in a spirit of fairness both said a few units an hour. Both daughters said, 'well, then I'll go very slowly, won't I'.
We pay many trades by the hour - lawyers, plumbers, doctors, accountants, masseuses - despite the fact that 10 year olds can easily see the flaws in the system. One flaw is our ignorance. We are ignorant of the tradesman's skills, the difficulty of the problem, how long it could take. Part of the problem is mutual ignorance about unknown unknowns, such as this problem could get rather large if the main intake pipe has been accidentally cemented into the ground, or this legal case could get quite large if the other side doesn't want to settle. For lawyers, success fees, payment only if they win your case, are problematic. Success fees seem straightforward and fair. Much fairer than a lawyer getting paid by the hour to drag out cases or walking away once your wallet is exhausted. Success fees mean a lawyer has to be more upfront with you about the odds and consider what he or she does control. How chancy is your case? Of course, as Jonathan Howitt points out for investment managers, you have a bunch of people who believe they're going to win or they would never have entered the race.
However, success fees might motivate lawyers to dredge up cases that don't need to be fought but that have a high likelihood of success. Success fees might mean lawyers only fight easy cases. Success fees might mean that you grossly overpay your advocate as the effort involved is unrelated to the size of the settlement. More straightforwardly, and avoiding the issue of payment, a good friend of mine, and a doctor, Will Ayliffe asserts that 'smart guys don't have managers'.
In situations with a degree of chance, performance measurement is fraught with problems. Let's look at performance measurement in finance for a moment. To get things started, have a look at 12 years of NASDAQ Index data. Twelve years ago it was bouncing around between 1,200 and 1,480 points. Last week it was bouncing around 1,500 points. Now you might give a fund manager a target of tracking the NASDAQ. At what point do you evaluate performance? Clearly you are a reasonable person and know that these indices bounce around a lot, so you decide to be very very generous and only check up on him or her once every three years, as indicated here. In the event, it turns out that he or she just tracked the NASDAQ Index. But what sort of evaluation will it be? In the first three years, great success. In the second three years, great loss. In the third three years, some gains, and we'll see about the fourth three years next week. Professor Andrew W Lo of MIT has been trying to pick apart hedge-fund managers. He talks about strategies that are akin to 'picking up nickels in front of steamrollers', i.e. temporary, advantageous trading opportunities. Thus, hedge-fund managers can amass great credentials that may not be sustainable, yet at the height of uncertainty they garner the most investment. A lot of the track record has to do with the frequency of observation. Further, David Benson points out that the period for meaningful observation of some markets might be 25 years or more before a fair evaluation is possible. How can anyone wait this long? Or work this long at the same firm? So there are plenty of problems dealing with people who are smooth and smart, trying to apply trust over rules in a situation where numbers only tell you half the story.
But few people are as reasonable as you. Who wants to wait more than a decade to see how performance works out? So let's look at two fund managers and their performance over the past year. We have George and Fabio again. Here is their net fund value at the end of each month. Hard-bitten George who delivers the numbers, and feeble but friendly Fabio, such a nice guy. Again, nobody would be foolish enough to evaluate them over as short a period as a month, so let's evaluate George at the end of every quarter. Great performance, eh? George has consistently delivered rising performance over the year. Turning to Fabio though, how sad. Here Fabio has turned in worse and worse results each quarter. Sure, George has had his bad months too, but Fabio, surely this can't go on? On the other hand, Fabio has been positive in eight out of 12 months, while George has only been positive in four out of 12 months.
However, if we get a bit fancier, we could look at a trend line - the dotted yellow lines. A regression through both Fabio's and George's performance shows you that Fabio has actually been improving slightly over time, while George has been declining. But far more importantly, it's fairly clear that things are quite random. R² is a widely used measure of how well a trend line 'fits' the data. An R² of 1.0 implies perfect predictability. An R² of 0.0 implies things are random. George's poor trend is nearly random with an R² of 0.0067, and Fabio's good trend even more so at 0.0026. So while you've been measuring their performance assuming that they are decent fund managers, actually neither of them can prove that they are better than random.
[on the night, an astute member of the audience wondered why, if the graphs appeared to be reversed, R² differed. In fact, the graphs were almost mirror images but, adjusted for the trend lines, not identical, hence the slight difference in R².]
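For readers who want to reproduce the idea, here is a minimal sketch of the R² calculation, using invented monthly figures rather than the numbers on the slides:

```python
# Fit an ordinary least-squares trend line through a monthly series and
# report R², the fraction of variance the trend line explains.
import random

def r_squared(y):
    """R² of a least-squares line fitted against time 0..n-1."""
    n = len(y)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(y) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((v - (intercept + slope * x)) ** 2 for x, v in zip(xs, y))
    ss_tot = sum((v - my) ** 2 for v in y)
    return 1 - ss_res / ss_tot

rng = random.Random(42)
noise = [rng.gauss(0, 1) for _ in range(12)]             # twelve "monthly" results
print(round(r_squared(noise), 4))                        # typically near 0: no real trend
trend = [0.5 * m + rng.gauss(0, 0.1) for m in range(12)]
print(round(r_squared(trend), 4))                        # near 1: a genuine trend
```

Run on purely random monthly figures, the fitted trend line's R² hovers near zero, just as with George and Fabio.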
This is not just theoretical. I remember an interesting review I had to perform on a magazine during the recession in the early 1990's. The magazine was in the building trade and made most of its money from advertising. Advertising revenues were plummeting and the board questioned the competence of the managing editor, and even considered closing the magazine. Based on not inconsiderable experience in media and publishing, my firm was called in to review the situation. I looked at everything the managing editor was doing and found little to fault. In fact, he was very thorough about keeping good records. Based on his records I was able to show that their advertising revenues correlated strongly with new housing starts by the building trade - the R² was very strong. The proposition we brought back to the board was 'if you believe that the housing industry will pick up and new housing starts will rise, then you should persevere; if not, close the magazine.' The board decided to stay the course and when housing starts returned, the magazine rapidly returned to profit. R² saved the day.
Now if I can subtly shift ground, assume that George and Fabio are not trying to track an index where we were looking at their net fund value, but that these are their net profit performances over time - the dotted yellow lines are their cumulative profits. Here their roles are certainly reversed, George is doing terribly, while Fabio is doing well. We can pull out three big points from these three slides: frequency of observation matters, volatility matters and cumulative assessment matters.
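These three points can be illustrated with a toy series of invented monthly profits (my numbers, not the lecture's slides), viewed quarterly, cumulatively and month by month:

```python
# The same twelve monthly profit figures tell three different stories
# depending on how often, and how cumulatively, you observe them.
monthly = [3, -2, 4, -1, 2, 3, -3, 1, 5, -2, -1, 6]     # invented monthly profits

# Quarterly snapshots: a seemingly declining performer...
quarterly = [sum(monthly[i:i + 3]) for i in range(0, 12, 3)]

# ...whose cumulative position is nonetheless healthy.
cumulative = []
total = 0
for m in monthly:
    total += m
    cumulative.append(total)

print("quarterly:", quarterly)           # coarse sampling shows a decline
print("year total:", cumulative[-1])     # cumulative assessment is positive
print("negative months:", sum(m < 0 for m in monthly))  # the volatility
```

Sampled quarterly this manager looks like he is fading; assessed cumulatively he has had a good year; looked at monthly he is simply volatile.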
There are at least three conflicts that emerge from our desire to link remuneration with accountability. The first conflict is that we tend to value 'commission over omission'. We want to believe that people made a difference. We over-pay for luck, we under-appreciate preparedness and we under-penalise failure to anticipate. Remuneration committees find it difficult to account for luck. In a year when Kirk Kerkorian, Sumner Redstone and Hank Greenberg are stumbling ['No Country For Old Men', The Economist, 25 October 2008, page 80], we are still unsure if financial and business wonder folk are full of luck or skill.
The second conflict is that we value 'losses over gains'. Remuneration committees are happy to pay out when things are going well. In fact, when things go very well they pay out too much. They tend to make people with losses suffer too much for both accidents and mistakes. Given that losses are so disproportionately penalised, yet remuneration only kicks in for positive results, an unintended result is to increase risk-taking rather than reduce it. If you don't get paid except for gains, but get kicked out for small mistakes, you might as well risk a big mistake.
The third conflict is that we value 'present over future'. Preparing a company for the future is not as important as good results today. This leads to under-investment. We tend to discount the future at such a rate that we over-penalise the cautious. We tend to penalise the person who plans for a rainy day if the rainy day doesn't arrive. We tend to reward the person who doesn't plan for a rainy day, so long as it doesn't rain or, when it does, everyone else is rained on too. Further, we benchmark performance based on current results rather than having the remuneration committee take joint responsibility for sensible future stewardship by agreeing that evaluating management's current performance needs to take future probabilities into account.
These three problems become more complex when we interlink them with our expectation of other people's expectations, feed-forward and feed-through as we've discussed before. From The Economist ['The Big Mo', 22 November 2008, page 96 - http://www.economist.com/finance/displaystory.cfm?story_id=12652255]:
'In the short term, it is difficult to distinguish management skill from luck. Because the index represents the average return of all investors before costs, some managers will beat the index while others will underperform. There is a natural tendency to assume the outperformers are skilful. So the underperformers will lose clients and the outperformers will gain.
'The dotcom bubble was a case in point. 'Value' investors (who look for stocks that appear cheap by usual measures) ignored the technology industry. They were dumped by clients who gave money to 'growth' investors (who look for companies with a promising future) instead. By itself, that pushed up the value of dotcom stocks and made the relative performance of value investors even worse.'
Keynes spoke about being conventionally wrong rather than unconventionally right: 'It is better to fail conventionally than to succeed unconventionally.' More poetically, in his essay 'Self-Reliance', the American transcendentalist philosopher and poet Ralph Waldo Emerson said, 'It is easy in the world to live after the world's opinion; it is easy in solitude to live after our own; but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.'
So what can we recommend? Probably two things - the scientific method and competition. In the late 19th century, Frederick Winslow Taylor promoted scientific management. The legacy of Taylor's early attempts to systematise management and processes through rigorous observation and experimentation led to the quality control movement from the 1920s, Operations Research and Cybernetics from the 1940s, and Total Quality Management from the 1980s, leading through to today's Six Sigma and Lean Manufacturing. The aim of scientific management is to produce knowledge that improves organisations using the scientific method. Taylor promoted scientific management for all work, such as the management of universities or government.
The scientific method is based on the assumption that reasoning about experiences creates knowledge. Charles Sanders Peirce, building on Aristotle, set out a threefold scheme of abductive, deductive, and inductive reasoning. Inductive reasoning generalises from a limited set of observations - from the particular to the general - 'every swan we've seen so far is white, so all swans must be white'. Deductive reasoning moves from a set of propositions to a conclusion - from the general to the particular - 'all swans are white; this bird is a swan; therefore this bird is white'. But neither inductive nor deductive reasoning is creative. Abductive reasoning is creative, moving from observations to theories: it generates a set of hypotheses and chooses the one that, if true, best explains the observations - 'if a bird is white, perhaps it's related to other white birds we've previously called 'swans', or perhaps it's been painted white by the nearby paint factory'. Abductive reasoning prefers one theory on some criterion, often parsimony in explanation, such as Occam's razor: 'all other things being equal, the simplest explanation is the best'.
The scientific method may be the application of a process to the creation of knowledge from experience, but it is hardly a sausage-machine. It is often represented as: 1 - gather data; 2 - create a hypothesis; 3 - deduce a prediction; 4 - formulate an experiment; 5 - corroborate, or else go back to 2 and create a new hypothesis. William Whewell noted in the 19th century that 'invention, sagacity, genius' are required at every step in the scientific method. The hypothesis stage is often overlooked as a creative stage. Creative people are different. They are misfits and, rather like The Game, if you recognise you're creative in a large organisation, you cease to be creative. But creative people love learning.
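The five-step loop can be rendered as a toy, runnable sketch - assuming the narrowest possible world, in which the 'hypotheses' are simple linear rules y = a·x + b and the 'experiment' is checking each rule against the observations:

```python
# A toy rendering of the five-step loop. The data, hypothesis space,
# and test are all invented for illustration.

def gather_data():
    # 1 - gather data: observations of some unknown process (here, y = 3x + 1)
    return [(x, 3 * x + 1) for x in range(5)]

def create_hypotheses():
    # 2 - create hypotheses: candidate rules y = a*x + b (the creative step)
    for a in range(5):
        for b in range(5):
            yield (a, b)

def predict(hypothesis, x):
    # 3 - deduce a prediction from the hypothesis
    a, b = hypothesis
    return a * x + b

def corroborated(hypothesis, data):
    # 4/5 - experiment and corroborate: does every observation fit?
    return all(predict(hypothesis, x) == y for x, y in data)

data = gather_data()
surviving = [h for h in create_hypotheses() if corroborated(h, data)]
print(surviving)  # only the rule (a=3, b=1) survives falsification
```

The mechanical part - testing and discarding - is easy to automate; step 2, inventing the candidate rules in the first place, is where Whewell's 'invention, sagacity, genius' comes in.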
Perhaps the most interesting 20th century insight into the scientific method came from Karl Popper, who asserted that a hypothesis, proposition, or theory is scientific only if it is falsifiable. Popper's assertion challenges the idea of eternal truths because only by providing a means for its own falsification can a scientific theory be considered a valid theory. Every scientific theory must provide the means of its own destruction and thus is temporary or transient, never an immutable law. Or, in other words, we progress via experiment - we must, and will, have failures. Why should I reiterate this? Because the key element is to force people to be predictive. If they can't be predictive, then push them by categorising their work as gambling, restricted professionalism or piece work. Nothing wrong with any of those three, but often a dent to false pride.
Our firm has become much more forceful with the above average over the years. Can they predict their own performance? With luck, their enthusiasm for science and prediction allows us to hoist them by their own petard. If they can't predict their own performance at all, i.e. an R2 close to 0.0, then clearly they either aren't putting enough effort into data collection or prediction, or they could be adequately substituted by a random monkey. If they can predict their own performance brilliantly, i.e. an R2 close to 1.0, then they should be thinking about automating themselves out of a job. More likely, R2 evaluations between 0.3 and 0.7 keep them on their toes. They should be able to compare their predicted performance against their actual performance in a spirit of scientific enquiry.
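As a hedged sketch of what that scoring looks like in practice - the performance figures below are invented for illustration - the comparison of someone's own forecasts with the outcomes is a straightforward R2 calculation:

```python
# Scoring self-predictions with R2. All figures are invented.

def r_squared(predicted, actual):
    """Coefficient of determination between predictions and outcomes."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for p, a in zip(predicted, actual))
    return 1 - ss_res / ss_tot

def verdict(r2):
    """The three bands discussed in the lecture."""
    if r2 < 0.3:
        return "random monkey territory"
    if r2 > 0.7:
        return "automate yourself out of a job"
    return "on their toes"

predicted = [10, 12, 9, 14, 11, 13]   # their own monthly forecasts
actual = [12, 10, 11, 16, 9, 13]      # what actually happened

r2 = r_squared(predicted, actual)
print(round(r2, 2), "-", verdict(r2))
```

With these invented figures the R2 lands in the middle band: predictable enough to rule out the monkey, not so predictable as to be automated away.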
Of course scientific enquiry also involves a lot of chance. Dudley Edmunds relates that he was once in conversation about headhunting for a fund with a human resources manager and a fund manager. The human resources manager wanted to specify 'above average intelligence' but couldn't define it more clearly. Somewhat exasperated, he said that average intelligence was a top-down view on the guesstimate of a bottom-up approach. The fund manager spilt his coffee laughing, though Dudley wasn't sure the human resources manager got the joke.
On competition: we spend a tremendous amount of time listening too uncritically to people talking about being above average, and it leads us into trouble. Every corporate mission statement outdoes the last in reaching for the extreme reaches of superlatives - Levi Strauss' 'We will clothe the world', Success Networks' 'Our mission is to inform, inspire, and empower people and organizations to be their best'. A superlative is almost certainly unattainable. How can you be your best every day? How can you sell the highest quality items at the lowest possible prices every day without fail, and stay in business? I grind my teeth when I'm told about 'totally excellent customer services' or 'the premier retailer of the finest quality pap'. I'm particularly fond of the puzzle set by Britannia College's mission from around the corner, 'Excelling Towards Excellence'.
At our firm we ask prospective employees, 'what was your biggest failure?'. Clearly we don't want failures as people, but people who don't recognise where they have failed are unlikely to recognise the role of luck in their previous successes. And this leads me to regulation. Criticising the Financial Services Authority, Jon Moulton, the famous private equity managing partner at Alchemy, says, 'Regulators cannot reasonably hire the best people in the marketplace because the best people in the marketplace will not work in that environment. Therefore, we should limit what they do to what they can reasonably understand.' [HOPE, Katie, 'Moulton Hits Out At Ability Of Watchdog', CITY AM, 14 May 2008, page 4]. He's most certainly right, but ultimately all of finance is an evolutionary process. The fittest survive and the weak are culled. However, there is often adverse selection. When those who are highly leveraged yet leave early generate high returns, while those who eschew debt until very late, then lose everything, are culled, we realise that timing is everything.
When we talk about managing the above average there are, as pointed out by Coffee [2007] and Ferran [2008], two variables at play - regulation and enforcement. We note that the most desirable combination is when regulation is light, appropriate, and effective at setting standards, and therefore the need for enforcement is minimal. As illustrated, other combinations may prove confrontational, confused, cumbersome, or ineffective. The problem, of course, is that heavy regulation impairs competition. One of the problems with historic financial services regulation has been that too big to fail = too big to regulate. Good regulation has few regulations and little enforcement, not just because it is 'principles-based', but because intense competition lowers the need for both regulation and enforcement.
So what do we conclude about the Above Average? I keep telling my wife that she has found Mr Right - first name, 'Always'. She begs to differ. In fact, she seemed to think that if this lecture had been called 'Regulating the Below Average' she would be more than qualified to give it herself - I can't imagine why. Her criticism of some of my lectures is that they are great surveys of problems but rarely give practical advice. As she's not here this evening, I thought it might be a good idea to try to quash her radical thought. What practical advice can I give about regulating the above average? Well, it's really rather simple: help them regulate themselves by appealing to professional or scientific virtue and by using control systems. But then there are two special things that apply when you help the above average to be regulated by the market:
• pin them to predictions - where things are numeric, hold them to strict predictions. A truly above average person should be able to help you evaluate their own performance. The scientific method helps them to realise they're learning. They should also concede that increased predictability should improve the overall system in which they work. I've often, unverifiably, invoked Mainelli's law of predictive efficiency - a system gains 1% efficiency for every 1% improvement in predictive R2. Perhaps regulators should give leeway to firms in proportion to the accuracy of their predictions of future performance;
• promote competition - far too often overlooked by regulators, competition is the key to handling the above average. A bit like throwing a bone into a pack of wolves chasing you, competition is the best way to manage the above average. In fact, competition is the most basic form of regulation, going back to anti-monopoly and anti-trust laws.
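Taken at face value, Mainelli's avowedly unverified law from the first point is a one-line linear rule; a minimal sketch, with illustrative R2 figures:

```python
# Mainelli's (avowedly unverified) law of predictive efficiency,
# taken at face value: 1% of system efficiency per 1% of predictive R2.

def efficiency_gain(r2_before, r2_after):
    """Percentage points of efficiency implied by an R2 improvement."""
    return (r2_after - r2_before) * 100

# Illustrative figures: lifting predictive R2 from 0.40 to 0.55 would,
# on this account, imply roughly a 15 percentage-point efficiency gain.
gain = efficiency_gain(0.40, 0.55)
print(round(gain))
```

The linearity is, of course, the unverified part; the practical point is only that leeway granted to a firm could be made proportional to its demonstrated predictive accuracy.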
One expression of Goodhart's Law is 'When a measure becomes a target, it ceases to be a good measure.' One corollary to Goodhart's Law I'd like to explore in closing is that 'When a target is overtaken by time pressures, it turns into a measure of popularity.' There are numerous examples where remuneration committees either can't or won't take responsibility for agreeing that today's actions are a responsible response to the future. They won't settle on an adequate, long-term frequency of observation. Remuneration committees overuse numerical benchmarks. Instead of evaluating a fund manager on long-term prospects (tough if he or she has just had a bad year), they evaluate on just this year's performance, killing the fund manager's interest in long-term prospects. Currently available measures are more likely to be 'popularity' measures, such as growth of funds under management, which arises from good marketing or public relations rather than good investment performance.
This pursuit of popularity is evident in CEOs seeking the front covers of business magazines, in managers pursuing popular strategies rather than correct ones ('no one ever got fired for following the herd'), in regulators pushing people to do what others are doing, and in remuneration committees relying on outside consultants who can only judge whether or not management are doing what everyone else is doing. Popularity may be the way to evaluate social performance at a high school, but it's a terrible way to run a sustainable organisation - avoid measuring popularity in place of performance.
In a financial system where we value commission over omission, losses over gains and present over future, all the while measuring popularity, it's no surprise that we increase risk. So, in regulating the above average, the key question is probably whether the regulators themselves are lucky or skilful, and how would they measure the difference?