Brent Wheeler has used economic analysis to drive successful investment and governance in the public and private sectors for the last 29 years. Typically with an eye2thelongrun

When Common Sense Fails

9th April 2021

A banker taped a picture, drawn by one of his small children, to his office wall. When he arrived at work the next morning, he found the picture was covered by a large notice, saying he was in violation of company policy which required personal items to be put away at night. Such a reaction was not just petty, it risked demotivating the banker completely. In short, it defied common sense.

Martin Lindstrom is a management consultant who spends his time battling the kind of corporate red tape that alienates customers, as well as employees. He has even persuaded some companies to establish special departments to fight this nonsense, sometimes dubbed “The Ministry of Common Sense”, which is the title of his latest book.

As Mr Lindstrom says, successful companies are able to put themselves in their customers’ shoes, and this leads to better service and sensible solutions. He once advised a credit-card issuer which had poor ratings for customer service. So he booked a restaurant for dinner with the executives, but got the fraud division to ensure their credit cards did not work. When one manager tried to pay for the taxi, Mr Lindstrom watched the executives’ fury and embarrassment as they tried to get through to the call centre themselves.

The author came across another example of poor customer satisfaction at Maersk, a big shipping company. On investigating the matter, he found that call-centre employees were judged on the time spent per complaint. The firm changed the metric for judging success from time spent to other factors, such as issue resolution. Customer satisfaction nearly doubled. Later the company suffered a cyber-attack which meant that the headquarters lost contact with its ships. The chief executive issued a directive to staff to “do what you think is right to serve the customer”. This flexibility helped the company to survive the crisis and improved employee engagement.

Often the problem stems from new regulations being introduced without thinking through the implications. The pandemic has provided plenty of examples of new rules that lack common sense. Last year Mr Lindstrom flew from Zurich to Frankfurt. The crew asked passengers to fill in a form detailing where they were from and where they were going, so that they could be traced in case others became infected. But there were only two pens on board, so the writing implements were passed from hand to germy hand. When they left the plane the passengers were asked to keep six feet apart as they filed down the steps; at the bottom they were crammed onto a packed shuttle bus.

When it comes to dealing with employees, budgeting rules are often the cause of frustration. Many companies insist that workers travel on a certain set of airlines, even when cheaper options are available, and insist they stay in certain chains of hotels, even though they may be many miles away from the site they are visiting. Mr Lindstrom recounts the story of a junior manager who had an executive shadow him for a day. To illustrate the problem, he decided to take the executive on a business trip. This required a 6.05am flight (the cheapest available); the executive agreed but took a business-class seat, which was against company policy. The executive then tried to read his emails on the plane—another breach of the rules, because the company required employees to access emails only when they were linked to a secure network. That regulation ensured they were out of reach for hours at a stretch.

Why can’t companies escape all this nonsense? Part of the problem is that bureaucracy has an innate tendency to multiply. Successful companies have only three or four reporting levels. Every reporting layer adds 10% to an employee’s workload, Mr Lindstrom estimates. And bureaucracy also means that employees’ time gets consumed by endless meetings, as Bartleby has often complained. Such gatherings should last no more than half an hour, says the author, who should clearly be hired by The Economist immediately.

In many companies, meaningful change could be achieved if the management just asked the staff. Most employees will be able to cite rules or practices that make it harder for them to do their jobs and to serve customers properly. Creating a special unit to push through the changes is a sensible idea, provided it has the support of senior management. That, of course, requires executives to have the common sense to appreciate that change is needed. Employees and customers can only hope they do.

This Bartleby article appeared in the March 13th 2021 edition of The Economist 

Feynman’s Tricks of the Trade

4th March 2021

One of the privileges we have is the ability, should we choose to use it, to “stand on the shoulders of giants” the better to see. One such giant is Richard Feynman, who offers one of the most insightful yet simple rule sets for thinking logically, rationally and helpfully. Here are seven tricks he summarises as invaluable principles for ensuring that we do not get lost as we seek to separate fallacy and fakery from clear understanding.

“We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbour’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?” —Michel de Montaigne

The Feynman Technique helps you learn stuff. But learning doesn’t happen in isolation. We learn not only from the books we read but also the people we talk to and the various positions, ideas, and opinions we are exposed to. Richard Feynman also provided advice on how to sort through information so you can decide what is relevant and what you should bother learning.

In a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen Scientist, Feynman talks through basic reasoning and some of the problems of his day. His method of evaluating information is another set of tools you can use along with the Feynman Learning Technique to refine what you learn.

Particularly useful are a series of “tricks of the trade” he gives in a section called “This Unscientific Age.” These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day.

Before we start, it’s worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy. It’s up to you to determine where applying these tricks might be most beneficial in your life.

Regardless of what you are trying to gather information on, these tricks help you dive deeper into topics and ideas and not get waylaid by inaccuracies or misunderstandings on your journey to truly know something.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone else truly knows their stuff or is mimicking others:

“My trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn’t know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away—bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don’t know. I used to be a general, and I don’t know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can’t tell you ahead of time what conclusion, but I can give you some of the principles I’ll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them, etc., etc., etc.””

If you learn something via the Feynman Technique, you will be able to answer questions on the subject. You can make educated analogies, extrapolate the principles to other situations, and easily admit what you do not know.

The second trick has to do with dealing with uncertainty. Very few ideas in life are absolutely true. What you want is to get as close to the truth as you can with the information available:

“I would like to mention a somewhat technical idea, but it’s the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it’s rather complicated, technically, but I’ll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It’s very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn’t impossible that it should turn sort of greenish color.”

So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven’t made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.”

Feynman is talking about grey thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn’t proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he’s proposing is Bayesian updating—starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
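The mechanics of Bayesian updating can be made concrete in a few lines of Python. This is my own sketch, not anything from Feynman's lecture, and the numbers (prior odds of 10 to 1, a 5% vs 50% chance of the greenish result under each theory) are invented purely for illustration:

```python
# A minimal sketch of Bayesian odds updating, using Feynman's
# Theory A vs Theory B example. All numbers are illustrative.

def update_odds(prior_odds_a_vs_b, likelihood_ratios):
    """Multiply the prior odds (A:B) by the likelihood ratio of each
    independent observation, i.e. P(observation | A) / P(observation | B)."""
    odds = prior_odds_a_vs_b
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Start out much more certain of Theory A: odds of 10 to 1.
prior = 10.0

# The test turns greenish: very unlikely under A (say 5%), plausible
# under B (say 50%). Likelihood ratio = 0.05 / 0.5 = 0.1.
after_one_test = update_odds(prior, [0.1])

# Three *different* tests, each favouring B, keep shifting the odds.
# (Repeating the same test, as Feynman notes, adds nothing new.)
after_three_tests = update_odds(prior, [0.1, 0.1, 0.1])

print(after_one_test)     # 1.0  — the evidence has erased A's head start
print(after_three_tests)  # 0.01 — Theory B is now heavily favoured
```

Each fresh, independent observation multiplies the odds, which is why accumulating many different tests, rather than repeating one, is what moves a theory from "almost certainly false" towards "almost certainly true".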

Feynman’s third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect of getting stronger and stronger, not weaker. Knowledge is not static, and we need to be open to continually evaluating what we think we know. Here he uses an excellent example of analyzing mental telepathy:

“A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it’s a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you’d guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn’t count all the cases that didn’t work. And he just took the few that did, and then you can’t do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It’s a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don’t work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.”

We must refine our process for probing and experimenting if we’re to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that our results fit our expectations. If we carefully refine and re-test and the effect gets weaker all the time, it’s likely to not be true, or at least not to the magnitude originally hoped for.

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

“That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That’s not the problem. The problem is what is probable, what is happening.

It does no good to demonstrate again and again that you can’t disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it’s reasonable, whether it’s likely. And we do that on the basis of a lot more experience than whether it’s just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it’s impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn’t mean that everything’s false. We’ll find out.”

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it’s already happened. That’s cherry-picking. You have to run the experiment forward for it to mean anything:

“A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”

He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.”
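The trap the Princeton psychologist fell into can be demonstrated with a quick simulation. This sketch is mine, not Feynman's: it checks how often ten random left/right choices happen to alternate perfectly, which is rare if you predicted it in advance, but exactly the kind of striking pattern you will eventually "discover" if you scan the data after the fact:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def alternates(seq):
    """True if every choice differs from the one before it."""
    return all(a != b for a, b in zip(seq, seq[1:]))

# Pre-registered test: how often do 10 random left(0)/right(1) choices
# alternate perfectly? In theory (1/2)**9, about 0.002.
trials = 100_000
hits = sum(alternates([random.randint(0, 1) for _ in range(10)])
           for _ in range(trials))
print(hits / trials)  # ≈ 0.002

# The catch: alternation is only one of many patterns a researcher
# might notice after the event (all right, all left, first half vs
# second half, every third one...). Allow yourself enough patterns to
# "discover" post hoc and *something* will clear the 1-in-20 bar by
# chance alone — which is why the hypothesis must be tested on new data.
```

The probability is tiny only when the pattern was specified before looking; computed after the peculiarity has already been selected, the number means nothing, which is Feynman's point.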

The sixth trick is one that’s familiar to almost all of us, yet almost all of us forget about every day: the plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we’re talking about:

“The next kind of technique that’s involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won’t go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick people out at random, and you see that maybe forty of them are more than six feet so you guess that maybe everybody is. Sounds stupid.

Well, it is and it isn’t. If you pick the hundred out by seeing which ones come through a low door, you’re going to get it wrong. If you pick the hundred out by looking at your friends, you’ll get it wrong, because they’re all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don’t realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.”
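Feynman's figure of 10,000 samples for 1 percent accuracy falls out of the standard error of a sampled proportion. A short sketch (my own, using his forty-in-a-hundred example) shows how slowly accuracy improves with sample size:

```python
import math

def standard_error(p, n):
    """Standard error of a proportion p estimated from n random samples:
    sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

# Forty out of a hundred are taller than six feet: p = 0.4.
p = 0.4
for n in (100, 10_000, 1_000_000):
    print(n, round(standard_error(p, n) * 100, 2), "percentage points")

# With n = 10,000 the standard error is about half a percentage point,
# so the estimate is good to roughly 1 percent — Feynman's figure.
# Note the square root: 100x more samples buys only 10x more accuracy.
```

The square-root law is why "people don't realize how difficult it is to get the accuracy high": halving the error means quadrupling the sample.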

The last trick is to realize that many errors people make simply come from lack of information. They don’t even know they’re missing the tools they need. This can be a very tough one to guard against—it’s hard to know when you’re missing information that would change your mind—but Feynman gives the simple case of astrology to prove the point:

“Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it’s better to go to the dentist than other days. There are days when it’s better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it’s all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it’s a good day for business or a bad day for business has never been established. Now what of it? Maybe it’s still true, yes.

On the other hand, there’s an awful lot of information that indicates that it isn’t true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they’re going to be in the next 2,000 years is completely known. They don’t have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don’t agree with each other, so what are you going to do? Disbelieve it. There’s no evidence at all for it. It’s pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn’t believe and made a test, and so on, then there’s no point in listening to them.”

With thanks to Shane Parrish's reminder of this in Farnam Street, The Feynman Learning Technique


Decision Making - in the words of Barack Obama

14th January 2021

"My emphasis on process was born of necessity. What I was quickly discovering about the presidency was that no problem that landed on my desk, foreign or domestic, had a clean, 100 percent solution. If it had, someone else down the chain of command would have solved it already. Instead, I was constantly dealing with probabilities: a 70 percent chance, say, that a decision to do nothing would end in disaster; a 55 percent chance that this approach versus that one might solve the problem (with a 0 percent chance that it would work out exactly as intended); a 30 percent chance that whatever we chose wouldn’t work at all, along with a 15 percent chance that it would make the problem worse.

In such circumstances, chasing after the perfect solution led to paralysis. On the other hand, going with your gut too often meant letting preconceived notions or the path of least political resistance guide a decision—with cherry-picked facts used to justify it. But with a sound process—one in which I was able to empty out my ego and really listen, following the facts and logic as best I could and considering them alongside my goals and my principles—I realized I could make tough decisions and still sleep easy at night, knowing at a minimum that no one in my position, given the same information, could have made the decision any better. A good process also meant I could allow each member of the team to feel ownership over the decision—which meant better execution ..."

Source: Barack Obama in A Promised Land (p293)

After a Bad Year for Freedom, Try Liberalism

11th January 2021

A bad year for liberty


The Girl Who Asked Questions - An Obituary

20th March 2020

Katherine Johnson died on February 24th

The black mathematician who guided the first manned spaceflights and the first Moon landing was 101

As she ran her eyes over the flight-test calculation sheets the engineer had given her, Katherine Goble (as she then was) could see there was something wrong with them. The engineer had made an error with a square root. And it was going to be tricky to tell him so. It was her first day on this assignment, when she and another girl had been picked out of the computing pool at the Langley aeronautical laboratory, later part of NASA, to help the all-male Flight Research Unit. But there were other, more significant snags than simply being new.

Most obviously, he was a man and she was a woman. In 1953 women did not question men. They stayed in their place, in this case usually the computing pool, tapping away on their Monroe desktop calculators or filling sheets with figures, she as neatly turned out as all the rest. Men were the grand designers, the engineers; the women were “computers in skirts”, who were handed a set of equations and exhaustively, diligently checked them. Men were not interested in things as small as that.

And, most difficult of all, she was Coloured, and he was White. The lab might be recruiting black mathematicians, but the door was not fully open; her pool was called “Coloured Computing”, and was segregated. As she sat down with the new team that morning, the men next to her had moved away. She was not sure why, but the world was like that, and she refused to be bothered by it. Since the café was segregated, she ate at her desk. There was no Coloured restroom, so she used the White one. A few years back, when the bus taking her to her first teaching job in Marion, Virginia, had crossed the state line from West Virginia, all the blacks had been told to get off and take taxis. She refused until she was asked nicely. But it could be unwise to push a white man too far.

Nonetheless, this engineer’s calculation was wrong. If she did not ask the question, an aircraft might not fly, or might fly and crash. So, very carefully, she asked it. Was it possible that he could have made a mistake? He did not admit it but, by turning the colour of a cough drop, he ceded the point.

She asked more such questions, and they got her noticed. As the weeks passed, the men “forgot” to return her to the pool. Her incessant “Why?” and “How?” made their work sharper. It also challenged them. Why were their calculations of aerodynamic forces so often out? Because they were maths graduates who had forgotten their geometry, whereas she had not; her high-school brilliance at maths had led to special classes on analytic geometry in which she, at 13, had been the only pupil. Why was she not allowed to get her name on a flight-trajectory report when she had done most of the work, filling her data sheets with figures for days? Because women didn’t. That was no answer, so she got her name on the report, the first woman to be so credited. Why was she not allowed into the engineers’ lectures on orbital mechanics and rocket propulsion? Because “the girls don’t go”. Why? Did she not read Aviation Week, like them? She soon became the first woman there.

As NASA’s focus turned from supersonic flight to flights in space, she was therefore deeply involved, though still behind the scenes. This excited her, because if her first love was mathematics—counting everything as a child, from plates to silverware to the number of steps to the church—her second was astronomy, and the uncountable stars. A celestial globe now joined the calculator on her desk. She had to plot the trajectories of spacecraft, developing the launch window and making sure—as soon as humans took off—that the module could get back safely. This involved dozens of equations to calculate, at each moment, which bit of Earth the spacecraft was passing over, making allowances for the tilt of the craft and the rotation of the planet. She ensured that Alan Shepard’s Mercury capsule splashed down where it could be found quickly in 1961, and that John Glenn in 1962 could return safely from his first orbits of the Earth. Indeed, until “the girl”, as he called her (she was 43), had checked the figures by hand against those of the newfangled electronic computer, he refused to go.

That checking took her a day and a half. Later she calculated the timings for the first Moon landing (with the astronauts’ return), and worked on the Space Shuttle. She also devised a method by which astronauts, with one star observation checked against a star chart, could tell where they were. But in the galaxy of space-programme heroes, despite her 33 years in the Flight Research Unit, for a long time she featured nowhere.

It did not trouble her. First, she also had other things to do: raise her three daughters, cook, sew their clothes, care for her sick first husband. Second, she knew in her own mind how good she was—as good as anybody. She could hardly be unaware of it, when she had graduated from high school at 14 and college at 18, expert at all the maths anyone knew how to teach her. But she typically credited the help of other people, especially her father, the smartest man she knew, a farmer and a logger, who could look at any tree and tell how many board-feet he could get out of it; and who had sold the farm and moved the family so that she and her siblings could all get a fine schooling and go to college. And last, at NASA, she had not worked alone. She had been one of around a dozen black women mathematicians who were equally unknown. But when their story emerged in the 21st century, most notably in a book and a film called “Hidden Figures”, she had a NASA building named after her, a shower of honorary doctorates and—the greatest thrill—a kiss from Barack Obama as he presented her, at 96, with the Presidential Medal of Freedom.

This attention was all the more surprising because, for her, the work had been its own reward. She just did her job, enjoying every minute. The struggles of being both black and a woman were shrugged away. Do your best, she always said. Love what you do. Be constantly curious. And learn that it is not dumb to ask a question; it is dumb not to ask it. Not least, because it might lead to the small but significant victory of making a self-proclaimed superior realise he can make a mistake. 

As printed in The Economist February 29th 2020 edition
