Expert Judgment, Data, and Ethics: A Cautionary Tale

The experience of a former colleague illustrates that expert judgment can be more accurate than statistical analysis based on bad or insufficient data. It also illustrates the lack of business ethics found in some research firms, and the way young analysts can be pressured to do things they shouldn't. I have every reason to believe the following story is true; names have been changed or withheld to protect both the innocent and the guilty.

My friend Ethan was a young data analyst at an East Coast research firm in the early '90s. He was assigned to work with the firm's senior partner on a high-profile project. A regional telecom company was planning a merger with a neighboring telecom firm and had hired Ethan's employer to estimate the combined telco's likely market share post-merger.

This wasn’t a simple matter of adding the two companies’ current customers. The client needed estimates of their future market performance under varying competitive scenarios.

Ethan worked with the senior partner to craft a solid research design. At its center was a survey of a large, representative sample of customers and prospects in several states. Unfortunately, the client balked at the price tag. More unfortunately (albeit predictably), the senior partner agreed to a fee substantially below the original budget. Since he was unwilling to reduce his profit, the only way to cut costs was by dramatically reducing the sample size for the surveys. It should be noted that this particular client was unsophisticated enough to not even require confidence intervals.
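The story doesn't give the actual numbers, but the standard margin-of-error formula for a survey proportion shows why dramatically cutting the sample size guts a study like this. A minimal sketch (the `margin_of_error` helper and the sample sizes are illustrative, not from the story):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Precision improves only with the square root of n, so a budget cut
# that slashes the sample hurts more than it might appear:
for n in (100, 400, 1600):
    print(f"n={n}: ±{margin_of_error(n):.3f}")
# n=100 gives roughly ±10 points on a market-share estimate;
# quadrupling the sample only halves that.
```

In other words, a client sophisticated enough to demand confidence intervals would have seen immediately that the cut-rate sample couldn't distinguish, say, a 35% share from a 45% share.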

So my friend executed the study with woefully inadequate data. When the analysis was complete, he took the results to the senior partner for review.

This is how the meeting went, according to Ethan:

“He flipped through the draft report, shook his head, and muttered that it was all wrong. He took a pen and began annotating the pages with what he believed the correct market shares would be. Then he handed the report back to me and told me to work with our statistician and come up with results similar to the numbers he had just written down.”

Ethan knew what he was being asked to do was wrong, but lacked the confidence to push back. The senior partner had been a professional market researcher for 30 years and had an excellent reputation in the industry. Ethan, on the other hand, was a year out of grad school with a large student debt and an eight-month-old son to feed. So he went to the firm's statistician.

The way Ethan tells it, the statistician, for whom he had tremendous respect, was obviously uncomfortable but also unwilling to stand up to the senior partner. So, through a combination of selective outlier elimination, creative weighting, and good old-fashioned making shit up, they created a path from the raw data to the answer they wanted. The client was pleased, the jury-rigged results were submitted to the FCC in support of the client's merger application, and everyone lived happily ever after. Except for Ethan, who feels guilty about it to this day.


The irony, of course, is that the fake results (or, more to the point, the intuition of the senior partner who made them up) were undoubtedly more accurate than the actual results of the study. The senior partner, while unethical, knew the industry.