[the science]
Some feedback throws you off course.
Hogarth and his colleagues formalized the problem in a 2015 paper that mapped six distinct ways learning and application can fall out of sync.
They called this the "two-settings framework." One setting is where information is acquired. The other is where it gets used. Kind environments produce a match between the two. Wicked environments produce systematic mismatches.
The research identified specific mechanisms that create these mismatches:
Survivorship bias filters out failures before they can be observed. Entrepreneurs who failed disappear from the data set. Mutual funds that underperformed get dropped from portfolios. What remains looks like success, but the sample is incomplete. Conclusions drawn from these “survivors” don’t generalize to the full population.
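A minimal simulation makes the filter concrete. This is an illustrative sketch, not from the paper: ventures with a true average return of zero, where only those clearing an arbitrary "viability" threshold remain visible.

```python
import random

random.seed(0)

# 1000 ventures whose true returns average zero.
returns = [random.gauss(0, 1) for _ in range(1000)]

# Failures disappear: we only ever observe ventures that cleared a threshold.
survivors = [r for r in returns if r > 0.5]

true_mean = sum(returns) / len(returns)
observed_mean = sum(survivors) / len(survivors)

print(f"true mean of all ventures: {true_mean:+.2f}")
print(f"mean of visible survivors: {observed_mean:+.2f}")
```

The observed mean sits well above the true mean, even though nothing about the underlying population is favorable. The sample was filtered before anyone drew a conclusion.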
Censorship bias hides information beyond a cutoff point. A manager sees when employees fall short but rarely observes how much more they could have done. Performance above the threshold simply stays invisible. Learning from what you can see teaches you nothing about what you're blind to.
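The same idea can be sketched in code. In this illustrative example (my construction, not the paper's), any performance above the target is recorded only as "met target," so the excess is censored at the cutoff.

```python
import random

random.seed(1)

target = 100
# True performance varies widely around the target.
performance = [random.gauss(100, 15) for _ in range(1000)]

# Anything above the target is recorded as exactly the target:
# the manager never sees how much more was possible.
observed = [min(p, target) for p in performance]

true_mean = sum(performance) / len(performance)
censored_mean = sum(observed) / len(observed)

print(f"true mean performance:     {true_mean:.1f}")
print(f"censored (observed) mean:  {censored_mean:.1f}")
```

The observed average is biased downward: every shortfall registers in full, while every overshoot is clipped to the cutoff.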
Selection bias narrows the sample before analysis begins. Investors judge future performance by looking only at top-performing funds. Analysts study successful businesses to find what made them succeed without examining the failures that did the same things. The logic feels sound, but it rests on inference from a sample that was narrowed before anyone looked at it.
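A quick sketch shows why studying only the winners misleads. In this illustrative simulation (not from the paper), no fund has any skill: both periods are pure noise. Selecting the top decile by period-one returns still makes them look skilled, until period two reveals the regression the selection hid.

```python
import random

random.seed(2)

n = 1000
period1 = [random.gauss(0, 1) for _ in range(n)]
period2 = [random.gauss(0, 1) for _ in range(n)]  # independent of period 1

# Select only the top 10% of funds by period-1 performance.
top = sorted(range(n), key=lambda i: period1[i], reverse=True)[: n // 10]

top_p1 = sum(period1[i] for i in top) / len(top)
top_p2 = sum(period2[i] for i in top) / len(top)

print(f"top decile, period 1: {top_p1:+.2f}")  # looks like skill
print(f"same funds, period 2: {top_p2:+.2f}")  # regresses toward zero
```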
The hot stove effect shows how early negative results can shut down exploration entirely. One bad outcome makes people avoid an option forever. The system never self-corrects because the rejected path is never revisited. A manager abandons a new process after one failure, never learning whether it would have worked with time.
In each case, people form confident beliefs. They genuinely feel they have learned something true. But the structure of the environment has filtered, delayed, or distorted the information before it reached them.
The takeaway isn’t that experience is unreliable, just that reliability depends on structure. Without a match between where knowledge is gained and where it’s applied, feedback tends to reinforce whatever pattern happened to appear first.