|
How To Tell If You're Working Smarter, Or Just Harder
Complex explanations often command instant respect.
The more complex the system, the more rigorous it appears. More variables, more precision, right? Dense language carries the tone of authority. Complexity signals sophistication, and sophistication is easily mistaken for truth.
Some problems deserve that weight. Physics, markets, human behavior. These are layered systems. Yet many explanations grow complicated for a quieter reason. They are compensating for uncertainty.
When our understanding is limited, simple models fail. Predictions break down, and edge cases accumulate faster than explanations can keep pace. Our instinct isn’t to revisit the foundation, but to preserve it by adding variables, carving out exceptions, and stacking layers of abstraction that protect our original theory. So the structure grows around its failures, becoming more elaborate with each problem it absorbs, until it becomes too unwieldy to test, too hedged to falsify, and far easier to defend than to fix.
John von Neumann moved in the opposite direction. Whether formalizing game theory or designing computer architecture, he compressed messy problems into cleaner structures. His gift was a calculated form of reduction: the ability to pull clarity from the sprawl.
Complexity can reflect depth. It can also conceal confusion. Learning to tell the difference has shaped some of the sharpest minds in science.
|
|
|
[the spark]
Fitting The Elephant in the Room
Von Neumann worked across mathematics, physics, economics, and computing with a signature tool: the ability to strip problems down to their logical structure.
Where others added layers, he looked for the cleanest path forward. His skepticism wasn't aimed at complexity itself, but at complexity that didn't pay for itself in predictive power.
He’s got this famous one-liner: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
Bit weird, but it's a pointed warning. Any model can be made flexible enough to match the data you already have. Add enough variables, carve out enough exceptions, and you can fit almost anything. But that flexibility comes at a cost. A model that explains everything often predicts nothing.
Von Neumann understood that a theory's real test wasn't how well it accounted for what had already happened, but whether it held up when faced with something new.
Models that grow by accumulation rather than refinement tend to collapse under their own weight. They become too tailored to past noise, too brittle to generalize. What looks like precision is often just over-adjustment.
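A minimal sketch of that failure mode in Python with numpy (the straight-line process, the noise level, and the polynomial degrees are all invented for illustration): the flexible model hugs the points it was given, then typically does worse than the simple one on fresh draws from the same process.

```python
# Overfitting sketch: a high-degree polynomial "explains" the sample it was fit on
# better than a straight line, then typically loses on new data from the same process.
# All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

def sample(n):
    x = rng.uniform(-1, 1, n)
    y = 0.5 * x + rng.normal(0, 0.2, n)   # the true structure is just a line plus noise
    return x, y

x_old, y_old = sample(15)    # the data we already have
x_new, y_new = sample(200)   # data we haven't seen yet

for degree in (1, 9):
    coeffs = np.polyfit(x_old, y_old, degree)
    old_err = np.mean((np.polyval(coeffs, x_old) - y_old) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: error on the data we had {old_err:.3f}, on new data {new_err:.3f}")
```

The degree-9 fit wins on the fifteen points it was shown and tends to lose on the two hundred it wasn't. That gap is the cost of flexibility.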
His intuition has since been formalized in statistics and model selection theory, where researchers have confirmed what von Neumann saw in the elephant: past a certain point, added flexibility weakens a model's ability to generalize beyond the data it was trained on.
|
|
|
[the science]
The problem with too much flexibility.
In 1973, the Japanese statistician Hirotugu Akaike formalized the intuition von Neumann had voiced decades earlier. He developed the Akaike Information Criterion, a method for evaluating whether a model genuinely explains a phenomenon or simply mirrors the data it was trained on.
The core problem Akaike saw is that complex models fit existing data better. They capture more variation and account for more edge cases. Basically, they cast a wider net and catch more fish. But that flexibility comes with a hidden cost. A wider net also hauls in barnacles and debris. Some of what gets caught isn't useful, but if you measure success by the pound, you won't know the difference.
Akaike's solution was to penalize complexity. His criterion balances goodness of fit against the number of parameters in a model, so each additional variable must earn its place. If it doesn't improve predictive power enough to justify its presence, the model is penalized.
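As a rough sketch rather than Akaike's derivation: for least-squares fits with Gaussian errors, his criterion reduces (up to an additive constant) to n * ln(RSS / n) + 2k, where k counts the fitted parameters and lower scores win. The toy comparison below, on invented data, shows how the 2k penalty can outweigh the small improvements in fit that extra parameters buy.

```python
# Hedged sketch of the Akaike Information Criterion for least-squares fits.
# Under Gaussian errors, AIC reduces (up to an additive constant) to
#   n * ln(RSS / n) + 2 * k,   where k is the number of fitted parameters.
# Lower is better. The data and polynomial degrees are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 40)
y = 0.5 * x + rng.normal(0, 0.2, x.size)   # true structure: a line plus noise

def aic_for_degree(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 1              # polynomial coefficients being estimated
    n = x.size
    return n * np.log(rss / n) + 2 * k

for degree in (1, 3, 9):
    print(f"degree {degree}: AIC = {aic_for_degree(degree):.1f}")
# The higher-degree fits shave a little off the RSS, but the 2k penalty
# usually leaves the simple model with the lowest (best) score here.
```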
When researchers compared models using this criterion, a pattern emerged. Larger models often won on in-sample accuracy but lost when tested on new data. They had learned the noise, not the structure.
This tradeoff sits at the center of model selection theory. Complexity increases fit. Simplicity increases generalizability. The optimal model captures what matters and discards what doesn't.
When complexity goes unchecked, we can get a false sense of confidence in all the wrong things. The best models strike a balance, capturing what matters without chasing every ripple.
The goal isn’t to explain everything. It is to explain what matters, with as little as necessary.
|
|
|
[the takeaways]
1) Treat Complexity as a Cost. Every variable you add should justify itself. Before accepting a complicated explanation, ask what predictive power it actually contributes. Complexity is not neutral. It borrows against clarity.
2) Ask What the Model Predicts. A useful framework should generate testable claims about the future, not just account for what already happened.
3) Prefer Compression Over Expansion. If your model keeps growing to absorb new exceptions, your grasp of the problem may not be growing with it.
4) Test Outside the Original Context. Evaluate your models on new data, not the same set they were built from; a short sketch follows this list. Generalization is the real measure of whether a model captured what matters or simply learned to mirror what it saw.
5) Watch for Unfalsifiable Detail. When a model explains everything, it really explains nothing. Excess nuance can shield weak assumptions from scrutiny. Real clarity requires the possibility of being wrong.
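A minimal sketch of takeaway 4, assuming scikit-learn is available for the split; the dataset and the deliberately flexible model are invented for illustration, and train_test_split is just one convenient way to hold data back.

```python
# Holdout sketch for takeaway 4: fit on one slice of the data,
# judge on the slice the model never saw. Dataset and model are invented.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100)
y = 0.5 * x + rng.normal(0, 0.2, x.size)

x_fit, x_hold, y_fit, y_hold = train_test_split(x, y, test_size=0.3, random_state=1)

coeffs = np.polyfit(x_fit, y_fit, deg=9)   # a deliberately flexible model
seen = np.mean((np.polyval(coeffs, x_fit) - y_fit) ** 2)
unseen = np.mean((np.polyval(coeffs, x_hold) - y_hold) ** 2)
print(f"error on data it was built from: {seen:.3f}")
print(f"error on held-out data:          {unseen:.3f}")   # the number that matters
```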
|
|
|
Stay tuned for next week's newsletter to get one step closer to finding your genius.
|
|
|