When Man imagines progress through time, he usually speaks of three stages. By adding a fourth stage, we can bend the hypothetical line of progress into a circle (or something else).
When Man encounters a difficult phenomenon, he usually splits it in two, into a slightly easier and a harder variant. Things can be complicated or complex. Order can be planned or spontaneous. Arithmetic can be elementary or higher.
But what if further insight into the phenomenon calls for refinement?
First shalt thou take out the Holy Pin. Then shalt thou count to three, no more, no less. Three shall be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, neither count thou two, excepting that thou then proceed to three. Five is right out. Once the number three, being the third number, be reached, then lobbest thou thy Holy Hand Grenade of Antioch towards thy foe, who, being naughty in My sight, shall snuff it.
No, that is not what happens. Simply proceeding to three seems a bit lame, and four too straightforward (split each variant in two again). In fact, two instances of going from two to five caught my eye.
In 1921, Frank H. Knight (and, around the same time, John Maynard Keynes) thought and wrote about randomness and distinguished two types of it: risk and uncertainty. The former comprises all situations in which statistics is useful (like throwing dice or playing roulette), the latter comprises the rest (like election outcomes). In 2010, after the financial crisis of 2007-2009 had burnt through the world of finance, Andrew W. Lo and Mark T. Mueller concluded that a refinement might be in order. In their paper “WARNING: Physics Envy May Be Hazardous To Your Wealth!” they went from two to five:
Level 1 (“complete certainty”) is hardly ever found in the real world, but some phenomena come close, like Newton’s laws of motion in not-too-eccentric situations.
Level 2 (“risk without uncertainty”) corresponds to Frank H. Knight’s definition of risk. Think of a fair casino where the rules are public, are always followed, and are never changed.
Level 3 (“fully reducible uncertainty”) differs from level 2 in only one respect: the rules are not public but have to be inferred. This is the realm where classical (frequentist) statistics shines: collect enough data and you can infer the rules to arbitrary precision.
Level 4 (“partially reducible uncertainty”) introduces another difficulty: from time to time, the casino changes its rules. Your previous data-gathering endeavours become obsolete and you have to start anew; but first you have to notice that the rules have actually changed. On level 4, not all hope is lost; Bayesian statistics may have some value here (both this level and the previous one are illustrated in a small simulation after Level 5).
Level 5 (“irreducible uncertainty”) is meant as the limit case, absolutely hopeless from a statistical point of view but maybe never observed in the wild. The casino is run by madmen who are changing the rules all the time, always faster than you can react.
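To make Levels 3 and 4 a bit more concrete, here is a minimal simulation sketch (my own toy example, not taken from the Lo/Mueller paper; the win probabilities, sample sizes, and the forgetting rate are arbitrary choices). As long as the casino’s win probability stays fixed, the plain frequency estimate converges, just as Level 3 promises. Once the casino silently changes the probability halfway through, the pooled estimate gets stuck between the two regimes, while a simple recency-weighted estimate, standing in for the heavier Bayesian machinery hinted at for Level 4, adapts.

```python
import random

random.seed(42)

def play(p):
    """One round of the casino game: win (1) with probability p, lose (0) otherwise."""
    return 1 if random.random() < p else 0

# Level 3: a fixed but unknown win probability. Collect enough data and the
# running frequency converges to the true value.
p_fixed = 0.3
wins = 0
for n in range(1, 100_001):
    wins += play(p_fixed)
    if n in (100, 10_000, 100_000):
        print(f"Level 3, n={n:>6}: estimate {wins / n:.4f} (true value {p_fixed})")

# Level 4: the casino changes the rules halfway through. The pooled estimate
# mixes both regimes; an exponentially weighted average "forgets" old data.
p_before, p_after, n_total = 0.3, 0.6, 100_000
pooled_wins, ewma, alpha = 0, 0.5, 0.001   # alpha: arbitrary forgetting rate
for n in range(1, n_total + 1):
    p_now = p_before if n <= n_total // 2 else p_after
    x = play(p_now)
    pooled_wins += x
    ewma = (1 - alpha) * ewma + alpha * x

print(f"Level 4, after the rule change: true value {p_after}")
print(f"  pooled estimate:    {pooled_wins / n_total:.4f} (stuck between the regimes)")
print(f"  forgetful estimate: {ewma:.4f} (has adapted to the new rules)")
```

The point is not the particular estimator but the failure mode: a strategy that works perfectly well on Level 3 silently breaks on Level 4.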
Classifications like this are by no means inevitable, but they can be very useful. When you try to model a certain phenomenon (financial markets, say, or the spread of a disease, or the climate 100 years from now, or the workings of the brain), the danger lies in assuming a lower level of uncertainty in your model than actually holds in reality.
My second example (and no, I won’t go to five examples) is emergence, the phenomenon of “macro” versus “micro”: the properties and behavior of a composite system are not readily predictable from the properties and behavior of its parts.
A very common distinction (introduced by Mark A. Bedau in 1997) is between weak and strong emergence. Under strong emergence, macro properties and behavior cannot be deduced from micro properties and behavior. Under weak emergence, macro properties and behavior are merely unexpected given knowledge of micro properties and behavior. All emergence is strong until proven weak.
In 2024, Sean M. Carroll and Achyuth Parola finally made the jump and proposed a five-type hierarchy of emergence. The paper is quite technical, and I will try to give simple examples (hoping that my understanding is correct; cf. also W. Briggs’ take).
Type 0 (featureless emergence) is not even weak. Macro is not unexpected given micro. A separate macro theory is a waste of time.
Type 1a (direct, local emergence): micro completely determines macro, but the macro theory allows for useful simplifications. To track and predict the movement of the objects in the solar system, center-of-mass motion according to Newton’s laws (there they are again) goes a long way, and you may forget about the individual particles forming the sun, planets, and moons (thinking of such collections of particles as objects is, of course, itself another useful simplification). Similarly, you can explain the circling of a fluid around the drain without tracking the molecules involved. A small simulation after Type 3 illustrates the center-of-mass shortcut.
Type 1b (direct, incompressible emergence): micro still completely determines macro, but you have to pay close attention to the micro level; simplifying at the macro level introduces too much error. For example, John H. Conway’s Game of Life allows for fascinating macro objects (like Gliders moving across the board), but change one bit and the Glider is no more (see the second sketch after Type 3).
Type 2 (nonlocal emergence): this is still weak emergence, but what happens at the macro level takes on a global character: it can no longer be explained by local collections of entities at the micro level. Examples from physics quietly give way to examples from sociology. You can hardly explain a society, a war, or current fashion without considering all the human beings involved.
Type 3 (augmented emergence) is basically synonymous with strong emergence. The macro level resists all attempts at fully explaining it from the micro level. Consciousness is usually put forward as an example. And the existence of a human soul would, of course, preclude any explanation of human behavior from those little neurons alone.
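Type 1a lends itself nicely to a few lines of code. The following sketch is my own toy illustration, not an example from the Carroll/Parola paper: fifty particles are pushed around by messy, randomly changing internal forces that are physically arbitrary except for obeying Newton’s third law, plus uniform gravity. The one-line “macro theory” for the center of mass predicts the outcome without knowing anything about those internal forces.

```python
import numpy as np

rng = np.random.default_rng(0)

N, dt, steps = 50, 1e-3, 2_000
g = np.array([0.0, -9.81])                  # uniform external acceleration (gravity)
m = rng.uniform(1.0, 2.0, size=N)           # particle masses
pos = rng.normal(size=(N, 2))               # initial positions
vel = rng.normal(size=(N, 2))               # initial velocities

M = m.sum()
com0 = (m[:, None] * pos).sum(axis=0) / M   # initial center of mass
vcom0 = (m[:, None] * vel).sum(axis=0) / M  # initial center-of-mass velocity

for _ in range(steps):
    # Messy, randomly changing internal forces -- physically arbitrary, but
    # obeying Newton's third law: F[i, j] = -F[j, i], so they cancel in total.
    F = rng.normal(size=(N, N, 2))
    F = F - F.transpose(1, 0, 2)
    acc = g + F.sum(axis=1) / m[:, None]
    vel += acc * dt
    pos += vel * dt

t = steps * dt
com = (m[:, None] * pos).sum(axis=0) / M
predicted = com0 + vcom0 * t + 0.5 * g * t**2   # the one-line "macro theory"
print("center of mass, simulated:", com)
print("center of mass, predicted:", predicted)  # agrees up to the crude integrator's error
```

All the microscopic detail cancels out of the quantity we actually care about; that is what makes the simplification legitimate rather than merely convenient.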
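For Type 1b, here is an equally hedged sketch of Conway’s Game of Life (my own minimal implementation, using a set of live cell coordinates; the choice of which cell to flip is mine). An undisturbed glider reproduces itself after four generations, shifted diagonally; flip a single neighbouring cell and, in this particular case, the pattern decays and dies out within a few generations.

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

def run(cells, generations):
    for _ in range(generations):
        cells = step(cells)
    return cells

# The standard glider:  .O.
#                       ..O
#                       OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# After 4 generations an undisturbed glider reproduces itself, shifted by (1, 1).
shifted = {(x + 1, y + 1) for (x, y) in glider}
print("glider after 4 steps is the same glider, moved:", run(glider, 4) == shifted)

# Flip a single cell next to the glider and the macro object is gone:
# this damaged pattern decays and dies out entirely within a few generations.
damaged = glider | {(0, 0)}
print("damaged pattern after 4 steps:", sorted(run(damaged, 4)))
print("damaged pattern after 8 steps:", sorted(run(damaged, 8)))
```

Exactly which bit you flip matters, and that is the point: at the micro level there is no slack left to compress away.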
Just like the five-level hierarchy of randomness, the five-type hierarchy of emergence may help when thinking about modeling. To rephrase the earlier warning: the danger in modeling lies in assuming a lower type of emergence in your model than actually holds in reality.
But why two and five?
Why does the Big Five personality model have five traits, with two aspects each? Is this really just the result of cold, hard empirical science?
Is it maybe the combination of our biology with our desire to communicate? Imagine giving a talk: you might start by referring to your two hands (on the one hand..., on the other hand...), and then go into detail by counting on your five fingers.
Hm, the number of combinations of two binary features is 2^2=4. To cover the tracks, one of the four is split in two, and the result is 5. Is it a coincidence that Matthew 25:15 is divisible by 5? :-)