Some things are complicated, and some are complex. The former can be understood, bottom-up, a priori, and you can improve your understanding by working harder. The latter can be described, top-down, a posteriori, and you can improve your description by living with them. About complicated things we write manuals, about complex things we write poetry.
Biological classification of a virus is complicated. Infection patterns of that virus in actual human populations are complex.
Weather forecasts are complicated. The earth’s climate is complex.
In mathematics, the real numbers are complicated, but all your real analysis (convergence, continuity, differentiation, integration; is anyone still reading?) will not take you closer to the aptly named complex numbers. Yet, once the veil has been lifted, you will notice that complex analysis is less complicated than real analysis.
Models foray into the complicated, and fiercely so, equipped with today’s computing power.1 But they ultimately fail when confronted with the complex.
The network is one of our templates for very complicated things. Networks consist of a finite (but often very large) number of nodes, which are connected by a finite (but even larger) number of edges, and information is exchanged between nodes along edges. Each node reacts to received information in a specific way, and sends reactions accordingly. Some nodes form the input layer, receiving information from the world outside. Some nodes form the output layer, sending information back to the world. The remaining nodes are hidden to the world, usually also organized into layers. By sheer network size, type of information (discrete versus continuous), and timing (fixed frequency versus continuous inflow), networks can be complicated indeed.
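The architecture described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual model: the layer sizes, weights, and the tanh reaction function are all arbitrary choices made for the example.

```python
import math

def layer(inputs, weights, biases):
    """Each node sums its weighted inputs and reacts in a specific way (here: tanh)."""
    return [
        math.tanh(sum(w * x for w, x in zip(node_weights, inputs)) + b)
        for node_weights, b in zip(weights, biases)
    ]

def feedforward(inputs, layers):
    """Information enters at the input layer, passes through hidden layers, and exits at the output layer."""
    signal = inputs
    for weights, biases in layers:
        signal = layer(signal, weights, biases)
    return signal

# Illustrative network: 2 input nodes -> 3 hidden nodes -> 1 output node,
# with arbitrary edge weights chosen only for demonstration.
layers = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0, 0.5]], [0.0]),                                  # output layer
]
print(feedforward([1.0, 0.5], layers))
```

Even this toy already shows why networks can be complicated: everything is finite and mechanically specified, and understanding it is purely a matter of working through it, bottom-up.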
The temptation to interpret the network’s output as complex always lurks. And to be fair: not everybody is using the terms “complicated” and “complex” the way I do.2
Maybe the first example that comes to mind when thinking about networks is the World Wide Web. If all humans suddenly disappeared, the infrastructure would continue to exist for some time (probably a surprisingly short time). The World Wide Web would therefore continue to exist as well, and it would even change all the time, thanks to the algorithms roaming through it. But without the input from human beings, would you expect complexity? Complexity in the decline, maybe, as the power goes off in more and more parts of the world (or monkeys take over the buildings here and there, and hit a few buttons).
We are used to thinking about the human brain as some kind of network of neurons as well. Human behaviour is complex, for sure, but can this be attributed to the network-like structure of the brain? This is indeed an important question, because we are currently trying to read complexity into Large Language Models (LLMs, such as ChatGPT) that explicitly build on this idea of brain-as-network.
I do not buy into that. Complexity can be found in LLMs only because these are parasitic on human complexity – both with respect to the models’ training data and to the outer “alignment” layers, which consist of human beings. I have referred to 1 Corinthians 15 repeatedly, to stress the importance of bodily resurrection. If you dare, place your LLM (or other AI) in a body (more specifically, a seed), set it free, and observe what happens. Nothing, I guess.
But what about network-like objects in the large, where the nodes are human beings or institutions? We often prefer morally-hued expressions like “community”3 (if we like them) or “cabal” (if we don’t). Eugyppius has repeatedly made the point that technocratic hierarchies in liberal democracies operate in diffuse ways: very efficient toward certain ends but completely off the mark elsewhere, burning enormous amounts of energy on one specific goal, and then almost unable to change course:
It’s the predilection of our institutions for intractable problems and highly complicated solutions via which they justify their own existence and ensure their propagation and the expansion of their jurisdiction. Once they get a hold of something like a virus, which spreads via social contact, you will see nothing but the proliferation and brutal enforcement of anti-social anti-human policies again and again.
This convergence towards certain outcomes indeed indicates that such structures are complex, not complicated. The network paradigm, as described above, is therefore inadequate as a basis for a model. The Machine cannot be engineered, but only given in to, or fought against.
Boy, that was different during the late 90s, when we simulated competing ant colonies on SUN workstations that would be thrashed by today’s average smartphone.
Vesper Stamper has noted this (she is using the term “network” in a less technical way).
This reminded me of when I first started to learn about complex numbers. It seemed like a bit of a miracle that you could take a problem, usually with sines and cosines, and (seemingly) make it more complicated by turning everything into complex numbers. Once you'd got the complex exponentials in place, the algebra became straightforward, and you could do the work and then 'recover' the answer you were originally looking for by taking the real (or imaginary) part as necessary. Magic.
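The trick the comment describes rests on Euler's formula (a standard identity, added here for illustration):

```latex
e^{i\theta} = \cos\theta + i\sin\theta
\quad\Longrightarrow\quad
e^{i(a+b)} = e^{ia}\,e^{ib}
= (\cos a + i\sin a)(\cos b + i\sin b)
```

Multiplying out the right-hand side and taking the real part recovers the addition formula \(\cos(a+b) = \cos a\cos b - \sin a\sin b\): the trigonometry falls out of plain algebra with exponentials.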
Of course, when I learned more about complex numbers it became clear that this 'magic' went much deeper. The Cauchy Integral Formula, where the value of a function 'inside' some closed boundary is determined entirely by its values AT the boundary, a kind of 'holographic' principle, always seemed like magic of the highest order to me.
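For reference, the standard statement of that formula: if \(f\) is holomorphic on and inside a simple closed contour \(\gamma\), and \(a\) lies inside \(\gamma\), then

```latex
f(a) = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z-a}\, dz
```

Every value of \(f\) in the interior is reconstructed from its values on the boundary curve alone.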
Does AI have the potential to construct this kind of 'magic'? Are we going to see beautiful theorems constructed by it?
I'm not convinced. I share Penrose's view (though he never fully nailed down the logic) that, in some sense, the way humans think is "non-algorithmic". In other words, you can't replace what human beings are capable of with some Universal Turing Machine. Penrose thinks you might need quantum mechanics to achieve human-like levels of intuition, but I'm not convinced by that, either.
On the other hand, if we really are just "computers made of meat" then there's no reason, in principle, why we can't (one day) create true AI.
This reminds me of one of my favorite quotes, which is also my life motto. "All that is too complex is unnecessary, and it is simple that is needed." (Mikhail Kalashnikov) ;-)