The Case for a Critical Rationalist Approach to AGI

October 28, 2019

Artificial General Intelligence

The human mind is a uniquely powerful object. It is the only object we know of that is able to explain all that is explicable, or to understand all that is understandable. It is this capability for explanation that allows humans to make progress in science, technology, morality, and all other important areas. This ability to understand, and thus to make progress, is unique to humans for the time being, but it need not remain so.

Among the accomplishments of the human mind are two important realizations:

The theory of computation tells us that the computers we have today can simulate any computable process, if they are given enough time and memory.

Physics tells us that the laws that govern the universe are computable, which means that all processes in the universe are computable processes.

These two facts, taken together, tell us that the computers we have today can simulate any physical process that takes place in the universe.

The human mind is instantiated by the human brain, which is a physical object that obeys all the normal laws of physics, so whatever the human mind is doing can be simulated on a computer. This means that, given a sufficient understanding of how the human mind works, we can create a computer program that shares the ability that has so far been unique to humans: the ability to explain all that is explicable. Such a program would be what is called an Artificial General Intelligence, or AGI.

Bottom-Up vs Top-Down AGI

The most conceptually straightforward way to build an AGI would be to simulate an entire human brain on a neuron-by-neuron basis. We would need only to understand how individual neurons work, scan a brain in enough detail to capture the state of each neuron at a particular moment in time, and then simulate every neuron and the interactions between them. Such a simulation would be a perfect recreation of a human mind. I'll call this approach to creating an AGI the bottom-up approach.
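To make the shape of this approach concrete, here is a minimal sketch of what the inner loop of such a simulation might look like. It uses a leaky integrate-and-fire neuron model, and every number in it (network size, thresholds, weights, time step) is an illustrative placeholder; a real bottom-up simulation would need biologically measured values for each neuron and synapse, obtained from a scan.

```python
import numpy as np

# Toy bottom-up sketch: a network of leaky integrate-and-fire neurons.
# All parameters here are illustrative stand-ins, not biological values.

rng = np.random.default_rng(0)

N = 1000          # number of neurons (a real brain has roughly 86 billion)
dt = 1e-3         # simulation time step, in seconds
tau = 0.02        # membrane time constant
v_thresh = 1.0    # firing threshold
v_reset = 0.0     # post-spike reset potential

# Synaptic weights; in a real simulation these would come from a brain scan.
weights = rng.normal(0, 0.1, size=(N, N))
v = np.zeros(N)   # membrane potentials

def step(v, external_input):
    """Advance every neuron by one time step and return the new state."""
    spikes = v >= v_thresh                        # which neurons fire now
    v = np.where(spikes, v_reset, v)              # reset neurons that fired
    synaptic = weights @ spikes.astype(float)     # input from spiking neighbours
    dv = (-v / tau + synaptic + external_input) * dt  # leaky integration
    return v + dv, spikes

for t in range(1000):                             # simulate one second
    v, spikes = step(v, external_input=rng.normal(0, 5.0, size=N))
```

The point of the sketch is only that the bottom-up approach reduces to a loop like this, run at sufficient scale and fidelity; no understanding of intelligence itself appears anywhere in the code.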

An alternative approach to creating an AGI is what I'll call the top-down approach. While the bottom-up approach seeks to recreate a human mind by understanding the smallest elements of the brain, letting the high-level properties of the mind emerge from the bottom up, the top-down approach seeks to understand the high-level properties of the mind directly, and then build a new system that implements those properties, rather than copying the human brain.

The top-down approach has several advantages over the bottom-up approach:

The top-down approach will allow us to improve on design flaws present in the human brain. The human brain was designed by evolution, and while evolution is very powerful, the first generally intelligent creature it produced may not be perfectly designed. There may be significant inefficiencies in the design of the human brain that the bottom-up approach would copy into an AGI; the top-down approach will allow us to correct them.

The top-down approach will likely be much more efficient than the bottom-up approach. For the bottom-up approach to work, it must simulate the details of every neuron in the human brain. The top-down approach doesn't need to simulate the brain's hardware at all; it can implement intelligence directly.

The top-down approach does not require any advancement in brain-scanning technology to work, while the bottom-up approach does.

The top-down approach has one major disadvantage: we don't yet understand the high-level workings of the mind well enough. To make this approach work, we will need to advance our understanding of how the mind accomplishes what it does. In other words, we'll need to advance our understanding of epistemology.

Epistemology and AGI

Epistemology is the field of philosophy that deals with knowledge, including questions like "What is knowledge?" and "How is knowledge created?". The second question is of particular interest for AGI, because the defining feature of an AGI is that it is a universal knowledge creator: it has the ability to create the knowledge to explain anything that can be explained.

One common answer to the question "How is knowledge created?" is a process known as "induction", in which knowledge is created by "generalizing" or "extrapolating" from repeated experience, based on an assumption like "the future will resemble the past" or "the unseen will resemble the seen". Schools of epistemology that hold that induction is responsible for knowledge creation are called "inductivist". A different school of epistemology, Critical Rationalism (or CR), says that induction is impossible and isn't what's responsible for knowledge creation. Here is a criticism of inductivism by David Deutsch, one of the most important thinkers in CR, from chapter 1 of The Beginning of Infinity:

... no one has ever managed to formulate a ‘principle of induction’ that is usable in practice for obtaining scientific theories from experiences. Historically, criticism of inductivism has focused on that failure, and on the logical gap that cannot be bridged. But that lets inductivism off far too lightly. For it concedes inductivism’s two most serious misconceptions.

First, inductivism purports to explain how science obtains predictions about experiences. But most of our theoretical knowledge simply does not take that form. Scientific explanations are about reality, most of which does not consist of anyone’s experiences. Astrophysics is not primarily about us (what we shall see if we look at the sky), but about what stars are: their composition and what makes them shine, and how they formed, and the universal laws of physics under which that happened. Most of that has never been observed: no one has experienced a billion years, or a light year; no one could have been present at the Big Bang; no one will ever touch a law of physics – except in their minds, through theory. All our predictions of how things will look are deduced from such explanations of how things are. So inductivism fails even to address how we can know about stars and the universe, as distinct from just dots in the sky.

The second fundamental misconception in inductivism is that scientific theories predict that ‘the future will resemble the past’, and that ‘the unseen resembles the seen’ and so on. (Or that it ‘probably’ will.) But in reality the future is unlike the past, the unseen very different from the seen. Science often predicts – and brings about – phenomena spectacularly different from anything that has been experienced before. For millennia people dreamed about flying, but they experienced only falling. Then they discovered good explanatory theories about flying, and then they flew – in that order. Before 1945, no human being had ever observed a nuclear-fission (atomic-bomb) explosion; there may never have been one in the history of the universe. Yet the first such explosion, and the conditions under which it would occur, had been accurately predicted – but not from the assumption that the future would be like the past. Even sunrise – that favourite example of inductivists – is not always observed every twenty-four hours: when viewed from orbit it may happen every ninety minutes, or not at all. And that was known from theory long before anyone had ever orbited the Earth.

It is no defence of inductivism to point out that in all those cases the future still does ‘resemble the past’ in the sense that it obeys the same underlying laws of nature. For that is an empty statement: any purported law of nature – true or false – about the future and the past is a claim that they ‘resemble’ each other by both conforming to that law. So that version of the ‘principle of induction’ could not be used to derive any theory or prediction from experience or anything else.

But if induction isn't responsible for creating the knowledge in human minds, what is? CR's answer is that knowledge is created through an iterative process of conjecture, guessing by varying and combining old ideas, and criticism, attempting to find and discard incorrect ideas. In this way, the process by which knowledge is created in the human mind closely resembles biological evolution. In biological evolution, genes blindly mutate to produce new genes, and genes poorly adapted to surviving and reproducing die out. Similarly, in the human mind, ideas are blindly varied to produce new ideas, and ideas that fail to stand up to criticism die out. Because of this similarity, CR is sometimes referred to as "evolutionary epistemology".
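To make the analogy concrete, here is a deliberately simple variation-and-selection loop. The "conjectures" are bit strings and the "criticism" is a fixed test function; both are placeholders chosen for illustration, not claims about how minds actually represent ideas or criticize them.

```python
import random

# Toy illustration of the variation-and-selection structure that CR says
# underlies knowledge creation. The representation (bit strings) and the
# criticism (a fixed scoring function) are illustrative placeholders;
# in a mind, criticism is itself made of ideas, not a fixed oracle.

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stands in for "the problem to be solved"

def criticize(conjecture):
    """Return the number of errors the criticism finds (0 = survives)."""
    return sum(a != b for a, b in zip(conjecture, TARGET))

def vary(conjecture):
    """Blindly produce a new conjecture by mutating an old one."""
    new = conjecture[:]
    i = random.randrange(len(new))
    new[i] = 1 - new[i]              # flip one bit, with no foresight
    return new

# Start from a blind guess and iterate: vary, criticize, keep the survivor.
best = [random.randint(0, 1) for _ in TARGET]
while criticize(best) > 0:
    candidate = vary(best)
    if criticize(candidate) <= criticize(best):  # discard ideas that fare worse
        best = candidate

print(best)  # converges on TARGET through blind variation and selective retention
```

Notice that the variation step never looks at the problem at all; all of the "knowledge" in the final answer comes from error-correction, not from extrapolating observed patterns.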

This process of conjecture and refutation does not rely on induction, because conjecture creates new ideas blindly, not by "generalizing" or "extrapolating" from observations. Therefore, CR does not have any of the problems associated with induction that other epistemologies have to deal with.

I think that CR is the best understanding of epistemology that we currently have. I think that its critique of induction, and the alternative method it suggests, are correct. Unfortunately, CR isn't widely accepted (or even understood), even in the field of AI research.

Consider deep learning, which is widely thought to be one of the most promising areas of research in Artificial Intelligence and Machine Learning. The methods by which deep neural networks are trained do not resemble the method of iterative conjecture and criticism that CR advocates. Instead, they resemble attempts to implement induction, in that they try to generalize from statistical patterns in the training data.
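A minimal caricature of this pattern, stripped of neural networks entirely, is fitting a curve to observed data and then querying it on unseen inputs. The data and model below are invented for illustration; the point is only the shape of the procedure: observe, fit, and assume the unseen resembles the seen.

```python
import numpy as np

# Caricature of the inductivist pattern: fit a model to observed (x, y)
# pairs, then predict unseen cases on the assumption that they resemble
# the seen ones. Data and model are invented for illustration.

rng = np.random.default_rng(0)

# "Repeated experience": noisy observations of some process on [0, 1].
x_seen = np.linspace(0, 1, 50)
y_seen = np.sin(2 * np.pi * x_seen) + rng.normal(0, 0.1, size=50)

# "Generalizing" from the data: a least-squares polynomial fit.
coeffs = np.polyfit(x_seen, y_seen, deg=5)
model = np.poly1d(coeffs)

print(model(0.5))   # interpolation inside the seen range: usually reasonable
print(model(3.0))   # extrapolation: the unseen may not resemble the seen at all
```

Gradient-descent training of a deep network is far more sophisticated than a polynomial fit, but its epistemological structure is the same: the model is shaped by the patterns in its training data, not by conjecturing and criticizing explanations.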

Because they do not implement conjecture and criticism, methods like deep learning will never lead to AGI. This doesn't mean that deep learning, or other Machine Learning algorithms, are useless; they have already proved useful in many areas. But it does mean that they are fundamentally limited. Algorithms like deep learning can be useful for solving some problems, but an AGI needs to be able to solve all soluble problems, and that capacity can only be achieved by an algorithm that is creative, that is, one that implements conjecture and criticism.

Conclusion

Our best explanations of the world tell us that it must be possible to create a computer program that accomplishes what has so far been possible only for the human mind: explaining all that is explicable. The best bet we have for creating such a program, an AGI, is the top-down approach: understanding how the mind works at a high level, and then building a system that implements those same high-level properties.

Our best understanding of the relevant high-level properties comes from Critical Rationalism, a school of epistemology that, unlike other popular epistemologies, rejects induction. If CR is right, the popular approaches in fields like Machine Learning are doomed to fail at leading us to AGI. Machine Learning has developed interesting and useful computational tools, but it will never produce a true AGI unless it recognizes and corrects its inductivist mistakes. To create a real AGI we need to make philosophical progress on the problem of how knowledge is created, which means taking our best current theory of epistemology, Critical Rationalism, seriously.

For a more complete understanding of Critical Rationalism, including a more detailed critique of inductivism and other common misconceptions in epistemology, I recommend the book The Beginning of Infinity by David Deutsch.

Tags: AGI, Epistemology