The basic premise is that below the dimensional limits of the quantum mechanical scale, the universe is discrete, and everything that we perceive within it, such as space, time, matter and energy, is generated by a computational process. The fundamental components of our universe would then be data, the computational rules acting on that data, and the mechanism that mediates the computation.
We see physics as we do because physics is a model of the way that we interact with the universe, and because the mathematics of physics is based primarily upon continuous functions. If the premise is correct then our physics is a ‘continuum’ based approximation to the discrete computations that occur at the deepest substrate of the ‘external’ universe.
Why is it Important?
Because if we could show that this proposition is valid then it would change the way that we view the structure of the universe. Quantum mechanics suggests that there is a limit to our ability to comprehend the fundamental structure of the universe. A computationally based theory would enable us to comprehend the structures of the universe at a far deeper level. Physics cannot tell us what space, time, matter, energy and many other fundamental constructs actually are, but the ‘reality’ of these concepts can be understood in terms of a computational model.
Does this mean that we live inside a computer?
I have to say yes to this, because anything that computes can be termed a computer. However, this does not imply that this computational machine was designed by other intelligent beings in another universe. It could be that the fundamental structure of the universe is discrete and computational simply because it is, and to ask why it is this way has no meaning.
Where did the idea come from?
It was a guy called Konrad Zuse, who was a pioneer in the development of programmable digital computers. Back in 1969 he published a book called ‘Rechnender Raum’, or in translation, ‘Calculating Space’. In his book Zuse proposed that our universe is the result of some sort of discrete computation, such as that of a cellular automaton. This idea has since flourished due to developments in several different disciplines:
Many quantum physicists started to consider that something may lie below quantum mechanics, and that it could be discrete.
The development of cellular automata theory by John Von Neumann and Stanislaw Ulam.
The discovery of complex and chaotic behaviour and its importance in physics, resulting in the understanding that very simple systems can generate very complex behaviour.
The development of powerful computers that could run complex iterative computations that highlighted this complex behaviour and enabled research into these ideas.
Who are the main supporters of this idea?
There are two scientists with significant reputations who support this premise and actively work in this area.
Professor Edward Fredkin of MIT, who developed the discrete universe principle and has created a basic approach to physics called ‘digital mechanics’ that is based upon a computational paradigm.
Stephen Wolfram, formerly a physicist and now one of the leading experts on cellular automata, who also supports this proposition.
Fredkin tends to present his arguments by comparing a computational approach with those of current physics theories, and to me has the more rigorous approach. Wolfram, on the other hand, draws anecdotal similes between the characteristics of cellular automata and general concepts in physics. There are other researchers, many of whom have worked under Fredkin, who have made specific contributions by applying cellular automata to specific physical characteristics of our universe.
Are there any useful theories yet?
Not really. Most of the results so far are anecdotal, although logically argued. However a preferred view of what this computation would look like is emerging, and it is based upon the use of cellular automata. A cellular automaton is a computational method using a lattice of cells, with each cell having one of a set of allowed states. Each cell periodically computes its state, which depends upon the states of its neighbouring cells, its own state, and the rules that govern the computation. When all the cells compute their states over and over again, very complex patterns and behaviours can emerge. The best example of this is John Conway’s ‘Game of Life’ program; you can find many implementations of it on the Internet.
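The update rule described above can be sketched in a few lines of Python. This is a minimal version of Conway’s Game of Life (the grid size and the starting pattern, a ‘blinker’, are chosen purely for illustration):

```python
# Minimal Game of Life: every cell computes its next state from its
# eight neighbours using Conway's rules (born with 3, survives with 2 or 3).

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbours on a wrap-around (toroidal) lattice.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if (n == 3 or (grid[r][c] and n == 2)) else 0
    return new

# A 'blinker': three cells in a row that oscillate with period 2.
grid = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    grid[2][c] = 1

after_one = step(grid)
after_two = step(after_one)
print(after_two == grid)  # True: the blinker returns to its start after 2 steps
```

Even this tiny rule set produces gliders, oscillators and self-sustaining structures once the grid and starting pattern are large enough, which is exactly the point being made above.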
What would such a theory tell us?
One of the advantages of such a proposition is that it would give us a better understanding of external reality. By external reality I mean that which would exist even if we didn’t. Everything that we perceive, including ourselves, would in fact be data and the patterns that that data creates.
How can we prove this is correct?
We can’t. The best we can do is to generate a universe, using say a cellular automaton within a computer, that evolves physics similar to ours when it is perceived from within. We could then infer that we too may be the result of such a computation. One of the important aspects of any theory is that it predicts observable and measurable events; that is why, at the moment, I consider this idea to be a proposition only and not a theory.
What do Physicists think of this idea?
The truth is, not a lot. I believe the reasons for this are complex and in some part political. It is fair to say that all the evidence in support of this hypothesis is anecdotal, but a large part of the physics community has for some time believed that the universe may be discrete. There is also the fact that, contrary to what you may read, there are still massive problems with the current theories, and there is no sign of the grand theory of everything that physicists have been predicting for the last thirty years. One would have thought that theorists would be willing to investigate any reasonable hypothesis to glean solutions to the current problems, but no. This is partly because physicists are trained and brought up to believe that the universe is fundamentally mathematical, not computational. Secondly, if a computational hypothesis were shown to be possible then physicists would lose their power base, as the new theory would be the domain of the computer scientist, so it may be that they are not keen on pursuing such ideas.
Why do you think Richard Feynman is important?
Richard Feynman is one of my heroes of science. He is considered by many to be the greatest physicist of the 20th century. He had a ‘no nonsense’ approach to physics and viewed it as a way of calculating things to predict what would happen. He was a free thinker who did not necessarily stick to the usual paradigms and conventions of physics. The quote on my home page indicates that he believed that the universe may turn out to be discrete and computational. I think that the physics community should pay more attention to his views in this regard. It should also be noted that he worked with both Ed Fredkin and Stephen Wolfram.
What is the current state of play?
If we equate the development of the computational universe hypothesis with the historical development of physics, then we are still back with the ancient Greeks. It will take decades, possibly centuries, for any true computational theory of everything to emerge. The truth is that this idea may be sending us all on a wild goose chase, and we probably need some future breakthroughs in computational theory and computer science before any real progress can be made.
Why is it so difficult?
There is a fundamental difference between physics and the computational approach, and here lies the difficulty. Imagine a black box with a cable attached to a keyboard and, on the other side, a cable connected to a display. We can type numbers on the keyboard and see a different set of numbers appear on the display. By mapping the input numbers to the outputs, a physicist would be able to write down an equation that predicts the result; this is the physics approach. However, this equation tells us nothing about what is inside the box generating the output. As far as the physicist is concerned, the box could be full of pixies applying rules to the numbers and sending out the results! The computational approach, on the other hand, is trying to understand what is in the black box.
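The black box analogy can be made concrete with a toy sketch in Python. Both the hidden rule and the fitted equation below are invented for the example; the point is only that a perfectly predictive equation can say nothing about the mechanism inside:

```python
# Toy 'black box': it applies a hidden internal procedure, while the
# 'physicist' only fits an equation to the observed input/output pairs.

def black_box(x):
    # Hidden internals the observer never sees: an iterative procedure.
    result = x
    for _ in range(3):
        result += x   # repeated addition...
    return result + 1  # ...plus an offset

# From the observed pairs the physicist infers the equation y = 4x + 1.
def physicist_model(x):
    return 4 * x + 1

# The equation predicts every output exactly, yet it reveals nothing
# about the loop (or the pixies) inside the box.
print(all(black_box(x) == physicist_model(x) for x in range(10)))  # True
```

The equation and the internal procedure are observationally identical, which is precisely why fitting equations to outputs can never, by itself, tell us what the substrate is doing.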
The problem is that if we start by assuming that the underlying substrate of the universe is some sort of computational system, then our understanding of complex systems tells us that vast numbers of simple computations can generate immensely complex behaviour. This means that one has to take an ‘educated guess’ at the computation, and the only way to know if it is viable is to actually let it compute over large numbers of iterations. There are no short cuts; one cannot act like a physicist and model the system to see what it might do. At the moment we can only use our intuition to develop possible computational systems. What we need are better methods of analysing complex behaviour that will help us identify prospective characteristics of the structure of such a computation. This problem means that by its nature such research is ‘less scientific’ than physics, and it is probably one of the reasons that physicists are sceptical of such a proposition. It is worth noting that there is a view in complex systems research that, because the human brain is a neural network optimised for complex pattern matching, a researcher’s experience of many computational approaches and experiments means that their intuition is a very good analytical tool for this sort of problem, as opposed to a rigorous mathematical or logical approach.
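The ‘no short cuts’ point is easy to demonstrate. Wolfram’s Rule 110, one of the elementary cellular automata he studied, is nothing more than an eight-entry lookup table, yet the only way to discover the intricate structures it produces is to run it step by step (the starting row and width below are illustrative):

```python
# Elementary cellular automaton, Wolfram's Rule 110: the rule number's
# binary digits give the next state for each 3-cell neighbourhood.

RULE = 110
TABLE = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def evolve(row, steps):
    # Iterate the rule, keeping every generation; neighbours wrap around.
    n = len(row)
    history = [row]
    for _ in range(steps):
        row = [TABLE[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
               for i in range(n)]
        history.append(row)
    return history

# Start from a single live cell and print the space-time pattern.
width = 31
start = [0] * width
start[width // 2] = 1
for row in evolve(start, 15):
    print(''.join('#' if cell else '.' for cell in row))
```

There is no known way to jump straight to generation one million of a rule like this without computing the generations in between, which is exactly the difficulty described above: the guess must be run, not merely modelled.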
Where can I learn more about this?
Read my Beginner’s Guide. For an understanding of cellular automata, play with one of the implementations of the Game of Life that can be found on the Internet; also read Wolfram’s book, ‘A New Kind of Science’. I will be adding more external references when I have time.