Wednesday, April 11, 2012
Universality
1936 was an important year for computing. Before I explain why, let's imagine that we travel in time to 1936, find a person on the street, and ask "Where can I buy a computer?" How would the person from 1936 respond? Maybe with "What is a computer?" That would be a reasonable response, since the electronic devices we call computers nowadays had not been invented yet. But the person would more likely say something like "What do you mean, buy a computer? You mean hire a computer, right?"
Hire a computer? Maybe he means rent a computer? It seems like we are having a communication breakdown. Actually, the problem is that we are using the word "computer" to mean different things. For the person from 1936, a computer is not a device; it is a person who performs mathematical calculations. If you had some difficult calculations to do in 1936, you could hire a person called a computer, and that person would compute them for you using paper and pencil. Human computers were used for many purposes, including scientific computations such as calculating the orbits of comets in astronomy, and military computations such as calculating ballistic trajectories during the First and Second World Wars. In 1936, being a computer was simply a profession.
So what is special about 1936? Why is it an important year for computing? It is the year a mathematician named Alan Turing published an article that became the foundation of computing. And the ideas in Turing's article are as important today as they were then. They explain what computing is all about.
In his article, Alan Turing was trying to answer the question that is the title of this blog: What is computing? And he was trying to do this at a time when the devices that we call computers today were yet to be invented. Turing was imagining a device that did not exist yet. He was tremendously successful in this endeavor. He managed to come up with an imaginary device that captures the essence of all computation.
Turing was a mathematician, and he was thinking about automating computation. He was thinking of a machine that could automatically do the same things that the human computers of his time were doing. To design his machine, he first thought about how human computers work. Well, how did human computers work? They worked the way we would if we wanted to compute something without the assistance of an electronic device. They used a notebook to store the numbers resulting from their calculations, a pencil to write the numbers, and an eraser to erase them. They probably started a computation by writing the input numbers in their notebook. At any moment during the computation, they would look at the intermediate results in their notebook, compute a new result based on them, and either write it on a new page of the notebook or erase some of the previous intermediate results and write the new result in their place.
Let me give an example to make this a little more concrete. Assume that we hired a human computer to compute the cubes of a set of numbers. We will give this person a set of numbers, like 2, 5, 10, and we want the human computer to give us back the following results: 8, 125, 1000. Well, maybe that does not seem that difficult. How about if we gave 61, 173, 467 as input? It is not that easy to compute the answer (which is 226981, 5177717, 101847563) without using an electronic device, right? But there were no electronic devices in 1936. If you did not have much time yourself, but had some money, you could hire a human computer to do the computation for you. The human computer would probably first write the input numbers in his notebook. Then he would start with the first number and multiply it by itself (which, if you remember long multiplication, is done digit by digit, producing some intermediate results that are then added together). Then he would multiply this intermediate result by the input number again to get the cube. He would continue like this until finishing the list.
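If you would rather see that procedure as code, here is a minimal sketch in Python. The function name is mine, purely for illustration; the human computer, of course, had no such luxury and did each multiplication by hand.

```python
# A minimal sketch of the procedure described above: square each number first,
# then multiply that intermediate result by the number once more to get the cube.
def cube_list(numbers):
    results = []
    for n in numbers:
        square = n * n        # first multiplication: the intermediate result
        cube = square * n     # second multiplication gives the cube
        results.append(cube)
    return results

print(cube_list([61, 173, 467]))   # [226981, 5177717, 101847563]
```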
OK, we can see how a human computer can compute some stuff. But Turing was interested in figuring out how a machine could do the computation. Being a mathematician, he wanted to come up with a very precise and simple description. The description he came up with was pretty similar to how a human computer works. Here it goes: Turing assumed that the computing machine would need something to write the input and the intermediate results on. Let me call this the notebook of the machine, but it does not have to be a real notebook, of course. It is something that Turing's imaginary machine uses as a scratch pad to record its input and intermediate results. To keep things simple, Turing assumed that each page of the notebook could either be blank or contain a single digit (actually, you can even assume that it contains a single binary digit, 0 or 1). He also assumed that at any point during the computation a single page of the notebook would be open, and the machine could read the digit on that page, then choose to erase that digit and write another digit or leave it as it is, and then move to the next page, move to the previous page, or stay on the same page.
So far so good. It is very similar to how human computers work. But there is something missing, right? A human computer knows the steps of the computation (like how to do long multiplication) and follows those steps to do the computation. How are we going to tell the computing machine the steps of the computation? Turing again had a simple solution. He imagined that his machine would contain a list of steps, where each step had a single action describing what to do at that step. An action would be of the following form: read the digit on the current page and, based on that, 1) decide what to write on the current page (or whether to leave it as it is), 2) decide whether to go to the next page, go to the previous page, or stay on the current page, and 3) decide which step to go to next.
That's it. That is the model Turing came up with. Let's recap. Turing's computing machine consisted of a set of steps (with associated actions) and a notebook. Each step specified what to write on the notebook's current page, which page to go to, and what the next step would be. Seems pretty simple.
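If it helps to see the model as code, here is a rough sketch in Python. None of this is Turing's notation: the names (run, rules, BLANK, and the dictionary standing in for the notebook) are mine, but the three-part action from above, write a digit, move a page, go to the next step, is exactly what the loop does.

```python
BLANK = ' '

def run(rules, tape, step='start'):
    # `rules` maps (current step, digit read) to (digit to write, move, next step).
    page = 0
    while step != 'halt':
        digit = tape.get(page, BLANK)               # read the open page
        write, move, step = rules[(step, digit)]    # look up this step's action
        tape[page] = write                          # write (or rewrite) the digit
        page += {'right': 1, 'left': -1, 'stay': 0}[move]   # turn the page
    return tape

# Example: a tiny machine that moves right over a string of binary digits,
# flipping each one, and halts when it reaches a blank page.
flip = {
    ('start', '0'): ('1', 'right', 'start'),
    ('start', '1'): ('0', 'right', 'start'),
    ('start', BLANK): (BLANK, 'stay', 'halt'),
}

print(run(flip, {0: '1', 1: '0', 2: '1'}))
# -> {0: '0', 1: '1', 2: '0', 3: ' '}
```

Notice that to compute something different you only write down a different table of rules; the machinery of run itself never changes.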
Some of the details of Turing's model can sound arbitrary. For example, why write just a single digit on each page? Why are we wasting all that paper? First, remember that this is an imaginary machine that Turing proposed for modelling computation, so we are not really going to implement it using a real notebook. But the key point is that Turing wanted to keep things as simple as possible while capturing the essence of computation. So, why keep a single digit on each page? Because it is simpler to define it that way, and it is enough to capture the essence of computation. The same goes for moving one page at a time. It is simpler to just assume that at any step of the computation you can go one page forward or backward. Any computation that can be done by moving multiple pages at a time can also be done by moving one page at a time.
Here is something you might find surprising: any computation that our modern day computers can do can be modeled this way. Any computation can be modeled as a Turing machine (by the way, Turing did not use the term "Turing machine" in his paper, which could have sounded pretentious; the name was given later by others). Not just multiplying numbers, computing cubes, or other mathematical computations, but searching the web, playing an mp3 file, editing a photo, playing Angry Birds, all of them. Any computation you do on your computer or on anybody else's computer can be modeled this way. Any computation anyone does on any modern day computer can be modeled this way!
I said that Turing's computing machine consisted of a set of steps and a notebook. So what do these correspond to in a modern day computer? The notebook is what we call memory nowadays. And what about the set of steps that describes a computation? We call that a program.
A key assumption in Turing's model was that the notebook used by his machines never runs out of pages. What does this mean in modern terms? He assumed that his machines can use an arbitrary amount of memory. This is a key feature that gives Turing's machines the power to model any computation that can be done on any computer.
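In the little Python sketch above, this assumption shows up in the choice of a dictionary for the tape: `tape.get(page, BLANK)` treats any page that has never been written as blank, so the notebook effectively has as many pages as the computation ever asks for. That is just a convenience of the sketch, of course; the point of Turing's assumption is that no computation is cut short because the paper ran out.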
Here is something I skipped earlier. As you know, a computer can run multiple programs, not just a single one, and programmability is an essential quality of a computer. So, can a Turing machine run different programs? Well, it seems like we would need to change the set of steps that represents the computation, since that is the part of a Turing machine that corresponds to a program. How do we do that? In his 1936 article Turing showed that there is a special type of Turing machine that is programmable. This programmable Turing machine takes the set of steps describing a computation as an input written on its notebook and then executes that computation based on the given set of steps. This is exactly how modern day computers work: they take a program written in their memory as input and then execute the steps of that program. In fact, any modern computer, including any smartphone, laptop, desktop, or supercomputer, corresponds to this special type of programmable Turing machine. So, in this sense, your smartphone and the most powerful supercomputer are equivalent. They are capable of doing the same things; the difference is that one does it much faster than the other.
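To make the stored-program idea a little more concrete, here is a loose sketch, not Turing's actual construction: a fixed interpreter loop that treats the list of steps as plain data sitting in memory and executes whatever it finds there. The tiny instruction set (inc, dec, jump_if_zero, halt) is invented purely for illustration.

```python
# A loose sketch of the stored-program idea (not Turing's actual construction):
# one fixed interpreter loop reads the list of steps as plain data and executes
# whatever it finds there.

def execute(memory, program):
    pc = 0                                    # which step we are currently on
    while True:
        instruction = program[pc]
        op = instruction[0]
        if op == 'halt':
            return memory
        elif op == 'inc':                     # add 1 to a memory cell
            memory[instruction[1]] += 1
            pc += 1
        elif op == 'dec':                     # subtract 1 from a memory cell
            memory[instruction[1]] -= 1
            pc += 1
        elif op == 'jump_if_zero':            # go to another step if a cell is 0
            pc = instruction[2] if memory[instruction[1]] == 0 else pc + 1

# Example program: add the number in cell 'b' onto cell 'a'.
adder = [
    ('jump_if_zero', 'b', 4),     # step 0: if b is 0, we are done
    ('dec', 'b'),                 # step 1
    ('inc', 'a'),                 # step 2
    ('jump_if_zero', 'zero', 0),  # step 3: cell 'zero' is always 0, so jump back
    ('halt',),                    # step 4
]

print(execute({'a': 2, 'b': 3, 'zero': 0}, adder))   # {'a': 5, 'b': 0, 'zero': 0}
```

The interpreter itself never changes; to make it compute something different, you only hand it different data. That, in essence, is what makes a machine programmable.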
The title of this post was universality, and this is the universality I was referring to. Alan Turing was able to come up with a universal model of computation that is powerful enough to capture all computation, even before electronic computers were invented. His universal model gives computer scientists a very useful tool for studying computers. For example, even the most powerful super-duper computer in the world cannot solve a problem that cannot be solved by a Turing machine. So by studying Turing machines we can figure out the limits of computation. And that is exactly what Turing did at the end of his 1936 paper, but that is a topic for another time.
There is a universal essence to computation, and Alan Turing was able to capture that essence in his 1936 article. This year computer scientists are celebrating the centenary of Alan Turing's birth. Now you know why.
Perhaps this is something which would become clear in reading Turing's explanation, but it seems to me as though something is missing from the model. There is a notebook existing in space which can read, write, erase, and turn its own pages. How it implements these abilities is based on programs which are also written in the notebook. Where I have difficulty following the model is where those programs translate into those implementations. If the Turing machine is to behave according to this set of rules, must there not be another element to it outside of this notebook which has memory, not in a computer-science-related sense of the word, but in a human sense, where that memory is something which is always present to be called upon and influence actions? I suppose what I don't follow is how, after the machine has progressed from one page which contained a program instruction, the machine still behaves in a way which is conscious of what was on that page, if all that the model consists of is the aforementioned notebook.
Despite appearing as somewhat of a convoluted system, the cause-and-effect flowchart through the notepad is an effective way to interpret how computers process information. I remember when my 8th grade math teacher taught us binary by using our fingers to count the 1s and 0s (the thumb acted as 1, the pointer as 2, thumb and pointer as 3, the middle finger as 4... i.e., 1, 10, 11, 100), and this discussion reminded me of that memory. Considering how many rules and exceptions each 'notepad' contains, the amount of hypothetical sheets of paper it would take to simply open this browser, find this website, and type this response is staggering. This explanation is definitely giving me a greater appreciation for how intricately computers are designed.
I understand how the model works, but I don't know how a computer would know how to work the 'notepad' without another 'notepad' telling it how to do so, and so on. There is probably a simple explanation for that, but I don't know computers well enough for it to be evident to me.
It sounds like Alan Turing was a very intelligent man. What he did was something that took a great amount of time and incredible thinking. I also find the beginning very interesting: in 1936 a computer was a person who computed something, and now it is a device everyone loves.