Today we're going to talk about a subject that's at the heart of theoretical computer science: a formal basis for the study of computation that was developed by the English mathematician Alan Turing in the 1930s. We'll begin with some context, just briefly, to put this material within the broader setting of mathematics. What we're talking about is the idea of universality and computability, which are fundamental questions at the heart of understanding computation. What is a general-purpose computer, really? Are there limits on what we can do with digital computers? Are there limits on the power of the machines we can build?

In the 1930s, there was pioneering work on these questions, a lot of it done at Princeton. The mathematician David Hilbert, around the turn of the twentieth century, asked some basic questions; in his address to the International Congress of Mathematicians in 1900, he laid out ten famous open problems (from a longer published list of twenty-three). Then, in the early 1930s, the logician Kurt Gödel, who later came to the Institute for Advanced Study in Princeton, answered one of the fundamental questions Hilbert had raised, with a result that stunned the mathematical world. A related question, closer to computing, was taken up by Alonzo Church, who was a professor of mathematics at Princeton around that time. And then a young British graduate student, Alan Turing, came to Princeton and presented work he had done in England that provided the answers that have kept people studying computation on a theoretical basis ever since.

Here is a basic overview of the concept from a mathematical point of view. We can think of mathematics as any formal system that's powerful enough to express arithmetic. It seems like this should be an easy thing to do, but it gets right at the heart of the matter, and many different formal systems have been developed for this purpose. We say that a formal system is complete if you can prove the truth or falsity of every statement that you can make within the system. You want it to be consistent: within your system, you can't prove contradictions, like 2 + 2 = 5 when you know that 2 + 2 should equal 4. And you want it to be decidable: an algorithm exists to determine the truth of every statement.

So what Hilbert asked, around 1900, was: is mathematics complete and consistent? People thought this was something we should certainly be able to settle. And Gödel proved in 1931 that, actually, no, it's not possible to have both: you cannot have a system in which you can prove the truth or falsity of every arithmetic statement and in which you also cannot prove contradictions. It's quite an amazing result that was very surprising to people, and it shook the foundations of the mathematical world.

Hilbert also asked the question known as the Entscheidungsproblem: is mathematics decidable? Is there an algorithm to determine the truth of every statement? Church and Turing, with results like the ones we're going to describe today, also answered that question in the negative. Again, a quite surprising result, one that really gets at the heart of our understanding of computation.

Now, we're going to look at this at a level much more closely related to the simple machines that we talked about in the last lecture. But it's important to be able to come back to this context to understand why people were so interested in these kinds of results.
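To pin down the three properties we've been using informally, here is one standard way to state them. The symbols S (for the formal system), φ (for a statement), and ⊢ (for "provable in S") are notation introduced just for this sketch.

```latex
% S: a formal system strong enough to express arithmetic
% \varphi: a statement expressible in S;  S \vdash \varphi: "\varphi is provable in S"
\begin{align*}
\textbf{Complete:}   &\quad \text{for every } \varphi,\ \; S \vdash \varphi \ \text{ or } \ S \vdash \lnot\varphi
  &&\text{(every statement can be proved or refuted)}\\
\textbf{Consistent:} &\quad \text{there is no } \varphi \text{ with } \ S \vdash \varphi \ \text{ and } \ S \vdash \lnot\varphi
  &&\text{(no contradictions, e.g. never } 2+2=5\text{)}\\
\textbf{Decidable:}  &\quad \text{some algorithm halts on every } \varphi \text{ and correctly reports whether } \varphi \text{ is true}
  &&\text{(the Entscheidungsproblem, informally)}
\end{align*}
```

In these terms, Gödel's 1931 result says the first two properties cannot both hold for any such system S, and the Church–Turing results say the third fails as well.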