Papers We Love, Skills Matter, London, 7 June 2016
Certain papers change your life. McCarthy's 'Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I' (1960) changed mine, and so did Landin's 'The Next 700 Programming Languages' (1966). And I remember the moment, halfway through my graduate career, when Guy Steele handed me Reynolds's 'Definitional Interpreters for Higher-Order Programming Languages' (1972).
It is now common to explicate the structure of a programming language by presenting an interpreter for that language. If the language interpreted is the same as the language doing the interpreting, the interpreter is called meta-circular.
Interpreters may be written at differing levels of detail, to explicate different implementation strategies. For instance, the interpreter may be written in continuation-passing style; or some of the higher-order functions may be represented explicitly as data structures, via defunctionalisation.
More elaborate interpreters may be derived from simpler versions, thus providing a methodology for discovering an implementation strategy and showing it correct. Each of these techniques has become a mainstay of the study of programming languages, and all of them were introduced in this single paper by Reynolds.
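To make these techniques concrete, here is a minimal sketch in Haskell, in the spirit of Reynolds's definitional interpreters but not his code (the names Term, Value, eval, and apply are my own, illustrative choices): a toy lambda-calculus interpreter in direct style, where functions of the interpreted language are represented by functions of the defining language, followed by its defunctionalised form, where that function space is replaced by first-order closures and an explicit dispatcher. The further conversion to continuation-passing style is omitted here.

    data Term = Var String | Lam String Term | App Term Term

    -- Direct style: functions of the interpreted language are
    -- represented by functions of the defining language.
    data Value = Fun (Value -> Value)

    eval :: [(String, Value)] -> Term -> Value
    eval env (Var x)   = maybe (error ("unbound: " ++ x)) id (lookup x env)
    eval env (Lam x b) = Fun (\v -> eval ((x, v) : env) b)
    eval env (App f a) = case eval env f of Fun g -> g (eval env a)

    -- Defunctionalised: the only functions this interpreter ever builds
    -- are closures, so they can be represented as first-order data,
    -- applied by an explicit 'apply'.
    data Closure = Closure String Term [(String, Closure)]

    eval' :: [(String, Closure)] -> Term -> Closure
    eval' env (Var x)   = maybe (error ("unbound: " ++ x)) id (lookup x env)
    eval' env (Lam x b) = Closure x b env
    eval' env (App f a) = apply (eval' env f) (eval' env a)

    apply :: Closure -> Closure -> Closure
    apply (Closure x b env) v = eval' ((x, v) : env) b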
The principle of Propositions as Types links logic to computation. At first sight it appears to be a simple coincidence, almost a pun, but it turns out to be remarkably robust, inspiring the design of theorem provers and programming languages, and continuing to influence the forefront of computing. Propositions as Types has many names and many origins, and is a notion with depth, breadth, and mystery.
The foundations of computing lay in a coincidence: Church's lambda calculus (1933), Herbrand and Gödel's recursive functions (1934), and Turing's machines (1935) all define the same model of computation. Another coincidence: Gentzen's intuitionistic natural deduction (1935) and Church's simply-typed lambda calculus (1940) define isomorphic systems. We review the history and significance of these coincidences, with an eye to Turing's role.
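To see the second coincidence concretely, here is a small illustration of my own in Haskell (not drawn from the talk): read each type below as an intuitionistic proposition, and the program inhabiting it as the corresponding natural-deduction proof.

    -- Conjunction: a proof of A ∧ B → B ∧ A is a function on pairs.
    swap :: (a, b) -> (b, a)
    swap (x, y) = (y, x)

    -- Implication is the function arrow; composing two proofs
    -- proves (A → B) → (B → C) → (A → C).
    compose :: (a -> b) -> (b -> c) -> (a -> c)
    compose f g = g . f

    -- Disjunction: A ∨ B → B ∨ A is proved by case analysis.
    commute :: Either a b -> Either b a
    commute (Left x)  = Right x
    commute (Right y) = Left y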
People might be forgiven for thinking that computing is not so much a science as an industry. Ask someone to name a prominent computer scientist and you are more likely to hear the name Bill Gates than Alan Turing. In fact, computing is both a science and an industry, each stimulating the other.
Everyone knows that logic and computing have something to do with each other, but few understand the remarkable correspondence that links them. A model of logic and a model of computing, each published at the dawn of the computer era, turned out, half a century later, to coincide exactly. We will follow this correspondence through three strands of work, connecting three varieties of logic, three researchers at Edinburgh's Laboratory for the Foundations of Computer Science, and three applications to web technology.
Computing is in its infancy, laying the foundations for the work to come. Two hundred and fifty years ago, Edinburgh nurtured the work of David Hume, James Hutton, and Adam Smith. Two hundred and fifty years from now, to what extent will Edinburgh be recognized as founding the theory and practice of Informatics?
Whether a visitor comes from another place, another planet, or another plane of being, we can be sure that he, she, or it will count just as we do: though their symbols vary, the numbers are universal. The history of logic and computing suggests a programming language that is equally natural. The language, called lambda calculus, is in exact correspondence with a formulation of the laws of reason, called natural deduction. Lambda calculus and natural deduction were devised, independently of each other, around 1930, just before the development of the first stored-program computer. Yet the correspondence between them was not recognized until decades later, and not published until 1980. Today, languages based on lambda calculus have a few thousand users. Tomorrow, reliable use of the Internet may depend on languages with logical foundations.
Logics of the 19th century, developed by philosophers like Gottlob Frege concerned with the nature of truth, have a direct impact on programming languages for the 21st century, designed by engineers like James Gosling concerned with programming the web. The type systems developed by logicians underlie the safety properties of Java that make exchange of programs possible. One of the most fascinating tales of the 20th century is how Gentzen's natural deduction (a model of logic) and Church's lambda calculus (a model of programming) were discovered, a half century after their invention, to be in precise correspondence. One aspect of the correspondence is an association between second-order quantification and generic types. The talk describes GJ, which is based on this correspondence, and serves as the basis of Sun's plans to add generics to Java. The talk also describes Featherweight Java, a model of Java that is comparable in simplicity and power to lambda calculus, and serves as a tool for designing the next generation of languages to follow Java.
(Based on work with Martin Odersky, Gilad Bracha, David Stoutamire, Benjamin Pierce, and Atsushi Igarashi.)
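To illustrate the association between second-order quantification and generic types, here is a hedged sketch of my own, written in Haskell rather than GJ (the names identity, uses, and applyBoth are hypothetical): the proposition 'for all A, A implies A' becomes the type of the generic identity function, which GJ would write as <T> T id(T x).

    {-# LANGUAGE RankNTypes #-}

    -- The universally quantified proposition ∀A. A ⊃ A becomes the type
    -- of the generic identity function.
    identity :: forall a. a -> a
    identity x = x

    -- Instantiating the quantifier corresponds to supplying a type
    -- argument: here once at Int and once at String.
    uses :: (Int, String)
    uses = (identity 3, identity "three")

    -- Genuinely second-order: the quantifier sits to the left of an
    -- arrow, so the argument must itself be a polymorphic (generic)
    -- function.
    applyBoth :: (forall a. a -> a) -> (Int, String) -> (Int, String)
    applyBoth f (n, s) = (f n, f s)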
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century.