C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop four calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are as follows: λB, based on the blame calculus of Wadler and Findler (2009); λC, inspired by the coercion calculus of Henglein (1994); λS, inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006); and λT, based on the threesome calculus of Siek and Wadler (2010). While λB and λT are little changed from previous work, λC and λS are new. Together, λB, λC, λS, and λT provide a coherent foundation for design, implementation, and optimization of gradual types. We define translations from λB to λC, from λC to λS, and from λS to λT. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for: that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: λC has a particularly simple definition, and the subtle definition of blame safety for λB is justified by the simple definition of blame safety for λC. Our calculus λS is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from λC to λS to establish an equational theory of coercions; and second, using full abstraction from λB to λS to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work.
We describe a design for generics in Go inspired by previous work on Featherweight Java by Igarashi, Pierce, and Wadler. Whereas subtyping in Java is nominal, in Go it is structural, and whereas generics in Java are defined via erasure, in Go we use monomorphisation. Although monomorphisation is widely used, we are one of the first to formalise it. Our design also supports a solution to The Expression Problem.
# Available in: pdf, doi.
Consider two widely-used definitions of equality. That of Leibniz: one value equals another if any predicate that holds of the first holds of the second. And that of Martin-Löf: the type identifying one value with another is occupied if the two values are identical. The former dates back several centuries, while the latter is widely used in proof systems such as Agda and Coq.
Here we show that the two definitions are isomorphic: we can convert any proof of Leibniz equality to one of Martin-Löf identity and vice versa, and each conversion followed by the other is the identity. One direction of the isomorphism depends crucially on values of the type corresponding to Leibniz equality satisfying functional extensionality and Reynolds' notion of parametricity. The existence of the conversions is widely known (meaning that if one can prove one equality then one can prove the other), but that the two conversions form an isomorphism (internally) in the presence of parametricity and functional extensionality is, we believe, new.
Our result is a special case of a more general relation that holds between inductive families and their Church encodings. Our proofs are given inside type theory, rather than meta-theoretically. Our paper is a literate Agda script.
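By way of illustration (not the paper's Agda development), the Leibniz side of the story can be sketched even in Haskell, where there is no identity type but Leibniz equality is directly definable:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Leibniz equality: a is equal to b if every predicate (here, every type
-- constructor f) holding of a also holds of b.
newtype Leibniz a b = Leibniz { subst :: forall f. f a -> f b }

refl :: Leibniz a a
refl = Leibniz id

-- Transitivity is composition of substitutions.
trans :: Leibniz a b -> Leibniz b c -> Leibniz a c
trans ab bc = Leibniz (subst bc . subst ab)

-- Symmetry uses the standard trick of substituting into a flipped predicate.
newtype Flip a b = Flip { unFlip :: Leibniz b a }

sym :: Leibniz a b -> Leibniz b a
sym ab = unFlip (subst ab (Flip refl))
```

The conversion to and from Martin-Löf identity, and the proof that the two round trips are identities, require the dependent types, parametricity, and functional extensionality discussed in the paper.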
# Available in: pdf, doi.
One of the leading textbooks for formal methods is Software Foundations (SF), written by Benjamin Pierce in collaboration with others, and based on Coq. After five years using SF in the classroom, we came to the conclusion that Coq is not the best vehicle for this purpose, as too much of the course needs to focus on learning tactics for proof derivation, to the cost of learning programming language theory. Accordingly, we have written a new textbook, Programming Language Foundations in Agda (PLFA). PLFA covers much of the same ground as SF, although it is not a slavish imitation.
What did we learn from writing PLFA? First, that it is possible. One might expect that without proof tactics the proofs would become too long, but in fact proofs in PLFA are about the same length as those in SF. Proofs in Coq require an interactive environment to be understood, while proofs in Agda can be read on the page. Second, that constructive proofs of preservation and progress give immediate rise to a prototype evaluator. This fact is obvious in retrospect but it is not exploited in SF (which instead provides a separate normalise tactic) nor can we find it in the literature. Third, that using extrinsically-typed terms is far less perspicuous than using intrinsically-typed terms. SF uses the former presentation, while PLFA presents both; the former uses about 1.6 times as many lines of Agda code as the latter, roughly the golden ratio.
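To illustrate the second point (a Haskell sketch for a toy language of numerals and addition, not PLFA's Agda code): a constructive progress function either reports a value or takes a step, and iterating it is already an evaluator.

```haskell
data Term = Lit Int | Add Term Term

data Progress = Done Int | Step Term

-- Progress: every term is either a value or can take a step.
progress :: Term -> Progress
progress (Lit n) = Done n
progress (Add l r) =
  case progress l of
    Step l' -> Step (Add l' r)            -- reduce on the left first
    Done m  -> case progress r of
      Step r' -> Step (Add (Lit m) r')    -- then on the right
      Done n  -> Step (Lit (m + n))       -- both values: add them

-- The evaluator is just iterated progress.
eval :: Term -> Int
eval t = case progress t of
  Done n  -> n
  Step t' -> eval t'
```

In Agda the same structure falls out of the proofs of progress and preservation for simply-typed lambda calculus, with termination guaranteed by the proof itself.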
The textbook is written as a literate Agda script, and can be found here:
This paper introduces λdB, a blame calculus with dependent types. It supports dependent functions, predicate refinement at all types, the dynamic type, and full blame tracking. It is inspired by and extends previous work on hybrid types and Sage, by Flanagan and others; manifest contracts, by Greenberg, Pierce, and Weirich; and blame calculus by Wadler and Findler. While previous work only allows refinement over base types, λdB supports refinement over any type. We introduce novel techniques in order to prove blame safety for this language, including a careful analysis that reduces open judgments on terms to closed ones on values, and the idea of ‘subtyping with a witness’, which fix flaws in the previous work of Wadler and Findler. These technical contributions mean that we can achieve a completely inductive (finitistic) account of the metatheory of our language, and thereby avoid many of the subtle technical issues which have bedevilled earlier work in this area.
# Available in: pdf.
Session types are a rich type discipline, based on linear types, that lifts the sort of safety claims that come with type systems to communications. However, web-based applications and microservices are often written in a mix of languages, with type disciplines in a spectrum between static and dynamic typing. Gradual session types address this mixed setting by providing a framework which grants seamless transition between statically typed handling of sessions and any required degree of dynamic typing. We propose Gradual GV as a gradually typed extension of the functional session type system GV. Following a standard framework of gradual typing, Gradual GV consists of an external language, which relaxes the type system of GV using dynamic types; an internal language with casts, for which operational semantics is given; and a cast-insertion translation from the former to the latter. We demonstrate type and communication safety as well as blame safety, thus extending previous results to functional languages with session-based communication. The interplay of linearity and dynamic types requires a novel approach to specifying the dynamics of the language.
# Available in: pdf, doi.
Lambda calculi are often used as intermediate representations for compilers. However, they require extensions to handle higher-level features of programming languages. In this paper we show how to construct an IR that extends System Fμω with recursive functions and datatypes, and describe how to compile it back down to System Fμω. Our IR was developed for commercial use at the IOHK company, where it is used as part of a compilation pipeline for smart contracts running on a blockchain.
# Available in: pdf, doi.
System F, also known as the polymorphic λ-calculus, is a typed λ-calculus independently discovered by the logician Jean-Yves Girard and the computer scientist John Reynolds. We consider Fωμ, which adds higher-order kinds and iso-recursive types. We present the first complete, intrinsically typed, executable formalisation of System Fωμ that we are aware of. The work is motivated by verifying the core language of a smart contract system based on System Fωμ. The paper is a literate Agda script.
# Available in: pdf, doi.
Process calculi based in logic, such as πDILL and CP, provide a foundation for deadlock-free concurrent programming, but exclude non-determinism and races. HCP is a reformulation of CP which addresses a fundamental shortcoming: the fundamental operator for parallel composition from the π-calculus does not correspond to any rule of linear logic, and therefore not to any term construct in CP. We introduce HCPND, which extends HCP with a novel account of non-determinism. Our approach draws on bounded linear logic to provide a strongly-typed account of standard process calculus expressions of non-determinism. We show that our extension is expressive enough to capture many uses of non-determinism in untyped calculi, such as non-deterministic choice, while preserving HCP's meta-theoretic properties, including deadlock freedom.
# Available in: pdf.
Implicit programming (IP) mechanisms infer values by type-directed resolution, making programs more compact and easier to read. Examples of IP mechanisms include Haskell’s type classes, Scala’s implicits, Agda’s instance arguments, Coq’s type classes and Rust’s traits. The design of IP mechanisms has led to heated debate: proponents of one school argue for the desirability of strong reasoning properties, while proponents of another school argue for the power and flexibility of local scoping or overlapping instances. The current state of affairs seems to indicate that the two goals are at odds with one another and cannot easily be reconciled. This paper presents COCHIS, the Calculus Of CoHerent ImplicitS, an improved variant of the implicit calculus that offers flexibility while preserving two key properties: coherence and stability of type substitutions. COCHIS supports polymorphism, local scoping, overlapping instances, first-class instances and higher-order rules, while remaining type-safe, coherent and stable under type substitution. We introduce a logical formulation of how to resolve implicits, which is simple but ambiguous and incoherent, and a second formulation, which is less simple but unambiguous, coherent and stable. Every resolution of the second formulation is also a resolution of the first, but not conversely. Parts of the second formulation bear a close resemblance to a standard technique for proof search called focusing. Moreover, key for its coherence is a rigorous enforcement of determinism.
# Available in: pdf, doi.
The most profound connection between logic and computation is a pun. The doctrine of Propositions as Types asserts that a certain kind of formal structure may be read in two ways: either as a proposition in logic or as a type in computing. Further, a related structure may be read as either the proof of the proposition or as a programme of the corresponding type. Further still, simplification of proofs corresponds to evaluation of programs.
Accordingly, the title of this book also has two readings. It may be parsed as "(Programming Language) Foundations in Agda" or "Programming (Language Foundations) in Agda" — the specifications we will write in the proof assistant Agda both describe programming languages and are themselves programmes.
The book is aimed at students in the last year of an undergraduate honours programme or the first year of a master or doctorate degree. It aims to teach the fundamentals of operational semantics of programming languages, with simply-typed lambda calculus as the central example. The textbook is written as a literate script in Agda. The hope is that using a proof assistant will make the development more concrete and accessible to students, and give them rapid feedback to find and correct misapprehensions.
The book is broken into two parts. The first part, Logical Foundations, develops the needed formalisms. The second part, Programming Language Foundations, introduces basic methods of operational semantics.
# Available in: html, github.
One of the leading textbooks for formal methods is Software Foundations (SF), written by Benjamin Pierce in collaboration with others, and based on Coq. After five years using SF in the classroom, I have come to the conclusion that Coq is not the best vehicle for this purpose, as too much of the course needs to focus on learning tactics for proof derivation, to the cost of learning programming language theory. Accordingly, I have written a new textbook, Programming Language Foundations in Agda (PLFA). PLFA covers much of the same ground as SF, although it is not a slavish imitation.
What did I learn from writing PLFA? First, that it is possible. One might expect that without proof tactics the proofs would become too long, but in fact proofs in PLFA are about the same length as those in SF. Proofs in Coq require an interactive environment to be understood, while proofs in Agda can be read on the page. Second, that constructive proofs of preservation and progress give immediate rise to a prototype evaluator. This fact is obvious in retrospect but it is not exploited in SF (which instead provides a separate normalise tactic) nor can I find it in the literature. Third, that using raw terms with a separate typing relation is far less perspicuous than using inherently-typed terms. SF uses the former presentation, while PLFA presents both; the former uses about 1.6 times as many lines of Agda code as the latter, roughly the golden ratio.
The textbook is written as a literate Agda script, and can be found here:
[Winner of SBMF 2018 Best Paper Award, 1st Place.]
# Available in: pdf.
Gradual typing has emerged as the tonic for programmers with a thirst for a blend of static and dynamic typing. Contracts provide a lightweight form of gradual typing as they can be implemented as a library, rather than requiring a gradual type system.
Intersection and union types are well suited to static and dynamic languages: intersection encodes overloaded functions; union encodes uncertain data arising from branching code. We extend the untyped lambda calculus with contracts for monitoring higher-order intersection and union types, for the first time giving a uniform treatment to both. Each operator requires a single reduction rule that does not depend on the constituent types or the context of the operator.
We present a new method for defining contract satisfaction based on blame behaviour. A value positively satisfies a type if applying a contract of that type can never elicit positive blame. A continuation negatively satisfies a type if applying a contract of that type can never elicit negative blame. We supplement our definition of satisfaction with a series of monitoring properties that satisfying values and continuations should have.
# Available in: pdf, doi.
We introduce Refinement Reflection, a new framework for building SMT-based deductive verifiers. The key idea is to reflect the code implementing a user-defined function into the function’s (output) refinement type. As a consequence, at uses of the function, the function definition is instantiated in the SMT logic in a precise fashion that permits decidable verification. Reflection allows the user to write equational proofs of programs just by writing other programs, e.g. using pattern-matching and recursion to perform case-splitting and induction. Thus, via the propositions-as-types principle, we show that reflection permits the specification of arbitrary functional correctness properties. Finally, we introduce a proof-search algorithm called Proof by Logical Evaluation that uses techniques from model checking and abstract interpretation, to completely automate equational reasoning. We have implemented reflection in Liquid Haskell and used it to verify that the widely used instances of the Monoid, Applicative, Functor, and Monad typeclasses actually satisfy key algebraic laws required to make the clients safe, and have used reflection to build the first library that actually verifies assumptions about associativity and ordering that are crucial for safe deterministic parallelism.
# Available in: pdf, doi.
The polymorphic blame calculus integrates static typing, including universal types, with dynamic typing. The primary challenge with this integration is preserving parametricity: even dynamically-typed code should satisfy it once it has been cast to a universal type. Ahmed et al. (2011) employ runtime type generation in the polymorphic blame calculus to preserve parametricity, but a proof that it does so has been elusive. Matthews and Ahmed (2008) gave a proof of parametricity for a closely related system that combines ML and Scheme, but later found a flaw in their proof. In this paper we present an improved version of the polymorphic blame calculus and we prove that it satisfies relational parametricity. The proof relies on a step-indexed Kripke logical relation. The step-indexing is required to make the logical relation well-defined in the case for the dynamic type. The possible worlds include the mapping of generated type names to their types and the mapping of type names to relations. We prove the Fundamental Property of this logical relation and that it is sound with respect to contextual equivalence. To demonstrate the utility of parametricity in the polymorphic blame calculus, we derive two free theorems.
# Available in: pdf, doi.
Session types are a rich type discipline, based on linear types, that lift the sort of safety claims that come with type systems to communications. However, web-based applications and microservices are often written in a mix of languages, with type disciplines in a spectrum between static and dynamic typing. Gradual session types address this mixed setting by providing a framework which grants seamless transition between statically typed handling of sessions and any required degree of dynamic typing. We propose GradualGV as an extension of the functional session type system GV with dynamic types and casts. We demonstrate type and communication safety as well as blame safety, thus extending previous results to functional languages with session-based communication. The interplay of linearity and dynamic types requires a novel approach to specifying the dynamics of the language.
# Available in: pdf, doi.
TypeScript participates in the recent trend among programming languages to support gradual typing. The DefinitelyTyped Repository for TypeScript supplies type definitions for over 2000 popular JavaScript libraries. However, there is no guarantee that implementations conform to their corresponding declarations. We present a practical evaluation of gradual typing for TypeScript. We have developed a tool for use with TypeScript, based on the polymorphic blame calculus, for monitoring JavaScript libraries and TypeScript clients against the TypeScript definition. We apply our tool, TypeScript TPD, to those libraries in the DefinitelyTyped Repository which had adequate test code to use. Of the 122 libraries we checked, 62 had cases where either the library or its tests failed to conform to the declaration. Gradual typing should satisfy non-interference. Monitoring a program should never change its behaviour, except to raise a type error should a value not conform to its declared type. However, our experience also suggests serious technical concerns with the use of the JavaScript proxy mechanism for enforcing contracts. Of the 122 libraries we checked, 22 had cases where the library or its tests violated non-interference.
# Available in: pdf, doi, artifact.
Channel- and actor-based programming languages are both used in practice, but the two are often confused. Languages such as Go provide anonymous processes which communicate using buffers or rendezvous points---known as channels---while languages such as Erlang provide addressable processes---known as actors---each with a single incoming message queue. The lack of a common representation makes it difficult to reason about translations that exist in the folklore. We define a calculus lambda-ch for typed asynchronous channels, and a calculus lambda-act for typed actors. We define translations from lambda-act into lambda-ch and lambda-ch into lambda-act and prove that both are type- and semantics-preserving. We show that our approach accounts for synchronisation and selective receive in actor systems and discuss future extensions to support guarded choice and behavioural types.
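The folklore translation of actors into channels can be suggested in a few lines of Haskell (an untyped sketch using Control.Concurrent, not the paper's typed calculi): an actor is a thread that owns a single mailbox channel.

```haskell
import Control.Concurrent

-- An actor is a thread owning a single incoming mailbox; since the mailbox
-- is just a channel, actors translate directly into the channel setting.
type Mailbox m = Chan m

spawnActor :: (Mailbox m -> IO ()) -> IO (Mailbox m)
spawnActor body = do
  mbox <- newChan          -- the actor's single message queue
  _ <- forkIO (body mbox)  -- the anonymous process behind the actor
  return mbox

send :: Mailbox m -> m -> IO ()
send = writeChan

receive :: Mailbox m -> IO m
receive = readChan
```

The paper's contribution is to type both directions of translation, prove them semantics-preserving, and account for the harder features such as synchronisation and selective receive.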
# Available in: pdf, doi.
Quantified class constraints have been proposed many years ago to raise the expressive power of type classes from Horn clauses to the universal fragment of Hereditary Harrop logic. Yet, while it has been much asked for over the years, the feature was never implemented or studied in depth. Instead, several workarounds have been proposed, all of which are ultimately stopgap measures.
This paper revisits the idea of quantified class constraints and elaborates it into a practical language design. We show the merit of quantified class constraints in terms of more expressive modeling and in terms of terminating type class resolution. In addition, we provide a declarative specification of the type system as well as a type inference algorithm that elaborates into System F. Moreover, we discuss termination conditions of our system and also provide a prototype implementation.
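GHC has since shipped the feature as the QuantifiedConstraints extension (from GHC 8.6). A standard motivating example, sketched here, is an instance for the fixed point of a functor, which needs Show (f a) for every showable a:

```haskell
{-# LANGUAGE QuantifiedConstraints #-}
{-# LANGUAGE UndecidableInstances  #-}

-- The fixed point of a functor: to show a value of type 'f (Mu f)' we need
-- 'Show (f a)' for every showable 'a', a constraint quantified over 'a'.
newtype Mu f = In (f (Mu f))

instance (forall a. Show a => Show (f a)) => Show (Mu f) where
  showsPrec d (In x) =
    showParen (d > 10) (showString "In " . showsPrec 11 x)
```

With the quantified constraint in scope, `show (In Nothing :: Mu Maybe)` typechecks and evaluates to `"In Nothing"`; without it, no finite set of Horn-clause instances suffices.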
# Available in: pdf, doi.
The data abstraction mechanism of Miranda may be adapted to a dynamically typed programming language by applying ideas from gradual typing.
# Available in: pdf, doi.
Wadler introduced Classical Processes (CP), a calculus based on a propositions-as-types correspondence between propositions of classical linear logic and session types. Carbone et al. introduced Multiparty Classical Processes (MCP), a calculus that generalises CP to multiparty session types, by replacing the duality of classical linear logic (relating two types) with a more general notion of coherence (relating an arbitrary number of types). This paper introduces variants of CP and MCP, plus a new intermediate calculus of Globally-governed Classical Processes (GCP). We show a tight relation between these three calculi, giving semantics-preserving translations from GCP to CP and from MCP to GCP. The translation from GCP to CP interprets a coherence proof as an arbiter process that mediates communications in a session, while MCP adds annotations that permit processes to communicate directly without centralised control.
We connect three ways to achieve relational parametricity: universal types, runtime type generation, and cryptographic sealing. We study a polymorphic blame calculus, λB, inspired by that of Ahmed, Findler, Siek, and Wadler (2011), that ties universal types to runtime type generation; and a cryptographic lambda calculus, λK, inspired by that of Pierce and Sumii (2000), that relies on cryptographic sealing. Our λB calculus avoids the ‘topsy-turvy’ aspects of Ahmed et al., who evaluate terms one would expect to be values, and leave as values terms one would expect to be evaluated. We present translations from λB to λK and back that we show to be simulations. We extract from λB the subset λG that corresponds to the polymorphic lambda calculus λF of Girard (1972) and Reynolds (1974); λG is also a subset of the system G studied by Neis, Dreyer, and Rossberg (2009). We present translations from λF to λG and back that we show to be fully abstract. Further, we shed light on the embedding given by Pierce and Sumii of λF into λK, describing how it is related to the composition of our translations from λF to λG and λB to λK, and how the conversions and casts of λB relate to the C and G components of their embedding.
Papers We Love, Skills Matter, London, 7 June 2016
Certain papers change your life. McCarthy's 'Recursive Functions of Symbolic Expressions and their Computation by Machine (Part I)' (1960) changed mine, and so did Landin's 'The Next 700 Programming Languages' (1966). And I remember the moment, halfway through my graduate career, when Guy Steele handed me Reynolds's 'Definitional Interpreters for Higher-Order Programming Languages' (1972).
It is now common to explicate the structure of a programming language by presenting an interpreter for that language. If the language interpreted is the same as the language doing the interpreting, the interpreter is called meta-circular.
Interpreters may be written at differing levels of detail, to explicate different implementation strategies. For instance, the interpreter may be written in a continuation-passing style; or some of the higher-order functions may be represented explicitly using data-structures, via defunctionalisation.
More elaborate interpreters may be derived from simpler versions, thus providing a methodology for discovering an implementation strategy and showing it correct. Each of these techniques has become a mainstay of the study of programming languages, and all of them were introduced in this single paper by Reynolds.
We describe a new approach to domain specific languages (DSLs), called Quoted DSLs (QDSLs), that resurrects two old ideas: quotation, from McCarthy's Lisp of 1960, and the subformula property, from Gentzen's natural deduction of 1935. Quoted terms allow the DSL to share the syntax and type system of the host language. Normalising quoted terms ensures the subformula property, which guarantees that one can use higher-order types in the source while guaranteeing first-order types in the target, and enables using types to guide fusion. We test our ideas by re-implementing Feldspar, which was originally implemented as an Embedded DSL (EDSL), as a QDSL; and we compare the QDSL and EDSL variants.
The principle of Propositions as Types links logic to computation. At first sight it appears to be a simple coincidence---almost a pun---but it turns out to be remarkably robust, inspiring the design of theorem provers and programming languages, and continuing to influence the forefronts of computing. Propositions as Types has many names and many origins, and is a notion with depth, breadth, and mystery.
C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop three calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are: λB, based on the blame calculus of Wadler and Findler (2009); λC, inspired by the coercion calculus of Henglein (1994); and λS, inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006) and the threesome calculus of Siek and Wadler (2010). While λB is little changed from previous work, λC and λS are new. Together, λB, λC, and λS provide a coherent foundation for design, implementation, and optimisation of gradual types.
We define translations from λB to λC and from λC to λS. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for: that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: λC has a particularly simple definition, and the subtle definition of blame safety for λB is justified by the simple definition of blame safety for λC. Our calculus λS is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from λC to λS to validate the challenging part of full abstraction between λB and λC; and, second, using full abstraction from λB to λS to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work.
Contracts, gradual typing, and hybrid typing all permit less-precisely typed and more-precisely typed code to interact. Blame calculus encompasses these, and guarantees blame safety: blame for type errors always lies with less-precisely typed code. This paper serves as a complement to the literature on blame calculus: it elaborates on motivation, comments on the reception of the work, critiques some work for not properly attending to blame, and looks forward to applications. No knowledge of contracts, gradual typing, hybrid typing, or blame calculus is assumed.
Generic programming (GP) is an increasingly important trend in programming languages. Well-known GP mechanisms, such as type classes and the C++0x concepts proposal, usually combine two features: 1) a special type of interface; and 2) implicit instantiation of implementations of those interfaces.
Scala implicits are a GP language mechanism, inspired by type classes, that break with the tradition of coupling implicit instantiation with a special type of interface. Instead, implicits provide only implicit instantiation, which is generalized to work for any types. Scala implicits turn out to be quite powerful and useful to address many limitations that show up in other GP mechanisms.
This paper synthesizes the key ideas of implicits formally in a minimal and general core calculus called the implicit calculus (λ?), and it shows how to build source languages supporting implicit instantiation on top of it. A novelty of the calculus is its support for partial resolution and higher-order rules (a feature that has been proposed before, but was never formalized or implemented). Ultimately, the implicit calculus provides a formal model of implicits, which can be used by language designers to study and inform implementations of similar mechanisms in their own languages.
Continuing a line of work by Abramsky (1994), by Bellin and Scott (1994), and by Caires and Pfenning (2010), among others, this paper presents CP, a calculus in which propositions of classical linear logic correspond to session types. Continuing a line of work by Honda (1993), by Honda, Kubo, and Vasconcelos (1998), and by Gay and Vasconcelos (2010), among others, this paper presents GV, a linear functional language with session types, and presents a translation from GV into CP. The translation formalises for the first time a connection between a standard presentation of session types and linear logic, and shows how a modification to the standard presentation yields a language free from deadlock, where deadlock freedom follows from the correspondence to linear logic.
The principle of Propositions as Types links logic to computation. At first sight it appears to be a simple coincidence---almost a pun---but it turns out to be remarkably robust, inspiring the design of theorem provers and programming languages, and continuing to influence the forefronts of computing. Propositions as Types has many names and many origins, and is a notion with depth, breadth, and mystery.
We systematically present four calculi for gradual typing: the blame calculus of Wadler and Findler (2009); a novel calculus that pinpoints blame precisely; the coercion calculus of Henglein (1994); and the threesome calculus of Siek and Wadler (2010). Threesomes are given a syntax that directly exposes their origin as coercions in normal form, a more transparent presentation than that found in Siek and Wadler (2010) or Garcia (2013).
Three two-hour talks cover a range of topics:
Language-integrated query is receiving renewed attention, in part because of its support through Microsoft's LINQ framework. We present a theory of language-integrated query based on quotation and normalisation of quoted terms. Our technique supports abstraction over values and predicates, composition of queries, dynamic generation of queries, and queries with nested intermediate data. Higher-order features prove useful even for constructing first-order queries. We prove that normalisation always succeeds in translating any query of flat relation type to SQL. We present experimental results confirming our technique works, even in situations where Microsoft's LINQ framework either fails to produce an SQL query or, in one case, produces an avalanche of SQL queries.
Earlier versions of this paper were named "The essence of language-integrated query"
Five talks covering a range of topics:
Advice from Hamming, Strunk and White, Knuth, and others on how to best conduct and communicate your research.
Continuing a line of work by Abramsky (1994), by Bellin and Scott (1994), and by Caires and Pfenning (2010), among others, this paper presents CP, a calculus in which propositions of classical linear logic correspond to session types. Continuing a line of work by Honda (1993), by Honda, Kubo, and Vasconcelos (1998), and by Gay and Vasconcelos (2010), among others, this paper presents GV, a linear functional language with session types, and presents a translation from GV into CP. The translation formalises for the first time a connection between a standard presentation of session types and linear logic, and shows how a modification to the standard presentation yields a language free from deadlock, where deadlock freedom follows from the correspondence to linear logic.
The foundations of computing lay in a coincidence: Church's lambda calculus (1933), Herbrand and Gödel's recursive functions (1934), and Turing's machines (1935) all define the same model of computation. Another coincidence: Gentzen's intuitionistic natural deduction (1935) and Church's simply-typed lambda calculus (1940) define isomorphic systems. We review the history and significance of these coincidences, with an eye to Turing's role.
Several programming languages are beginning to integrate static and dynamic typing, including Racket (formerly PLT Scheme), Perl 6, and C# 4.0, and the research languages Sage (Gronski, Knowles, Tomb, Freund, and Flanagan, 2006) and Thorn (Wrigstad, Eugster, Field, Nystrom, and Vitek, 2009). However, an important open question remains, which is how to add parametric polymorphism to languages that combine static and dynamic typing. We present a system that permits a value of dynamic type to be cast to a polymorphic type and vice versa, with relational parametricity enforced by a kind of dynamic sealing along the lines proposed by Matthews and Ahmed (2008) and Neis, Dreyer, and Rossberg (2009). Our system includes a notion of blame, which allows us to show that when casting between a more-precise type and a less-precise type, any failures are due to the less-precisely-typed portion of the program. We also show that a cast from a subtype to its supertype cannot fail.
We introduce the arrow calculus, a metalanguage for manipulating Hughes's arrows with close relations both to Moggi's metalanguage for monads and to Paterson's arrow notation. Arrows are classically defined by extending lambda calculus with three constructs satisfying nine (somewhat idiosyncratic) laws; in contrast, the arrow calculus adds four constructs satisfying five laws (which fit two well-known patterns). The five laws were previously known to be sound; we show that they are also complete, and hence that the five laws may replace the nine.
How to integrate static and dynamic types? Recent work focuses on casts to mediate between the two. However, adding casts may degrade tail calls into non-tail calls, increasing space consumption from constant to linear in the depth of calls.
We present a new solution to this old problem, based on the notion of a threesome. A cast is specified by a source and a target type---a twosome. Any twosome factors into a downcast from the source to an intermediate type, followed by an upcast from the intermediate to the target---a threesome. Any chain of threesomes collapses to a single threesome, calculated by taking the greatest lower bound of the intermediate types. We augment this solution with blame labels to map any failure of a threesome back to the offending twosome in the source program.
Herman, Tomb, and Flanagan (2007) solve the space problem by representing casts with the coercion calculus of Henglein (1994). While they provide a theoretical limit on the space overhead, there remains the practical question of how best to implement coercion reduction. The threesomes presented in this paper provide a streamlined data structure and algorithm for representing and normalizing coercions. Furthermore, threesomes provide a type-based explanation of coercion reduction.
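The data structure is simple enough to sketch in Haskell (an illustration of the idea only: blame labels, which the paper threads through the middle type, are omitted, and Nothing stands for the failed middle type):

```haskell
-- Types of a small gradual language, with Dyn the dynamic type.
data Ty = Dyn | TInt | TBool | Fun Ty Ty
  deriving (Eq, Show)

-- Greatest lower bound with respect to type precision, where Dyn is the
-- least precise type. Both sides of Fun are covariant in precision.
meet :: Ty -> Ty -> Maybe Ty
meet Dyn t   = Just t
meet t   Dyn = Just t
meet TInt  TInt  = Just TInt
meet TBool TBool = Just TBool
meet (Fun s1 t1) (Fun s2 t2) = Fun <$> meet s1 s2 <*> meet t1 t2
meet _ _ = Nothing

-- A threesome: source type, middle type (Nothing = bound to fail), target.
data Threesome = Threesome Ty (Maybe Ty) Ty
  deriving Show

-- A chain of two threesomes collapses to one by meeting the middles.
compose :: Threesome -> Threesome -> Threesome
compose (Threesome s m1 _) (Threesome _ m2 t) =
  Threesome s (do a <- m1; b <- m2; meet a b) t
```

Because any chain of pending casts collapses via `compose`, the space they occupy stays bounded no matter how many accumulate, which is the heart of the space-efficiency result.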
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. In this paper we give a monadic definition of constraint programming in which the solver is defined as a monad threaded through the monadic search tree. We are then able to define search and search strategies as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.
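A minimal sketch of the central data structure (the paper itself works in Haskell; the real library threads a constraint solver through the tree and abstracts engines into composable transformers):

```haskell
-- The disjunctive search tree is a monad, and a search engine is a fold.
data Tree a = Return a | Fail | Try (Tree a) (Tree a)

instance Functor Tree where
  fmap f (Return a) = Return (f a)
  fmap _ Fail       = Fail
  fmap f (Try l r)  = Try (fmap f l) (fmap f r)

instance Applicative Tree where
  pure      = Return
  tf <*> tx = tf >>= \f -> fmap f tx

instance Monad Tree where
  Return a >>= k = k a
  Fail     >>= _ = Fail
  Try l r  >>= k = Try (l >>= k) (r >>= k)

-- A depth-first search engine; other strategies are other folds.
dfs :: Tree a -> [a]
dfs (Return a) = [a]
dfs Fail       = []
dfs (Try l r)  = dfs l ++ dfs r
```

Writing breadth-first search or bounded search as alternative folds over the same `Tree`, and composing transformers that reshape the tree, is what makes search strategies first-class.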
Several recent language designs have offered a unified language for programming a distributed system, with explicit notation of locations; we call these "location-aware" languages. These languages provide constructs allowing the programmer to control the location (the choice of host, for example) where a piece of code should run, which can be useful for security or performance reasons. On the other hand, a central mantra of WWW system engineering prescribes that web servers should be "stateless": that no "session state" should be maintained on behalf of individual clients—that is, no state that pertains to the particular point of the interaction at which a client program resides. Many implementations of location-aware languages are not at home on the web: they hold some kind of client-specific state on the server. We show how to implement a symmetrical location-aware language on top of a stateless server.
We present a language that integrates statically and dynamically typed components, similar to the gradual types of Siek and Taha (2006), and extend it to incorporate parametric polymorphism. Our system permits a dynamically typed value to be cast to a polymorphic type, with the type enforced by dynamic sealing along the lines proposed by Pierce and Sumii (2000), Matthews and Ahmed (2008), and Neis, Dreyer, and Rossberg (2009), in a way that ensures all terms satisfy relational parametricity. Our system includes a notion of blame, which allows us to show that when more-typed and less-typed portions of a program interact, any type failures are due to the less-typed portion.
The blame calculus of Wadler and Findler gives a high-level semantics to casts in higher-order languages. The coercion calculus of Henglein, on the other hand, provides an instruction set for casts whose normal forms ensure space efficiency. In this paper we address two questions: 1) can space efficiency be obtained in a high-level semantics? and 2) can we precisely characterize the relationship between the high and low-level semantics of casts? Towards answering both of these questions, we design a cast calculus that summarizes a sequence of casts as a threesome cast that contains a source type, a target type, and a third middle type that is the greatest lower bound of all the types in the sequence. We show that the threesome calculus is equivalent to the blame calculus and to one of the coercion-based, blame-tracking calculi of Siek, Garcia, and Taha. We also show that the threesome calculus is space efficient and obtain a tighter bound than that of Herman, Tomb, and Flanagan.
We introduce the blame calculus, which adds the notion of blame from Findler and Felleisen's contracts to a system similar to Siek and Taha's gradual types and Flanagan's hybrid types. We characterise where positive and negative blame can arise by decomposing the usual notion of subtype into positive and negative subtyping, and show that these recombine to yield naive subtyping. Naive subtyping has previously appeared in type systems that are unsound, but we believe this is the first time naive subtyping has played a role in establishing type soundness.
Abstraction is the cornerstone of high-level programming; HTML forms are the principal medium of web interaction. However, most web programming environments do not support abstraction of form components, leading to a lack of compositionality. Using a semantics based on idioms, we show how to support compositional form construction and give a convenient syntax.
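To suggest how the idiom-based semantics supports composition (a hypothetical Haskell rendering, not the paper's formulation): a formlet pairs rendered HTML with a collector for the submitted values, threading a counter so that composed forms generate fresh field names.

```haskell
type Env = [(String, String)]  -- submitted form data: field name and value

-- A formlet, given a fresh-name counter, yields rendered HTML, a collector
-- that extracts a value from the submitted environment, and the next counter.
newtype Formlet a = Formlet (Int -> (String, Env -> a, Int))

instance Functor Formlet where
  fmap g (Formlet f) = Formlet $ \n ->
    let (html, collect, n') = f n in (html, g . collect, n')

instance Applicative Formlet where
  pure x = Formlet $ \n -> ("", const x, n)
  Formlet ff <*> Formlet fx = Formlet $ \n ->
    let (h1, c1, n1) = ff n
        (h2, c2, n2) = fx n1
    in (h1 ++ h2, \env -> c1 env (c2 env), n2)

-- A text input whose field name is generated, so composition never clashes.
input :: Formlet String
input = Formlet $ \n ->
  let name = "f" ++ show n
  in ( "<input name='" ++ name ++ "'/>"
     , \env -> maybe "" id (lookup name env)
     , n + 1 )
```

Composing `(,) <$> input <*> input` renders two inputs named f0 and f1 and collects both submitted values, with no name clash possible.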
We revisit the connection between three notions of computation: Moggi's monads, Hughes's arrows and McBride and Paterson's idioms (also called applicative functors). We show that idioms are equivalent to arrows that satisfy the type isomorphism A ~> B = 1 ~> (A -> B) and that monads are equivalent to arrows that satisfy the type isomorphism A ~> B = A -> (1 ~> B). Further, idioms embed into arrows and arrows embed into monads.
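One direction of the correspondence is short enough to state in Haskell: every idiom f determines an arrow whose computations from A to B are exactly values of type f (A -> B), that is, 1 ~> (A -> B). A sketch (the name Static is ours):

```haskell
import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

-- An arrow built from an applicative functor: a computation from a to b
-- is an effectful function, a value of type f (a -> b).
newtype Static f a b = Static (f (a -> b))

instance Applicative f => Category (Static f) where
  id = Static (pure id)
  Static g . Static f = Static ((.) <$> g <*> f)

instance Applicative f => Arrow (Static f) where
  arr f            = Static (pure f)
  first (Static f) = Static (first <$> f)  -- lift 'first' on plain functions
```

A converse construction, turning any arrow into an idiom by fixing its input type, exists in base as Control.Applicative.WrappedArrow; the monad case is analogous.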