Continuing a line of work by Abramsky (1994), by Bellin and Scott (1994), and by Caires and Pfenning (2010), among others, this paper presents CP, a calculus in which propositions of classical linear logic correspond to session types. Continuing a line of work by Honda (1993), by Honda, Kubo, and Vasconcelos (1998), and by Gay and Vasconcelos (2010), among others, this paper presents GV, a linear functional language with session types, and presents a translation from GV into CP. The translation formalises for the first time a connection between a standard presentation of session types and linear logic, and shows how a modification to the standard presentation yields a language free from deadlock, where deadlock freedom follows from the correspondence to linear logic.
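The correspondence this line of work builds on can be sketched as a reading of the classical linear connectives as session behaviours (an informal gloss, not the paper's formal definition; $\parr$ is from the cmll package):

```latex
\begin{align*}
A \otimes B &\quad\text{output an } A \text{, then behave as } B\\
A \parr B   &\quad\text{input an } A \text{, then behave as } B\\
{!}A        &\quad\text{server offering arbitrarily many sessions of type } A\\
{?}A        &\quad\text{client making use of such a server}
\end{align*}
```

Under this reading, duality of linear propositions becomes the familiar duality of session types: the output type $A \otimes B$ is dual to the input type $A^{\perp} \parr B^{\perp}$.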

This paper introduces a new way of attaching proof terms to proof trees for classical linear logic, which bears a close resemblance to the way that pattern matching is used in programming languages. It equates the same proofs that are equated by proof nets, in the formulation of proof nets introduced by Dominic Hughes and Rob van Glabbeek; and goes beyond that formulation in handling exponentials and units. It provides a symmetric treatment of all the connectives, and may provide programmers with improved insight into connectives such as "par" and "why not" that are difficult to treat in programming languages based on an intuitionistic formulation of linear logic.

Two different operational interpretations of intuitionistic linear logic have been proposed in the literature. The simplest interpretation recomputes non-linear values every time they are required. It has good memory-management properties, but is often dismissed as being too inefficient. Alternatively, one can memoize the results of evaluating non-linear values. This avoids any recomputation, but has weaker memory-management properties. Using a novel combination of type-theoretic and operational techniques we give a concise formal comparison of the two interpretations. Moreover, we show that there is a subset of linear logic where the two operational interpretations coincide. In this subset, which is sufficiently expressive to encode call-by-value lambda-calculus, we can have the best of both worlds: a simple and efficient implementation, and good memory-management properties.

Benton's adjoint models of Girard's linear logic also provide models of Moggi's computational metalanguage. We consider three different translations of lambda calculus into other calculi: direct, call-by-name, and call-by-value. We show that the three translations (mainly due to Moggi) into the computational metalanguage correspond to three translations (mainly due to Girard) into intuitionistic linear logic. We also extend these results to languages with recursion.
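At the level of types the translations take a suggestive shape; the following is a sketch in Moggi's notation, with $T$ the computation type constructor (the superscripts are our notation, and the call-by-name form is one common presentation, chosen to mirror Girard's $\mathord{!}A \multimap B$):

```latex
\begin{align*}
\text{call-by-name:}  &\quad (A \to B)^{n} \;=\; T\,A^{n} \to B^{n}\\
\text{call-by-value:} &\quad (A \to B)^{v} \;=\; A^{v} \to T\,B^{v}
\end{align*}
```

Call-by-name passes suspended computations as arguments; call-by-value passes values and defers effects to the result. The precise match with the linear translations is the subject of the paper.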

A number of useful optimisations are enabled if we can determine when
a value is accessed at most once. We extend the Hindley-Milner type
system with *uses*, yielding a type-inference based program
analysis which determines when values are accessed at most once. Our
analysis can handle higher-order functions and data structures, and
admits principal types for terms.

Unlike previous analyses, we prove our analysis sound with respect to
call-by-need reduction. Call-by-name reduction does *not*
provide an accurate model of how often a value is used during lazy
evaluation, since it duplicates work which would actually be shared in
a real implementation.

Our type system can easily be modified to analyse usage in a call-by-value language.

Girard described two translations of intuitionistic logic into linear
logic, one where `A -> B` maps to `(!A) -o B`, and
another where it maps to `!(A -o B)`. We detail the action of
these translations on terms, and show that the first corresponds to a
call-by-name calculus, while the second corresponds to call-by-value.
We further show that if the target of the translation is taken to be
an affine calculus, where `!` controls contraction but weakening is
allowed everywhere, then the second translation corresponds to a
call-by-need calculus, as recently defined by Ariola, Felleisen,
Maraist, Odersky and Wadler. Thus the different calling mechanisms
can be explained in terms of logical translations, bringing them into
the scope of the Curry-Howard isomorphism.
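Written out, the two translations act homomorphically on base types and differ only at function types (the superscripts marking the two translations are our notation):

```latex
\begin{align*}
\text{call-by-name:}  &\quad (A \to B)^{\circ} \;=\; \mathord{!}A^{\circ} \multimap B^{\circ}\\
\text{call-by-value:} &\quad (A \to B)^{\ast}  \;=\; \mathord{!}\bigl(A^{\ast} \multimap B^{\ast}\bigr)
\end{align*}
```

In the first, only the argument is made duplicable, so it is re-evaluated at each use; in the second, the whole function is, so its argument is evaluated once before the call.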

This tutorial paper provides an introduction to intuitionistic logic and linear logic, and shows how they correspond to type systems for functional languages via the notion of "Propositions as Types". The presentation of linear logic is simplified by basing it on the Logic of Unity. An application to the array update problem is briefly discussed.

There is a standard syntax for Girard's linear logic, due to Abramsky, and a standard semantics, due to Seely. Alas, the former is incoherent with the latter: different derivations of the same syntax may be assigned different semantics. This paper reviews the standard syntax and semantics, and discusses the problem that arises and a standard approach to its solution. A new solution is proposed, based on ideas taken from Girard's Logic of Unity. The new syntax is based on pattern matching, allowing for concise expression of programs.

Surprisingly, there is not a good fit between a syntax for linear logic in the style of Abramsky and a semantics in the style of Seely. Notably, the Substitution Lemma is valid if and only if !A and !!A are isomorphic in a canonical way. An alternative syntax is proposed that has striking parallels to Moggi's language for monads. In the old syntax, some terms look like the identity that should not, and vice versa; the new syntax eliminates this awkwardness.

Past attempts to apply Girard's linear logic have either had a clear
relation to the theory (Lafont, Holmström, Abramsky) or a clear
practical value (Guzmán and Hudak, Wadler), but not both. This
paper defines a sequence of languages based on linear logic that span
the gap between theory and practice. Type reconstruction in a linear
type system can derive information about sharing. An approach to
linear type reconstruction based on *use types* is presented.
Applications to the *array update* problem are considered.

The linear logic of J.-Y. Girard suggests a new type system for functional languages, one which supports operations that "change the world". Values belonging to a linear type must be used exactly once: like the world, they cannot be duplicated or destroyed. Such values require no reference counting or garbage collection, and safely admit destructive array update. Linear types extend Schmidt's notion of single threading; provide an alternative to Hudak and Bloss' update analysis; and offer a practical complement to Lafont and Holmström's elegant linear languages.

Philip Wadler,