For much of the course I will be following Harper's book. I indicate below which sections/pages each lecture corresponds to. Page numbers refer to the version for printing; the version for reading online has different pagination.
I also indicate which sections/pages of Paulson's book each lecture corresponds to. This correspondence is looser, and you may find it difficult to read the material in this exact order because his examples sometimes depend on the intervening material. (But almost always the intervening material will be covered later, so it will do you no harm to read it as well!)
[11 Jan 2011] I introduced the course, explaining the meaning of "Functional Programming" and "Specification" and briefly outlining the features of Standard ML (the vehicle used in this course for teaching Functional Programming) and Extended ML (likewise, for Specification). I explained the main features of the course.
I introduced the ML computational model, explaining values,
types and effects, and the way that expressions are
typechecked and evaluated.
Reading:
Paulson
chapter 1
plus page 17;
or
Harper
page 3
plus
sections 2.1-2.2.
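For concreteness, a small illustration of values, types and evaluation (the identifiers here are my own, not from the lecture):

```sml
(* Every expression has a type, checked before evaluation. *)
val n : int = 3 + 4 * 2        (* evaluates to 11 *)
val s : string = "ML" ^ "!"    (* string concatenation *)
val b : bool = n > 10          (* a comparison yields a bool *)
(* A type error such as  3 + "four"  is rejected at
   typechecking time, before any evaluation happens. *)
```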
[14 Jan 2011]
I introduced declarations, with value bindings and type bindings,
sequential and parallel composition of declarations, let/local
declarations, and scope. I introduced functions, covering
function expressions, function types and function application,
and call-by-value evaluation.
Reading:
Paulson
sections 2.1-2.6 until the bottom of page 27,
pages 38 (middle) - 40 (middle),
and
pages 53 (middle) - 57 (middle);
or
Harper
sections 2.3-5.1.
I issued Practical 1.
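A small example combining these features (a sketch with my own identifiers): a type binding, a value binding, a function using let for local declarations, and a call-by-value application.

```sml
(* Value and type bindings, with let for local scope. *)
type point = real * real

val origin : point = (0.0, 0.0)

fun dist ((x1, y1) : point, (x2, y2) : point) : real =
    let
        val dx = x2 - x1
        val dy = y2 - y1
    in
        Math.sqrt (dx * dx + dy * dy)
    end

(* Application is call-by-value: the argument (3.0, 4.0)
   is evaluated before the body of dist. *)
val d = dist (origin, (3.0, 4.0))   (* 5.0 *)
```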
[18 Jan 2011]
I explained the connection between local variable declarations and
parameter passing in function application, and how function
definitions influence scope.
I introduced tuples and their types (=products), and pattern matching
to access components of data structures.
I covered
the use of products and
records to give functions with multiple arguments and/or multiple results.
I then introduced pattern matching for case analysis.
Pattern matching with case analysis allows "clausal" definitions of
functions via a sequence of pattern/result rules. These can also be
used in case expressions, which are an abbreviation for an
application of a clausally-defined function.
Reading:
Paulson
pages 27 (bottom) - 32 (bottom),
130 (bottom) - 132 (middle),
and 133;
or
Harper
sections 5.2-6.2.
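To illustrate (identifiers my own): a case expression over a tuple, and a record used for named arguments and results.

```sml
(* Case analysis by pattern matching on the result of Int.compare. *)
fun ordering (x : int, y : int) =
    case Int.compare (x, y) of
        LESS    => "smaller"
      | EQUAL   => "equal"
      | GREATER => "larger"

(* A record gives a function named multiple arguments and results. *)
fun divmod {num, den} =
    {quot = num div den, rem = num mod den}

val r = divmod {num = 7, den = 2}   (* {quot = 3, rem = 1} *)
```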
[21 Jan 2011]
I explained that sequences of clauses are supposed to be exhaustive
and non-redundant, and that ML checks this for you.
I explained recursive function
definitions,
gave two recursive
definitions of factorial, one of which is in "tail recursive" form,
and showed how to define functions by mutual recursion.
I gave a brief introduction to polymorphic types.
I then introduced lists and a simple list function.
Reading:
Paulson
40 (middle) - 44 (middle),
plus Harper section 6.4
(since Paulson doesn't explain ML's warnings
for redundant and/or non-exhaustive sequences of patterns),
48
- 53 (middle),
57 (middle) - 58,
and chapter 3 but omitting sections
3.14-3.16 (there are a lot of examples in this chapter, and you don't
need to go through all of them!); or
Harper rest of chapter 6,
chapter 7 except for 7.3, and chapter 9 up to page 87 (middle).
(If you are following Harper's book, it would be well worth looking at some
of the material on lists in Paulson's book too.)
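The examples from this lecture, in outline (my own reconstruction): two factorials, one tail recursive; mutual recursion; and a polymorphic list function.

```sml
(* Plain recursion: the multiplication is still pending
   after the recursive call returns. *)
fun fact 0 = 1
  | fact n = n * fact (n - 1)

(* Tail-recursive form: the recursive call is the whole result,
   so an accumulator carries the partial product. *)
fun factIter (0, acc) = acc
  | factIter (n, acc) = factIter (n - 1, n * acc)

(* Mutual recursion via 'and'. *)
fun even 0 = true
  | even n = odd (n - 1)
and odd 0 = false
  | odd n = even (n - 1)

(* A polymorphic list function of type 'a list -> int. *)
fun len [] = 0
  | len (_ :: xs) = 1 + len xs
```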
[25 Jan 2011]
I showed a few more functions on lists and
began talking about user-defined datatypes, with examples of
non-recursive datatypes including
option and its uses,
and binary trees with integer labels as a recursive datatype.
Reading:
Paulson chapter 4 through page 128,
page 141 (middle) - 147,
and page 154 (middle) - 164 (middle),
ignoring mentions of signatures and structures;
or
Harper chapter 10 through section 10.3.
(If you are following Harper's book, it would be well worth looking at some
of the examples involving trees in Paulson's book too.)
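A sketch of the datatypes discussed (identifiers my own): option for possibly-missing results, and binary trees with integer labels.

```sml
(* option models a possibly-missing value. *)
fun safeDiv (_, 0) = NONE
  | safeDiv (x, y) = SOME (x div y)

(* A recursive datatype: binary trees with integer labels. *)
datatype tree = Leaf
              | Node of tree * int * tree

fun sum Leaf = 0
  | sum (Node (l, n, r)) = sum l + n + sum r

val t = Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf))
```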
[28 Jan 2011]
I showed a number of variants of trees and
functions on trees, represented as recursive datatypes. These
included: binary trees with integer labels and explicit
leaf nodes; polymorphic binary trees; trees with both binary and unary
nodes; trees with an arbitrary number of children at each node;
binary trees with labels on branches rather than nodes.
I showed how to define a datatype that is the disjoint union of
two types, useful e.g. for building lists and trees with labels of
different types.
I then explained how to use datatypes to represent the abstract syntax
of languages.
I closed with an introduction to higher-order functions.
Reading:
Paulson
pages 129 - 130 (bottom),
page 164 (middle) - 170,
and chapter 5 through page 173;
or
Harper sections 10.4,
10.5,
11.1 and 11.2.
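Two of these ideas in miniature (my own identifiers): a disjoint-union datatype, and abstract syntax as a recursive datatype with an evaluator.

```sml
(* Disjoint union of two types, useful for mixed-label structures. *)
datatype ('a, 'b) union = In1 of 'a | In2 of 'b

val mixed : (int, string) union list = [In1 3, In2 "three"]

(* Abstract syntax of a tiny expression language. *)
datatype expr = Num of int
              | Plus of expr * expr
              | Times of expr * expr

fun eval (Num n) = n
  | eval (Plus (e1, e2)) = eval e1 + eval e2
  | eval (Times (e1, e2)) = eval e1 * eval e2
```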
[1 Feb 2011]
I continued with higher-order functions, covering function composition
and iterated function composition, the
notion of a closure as a semantic and implementation concept
underlying function definitions with static scope, and the difference
between curried and uncurried functions, with functions that
convert between these forms.
(Supposedly, "currying" is also known as "Schönfinkelisation".)
I showed how to use higher-order functions to
distill common patterns of computation, covering map and
foldr/foldl.
Reading:
Paulson
pages 174-191 middle;
or
Harper sections 11.3-11.4.
(Again, Paulson gives many more examples than Harper does.
My foldr corresponds to Harper's reduce, but
his version is uncurried, with the arguments in a different order.)
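A compact summary of these functions (my own definitions; SML's library map/foldr/foldl behave as used here):

```sml
(* Composition as a higher-order function (SML also has infix o). *)
fun compose f g = fn x => f (g x)

(* Conversions between curried and uncurried forms. *)
fun curry f x y = f (x, y)
fun uncurry f (x, y) = f x y

val add3 = curry (op +) 3   (* partial application of a curried fn *)

(* map and foldr/foldl distill common patterns of list recursion. *)
val doubled = map (fn x => 2 * x) [1, 2, 3]   (* [2, 4, 6] *)
val total   = foldr (op +) 0 [1, 2, 3]        (* 6 *)
val diff    = foldl (op -) 0 [1, 2, 3]        (* 3 - (2 - (1 - 0)) = 2 *)
```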
[4 Feb 2011] I covered lazy and eager evaluation, explaining that ML uses eager evaluation but it is possible to program lazy evaluation. This is an example of functions as components of data structures. Reading: Lecture note 1, minus the last half page. (Harper chapter 15 is relevant and is worth reading, but it uses a non-standard feature of ML that is not available in Moscow ML. Harper chapter 31 is also somewhat relevant.)
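The technique in miniature (my own identifiers): a delayed tail is a function stored in the data structure, forced only on demand, which lets an eager language represent infinite streams.

```sml
(* Lazy evaluation programmed in eager ML: the tail of a stream
   is a "thunk", a function of type unit -> 'a stream. *)
datatype 'a stream = Cons of 'a * (unit -> 'a stream)

fun from n = Cons (n, fn () => from (n + 1))   (* an infinite stream *)

fun take (_, 0) = []
  | take (Cons (x, rest), n) = x :: take (rest (), n - 1)

val firstFive = take (from 1, 5)   (* [1, 2, 3, 4, 5] *)
```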
[8 Feb 2011]
I explained that in ML, not all types admit equality, and
for example function types do not.
I began the ML modules language with an overview of its main concepts
(signatures, structures, functors)
and how it relates to the ML core language.
I introduced signatures together with signature inheritance via
inclusion and specialization.
Reading:
Paulson sections 3.14-3.16,
pages 59-63,
and pages 148-154 middle;
or
the last half page of
Lecture note 1,
Harper page 153,
and chapter 18 through section 18.1.
I issued Practical 2.
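Both forms of signature inheritance, in a sketch (signature names are illustrative):

```sml
signature ORDERED =
sig
    type t
    val compare : t * t -> order
end

(* Inheritance by inclusion: extend ORDERED with a new component. *)
signature ORDERED_SHOW =
sig
    include ORDERED
    val toString : t -> string
end

(* Inheritance by specialization: fix the abstract type t. *)
signature INT_ORDERED = ORDERED where type t = int
```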
[22 Feb 2011]
I introduced structures together with qualified names
and explained signature matching.
Reading:
Paulson chapter 7 up to the bottom of page 265
and page 269 bottom - page 271 bottom;
or
Harper section 18.2,
chapter 19,
and chapter 20 up to the end of section 20.1
plus section 20.3.
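For concreteness, a structure matching a signature (my own example): matching checks that the structure supplies at least the specified components at the specified types, and components are reached by qualified names.

```sml
signature QUEUE =
sig
    type 'a queue
    val empty : 'a queue
    val enqueue : 'a * 'a queue -> 'a queue
    val dequeue : 'a queue -> ('a * 'a queue) option
end

(* Signature matching: Queue must provide everything QUEUE
   specifies; anything not mentioned in QUEUE is cut away. *)
structure Queue : QUEUE =
struct
    type 'a queue = 'a list
    val empty = []
    fun enqueue (x, q) = q @ [x]
    fun dequeue [] = NONE
      | dequeue (x :: q) = SOME (x, q)
end

(* Qualified names: Queue.enqueue, Queue.empty, Queue.dequeue. *)
val q = Queue.enqueue (2, Queue.enqueue (1, Queue.empty))
```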
[25 Feb 2011]
I explained signature
ascription. There are two forms, opaque and transparent signature
ascription; opaque signature ascription supports data abstraction.
I began on the use of substructures to form module hierarchies.
Reading:
Paulson bottom of page 265 - bottom of page 266
and page 283 middle - 285 top
plus Harper chapter 21
(since Paulson's coverage of substructures is too thin);
or
Harper chapter 20 from section 20.2
and chapter 21.
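The two forms of ascription in a sketch (my own example):

```sml
signature STACK =
sig
    type 'a stack
    val empty : 'a stack
    val push : 'a * 'a stack -> 'a stack
    val pop : 'a stack -> ('a * 'a stack) option
end

(* Opaque ascription (:>) hides the representation: outside,
   'a Stack.stack is NOT known to be 'a list, so clients cannot
   depend on it -- data abstraction. With transparent ascription
   (:), the equation stack = list would leak through. *)
structure Stack :> STACK =
struct
    type 'a stack = 'a list
    val empty = []
    fun push (x, s) = x :: s
    fun pop [] = NONE
      | pop (x :: s) = SOME (x, s)
end

val s = Stack.push (2, Stack.push (1, Stack.empty))
(* val bad = s @ []   -- rejected: s is not a list out here *)
```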
[1 Mar 2011]
I finished explaining substructures.
I explained the use of functors - functions taking
structures to structures - to create generic modules.
Reading:
Paulson pages 271 bottom - 283 middle
and pages 285 top - 290 middle
or
Harper chapter 23 up to the end of section 23.1.
I postponed the deadline for Practical 2 to 4pm on Tue 8 Mar.
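A generic module in miniature (my own example): a functor parameterised on an ordering, instantiated for integers.

```sml
signature ORD =
sig
    type t
    val lt : t * t -> bool
end

(* A functor maps structures to structures. *)
functor SortFn (O : ORD) =
struct
    (* insertion sort over the parameter's type t *)
    fun insert (x, []) = [x]
      | insert (x, y :: ys) =
            if O.lt (x, y) then x :: y :: ys else y :: insert (x, ys)
    fun sort xs = foldr insert [] xs
end

structure IntSort = SortFn (struct
                                type t = int
                                fun lt (x, y) = x < y
                            end)

val sorted = IntSort.sort [3, 1, 2]   (* [1, 2, 3] *)
```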
[4 Mar 2011]
I presented an example aimed at showing how sharing
specifications can be used to describe the required relationships
between parts of a large assembly of structures.
I explained how, in functors, sharing specifications are again
needed, to specify the required relationships between functor
parameters.
Reading:
Paulson sections 7.12
and 7.13;
or
Harper chapter 22
and
section 23.2.
(Section 23.3
is interesting but may be skipped.)
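A sketch of a sharing specification between functor parameters (the signatures and the compiler-flavoured example are my own invention, chosen only to show the mechanism):

```sml
signature LEXER =
sig
    type token
    val lex : string -> token list
end

signature PARSER =
sig
    type token
    type tree
    val parse : token list -> tree
end

(* The sharing specification requires both parameters to be built
   over the same token type, so that L.lex output feeds P.parse. *)
functor CompilerFn (structure L : LEXER
                    structure P : PARSER
                    sharing type L.token = P.token) =
struct
    fun compile s = P.parse (L.lex s)
end
```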
[8 Mar 2011]
I began discussing
specification and proof with a quick introduction to Extended ML
and the motivation for writing formal specifications of programs.
I then introduced the use of logical axioms for specifying
ML functions by means of a series of simple examples.
I explained how
it is sometimes desirable to avoid spelling out details
unnecessarily by writing "loose" specifications.
Reading:
Lecture Note 2.
I issued Practical 3.
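A sketch of the style (this is Extended ML specification notation, not compilable Standard ML, and the names and exact concrete syntax here are illustrative): an axiom in a signature constrains a function's behaviour without giving its code. The specification below is deliberately loose in that it says nothing at all about negative arguments.

```
signature SQRT =
sig
    val intsqrt : int -> int
    axiom forall n => n >= 0 implies
        intsqrt n * intsqrt n <= n andalso
        n < (intsqrt n + 1) * (intsqrt n + 1)
end
```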
[11 Mar 2011]
I reviewed proof by induction for natural numbers and showed how to
use structural
induction on lists for proving properties of ML functions. I showed how to
read off a structural induction principle for most ML datatypes.
Reading:
Lecture Note 3.
Harper chapter 26 and Paulson pages 229 middle - 233
both cover essentially the same ground as pages 2-3 of
Lecture Note 3.
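As a sketch of reading off an induction principle (my own example): for the built-in list datatype, proving P(nil) and showing that P(xs) implies P(x :: xs) establishes P for all lists. A standard property provable this way is that len distributes over append; the code below merely tests one instance of it.

```sml
(* For  datatype 'a list = nil | :: of 'a * 'a list  the structural
   induction principle read off from the constructors is:
     P(nil)  and  (forall x, xs. P(xs) implies P(x :: xs))
   together give  forall xs. P(xs).
   Example property: len (append (xs, ys)) = len xs + len ys,
   proved by induction on xs; here we only test an instance. *)
fun append ([], ys) = ys
  | append (x :: xs, ys) = x :: append (xs, ys)

fun len [] = 0
  | len (_ :: xs) = 1 + len xs

val holds = len (append ([1, 2], [3])) = len [1, 2] + len [3]
```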
[15 Mar 2011]
I explained how to specify structures in Extended ML via signatures
containing axioms, and
how to define partial implementations via structures containing axioms
and question marks.
I then explained how to prove that a structure satisfies a signature
containing axioms,
including the case of substructures and auxiliary functions in
signatures.
I finished the material on specification and proof by explaining how
to specify and prove correctness of functors.
Reading: Lecture note 4 and
Lecture note 5.
[18 Mar 2011]
The final lecture came back to the ML core language to explain
effects, covering exceptions and references.
I didn't have time to cover input/output - see the reading for an
explanation of the relevant library module.
Reading:
Paulson pages 134-141 middle,
and chapter 8 through page 335 (top)
and from page 340;
or
Harper chapters 12,
13,
and 14.
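Both effects in miniature (my own identifiers):

```sml
(* Exceptions: an effect that aborts evaluation until handled. *)
exception Negative

fun checkedSqrt x =
    if x < 0.0 then raise Negative else Math.sqrt x

val safe = checkedSqrt ~4.0 handle Negative => 0.0

(* References: mutable cells, read with ! and updated with := . *)
val counter = ref 0
fun tick () = (counter := !counter + 1; !counter)

val first  = tick ()   (* 1 *)
val second = tick ()   (* 2 *)
```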