In Standard ML programs, types are almost always inferred; only in a few cases must the programmer supply additional information before the system can compute the type of an expression, or of a value in a declaration. One such case is underdetermined record types, as we saw with the version of the initials function [*]. Another complication is overloading.
Overloading occurs when an identifier has more than one definition and these definitions have different types. For example, ``+'' and ``-'' are overloaded since they are defined for the numeric types: int, word and real. The ``~'' function is overloaded for int and real. The relational operators are overloaded for the numeric types and the text types, char and string. Overloading is not polymorphism: there is no way for the Standard ML programmer to define overloaded operators. To see this, consider the following simple square function.
fun square x = x * x;
Would it be possible to assign the type 'a -> 'a to this function? No: then square could be applied to strings, or even to functions, for which no notion of multiplication is defined. So Standard ML must choose one of the following possible types for the function.
square : int -> int
square : word -> word
square : real -> real
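In the absence of any annotation, an SML system resolves the overloading by default to the int instance; a type annotation selects one of the other instances. A short sketch (the name squareReal is our own, chosen for illustration):

```sml
(* Without an annotation, default overloading resolves * at int. *)
fun square x = x * x;                (* square : int -> int *)

(* An annotation on the parameter selects the real instance instead;
   annotating the result or the whole function works equally well. *)
fun squareReal (x : real) = x * x;   (* squareReal : real -> real *)

val nine    = square 3;              (* 9 *)
val quarter = squareReal 0.5;        (* 0.25 *)
```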
The function ordered shown below is assigned the type (int * int) -> bool; here there were five possible types, since the relational operators are defined for int, word, real, char and string.
fun ordered (x, y) = x < y;
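Again, default overloading picks the int instance, and an annotation selects any of the other four. A brief sketch (orderedStr is an illustrative name of our own):

```sml
(* Default overloading resolves < at type (int * int) -> bool. *)
fun ordered (x, y) = x < y;

(* Annotating one component is enough to select the string instance. *)
fun orderedStr (x : string, y) = x < y;  (* (string * string) -> bool *)

val t = ordered (1, 2);          (* true  *)
val f = orderedStr ("b", "a");   (* false *)
```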
Default overloading is not restricted to functions; it also applies to constants of non-functional type. Thus, in an implementation of the language which provides arbitrary-precision integers, we might write (100 : BigInt.int) in order to obtain the right type for the constant.
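For instance, in implementations that provide the Standard ML Basis Library's IntInf structure (such as SML/NJ), where IntInf.int plays the role the text's illustrative BigInt.int plays, a sketch of annotating a constant looks like this:

```sml
(* An unannotated integer constant defaults to int; a type annotation
   gives it another numeric type.  IntInf.int is the Basis Library's
   arbitrary-precision integer type. *)
val small = 100;                 (* small : int *)
val big   = 100 : IntInf.int;    (* big : IntInf.int *)

(* At type int this power could overflow on many implementations;
   at IntInf.int it cannot. *)
val huge = IntInf.pow (big, 10);
```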
Our conclusion, then, is that overloading has second-class status in Standard ML. Other programming languages, notably Ada [Bar96], provide widespread support for overloading but do not provide type inference. The Haskell language provides both.