Standard ML is a strongly and statically typed programming language. However, unlike many other strongly typed languages, the types of literals, values, expressions and functions in a program are calculated by the Standard ML system when the program is compiled. This calculation of types is called type inference. Type inference helps program texts to be both lucid and succinct, but it achieves much more than that: it also serves as a debugging aid which can assist the programmer in finding errors before the program has ever been executed.
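As a small illustration (the declarations here are my own, not drawn from the text), none of the following bindings carries a type annotation, yet the compiler assigns each one a type:

```sml
(* The Standard ML system infers all of these types without annotations. *)
val count = 42                     (* inferred: int *)
val greeting = "hello"             (* inferred: string *)
fun double x = x + x               (* inferred: int -> int, since + defaults to int *)
fun swap (x, y) = (y, x)           (* inferred: 'a * 'b -> 'b * 'a, a polymorphic type *)
```

An interactive session would report each inferred type as the binding is entered; a type clash in any of these definitions would be reported at this point, before the program is ever run.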
Standard ML's type system allows the use of typed data in programs to be checked when the program is compiled. This is in contrast to the approach taken in many other programming languages, which generate checks to be tested while the program is running; Lisp is an example of such a language. Other languages serve the software developer even less well than this, since they guarantee to enforce type-correctness neither when the program is compiled nor when it is running. The C programming language is an example of a language in that class. The result of not enforcing type correctness is that data can become corrupted and the unsafe use of pointers can cause obscure errors. A splendid introduction to this topic is [Car96].
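To make the contrast concrete, here is a sketch (my own example, not from the text) of a declaration that a Standard ML compiler rejects outright, together with a corrected version using the Basis Library function `Int.fromString`:

```sml
(* Rejected at compile time: + cannot combine an int with a string,
   so the mistake is reported before the program ever runs.

   val broken = 1 + "two"
*)

(* The corrected version converts the string explicitly; Int.fromString
   returns an int option, forcing the failure case to be handled. *)
val ok = 1 + (case Int.fromString "2" of
                  SOME n => n
                | NONE   => 0)
```

In a dynamically checked language the faulty addition would compile and only fail (or silently misbehave) when that expression was actually evaluated.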
The approach of checking type correctness as early as possible has two clear advantages: no extra instructions are generated to check the types of data during program execution; and there are no insecurities when the program is executed. Standard ML programs can be executed both efficiently and safely and will never `dump core', no matter how inexperienced the author of the program might have been. The design of the language ensures that this can never happen. (Of course, any particular compiler might be erroneous: compilers are large and complex programs. Such errors should be seen as particular to one of the implementations of the language, not as general flaws in the design of the language.)
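Safety here does not mean that every run-time error is impossible, only that errors which do occur are signalled in a controlled way. As an illustrative sketch (my own example), taking the head of an empty list is well typed but raises the Basis Library exception `Empty`, which the program can catch, rather than dereferencing invalid memory:

```sml
(* hd on an empty list raises the exception Empty; handling it turns
   the partial operation into a total one returning an option. *)
fun safeHead xs =
    SOME (hd xs) handle Empty => NONE
```

So even in the failure case the program remains within the language: `safeHead []` evaluates to `NONE` instead of crashing, in contrast to the unchecked pointer errors described above for C.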