Today, I happened to watch a panel discussion from Lang.NEXT 2012. It featured Anders Hejlsberg, Martin Odersky, Gilad Bracha and a certain Peter Alvaro, and was anchored by the inimitable Erik Meijer. At the end of what turned out to be a very interesting discussion, a Q&A session was hosted. The last few questions related to effects-checking at the language level. Odersky responded by saying that effects-checking in static type systems is currently at about the same primitive level as static typing itself was in the days of Pascal: clunky and cumbersome!
Watching that video finally prompted me to collect my thoughts into a written note.
The first time I pondered the question of effects was around 1994-95, when I started working on C++ programs ranging from a hundred thousand to a million lines of code. A prominent stimulus was the `const` annotation on methods. The C++ compiler was capable of tracing changes to the current object made by a given method. In particular, I took note of the transitive nature of `const`. This transitive nature was simultaneously useful and painful. The `const`-ness problem in C++ is well known and well documented.

Curiously, a method annotated `const` could still call external code, perform I/O, etc., as long as it did not mutate the current object. Often, I wanted a `const` method to not get too intelligent and perform unspecified operations. This was particularly true of third-party code without accompanying source. But C++ provided no mechanism to declare and enforce any such constraint.
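A minimal sketch of that loophole (the class and file names here are made up): the compiler verifies only that `*this` is untouched, while any other effect passes unnoticed.

```cpp
#include <fstream>

class Account {
    double balance_ = 0.0;
public:
    // `const` promises not to mutate *this (transitively), but says
    // nothing about other effects: this method performs file I/O,
    // and the compiler accepts it without complaint.
    double audited_balance() const {
        std::ofstream log("audit.log", std::ios::app);  // writes to the world
        log << "balance read: " << balance_ << '\n';
        return balance_;                                // *this untouched
    }

    void deposit(double amount) { balance_ += amount; }
};
```

Removing the file I/O changes nothing about the signature; a caller cannot tell the two versions apart.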
I went on to write programs for the IBM 360/370 family of mainframe operating systems, a few flavours of UNIX, Windows NT, etc. Over the years, I felt the same need for better guarantees on methods (and functions/procedures) numerous times. I found it interesting that none of the languages I worked with, in any of those environments, provided a solution to this problem.
Every once in a while, I would think of effects. Those were mostly unstructured thoughts, though. In addition, I was a typical applications programmer, with no formal background in computer science or programming languages. Having moved from theoretical Physics into programming, I often tried drawing analogies and parallels. Some of them were useful – sometimes, and to some extent – but they would always break down eventually.
By 1998-99, I had begun developing a better appreciation for dynamically-typed languages. Not weakly-typed languages such as Perl and Tcl, but strongly-typed ones such as Smalltalk, Python and (later) Ruby. I had accidentally come across the Smalltalk Blue Book by Goldberg and Robson, and it threw open several new windows and doors for me! I employed Python and Ruby in a variety of projects, with great results for my clients. In the process, for a few years, I did not explore statically-typed languages. Nonetheless, the issue of effects surfaced time and again, particularly as the sizes of code bases and teams increased.
I returned to a large (> a million LoC) C++ project in 2002, and that work stirred my thoughts on effects yet again. Based on my experiences, I began collecting a wish list of the kinds of effects that I wanted the compiler to track. Towards that, I began comparing my thoughts to the facilities provided by some of the languages that I used or became aware of.
Java's checked exceptions force a method to either handle a thrown exception or re-throw it, declaring so in the method signature. While no other effects can be declared, a consumer at least knows from the signature that such a method may, instead of returning a value, throw one of the specified exceptions.
I came across Haskell in 2003. I found the basics easy enough to follow, and wrote small exercise programs to gain some familiarity with it. In those days, there were not many easy tutorials for beginners, which meant frequent digging into the scant documentation. As I read about Haskell, I found three interesting aspects standing out:
- all Haskell functions are technically unary,
- its system of type classes, and
- its effects system.
The last of these, of course, is relevant to the current discussion. Haskell does not require us to say anything specific about a pure function. On the other hand, when a function is not pure, Haskell requires us to utilise an appropriate type (usually a monad) to indicate the specific manner in which the function causes side effects. This allows user-defined monads to specialise the kinds of effects caused by a function. Once defined and used, these monads are utilised by Haskell's powerful type system to ensure consistency of use across the program.
Much later, I happened to watch the video of a talk by Erik Meijer, in which he remarked there are many ways for a function to be impure, but there is only one way to be pure. And, the dots connected!
D follows the approach of C++, but takes it further. Unlike C++'s `const`, a `pure` function in D must be free of side effects. This is a much stronger guarantee, and it helps significantly. However, notice the difference between Haskell's philosophy and D's: functions (and methods) in D are assumed impure by default, and so pure functions (and methods) have to be explicitly marked `pure`.
An interesting variation can be found in Nimrod. It provides some pragmas to specify effects. In particular, we can specify the possible exceptions thrown by a `proc` or a method. If it declares no exceptions, it is assumed to throw the base exception type; to assert that it throws nothing, it has to expressly declare an empty list of exceptions.
There are plans to implement read and write tracking in Nimrod. In addition, an interesting feature is the capability to tag a `proc` or a method with some types. The meaning of those types is ascribed by the user; Nimrod doesn't appear to care! However, once specified, these tag types are tracked by the compiler analogously to how exceptions are tracked. Thus, Nimrod provides an expressive mechanism to introduce user-defined effect types, as long as they behave similarly to exceptions.
Evolution of My Thoughts
The numerous projects that I worked on shaped the development of my own thoughts on effects. Apart from working on huge assembler, COBOL, PL/I and Rexx code bases on IBM mainframes, I worked on large projects that used C, C++, Java, Python, Ruby, etc., in a wide variety of application domains. Particular combinations of application domains and languages sometimes led to specific realisations.
I believe that effects tracking can be effectively implemented in both statically-typed languages and dynamically-typed ones. Type systems for effects appear to be orthogonal to those for values. Accordingly, the following discussion does not distinguish between the static vs. dynamic nature of types for values. Similarly, it does not distinguish between object-oriented and non-object-oriented languages. It does, on the other hand, assume that there is an ahead-of-time or just-in-time compilation phase — i.e. parsing the source should not result in an AST that is directly executed immediately.
An analysis of the program by the compiler is necessary for any effects system to be useful. The signature of each function or method in the program has to be verified against the inferred effects of that function or method. All deviations have to be marked as errors, and the compiler should refuse to compile such code. Effects should be annotated as a possible combination of:
- mutates object state,
- mutates one or more input parameters passed by reference,
- reads input from the world: heap, message queues, files, network, etc.,
- writes output to the world: heap, message queues, files, network, etc.,
- invokes untrusted external code,
- may not return due to self recursion,
- may not return due to mutual or more indirect recursion, and
- may throw one or more exceptions.
A function or method with none of the above effects is considered pure: it is a mathematical function!
User-defined effects are second class citizens. They are tracked like compiler-defined ones, but the compiler itself cannot relate to the meaning of such effects.
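As a sketch only: the `effects::` attributes below are hypothetical, not part of C++ or of any compiler I know of; real compilers merely ignore (or warn about) unknown attributes, so nothing here is enforced. It only illustrates how declarations for the taxonomy above might look at function signatures.

```cpp
#include <cstdio>
#include <stdexcept>

// Hypothetical annotation: writes output to the world.
[[effects::writes_world]]
void log_line(const char* msg) {
    std::printf("%s\n", msg);
}

// Hypothetical annotation: may throw one or more exceptions.
[[effects::throws]]
int checked_div(int a, int b) {
    if (b == 0) throw std::runtime_error("division by zero");
    return a / b;
}

// No effects listed: the envisioned compiler would verify that this
// is a mathematical function, i.e. pure.
int square(int x) { return x * x; }
```

The envisioned compiler would infer the effects of each body and reject any signature that under-declares them.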
Propagation of Effects
`throws` and user-defined effects are maskable: it should be possible to handle them, and stop their propagation. Whether the resolution of such handled exceptions is nominal, involves subtyping, etc., depends on the type system of the language. All other effects are fundamentally transitive in nature: each propagates up the call hierarchy from the point where it occurs.
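C++'s `noexcept` already gives a small, concrete taste of masking the `throws` effect: a function that handles all exceptions internally no longer exposes that effect to its callers. A minimal sketch (the function names are made up):

```cpp
#include <stdexcept>
#include <cstdlib>

// A function with the `throws` effect.
int risky_parse(const char* s) {
    if (!s || !*s) throw std::invalid_argument("empty input");
    return std::atoi(s);
}

// Handling the exception inside the function stops the effect from
// propagating: callers see a function without the `throws` effect,
// which C++ lets us assert locally with `noexcept`.
int parse_or_default(const char* s, int fallback) noexcept {
    try {
        return risky_parse(s);
    } catch (...) {
        return fallback;  // the throws effect is masked here
    }
}
```

The transitive effects in the list above (mutation, world reads and writes, untrusted calls, non-termination) have no such masking point; they can only be propagated.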
This mandates run-time compilation in the case of languages supporting fully separate compilation of modules/packages/…. Cross-boundary passing of lambdas and methods to higher-order functions and methods necessitates dynamic compiler checks at run time. Violations of effects guarantees should raise a designated run-time exception that cannot be handled.
This can yet be avoided should it be possible to perform a whole-program analysis upon dynamic linking. But maybe that opens a different Pandora's box!
At that time, I could not comprehend the machinery behind those. Not that I fully comprehend it now, either; my current understanding is only marginally better! ↩