- In the first post, we learned that we can implement functions as objects in Java and pass them as arguments to other functions or more traditional methods. Functions that can be passed around are called first-class functions. First-class functions can be implemented in C/C++ as function pointers, while languages that are more conventionally "functional", like Lisp, Haskell, Scala, or JavaScript, simply treat functions as another "thing" that can be passed around. Functions that take other functions as parameters are called higher-order functions. This is similar to passing an object that implements a callback interface. Listeners in Swing are like function objects, though they may have multiple entry points. Our function objects have a single "apply" method that basically means "call me".
- In the second post, we learned about currying and partial application, which effectively say that any function with n parameters can be thought of as a function with one parameter that returns a function with n - 1 parameters. Once we hit zero parameters, we actually invoke the logic we've specified. (In this post, we're going to look at not invoking the logic right away.) We also looked at two of the more important higher-order functions, map and fold. The map function transforms a collection into another collection by applying a function of one parameter to each element in the original collection and accumulating the results in the new collection. The fold function aggregates values from a collection, producing a new single value, by applying a binary function that takes the current accumulated output and a value from the original collection. In some contexts, fold is also called reduce. Thus, these two higher-order functions form the basis of so-called map-reduce algorithms, where you would typically apply the map step in parallel on multiple nodes, and aggregate the results via a fold on a single node. (Sometimes the fold can be parallelized, though it's not generally as trivial as parallelizing a map operation. As an example of a parallelizable fold, consider computing the sum of a list of integers. The sum of the whole list can be computed by partitioning it into sublists and mapping the sum fold over each sublist, then folding the sum function over the resulting list of sums. Anyway, this is just a teaser for a future post on parallelizing function evaluation.)
Lazy Evaluation
In the second post, I introduced partial application, giving the example of a `Function2` that returns a `Function1`, and explained that this can be generalized to any `FunctionN` returning a `Function(N-1)` after binding the first parameter. Similarly, we can apply the same logic to a `Function1`, and have it return a `Function0` instead of a result.
What is a `Function0`? It's a function that takes no parameters and returns a value. Since one of the tenets of functional programming is consistency (that is, a function, given the same arguments, should always return the same value), a `Function0` (taking no arguments) produces a constant value. That is, a `Function0` effectively is a constant value. For example, you could define a `Function0` called `Five`, whose `apply()` method returns the `Integer` 5. However, if a `Function0` is produced from a `Function1` (or a `FunctionN`), it doesn't evaluate that value until you ask for it. If it retains the value from the first evaluation, and returns it on subsequent calls rather than reevaluating (since every evaluation should produce the same value), then you've effectively implemented lazy evaluation.
Recall the definition of `Function1`, from the first post in this series:
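Something like this minimal sketch (reconstructed from the description below; the exact listing may differ):

```java
// A function from T to R; subclasses provide the logic in apply.
public abstract class Function1<T, R> {
    public abstract R apply(T input);
}
```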
We have a function that takes a `T` and returns an `R`, for some types `T` and `R`, via the `apply` method. We can change this, so instead of returning an `R` directly, we return a `Function0<R>`. That `Function0<R>` will return the `R` once its `get()` method is invoked.
`Function0` with lazy evaluation:
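A sketch of what this might look like; the abstract `evaluate` method holds the computation, and `get` caches its result so the logic runs at most once:

```java
public abstract class Function0<R> {
    private R value;
    private boolean evaluated = false;

    // The actual computation; public to allow strict evaluation.
    public abstract R evaluate();

    // Lazy evaluation: compute on the first call, return the cached
    // value on every subsequent call.
    public R get() {
        if (!evaluated) {
            value = evaluate();
            evaluated = true;
        }
        return value;
    }
}
```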
And here is a modified version of `Function1`, where we've renamed the previous abstract `apply` method to `evaluate`, to highlight the fact that when you call it, the code actually gets evaluated. When you simply `apply` the function, you get back something that can eventually be evaluated (ideally via `Function0`'s `get` method).
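A sketch of that modified `Function1`:

```java
public abstract class Function1<T, R> {
    // The logic formerly in apply; calling this evaluates immediately.
    public abstract R evaluate(T input);

    // Applying the function no longer runs the logic; it returns a
    // Function0 that will run it when get() is finally called.
    public Function0<R> apply(final T input) {
        return new Function0<R>() {
            @Override
            public R evaluate() {
                return Function1.this.evaluate(input);
            }
        };
    }
}
```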
We can also extend this to any `FunctionN`, by redefining their total application methods to use partial application (taking advantage of currying to treat them as `Function1`s):
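For example, a `Function2` might look something like this (a sketch; the two-argument `apply` is the total application, now defined via the curried one-argument `apply`):

```java
public abstract class Function2<T1, T2, R> {
    public abstract R evaluate(T1 input1, T2 input2);

    // Partial application: bind the first parameter, get back a Function1.
    public Function1<T2, R> apply(final T1 input1) {
        return new Function1<T2, R>() {
            @Override
            public R evaluate(T2 input2) {
                return Function2.this.evaluate(input1, input2);
            }
        };
    }

    // Total application, redefined in terms of partial application;
    // the result is a lazy Function0, not an R.
    public Function0<R> apply(T1 input1, T2 input2) {
        return apply(input1).apply(input2);
    }
}
```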
I've chosen to make `evaluate` public throughout to allow the option of strict evaluation, meaning "I want you to actually compute this now, and not store up the things that you are going to do when I eventually call `get()`". While lazy evaluation with functional programming kind of allows you to build up a planned execution path at runtime and then trigger it with one call to `get()`, sometimes you do need to fully compute something before returning from the current function.
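As a quick illustration (`add` here is just a made-up example function):

```java
Function2<Integer, Integer, Integer> add =
        new Function2<Integer, Integer, Integer>() {
    @Override
    public Integer evaluate(Integer a, Integer b) {
        return a + b;
    }
};

Function0<Integer> pending = add.apply(2, 3);  // nothing computed yet
Integer strict = add.evaluate(2, 3);           // strict: computed right now
Integer lazy = pending.get();                  // lazy: computed on demand, once
```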
Memoization
If you were paying close attention in the currying section of the previous post, you may have noticed that we are creating new function objects every time we do a partial function application. This same problem extends to our creation of `Function0` objects above. This is expensive, since each of these function objects consumes some memory (which may be short-lived, but there is still some overhead), and it kind of defeats the purpose of lazy evaluation. In particular, if we're creating a new `Function0` every time we finish passing all of our parameters, then we would need to retain that specific `Function0` to ensure that we don't reevaluate the logic the next time we need the result.
We can extend `Function1` to remember the value passed as the parameter to `apply`, and hold onto the resulting `Function0`, to return it the next time we're called with that same parameter. This process of returning the same value based on remembered arguments is known as memoization.
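Here's a sketch of such a class (the cache relies on the assumption about `hashCode` and `equals` noted below):

```java
import java.util.HashMap;
import java.util.Map;

public abstract class MemoizedFunction1<T, R> extends Function1<T, R> {
    // Maps each previously seen parameter to its (lazily evaluated) result.
    private final Map<T, Function0<R>> memos = new HashMap<T, Function0<R>>();

    @Override
    public Function0<R> apply(T input) {
        Function0<R> memo = memos.get(input);
        if (memo == null) {
            memo = super.apply(input);
            memos.put(input, memo);
        }
        return memo;
    }
}
```

Because `Function0.get` caches its value, returning the same `Function0` per input means the logic runs at most once per distinct parameter.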
I am making the assumption that the parameter passed is of a type `T` that implements `hashCode` and `equals` in a reasonable way.
Using an instance of this `MemoizedFunction1` class, we know that its logic will be evaluated exactly once for each parameter value. Of course, this comes at a cost: for each parameter value, we retain the evaluation result object in memory (wrapped in a `Function0`, which itself adds a few more bytes). If you have a function that will be applied over a significant number of inputs, or is rarely applied to the same value twice (e.g. string processing functions operating on user input sent to a server have a decent chance of seeing billions of different strings), then the overhead of memoization likely would not pay off. This is why I chose to create a separate class, rather than simply adding memoization to `Function1` directly. You can use a regular `Function1` when you don't need memoization (which is likely to be most of the time), and a `MemoizedFunction1` when you do. Since they use the same API (you just override `evaluate`), there's no additional thinking cost.
Let's confirm that the memoization works with a unit test, using everyone's favourite memoization and dynamic programming example, the Fibonacci function:
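Something along these lines (a sketch assuming JUnit 4; the input size and timing thresholds here are illustrative):

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class MemoizationTest {

    @Test
    public void testMemoization() {
        // Identical naive doubly-recursive definitions; only the class differs.
        Function1<Integer, Long> fib = new Function1<Integer, Long>() {
            @Override
            public Long evaluate(Integer n) {
                return n <= 1 ? n.longValue()
                              : apply(n - 1).get() + apply(n - 2).get();
            }
        };
        Function1<Integer, Long> memoFib = new MemoizedFunction1<Integer, Long>() {
            @Override
            public Long evaluate(Integer n) {
                return n <= 1 ? n.longValue()
                              : apply(n - 1).get() + apply(n - 2).get();
            }
        };

        long start = System.currentTimeMillis();
        fib.apply(35).get();
        long unmemoizedMillis = System.currentTimeMillis() - start;

        start = System.currentTimeMillis();
        memoFib.apply(35).get();
        long memoizedMillis = System.currentTimeMillis() - start;

        assertTrue(unmemoizedMillis > 500); // naive recursion is slow
        assertTrue(memoizedMillis < 100);   // memoized version is nearly instant
    }
}
```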
I selected the expected timings to give consistent results on somewhat faster and slower machines than mine. On my machine, the unmemoized version takes around 3.5 seconds, while the memoized version takes less than 1 millisecond.
Runtime memoization and recursive memoization
The previous example shows that, when writing your code, the memoized and unmemoized versions of a function are identical, except that one is instantiated as a `MemoizedFunction1` and the other as just a regular `Function1`. What if you want to dynamically memoize a function at runtime? We can implement a utility method to "upgrade" a regular `Function1` as follows:
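A first attempt might look like this:

```java
public final class MemoizationUtils {

    private MemoizationUtils() {}

    // "Upgrade" an existing Function1 to a memoized one at runtime, by
    // wrapping it in a MemoizedFunction1 that delegates the actual logic.
    public static <T, R> Function1<T, R> memoize(final Function1<T, R> function) {
        return new MemoizedFunction1<T, R>() {
            @Override
            public R evaluate(T input) {
                return function.evaluate(input);
            }
        };
    }
}
```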
Let's create a test for it:
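Perhaps along these lines, reusing the naive `fib` definition (again, the timing threshold is illustrative):

```java
@Test
public void testRuntimeMemoization() {
    Function1<Integer, Long> fib = new Function1<Integer, Long>() {
        @Override
        public Long evaluate(Integer n) {
            return n <= 1 ? n.longValue()
                          : apply(n - 1).get() + apply(n - 2).get();
        }
    };
    Function1<Integer, Long> memoFib = MemoizationUtils.memoize(fib);

    long start = System.currentTimeMillis();
    memoFib.apply(35).get();
    long memoizedMillis = System.currentTimeMillis() - start;

    assertTrue(memoizedMillis < 100); // expect memoized speed... right?
}
```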
Unfortunately, that test will not pass (yet). The problem is with our method call targets, and the fact that we're creating a `MemoizedFunction1` and wrapping it around a regular `Function1`. How do the method calls work out?
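Roughly like this (an illustrative trace, using the `fib`/`memoFib` names from the test above):

```
memoFib.apply(35)          // MemoizedFunction1.apply: checks the cache
  -> fib.evaluate(35)      // cache miss: delegate to the wrapped fib
     -> fib.apply(34)      // the recursive calls hit fib's own apply...
     -> fib.apply(33)      // ...which knows nothing about the cache,
        -> ...             // so the entire recursion is unmemoized
```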
What we really want is something more like this (since `MemoizedFunction1`'s `apply` method is where the memoization happens):
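Illustratively:

```
memoFib.apply(35)          // checks the cache
  -> fib.evaluate(35)      // cache miss: delegate to the wrapped fib
     -> memoFib.apply(34)  // recursive calls routed back through the
     -> memoFib.apply(33)  // memoizing wrapper, so every subproblem
        -> ...             // is computed at most once
```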
In order to route the call back to the outer `MemoizedFunction1`, it would be ideal if we could override `this` in the inner `Function1`. Unfortunately, we can't do that. Really, though, object-oriented programming can be mapped to procedural programming (and, indeed, I believe this is more or less how the first implementation of C++ worked) roughly as follows:
- Every method is converted to a procedure with an additional `this` parameter, which is basically like a pointer to a C `struct`.
- Every method call is transformed into a procedure call where the address of the object (or rather, its `struct` representation) on the left-hand side of the `.`, or the pointer on the left-hand side of a `->`, is passed as the `this` parameter.
- Unqualified calls to methods or references to fields within the current object can be thought of as using an explicit `this->`.
So, while we can't redefine `this`, we can make `Function1`'s `evaluate` method take an extra parameter which will act in place of `this` for recursive calls. We'll call it `self`.
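Here's a sketch of `Function1` reworked with the `self` parameter:

```java
public abstract class Function1<T, R> {
    // The logic now receives an explicit self to use for recursive calls
    // in place of this.
    public abstract R evaluate(Function1<T, R> self, T input);

    public Function0<R> apply(final T input) {
        return apply(this, input);
    }

    // Wrappers (like MemoizedFunction1) can substitute themselves as self.
    public Function0<R> apply(final Function1<T, R> self, final T input) {
        return new Function0<R>() {
            @Override
            public R evaluate() {
                return Function1.this.evaluate(self, input);
            }
        };
    }
}
```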
Note that for the base `Function1`, `self` really is `this`, as far as `apply` is concerned.
Now, we can redefine our Fibonacci function to use `self.apply` instead of just `apply`, and change `MemoizationUtils` to pass the `MemoizedFunction1`'s `this` as the `self` parameter.
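A sketch of both changes (note that `MemoizedFunction1`'s cache-checking override correspondingly moves to the two-argument `apply`). First, the Fibonacci function:

```java
// The Fibonacci logic now recurses through self, which may be a
// memoizing wrapper rather than the function itself.
Function1<Integer, Long> fib = new Function1<Integer, Long>() {
    @Override
    public Long evaluate(Function1<Integer, Long> self, Integer n) {
        return n <= 1 ? n.longValue()
                      : self.apply(n - 1).get() + self.apply(n - 2).get();
    }
};
```

And the updated utility method:

```java
public static <T, R> Function1<T, R> memoize(final Function1<T, R> function) {
    return new MemoizedFunction1<T, R>() {
        @Override
        public R evaluate(Function1<T, R> self, T input) {
            // self here is the memoizing wrapper (apply passes its own
            // this), so the wrapped function's recursion comes back to us.
            return function.evaluate(self, input);
        }
    };
}
```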
With these changes, the `testRuntimeMemoization` test shown earlier now passes.
Memoizing functions of N parameters
Recall that because of currying, every function of N parameters can be thought of as a `Function1` that returns a function of N - 1 parameters. This means that by implementing `MemoizedFunction1`, we get `MemoizedFunctionN` for free (almost). We just need to override the way that we curry, to return a `MemoizedFunction1` that returns a `MemoizedFunction(N-1)`:
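A sketch for two parameters (shown with the earlier, self-less signatures for readability; threading `self` through works the same way):

```java
import java.util.HashMap;
import java.util.Map;

public abstract class MemoizedFunction2<T1, T2, R> extends Function2<T1, T2, R> {
    // Remember each partially applied function, so apply(a) always returns
    // the same memoized Function1 for the same first argument.
    private final Map<T1, Function1<T2, R>> partials =
            new HashMap<T1, Function1<T2, R>>();

    @Override
    public Function1<T2, R> apply(final T1 input1) {
        Function1<T2, R> partial = partials.get(input1);
        if (partial == null) {
            // The curried result is itself a MemoizedFunction1, so every
            // level of the chain caches its results.
            partial = new MemoizedFunction1<T2, R>() {
                @Override
                public R evaluate(T2 input2) {
                    return MemoizedFunction2.this.evaluate(input1, input2);
                }
            };
            partials.put(input1, partial);
        }
        return partial;
    }
}
```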
And `MemoizationUtils` can have convenience methods to memoize them at runtime:
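For example:

```java
public static <T1, T2, R> Function2<T1, T2, R> memoize(
        final Function2<T1, T2, R> function) {
    return new MemoizedFunction2<T1, T2, R>() {
        @Override
        public R evaluate(T1 input1, T2 input2) {
            return function.evaluate(input1, input2);
        }
    };
}
```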
Summary
This is a post that gets into some of the "meat" of functional programming. Lazy evaluation can be quite powerful, as we defer evaluation of results until they are needed. Combined with memoization, we can ensure that we never reevaluate a function with the same parameters, using the previously computed value instead. Memoization is not something you want to use everywhere, though. It's kind of a form of optimization, and, as Knuth said, "premature optimization is the root of all evil". In a general program, if you memoize everything, you'll quickly exhaust your available memory. That said, because the functional paradigm allows considerable separation of concerns, we don't need to build memoization into any of our functions. The functions themselves can focus on their logic, and memoization can be tacked on as a cross-cutting concern (even at runtime, as we showed).
Where is memoization useful? In dynamic programming problems, you can define the problem in terms of a subproblem (or a problem on lesser inputs), with an eventual base case. If you can write a function (or a collection of functions) that encapsulates the program logic in a naive way, you can probably just stick on a call to `memoize` and guarantee reuse of subproblem solutions. Up to a constant factor, it won't be as efficient as maintaining results in arrays with a hand-coded solution, but it's so much easier to write. If you need to solve dynamic programming problems as part of your job, and not just to compete in online contests against C programmers, writing your memoization logic once and transparently attaching it where needed may be a good approach.
The more interesting thing, in my opinion, about memoization is that it demonstrates how easy it is to attach a cross-cutting or orthogonal concern to your logic (without needing to resort to aspect-oriented programming). The same way that I wrote `MemoizationUtils` to attach memoization to a function could be used for other purposes. For example, we could attach a tracer that collects a capacity-bounded set of parameters passed to a function and the number of times the function is called, as sketched below. If a function is usually called with fewer than, say, 1000 different parameters (of reasonable size to be used as keys) but is called millions or billions of times during a typical execution of the program, that function may be a good candidate for memoization. Similarly, I believe we could simply wrap each of our functions in a tracer that keeps track of the function's summed execution time and call count in a `ThreadLocal` variable to implement a basic in-process profiler.
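A hypothetical sketch of such a tracer (`traced` and `maxTrackedParams` are made-up names; pre-self signatures, and thread safety and reporting are ignored):

```java
import java.util.HashSet;
import java.util.Set;

public final class TracingUtils {

    private TracingUtils() {}

    // Wrap a function to count calls and track a bounded set of distinct
    // parameters; heavily called functions with few distinct parameters
    // are good memoization candidates.
    public static <T, R> Function1<T, R> traced(final Function1<T, R> function,
                                                final int maxTrackedParams) {
        return new Function1<T, R>() {
            private final Set<T> seen = new HashSet<T>();
            private long calls = 0;

            @Override
            public R evaluate(T input) {
                calls++;  // (exposing calls/seen for reporting is elided)
                if (seen.size() < maxTrackedParams) {
                    seen.add(input);
                }
                return function.evaluate(input);
            }
        };
    }
}
```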
This ties in with the higher-order functions in the previous post. Whereas a typical object-oriented programmer will again and again write code to turn a list into another list or compute some sort of aggregate value from a list (whether it be a max, min, sum, or whatever), functional programming allows you to separate what you are doing (transforming/aggregating a list) from how you are doing it (the problem-specific logic).
In the next post, I think I will introduce concurrency via parallel/asynchronous evaluation. Assuming that's where I go (since I tend to get sidetracked between posts), `Function0` will be very prominent, as it's effectively a simplified version of a `java.util.concurrent.Future`. If I don't feel like diving into concurrency, I will probably end up covering immutability and immutable data structures. Immutable data structures are more traditional (and fundamental) to functional programming, but concurrency is pretty cool. It's always an adventure guessing where I'm going to go next.