Basically, it creates a chain of 100,000 microthreads (tasklet, goroutine, process ... take your pick), sends a value in one end and waits for the result at the other end. The number is incremented by each microthread it passes through.
The code comes in two parts. Firstly, a chain.erl module:
```erlang
-module(chain).
-export([run/1]).

run(Num) ->
    Tail = chain(Num, self()),
    Tail ! 0,
    receive
        Result -> Result
    end.

chain(0, Tail) ->
    Tail;
chain(Num, Tail) ->
    chain(Num-1, spawn(fun() -> f(Tail) end)).

f(Tail) ->
    receive
        Num -> Tail ! Num+1
    end.
```
And secondly a simple escript to start it off (mostly to make it easy to run under time):
```erlang
#!/usr/bin/env escript
%%! +P 1000000 -smp disable
-export([main/1]).

main([Arg]) ->
    Result = chain:run(list_to_integer(Arg)),
    io:format("~p~n", [Result]).
```
And the run times:
```
$ time ./chain 100000
real    0m1.520s
user    0m1.012s
sys     0m0.468s

$ time ./go-chain 100000
real    0m3.371s
user    0m1.672s
sys     0m1.000s
```
A couple of things to point out/mention:
- The Erlang code is just beautiful (well, maybe not the escript so much ;-)). To me, it's more readable than either the Python or Go versions.
- I turned SMP off. Yes, it's an optimisation, but the tasks run strictly in series, so SMP was never going to help.
- The Go version was compiled and linked using 8g and 8l.
- The Go version didn't always complete and sometimes took a *very* long time ... just not when running under time for some reason.
- Go seemed to use about 3x as much memory.
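The Go version isn't shown above; for comparison, here's a minimal sketch of how such a chain could be written in Go. This is my own reconstruction under the same scheme (each goroutine receives a number, increments it, and forwards it), not the code that was actually benchmarked:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// chain spawns n goroutines, each of which receives a number on its
// input channel, increments it, and forwards it to the next channel.
func chain(n int) int {
	head := make(chan int)
	out := head
	for i := 0; i < n; i++ {
		in, next := out, make(chan int)
		go func() {
			next <- <-in + 1
		}()
		out = next
	}
	head <- 0     // send a value in one end...
	return <-out  // ...and wait for the result at the other
}

func main() {
	// Default to 100000 microthreads; allow an override on the command line.
	n := 100000
	if len(os.Args) > 1 {
		if v, err := strconv.Atoi(os.Args[1]); err == nil {
			n = v
		}
	}
	fmt.Println(chain(n))
}
```

The shape mirrors the Erlang code: the loop plays the role of the recursive `chain/2`, and each goroutine body corresponds to `f/1`.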
What does this prove? Absolutely nothing! Firstly, it's an unrealistic application. Also, Go is really quite new and I'm sure performance and memory use will improve in the coming months.
So why did I do this? Simply, because I really enjoy playing with Erlang and Go is interesting and a hot topic. (In my opinion anything with concurrency built into the runtime is onto a good thing ... I sure wish we didn't have to resort to Twisted, Stackless, greenlets or generator hacks in Python.)