So it’s been a whole week since my interesting post about the OwnershipFilter. I was investigating several things all at once, all the while wrestling with internal motivation (another post, for another time). In any case, I had thought I’d blog about it only when I had something complete and concrete to show you. But if it’s going to take me that long, I might as well blog about it as I go along–and it might be more fun for you anyway.
Trace it back
It started with an article a friend sent me about two years ago, about how the free lunch is over for software engineers. It basically stated that developers have been riding the wave of Moore’s law to save their butts, and it’s not going to last forever. In addition, it’s been known for a while that chip manufacturers are moving towards multi-core processors to increase performance. If developers are going to keep taking advantage of hardware like they have been, they’re going to have to learn to write concurrent programs.
The problem is, programmers suck at it. It’s well known that concurrent programming, as it stands, is not easy for humans. Even Tim Sweeney, the guy who architected the Unreal Engine (no mere programming mortal), thought it was hard. It was when I started looking beyond threads as a concurrency abstraction that I tripped over a programming language developed specifically for concurrency.
Yet Another Programming Language
A friend of mine, who is a teacher (i.e. not a programmer), recently asked me, “Why learn more than one programming language?” Ahh, little did she know that programming languages inspire what verges on religious debates between programmers. My short answer was, “Each tool is better at one task than another.” I was looking for another language that might do concurrency better.
I had always thought that I should learn more about functional programming. It seemed like an odd beast to me, especially since you don’t change state through side-effects. “How do you get anything done?” It’s kinda like when you first learned that you don’t need GOTO, and subsequently, when you learned that FOR loops suck.
And yet, I never really found a need or a small project I could do with functional programming that might prove to be satisfying. It was only during the search for better concurrency abstractions that I ran across Erlang, a functional programming language designed explicitly to be good at concurrency. In fact, it’s one of the very few languages out there that touts concurrency as its main strength.
It uses the actor model, where processes share no data and just pass messages around. Because nothing is shared, there’s no need for locks and no risk of deadlock. While not as sexy-sounding as futures or software transactional memory, the actor model falls nicely along the lines of complex and emergent systems–systems with locally interacting parts and a globally emergent behavior. Hrm…could one of these systems be good for a small side project in Erlang?
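Erlang gives you this model natively, but the gist can be sketched in any language. Here’s a toy actor in Python (the names and the running-sum task are my own, purely for illustration): all communication goes through message queues, and no state is ever shared between the actor and the outside world.

```python
import threading
import queue

def actor(inbox, outbox):
    # Each actor owns its state; the only way in or out is a message.
    total = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(("done", total))
            return
        total += msg  # private state, never touched by anyone else

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=actor, args=(inbox, outbox))
t.start()
for n in [1, 2, 3]:
    inbox.put(n)            # send messages...
inbox.put("stop")
tag, result = outbox.get()  # ...and receive a reply
t.join()
```

With no shared memory there’s simply nothing to lock, which is the whole appeal.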
How Gestalt, Mr. Brain
Artificial neural networks seemed to be the perfect thing actually. A quick, quick diversion into what they are.
A feed-forward artificial neural network is basically a network of perceptrons that can be trained to classify (i.e. recognize) patterns. You give the network a pattern as input, and it tells you the classification of that pattern as output.
You can think of a perceptron much like a neuron in your brain: it has lots of inputs and one output. It’s connected to other perceptrons through these inputs and outputs, and there are weights attached to the input connections. If a certain pattern of inputs arrives and it passes a threshold, the perceptron ‘fires’ (i.e. outputs a value). This in turn might activate other perceptrons.
Even simpler, a perceptron is modeled as a function that takes a vector x as input and outputs a number y. All it does is take the dot product of the input vector x with a weights vector w, and pass it through a non-linear and continuous thresholding function, usually a sigmoid function. Connect them up in layers, and you get an artificial neural network that can learn to classify patterns if you train it with examples.
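That one sentence really is the whole perceptron. A minimal sketch in Python (the sigmoid and the sample numbers are my own choices):

```python
import math

def perceptron(x, w):
    # Dot product of the input vector with the weight vector...
    activation = sum(xi * wi for xi, wi in zip(x, w))
    # ...squashed through a sigmoid: a smooth threshold between 0 and 1.
    return 1.0 / (1.0 + math.exp(-activation))

y = perceptron([1.0, 0.5], [0.4, -0.2])  # dot product = 0.3, y ≈ 0.574
```

The sigmoid is what makes it trainable: unlike a hard step threshold, it’s differentiable, which back propagation depends on.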
It learns patterns by adjusting the weights between perceptrons in the network after each training example, where you tell it how wrong it was in recognizing the pattern. It does this with an algorithm called back propagation. The pictures below are all lifted from the same page; I put them in an animated gif to illustrate (click on it to watch):
In the first part, the example propagates forward to an output. Then the error propagates backward. Lastly, it propagates forward again with the weights adjusted from the calculated error.
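For a single output perceptron, the weight update at the heart of back propagation is small enough to sketch. This is a hedged Python toy, not the full multi-layer algorithm; the learning rate and training data are made up:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_step(x, w, target, rate=0.5):
    # Forward pass: compute the perceptron's output.
    y = sigmoid(sum(xi * wi for xi, wi in zip(x, w)))
    # Backward pass: the error signal is (target - y), scaled by the
    # sigmoid's derivative y * (1 - y).
    delta = (target - y) * y * (1.0 - y)
    # Adjust each weight in proportion to its input's contribution.
    return [wi + rate * delta * xi for xi, wi in zip(x, w)]

w = [0.1, 0.1]
for _ in range(1000):
    w = train_step([1.0, 1.0], w, target=1.0)
# After training, the output for [1.0, 1.0] has crept close to 1.0.
```

In a full network, the same delta also propagates backward through the hidden layers, which is where the “back” in back propagation comes from.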
I think the shoe fits, sir
Why would this be a good fit as a subject to play with Erlang? Well, if you’ll notice, each perceptron only takes input from its neighboring perceptrons, and only outputs to its neighbors. This is very much in line with the actor model of concurrency. Each process would be a perceptron, and would act as an autonomous agent that only interacts with other processes it comes into contact with–in this case, only other perceptrons it’s connected to.
In addition, you’ll also notice that in the animation, the perceptron values are calculated neuron by neuron. In a concurrent system, there’s no reason to do this! You can actually do the calculation layer by layer, since the value of any individual perceptron comes only from the outputs of the perceptrons in the layer before it. Therefore, the outputs of all perceptrons in a layer can be calculated in parallel.
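To make the layer-by-layer point concrete, here’s an illustrative Python sketch (a thread pool stands in for Erlang processes, and the weights are arbitrary): every perceptron in a layer reads only the previous layer’s outputs, so the whole layer can fire at once, while the layers themselves still run in sequence.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def perceptron(inputs, weights):
    a = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-a))

def layer(inputs, weight_rows):
    # Each perceptron depends only on the previous layer's outputs,
    # so all of them can be evaluated at the same time.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda w: perceptron(inputs, w), weight_rows))

# Two layers: computed serially, with parallelism inside each one.
hidden = layer([1.0, 0.0], [[0.5, 0.5], [-0.5, 0.5]])
output = layer(hidden, [[1.0, 1.0]])
```

In Erlang, each perceptron would instead be its own process that fires when it has received messages from everyone in the layer before it.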
Notice, however, that layers need to be calculated serially. I had originally thought that with the learning process propagating back and forth, maybe it could be pipelined. On closer examination, however, the best one can do is to feed-forward the next input one layer behind the adjusting of weights, to make the learning process go faster.
So I’m going to give it a shot on the side, and document it along the way, since a quick search on Google revealed that though people have talked about it, no one’s ever really posted any snippets on it. Wish me luck!