Thanks to the reader who submitted my article to Trapexit.org; the person who runs the site asked me to post the full article there. It ended up around 7th on programming.reddit.com for a day or two. So far, I haven’t had any complaints about it, so either people haven’t read it, or it’s been pretty accurate. 🙂
I didn’t think writing it would be such a big job when I started, but I took the position that anyone reading it had minimal mathematical and Erlang background. Therefore, there was a lot to explain. Now I can appreciate why good textbooks are hard to come by. 😛
But I did learn a lot about Erlang and functional-style programming. In addition, neural networks aren’t really as mystifyingly magical as I think many people imagine them to be. I used to think you could just solve anything with them. And while they solve a particular class of problems quite well, they’re essentially just gradient descent in a high-dimensional space.
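To make that concrete, here’s a minimal sketch of gradient descent in Erlang on a toy one-dimensional function, f(X) = X², whose gradient is 2X. (The module and function names here are just illustrative, not from the original article.) Training a neural network runs the same loop, except the “X” is a vector of thousands of weights and the gradient comes from backpropagation.

```erlang
-module(descent).
-export([descend/3]).

%% Repeatedly step opposite the gradient of f(X) = X*X.
%% X: current position, Rate: learning rate, N: steps remaining.
descend(X, _Rate, 0) ->
    X;
descend(X, Rate, N) ->
    Gradient = 2 * X,                        % derivative of X*X
    descend(X - Rate * Gradient, Rate, N - 1).
```

Calling `descent:descend(3.0, 0.1, 50)` walks the starting point 3.0 down toward 0.0, the minimum of X² — each step multiplies X by (1 − 2·Rate), so it shrinks geometrically.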
I’ll probably work on the neural network code later on, since I haven’t written a trainer for it yet. And I’ll probably try a particle swarm optimization article in Erlang some other time. In the meantime, I have other things to experiment with and work on. You’ll hear about it here first!