How to install emacs major mode for javascript

Since emacs is a harsh way of life, documenting things for other people is a good way to give back. Too bad it’s hard for first-timers. But I guess that’s why people stick with it…it’s a point of pride.

So how do you install a major mode for emacs? First, you need to find out what your emacs load path is.

C-h v load-path

Then you go and find a major mode file (javascript-mode.el) and put it in one of those directories. Since I don’t know any better, I put it in ‘/usr/local/share/site-lisp’. (Anyone else know a better place to put javascript major modes?)

Then put the following in your “.emacs” file. This file lives in your home directory; if it doesn’t exist, create it.

;; for javascript files
(autoload 'javascript-mode "javascript-mode" "JavaScript mode" t)
(setq auto-mode-alist
      (append '(("\\.js$" . javascript-mode))
              auto-mode-alist))

And there you go: an emacs major mode for javascript. If your favorite Linux distribution already has a package for this, install that package instead. It’s way easier.


Emacs is a harsh way of life

I’ve had a fellow engineering friend quip to me:

Emacs. It’s not just a text editor, it’s a way of life. – Ian Martins

However, it’s sometimes a harsh way of life, because almost everything is hidden.

So I was looking up how to change fonts in emacs, since the recent Ubuntu upgrade of X windows gives you square boxes in your emacs. Linked in the title is what I’ve found, and it’s pretty helpful.

I also found a small tidbit about how to search-and-replace with a newline.

M-%

invokes search-and-replace. Type whatever you’re looking for, and then, to replace it with a newline, type:

C-q
C-j

There you go.

each vs. inject vs. map

Despite having worked in Matlab for a fair amount of time, where you have to think in terms of matrix operations, it’s still hard to shake the “loop through it” kind of thinking that comes from a C heritage when dealing with collections of things. So say I have posts with comments, and I want to get all the comments of all posts.

all_comments = []
posts.each { |post| all_comments += post.comments }

This was the way I used to do it, with more C-like thinking. I never really liked doing it this way because you have that floating initialization of all_comments, and it can get separated from the actual loop when you have all sorts of stuff going on. Then I found inject:

posts.inject([]) { |all_comments, post| all_comments + post.comments }

This code does the same thing, pretty much with the initialization in the loop itself. I liked it a lot better. However, map has its uses:

posts.map { |post| post.comments }.flatten

I’m not sure which one is faster on my machine, but the last one has an appeal: a “map” operation tells me that this piece of code can be done in parallel. Given the way it’s written, it semantically means that every piece in the collection can be calculated independently of the others and then put together at the end (with flatten), regardless of how it’s actually implemented right now. That’s also true of the first two pieces of code, but “inject” and “each” do not immediately imply that the work can be parallelized.
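For instance, here’s a rough sketch of what a thread-based parallel map could look like in Ruby. The parallel_map helper is something I’m making up for illustration, not an existing library method:

# A hypothetical parallel_map: spawn one thread per element, then collect
# the results. Only sensible if each block call is independent of the others.
def parallel_map(collection)
  threads = collection.map { |item| Thread.new { yield(item) } }  # start all the work up front
  threads.map(&:value)                                            # join each thread and collect its result
end

all_comments = parallel_map(posts) { |post| post.comments }.flatten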

I know that compilers nowadays are pretty sophisticated, with pipelining and all. But having a programmer use “map” could only help the compiler figure out which parts of the code can be parallelized, no?

Update: I found another post talking about closures in Haskell, written because the Java people are resisting closures in Java. It gives a better argument for why a for loop is no good.

Convention over configuration is a culture over reinvention

Most programs nowadays consist of some code the developer writes built on top of libraries. We string together and manipulate libraries in interesting ways in order to build an application. No one goes and rewrites a graphics engine or a networking library anymore (unless you’re doing something special or unique in that arena). So besides debugging, for every new type of application, developers mostly spend their time figuring out how to use libraries.

I’ve found that some libraries are really hard to figure out how to use, while others are a lot easier to adopt. Why? Mainly, it comes down to a culture that is shared across libraries, and oftentimes this is more apparent in some languages than in others.

When I started using Rails, I was struck by how some things weren’t dictated by the syntax of the language, but merely by a convention set by the framework developer. As long as you followed the convention, things just worked. This not only reduced the complexity of Rails, because it didn’t have to handle all sorts of cases, but also made it simpler to find your way around the framework once you knew its ‘culture’.

“Convention over configuration” is what the creator of Rails calls it, and after thinking about it for a bit, it makes sense. In Rails you don’t have to map class names to database table names, or declare their id attribute. There is already a convention, a culture, for doing that. The culture of a framework allows you to make assumptions that let you say more with less.
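As a rough sketch (the class and table names here are just made up for illustration), the convention does the mapping for you, and configuration only shows up when you break it:

# Following the convention: Rails assumes a "posts" table with an "id" primary key.
class Post < ActiveRecord::Base
end

# Only when you break the convention do you configure anything
# (old-style class methods for overriding the defaults).
class LegacyPost < ActiveRecord::Base
  set_table_name  "tbl_legacy_posts"
  set_primary_key "legacy_post_id"
end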

It is the same with any human language. Idioms such as “famous last words”, “a rose by any other name”, and “rolling a rock uphill” carry a lot of meaning behind the words, the result of references most other people know: culture. So with just a couple of words, you can convey quite a bit. The catch is that the other person has to share that culture in order for you to convey much with little.

I think programming languages, libraries especially, would benefit from a culture, so that programmers can more easily hop from library to library. Culture also has the advantage of being malleable, so programmers can shift their conventions over time if the old ones aren’t working.

No such file to load — mkmf | mentalized

I was trying to install the Hpricot gem, but it wasn’t working. Ends up that you need to install the ruby1.8-dev package on Ubuntu…that’s where the mkmf file resides. I suppose it’s because Hpricot has things it needs to compile, and all of those things are in the dev package. It makes sense now, but it’s kind of annoying when you couldn’t have guessed it. Good thing for Google.

For the love of programmers, we need better concurrency abstractions

Lately, I’ve been pretty interested in parallelism. Processors are moving to multi-core architectures. And while I expect that computers will keep following Moore’s Law for a while longer, I think there’s a lot to be gained from figuring out how to best make use of multiple processors, especially for tasks that can be easily parallelized, such as 3D graphics, image processing, and certain artificial intelligence algorithms. If compilers, and subsequently programmers, can’t take advantage of these multiple processors, we won’t see performance gains in future software.

However, when it comes to programming for multiple processors, the general consensus among programmers is, “avoid it if you can get away with it.” Multi-threaded programming has generally been considered hard, and with good reason. It’s not easy to think about multiple threads running the same code at the same time, each at a different point, and the side effects that will have. Synchronization and mutex locks don’t make for an easy abstraction that works well as the code base gets larger.

One of the ways people have been able to get around this is to reduce the amount of data that different threads and processes need to share. Sometimes this is enforced by a no-side-effects policy in functional programming, and other times it’s through the use of algorithms that are share-nothing by nature. Google’s MapReduce seems to be a good example of this.
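As a rough illustration (not Google’s actual MapReduce, and the input file name is made up), here’s what a share-nothing word count might look like in Ruby. Each thread works on its own slice with its own local hash, and the partial counts only get combined at the end:

# Whether these threads truly run in parallel depends on the Ruby implementation,
# but the point is the structure: no mutable state is shared while counting.
words  = File.read("posts.txt").split          # "posts.txt" is a made-up input file
slices = words.each_slice(1000).to_a           # split the work into independent chunks

partials = slices.map do |slice|
  Thread.new do
    counts = Hash.new(0)                       # each thread has its own local hash...
    slice.each { |word| counts[word] += 1 }    # ...so nothing is shared during the count
    counts
  end
end.map(&:value)                               # join and collect the per-slice counts

totals = partials.inject(Hash.new(0)) do |acc, counts|
  counts.each { |word, n| acc[word] += n }     # the only combining step, done single-threaded
  acc
end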

But some programs and algorithms require sharing data, so multi-threaded programming over shared data is, in some sense, unavoidable. Since synchronization and locks are what we’re introduced to as THE way to do multi-threaded programming, that’s all I knew for a long while. So I started to wonder: is the current concurrency abstraction of synchronized threads and mutexes the only one that exists?

Apparently not. Like all good ideas in CS, they seem to have come from the 1960s. There are futures, software transactional memory, actors, and joins (scroll down to concurrency). The post from Moonbase gives a probable syntax for these abstractions in Ruby; they don’t exist yet, but he’s thinking about what they might look like. I’m excited about this if it makes programming multi-threaded applications easier. That way, programmers can more easily exploit multi-core processors or clusters for speed.
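To give a flavor of the simplest of these, here’s a bare-bones sketch of a future built on plain Ruby threads. It’s only illustrative; expensive_comment_lookup and do_other_work are made-up names, and this isn’t the syntax from the Moonbase post:

# A bare-bones future: kick off a computation now, ask for the answer later.
class Future
  def initialize(&block)
    @thread = Thread.new(&block)   # start computing in the background right away
  end

  def value
    @thread.value                  # block until the result is ready, then return it
  end
end

comments = Future.new { expensive_comment_lookup }   # hypothetical slow call
do_other_work                                        # hypothetical; runs while the future computes
puts comments.value                                  # only waits if the lookup isn't done yet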

Most of the time, parallelism is exploited for speed, but I think it can also be exploited for robustness. It’s no secret to programmers that code is fairly brittle: a single component that isn’t working correctly is a runtime bug for the entire system. I think parallelism can be used to alleviate this problem, trading away some of the execution speed it would otherwise buy.

The reason I think this might be an interesting area to explore is the relatively recent interest in natural complex and emergent systems, such as ants foraging for food, Sugarscape, and termites gathering wood piles. More technical examples are decentralized P2P technologies such as BitTorrent. This is nothing new, as agent-based modeling has been around for a while in the form of genetic algorithms and ant algorithms. However, none of the currently popular programming languages has good abstractions for exploiting parallelism for robustness.

This might be a bit hard to design for, since it relies on building simple actors that produce an emergent system-level effect while only sharing or using local information. It’s not always easy to determine analytically what the global effect of many interacting simple actors will be, since the problem might not be tractable. In reverse, given a desired emergent global effect, finding the simple actor that will produce it isn’t a walk in the park. However, I think once achieved, it will have a robustness that makes it more adaptable than current systems.

If anyone out there knows of such things, post a comment and let me know.

Update: I found that there was just a debate on software transactional memory, and a nice post on how threading sucks. I know nothing compared to these people.

A link back to referring page in Rails

So here’s the quick problem. You have a long list of records in your rails application, such as posts. You end up using the standard pagination offered in rails, which we all know is slow. Regardless, it’s pretty annoying to go down into an individual post, edit it in-place, and then hit the back button, since there’s no link back to the previous page.

Well, since the list is paginated, there’s no static page to point a “back to list” link at. You could pass the page number along, but that’s a pain in the butt. It’s much easier to do:
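Something along these lines, using the referer header (the link text and view are just for illustration):

<%# In the view for an individual post: link back to wherever the visitor came from %>
<%= link_to "Back to list", request.env["HTTP_REFERER"] %>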

This’ll put a link back to the same page of the paginated list that you came from. Saves pains. Ends up that reading the HTTP spec on headers is helpful, in addition to some Rails source.

Ends up that “redirect_to :back” also uses the same trick. You can use it in your controller to redirect back to whatever page the request came from.
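For example, a controller action might look something like this (the model and action names are just placeholders):

def update
  @post = Post.find(params[:id])
  @post.update_attributes(params[:post])
  redirect_to :back   # sends the browser back to the page the request came from
end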