How to install an Emacs major mode for JavaScript

Since Emacs is a harsh way of life, documenting things for other people is a good way to give back. Too bad it’s hard for first-timers. But I guess that’s why people stick with it…it’s a point of pride.

So how do you install a major mode for emacs? First, you need to find out what your emacs load path is.

C-h v load-path

Then you go and find a major mode file (javascript-mode.el) and put it in one of those directories. Since I don’t know better, I put it in ‘/usr/local/share/site-lisp’. (Anyone else know a better place to put javascript major modes?)

Then put the following in your “.emacs” file. This file exists in your home directory, and if it doesn’t exist, create it.

;; for javascript files
(autoload 'javascript-mode "javascript-mode" "JavaScript mode" t)
(setq auto-mode-alist
      (append '(("\\.js$" . javascript-mode))
              auto-mode-alist))

And there you go: an Emacs major mode for JavaScript. If your favorite distribution of Linux already has a package for it, install that package instead. It’s way easier.

Emacs is a harsh way of life

I’ve had a fellow engineering friend quip to me:

Emacs. It’s not just a text editor, it’s a way of life. – Ian Martins

However, it’s sometimes a harsh way of life, because most everything is hidden.

So I was looking up how to change fonts in emacs, since the recent Ubuntu upgrade of X windows gives you square boxes in your emacs. Linked in the title is what I’ve found, and it’s pretty helpful.

I also found a small tidbit about how to search-and-replace with a newline.

M-%

to invoke search-and-replace. Then type whatever you’re looking for. To replace it with a newline, type:

C-q
C-j

There you go.

each vs. inject vs. map

Despite working in Matlab for a fair amount of time, where you have to think in terms of matrix operations, it’s still hard to shake the loop-it-through kind of thinking, inherited from C, when dealing with collections of things. So say I have posts with comments, and I want to get all the comments of all posts.

all_comments = []
posts.each { |post| all_comments += post.comments }

This was the way I used to do it, with more C-like thinking. I never really liked doing it this way because you have that floating initialization of all_comments, and it can get separated from the actual loop when you have all sorts of stuff going on. Then I found inject:

posts.inject([]) { |all_comments, post| all_comments + post.comments }

This code does the same thing, pretty much with the initialization in the loop itself. I liked it a lot better. However, map has its uses:

posts.map { |post| post.comments }.flatten

I’m not sure which one is faster on my machine, but the last one has an appeal in that a “map” operation tells me that this piece of code can be done in parallel. Given the way it’s written, semantically it means that every piece in the collection can be calculated independently of the others, and then put together at the end (with flatten)–regardless of how it’s actually implemented right now. Not that this isn’t true for the first two pieces of code; however, “inject” and “each” do not immediately imply that the work can be parallelized.
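To make the comparison concrete, here’s a self-contained sketch (a hypothetical Post struct standing in for the Rails model) showing that all three spellings produce the same array:

```ruby
# Hypothetical stand-in for a Rails Post with a comments association.
Post = Struct.new(:comments)

posts = [Post.new(["first!", "nice post"]), Post.new(["+1"])]

# 1. each, with the floating accumulator
all_comments = []
posts.each { |post| all_comments += post.comments }

# 2. inject, with the accumulator kept inside the expression
injected = posts.inject([]) { |acc, post| acc + post.comments }

# 3. map, then flatten the resulting array-of-arrays
mapped = posts.map { |post| post.comments }.flatten
```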

I know that compilers nowadays are pretty sophisticated, with pipelining and all. But having the programmer use “map” can only help the compiler figure out which parts of the code parallelize, no?

Update: I found another post talking about closures in Haskell, since the java people are resisting closures in Java. It gives a better argument over why a for loop is no good.

Convention over configuration is a culture over reinvention

Most programs nowadays consist of some code the developer writes built on top of libraries. We string together and manipulate the libraries in interesting ways in order to build an application. No one rewrites a graphics engine or a networking library anymore (unless they’re doing something special or unique in that arena). So besides debugging, for every new type of application, developers mostly spend their time figuring out how to use libraries.

I’ve found that some libraries were really hard to figure out how to use, while others were a lot easier to adopt. Why? I’ve found that it was mainly due to a culture that was common across the libraries, and oftentimes this is more apparent in some languages than in others.

When I started using Rails, I was struck by how some things weren’t dictated by the syntax of the language, but merely a convention set by the framework developer. And as long as you followed the convention, things just worked. This not only simplified the complexity of rails, because it didn’t have to handle all sorts of cases, but it also made it simpler to find your way around the framework once you knew its ‘culture’.

“Convention over configuration”, is what the creator of Rails calls it, and after thinking about it for a little bit, it makes sense. In Rails you don’t have to make the mapping of class names to database table names, or their id attribute. There is already a convention, a culture, for doing that. Culture of a framework allows you to make assumptions that lets you say more with less.
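As a toy illustration of what the convention buys you (this naive version only appends an “s”; the real Rails inflector handles far more cases):

```ruby
# By convention, a Rails model's table name is just the lowercased,
# pluralized class name, and its primary key is "id" -- so no mapping
# has to be configured anywhere. A deliberately naive sketch:
def conventional_table_name(class_name)
  class_name.downcase + "s"
end

table = conventional_table_name("Post")  # the Post model reads from "posts"
```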

It is the same with any human language. Idioms such as, “famous last words”, “a rose by any other name”, and “rolling a rock uphill” have a lot of meaning behind the words that are the result of references that most other people know–culture. So with just a couple words, you can convey quite a bit. The catch is that the other person has to understand that culture in order for you to convey much with little.

I think programming languages–libraries especially–would benefit from a culture, so that programmers can more easily hop from library to library. Culture also has the advantage of being malleable over time. This way programmers can shift their conventions over time if the old ones aren’t working.

No such file to load — mkmf | mentalized

I was trying to install the Hpricot gem, but it wasn’t working. Turns out you need to install the ruby1.8-dev package on Ubuntu…that’s where the mkmf file resides. I suppose it’s because Hpricot has things it needs to compile, and all those things are in the dev package. It makes sense now, but it’s kind of annoying when you couldn’t have guessed. Good thing for Google.

For the love of programmers, we need better concurrency abstractions

Lately, I’ve been pretty interested in parallelism. Processors are moving to multi-core architectures. And while I expect that computers will keep following Moore’s Law for a while more, I think there’s a lot to be gained from figuring out how to best make use of multiple processors, especially for tasks that can be easily parallelized, such as 3D graphics, image processing, and certain artificial intelligence algorithms. If compilers and subsequently programmers can’t take advantage of these multiple processors, we won’t see a performance gain in future software.

However, in terms of programming for multiple processors, the general consensus among programmers is, “avoid it if you can get away with it.” Multi-threaded programming has generally been considered hard, and with good reason. It’s not easy to think about multiple threads running the same code all at the same time at different points, and the side effects that it will have. Synchronization and mutex locks don’t make for an easy abstraction that works well as the code base gets larger.
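A minimal sketch of that style in Ruby: correctness hinges on every access remembering to take the lock, which is exactly what gets hard as a code base grows.

```ruby
# Ten threads increment one shared counter. The Mutex makes the
# read-modify-write atomic; forget the synchronize block anywhere the
# counter is touched, and the final count can silently come out wrong.
counter = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    1_000.times do
      lock.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)
```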

One of the ways people have been able to get around it is to reduce the amount of data that different threads and processes need to share. Sometimes this is enforced by a no-side-effects policy in functional programming, and other times by the use of algorithms that are share-nothing by nature. Google’s MapReduce seems to be a good example of this.

But some programs and algorithms require sharing data, so multi-threaded programming over shared data is, in some sense, unavoidable. Since synchronized threads are what we’re introduced to as THE way to do multi-threaded programming, that’s all I knew for a long while. So I started to wonder: is the current concurrency abstraction, threads synchronized with mutexes, the only one that exists?

Apparently not. Like all good ideas in CS, they all seem to have come from the 1960s. There are futures, software transactional memory, actors, and joins (scroll down to concurrency). The post from Moonbase gives a probable syntax for these abstractions in Ruby–they don’t exist yet, but he’s imagining what they might look like. I’m excited about this, if it makes programming multi-threaded applications easier. That way, programmers can more easily exploit multi-core processors or clusters for speed.

Most of the time, parallelism is exploited for speed, but I think it can also be exploited for robustness. It’s no secret to programmers that code is fairly brittle: a single component that isn’t working correctly is a runtime bug for the entire system. I think parallelism can help alleviate this problem, at the cost of some of the execution speed that parallelism buys.

The only reason I think this might be an interesting area to explore is the relatively recent interest in natural complex and emergent systems, such as ants foraging for food, sugarscape, and termites gathering wood piles. More technical examples are decentralized P2P technologies such as BitTorrent. This seems to be nothing new, as agent-based modeling has been around for a while, in the form of genetic algorithms and ant algorithms. However, none of the currently popular programming languages has good abstractions for exploiting parallelism-for-robustness.

This might be a bit hard to design for, since it relies on building simple actors that produce an emergent system-level effect while only sharing or using local information. It’s not always easy to ascertain analytically what the global effect of many interacting simple actors will be, since it might not be tractable. In reverse, given a desired emergent global effect, finding the simple actor behavior that produces it isn’t a walk in the park. However, I think once achieved, it will have a robustness that makes it more adaptable than current systems.

If anyone out there knows of such things, post a comment and let me know.

Update: I found that there was a debate on Software Transactional Memory just now, and a nice post on how threading sucks. I know nothing compared to these people.

A link back to referring page in Rails

So here’s the quick problem. You have a long list of records in your Rails application, such as posts. You end up using the standard pagination offered in Rails, which we all know is slow. Regardless, it’s pretty annoying to go into an individual post, edit it in place, and then have to hit the back button, since there’s no link back to the previous page.

Well, since it’s paginated, there’s no static page to point a “back to list” link at. You could pass the page number along, but that’s a pain in the butt. It’s much easier to do:
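Something along these lines works, using the Referer header the browser sends (the helper name and fallback path here are hypothetical):

```ruby
# Pick the back-link target from the Referer header, falling back to a
# default path when the header is absent.
def back_link_target(env, default = "/posts")
  env["HTTP_REFERER"] || default
end

# In an ERB view you'd then write something like:
#   <%= link_to "Back to list", back_link_target(request.env) %>
```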

This’ll put a link back to the page in the pagination list that you came from. Saves pains. Turns out reading the HTTP spec on headers is helpful, in addition to some Rails source.

Turns out that “redirect_to :back” also uses the same trick. You can use it in your controller to redirect back to the referring page.

Spore Gameplay Video – Google Video

I was fascinated by the video of the new Will Wright game. He’s the same guy that created SimCity and The Sims. This looks like nothing but good ol’ sandbox type of fun, but on a ‘powers of ten’ scale. Often times, when I describe Will Wright’s games, most people ask, “What’s the point? What’s the goal?” The goal or point is whatever you make it to be.

At this point, some people that ask that question seem stunned, since it seems to them that a lot of effort was put into something pointless. I imagine these are the same people that can’t handle open-ended problems.

Other people get it immediately. When you get to make your own goals, you have to rely on your own imagination, and you start to own the world. That actually makes it much more fun, and something that you don’t tire of easily.

Apple’s open letter not quite convincing for music companies

Apple – Thoughts on Music

By now, the world has had some time to chew the fat on this letter, which came as a surprise to many. In summary, he says:

  • Apple can sell music, but only if DRM’d according to license
  • DRM requires secrets, and they can be broken by smart people
  • Alternative 1: do as we’ve been doing
  • Alternative 2: license FairPlay DRM
  • Alternative 3: abolish DRMs

He certainly wins consumers over with this letter. And I agree with him in principle, but I don’t think his arguments will convince the music companies, who are the ones that need the most convincing.

All the arguments he gives about DRM-free music make things easier for Apple, not for the music moguls. Primarily, Apple wouldn’t have to use up resources to keep working on DRM. Secondly, if this were to happen, he would commoditize his complement. A music store’s complement is music, and if there were nothing to differentiate the music (one could play music from any store on any device), that’s an advantage for Apple. It’s the same strategy Netscape employed: since browsers and servers are complements of each other, give away the browser (make it a commodity) and sell servers.

One last thing: the argument at the end doesn’t quite hold up. Even if music companies currently sell over 90 percent of their music DRM-free on CDs, CDs aren’t where future revenues will come from, and CD revenue will certainly decline. So of course music companies will be all hot and bothered by no DRM.

I think DRM-free is the way to go, and music companies will have to accept that the world is changing. In addition, they’ll have to lower their costs of publishing music and of finding artists. No more of this pick-a-handful, throw-it-at-the-wall-and-see-what-sticks approach, hoping the hit artist makes up for losses on everyone else.

Two submit buttons with form_remote_tag

In the beginning stages of any new framework or language, you can get away with just posting stupid programming tricks. This was the case with Ruby and Rails: fairly new a year or two ago, but pretty much old hat now, so I sometimes don’t bother posting things I figure out, since I figure it’s old news.

Except I’m still discovering little things here and there. It’s not just with the fancy new RJS templates, either. Old favorites like link_to_remote() still have unexplored corners (which we’ll get to in a minute). Although this one wasn’t that painful, I found it as a footnote in the Rails documentation that others might have easily missed.

So the answer to the common question I was looking for was “How do I have two submit buttons in the same form?” Usually, we need this when there is a set of user-entered data that has more than one action associated with it. For example, in a blog editing page, you usually want to do two things: “save as draft” or “publish”. And when you think about it, the file system on your desktop operates with the same metaphor. You select a bunch of files (the user-data), and right-click to select an action (move, copy, delete, etc).

So with a normal form submission, there is the solution of just naming the submit tags, and since the submit button will submit its value along with the form, you can tell which button was pressed:

<%= start_form_tag :action => :post %>
  <%= submit_tag "Save", :name => "save" %>
  <%= submit_tag "Preview", :name => "preview" %>
<%= end_form_tag %>

And then, in your controller, you simply check whether params[“save”] or params[“preview”] exists, and do the appropriate action, or call the appropriate callback.

def post
  if params["save"]
    save
    render :action => :save
  else
    preview
    render :action => :preview
  end
end

def save
# do saving stuff
end

def preview
# do preview stuff
end

Cake, right? But what if you want to do this with a form submitted by XMLHttpRequest? This hack doesn’t work with form_remote_tag, since it serializes the entire form regardless of which button was pressed. I was all ready to buckle down and really learn some JavaScript, instead of the bits and pieces I know.

But after combing through the documentation, apparently someone on the Rails team needed to do something similar before. If you look at the very end of the documentation for link_to_remote(), you’ll find this:

:submit: Specifies the DOM element ID that‘s used as the parent of the form elements. By default this is the current form, but it could just as well be the ID of a table row or any other DOM element.

It ends up that you can use this to serialize anything containing form elements. But not only that, there are two other bonuses.

One, you can stylize the submit links to look far better than the ugly submit buttons. Two, you no longer need a dispatcher method. Each link_to_remote “submit button” routes to its respective action, which means you can have far more than two submit buttons and not have a gigantic “if elsif” statement in a dispatcher method in your controller. Neat.

<div id="image_collection">
  <!-- form elements to serialize, e.g. a checkbox per image -->
</div>

<%= link_to_remote "Group", :url => { :action => :group }, :submit => :image_collection %>
<%= link_to_remote "Delete", :url => { :action => :delete }, :submit => :image_collection %>

Notice that the parent tag doesn’t have to be a form. It can be a div.

One downside to this is design-related. Because a link_to_remote call, unlike a submit button, doesn’t have to live inside a form tag, it can go anywhere on the page. This added flexibility means you have to be careful to put the newly minted submit button somewhere it’s obvious and intuitive that it submits the intended data. But if you’re aware of that, have fun with your new and shiny multiple-action form.

Update:
If you use :confirm in conjunction with :submit, make sure :confirm comes before :submit, as order seems to make a difference here.