render_to_string only counts as a render if it fails

render_to_string followed by render issues – Ruby Forum

I was puzzled by this yesterday, and good thing I went to do something else instead of tearing my hair out. Came back fresh today and found this. Apparently, render_to_string normally doesn’t count as a render, provided that it succeeds. If it throws an exception for some reason, then it’ll get counted as a double render if you rendered elsewhere. Small tip!


DRYing up your views with a TableBuilder

In the last two months, as I was adding more features to mobtropolis, I found it painful to lay things out all over again from scratch. As a result, it sucked to see ugly layouts on the new pages juxtaposed with all the styling I had done before. It wasn’t until a week ago that I said to myself, “Stop the madness!” and started refactoring my views, something I never thought of doing much of until now. When you don’t, the barrage of angle brackets blows out of proportion, and complex views start to look pretty damn fugly.

What I should be able to do is take common mini-layouts in my views and make them available as helpers so that I can think in terms of bigger chunks to piece together pages, rather than in divs and spans. In addition, it makes your interface more consistent for your users.

Some good resources were presentations from the 2007 RailsConf, like V is for Vexing and Keeping Your Views DRY. While a lot of view DRYing talk centers on form builders, I didn’t see any on table builders, so I decided to take a stab at it. Personally, I don’t like to overuse tables for layouts. But as certain elements in my page layouts kept repeating, I refactored them first into helpers, and then, once I’d written more than one, extracted them out into a simple table builder. This is how you’d use it:

For example, I have a mini-layout where I show simple stats:

Here’s how I used a simple table builder to display the above:

And I find that I started using the same sort of thing in other places, like in a user’s profile:

I cut out some details so you can see that it’s just a block that gets passed a ScoreCard object, from which you call placard to add another score to the score_card. It sure beats writing


over and over again.

To declare the helper, we define the structure of the table inside the declaration of a ScoreCard object. The ScoreCard object holds the contents of the placards: when they’re declared in the block above in the template, they get stored in the ScoreCard object, not written out to erb immediately. That way, we can place them wherever in the table we please, with a call to card.display(:placards):

module ScorecardHelper
  def score_card(html_options = {}, &block)
    options = { :class => :scorecard, :width => "100%" }.merge(html_options) do |xm, card|
      xm.table(options) do
 => :top) { xm << card.display(:placards) } }
      end
    end
  end
end

So then what’s ScoreCard look like? Pretty simple. It has a method for each cell that can be filled in the mini-layout. It’s kinda analogous to how form_for passes in a form object, on which you can call text_field, etc.

require 'lib/table_builder'

# Used to create a scorecard helper
class ScoreCard < TableBuilder
  cells :placards

  # a placard is placed into scorecards
  def placard(text, text_id, score, widget)
    xm =
    @placards[:html] += do
      xm.span(:style => "font-size: 1.2em") { xm << "#{text}" }
      xm.em(:id => text_id, :class => "primary_number") { xm << "#{score}" }
      xm << "#{widget}"
    end
  end
end

Notice that there’s a call to cells() to declare the type of cell, and a method of the same name that builds the html for that cell. If you have other types of cells, you simply put them in the list of cells, and then create a method for each that is called in the template. By convention, you’d stick the html of the cell contents in “@#{name_of_cell}”[:html], and, if you wanted, pass in the html_options and stick them in “@#{name_of_cell}”[:options]. Then you can access those in the helper wherever you want.

Let’s try another one. I have a mini_layout with a picture, and some caption underneath it, like a polaroid.

The associated helper and PolaroidCard object are pretty simple:

module PolaroidCardHelper
  # a polaroid card is used to show a picture with a caption below it.
  def polaroid_card(html_options = {}, &block)
    options = { :class => :polaroidcard, :style => "display: inline;" }.merge(html_options) do |xm, card|
      xm.table(options) do
 { { xm << card.display(:picture) } }
 => "caption")) { xm << card.display(:caption) } }
      end
    end
  end
end
require 'lib/table_builder'

class PolaroidCard < TableBuilder
  cells :picture, :caption

  def picture(html_options = {}, &block)
    @picture[:html] = capture(&block)
    @picture[:options] = html_options
  end

  def caption(html_options = {}, &block)
    @caption[:html] = capture(&block)
    @caption[:options] = html_options
  end
end
I’ve tried to pull all the plumbing out into TableBuilder (dropped into lib/), leaving only the flexibility of creating the table structure in the helper, and the format of the cells in the object. It turns out TableBuilder isn’t too complex either. It uses some metaprogramming to create some instance variables. I know it pollutes the object’s instance variable namespace, but I wanted to be able to say @caption[:html], rather than @cells[:caption][:html].

class TableBuilder < ActionView::Base
  class << self
    # used in the child class declaration to name and initialize cells
    def cells(*names)
      define_method(:initialize_cells) do
        @cell_names = { |n| "@#{n}".to_sym }
        @cell_names.each do |name|
          if instance_variable_defined?(name)
            raise "name clash with ActionView::Base instance variables"
          end
          instance_variable_set(name, { :html => "", :options => {} })
        end
      end
    end
  end

  def initialize(decor_block, &table_block)
    initialize_cells
    # run the template's block first, so the cells get filled in
    xm =
    html =, self)
    concat(html, decor_block.binding)
  end

  # the html a template block stored for the named cell
  def display(cell_name)
    instance_variable_get("@#{cell_name}")[:html]
  end

  # the html options a template block stored for the named cell
  def html_options(cell_name)
    instance_variable_get("@#{cell_name}")[:options]
  end
end

I’ve found having these helpers cleans up my views significantly, though I have to admit, it’s not exactly the easiest thing to use yet. In addition, I’m not exactly thrilled about having TableBuilder inherit from ActionView::Base, but it was the only way I could figure out to get the call to concat() to work. In any case, the point is to show you that refactoring your views into helpers is a good idea, and even something like a table builder goes a long way, even if you don’t do it the way I did. Lemme know if this helps or hinders you. snippet!

What the heck is a Ycombinator

I woke up at 4am this morning for no reason and decided to go through Y Combinator in Ruby. Given that I read Hacker News daily, it’s a shame I didn’t know what a Y combinator was. I thought the article was pretty clear, at least compared to Raganwald’s article. As an aside, purely as a learning experience, it was kinda fun and eye-opening. At least I can see why geeks think it’s neat.

I had written a reply to a comment on Hacker News about it, and it turned out to be really long, so I just extracted it out here. I was supposed to start some coding, but I got wrapped up in writing the comment. It ended up pretty long, but shorter than it could have been. You guys get to suffer through the long version. Haha. But I’m not going to go into details. You can read those in the linked articles. Instead, I’ll paint some broad strokes.

Y = λf·(λx·f (x x)) (λx·f (x x))

or in Ruby, copied from the aforementioned article:

y = proc { |generator|
  proc { |x|
    proc { |*args|
    }
  }.call(proc { |x|
    proc { |*args|
    }
  })
}
or if you prefer elegance, Tom’s solution in response to Raganwald:

def y(&f)
  lambda { |x| x[x] } [
    lambda { |yf| lambda { |*args| f[yf[yf]][*args] } } ]
end
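To see that either definition actually recurses, here’s a quick factorial check using Tom’s version (fact and recurse are just illustrative names, not from the article):

```ruby
# Tom's y combinator, as above
def y(&f)
  lambda { |x| x[x] } [
    lambda { |yf| lambda { |*args| f[yf[yf]][*args] } } ]
end

# factorial with no named recursion: the generator's "recurse" argument
# is wired back to the function being defined by y itself
fact = y { |recurse| lambda { |n| n.zero? ? 1 : n * recurse[n - 1] } }
fact[5]  # => 120
```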

I found the lambda calculus above hard to read. However, if you go through the code in Y Combinator in Ruby, you’ll find it’s not too bad. I find that this lecture is also pretty good, as it takes you step by step, with a little bit of humor as well.

If I had to take a stab at it: a Y combinator is a way to implement a recursion mechanism when the language doesn’t provide named recursion, loops, or iterators, and all you get are first-class functions and a few substitution rules.

Functions are just mappings from one set of things to another set of things, i.e. give it a number, and it’ll point to another number. The Y combinator relies on a property of functions: sometimes, when you put something into a function, you get the exact same thing out, i.e. something gets mapped to itself. f(x) = x^2, where f(1) = 1, is an example of this property. They call this a fixed point of a function.

The thing about functions is that they don’t have to map only numbers to numbers. They can map functions to other functions. A derivative is an example of a function that takes one function as an input and spits another function back out, like d(x^2) = 2x. Where is the fixed point of a derivative? One of them is e^x, since d(e^x) = e^x. I’m sure there are more.

This is important because if you can find the point at which a function returns a function unchanged, you can use that to call the function again, which is what we call recursion. And the trickiness you see in the Y combinator is mainly a result of functional programming not keeping state, so you have to pass everything you need into a function. So if you want recursion, you have to have a mechanism to pass the function itself along, so you can call it again. And this mechanism kinda bends back onto itself, which is why you see part of the Y combinator repeated twice in the code above, and in the lambda calculus.

It seems pretty inane to use a Y combinator given that modern high-level languages provide named recursion, and if anything, for loops and iterators with closures. But what if you don’t have those? How do you process lists/arrays without loops or named recursion? Generally, you’d have to make your own recursion mechanism.

When would you ever have languages that don’t have those things? Probably not often. But Forth comes to mind. I’ve never used Forth, but from what little I know about it, the language starts off with some basic primitives and that’s it. No loops, no if statements. How do you get anything done? Well, you build your own control structures. People use Forth because it’s possible to build your own very small compiler from the ground up written in Forth itself, and still understand the entire thing. I suspect it’s used in embedded programming because of that reason.

So you’d pretty much use it when you want to have a powerful system from first principles. I’m guessing it’s possible to build your own computer by implementing only functions and substitution rules in hardware, and then you can derive everything else, numbers, pairings, and even recursions in software. That way, you keep your hardware costs down, while retaining the power of a Turing machine.

Speculating here…but another thing that might be interesting is that the Y combinator might be a way to compress code. Software size should be reducible if the code is compressible. And recursion can be thought of as compressing code, which gets expanded when the recursion is run. I wonder if there are other ways to reduce software bloat with a Y combinator besides recursion?

(Aha!) Part of the reason why great hackers are 10 times as productive

I knew in college that some dudes were faster than I was in terms of programming. Since peer programming wasn’t exactly encouraged in college, and at work I did mostly prototyping work, I never really knew how fast other programmers worked.

So when I read Paul Graham’s (and Joel’s) claim that great hackers are at least ten times as productive as average programmers (too lazy to cite right now), I was kinda shocked. Surely, ten times is an order of magnitude! Something that takes an average programmer a 40-hour week, the great hacker can do in a 4-hour afternoon?

I wondered about that, since there are times when I get stuck on something, then I just start stabbing around randomly out of frustration. I had assumed that great hackers were faster only because they had either the experience or insight to side-step whatever I was doing wrong.

But lately, I’ve been re-reading everyone’s essays that write about programming productivity. And one thing that caught my eye the second time around was when Paul Graham was talking about bottom up programming and how he didn’t really believe in objects, but rather, he bent the language to his will. He was building new blocks for himself so he could think about the problem at a higher level of abstraction.

This is basic stuff! I mean, that’s the whole point of higher-level programming. When you refactor methods out, you’re creating a vernacular so that you can express the problem in terms of the problem domain, not in terms of general computing. This is much like if you want to drive a car, you’d want to be able to step on the gas, rather than time the firings of the pistons. And if you want to control traffic in a city, you’d rather tell all cars to go to a destination, rather than stepping on the gas and turning the steering wheel for each one.

But taken in the light of bending a language to your will, it makes it clearer to me how great hackers are ten times as productive. Great hackers are productive not only because they know what problems to sidestep and can problem-solve systematically and quickly, but because they also build a set of tools for the problem domain as they go along. They are very good pattern recognizers and will generalize a particular pattern of code so that they can use it again. But not only that, great hackers will create an implicit understanding attached to the abstraction, i.e. what we might call common sense.

A case in point. Before Ruby, I’d used for loops over and over again, never really thinking that I could abstract a for loop. It wasn’t until they were taken away in Ruby did I realize that map, inject, and their cousins are all abstractions of the for loop. When I see “map” I know that it performs a transformation on every element. But I also know that the array I get back will be the same size, that each element operation doesn’t affect other elements, among other things. These are implicitly stated, and they allow for shorter code.

When that happens, you can simply read “map”, and get all the connotations it comes with, and hence it comes with meaning. It also becomes easier to remember, since it’s a generalized concept that you can apply in different places in the code. The more times you use it, the easier it is to remember, instead of having specialized cases of the same kind of code where the behavior is different in different parts of the code.

A great hacker will take the initial time upfront to create this generalized code, and will save in the long run by being able to reuse it. Done over and over again, it all adds up. So it’s not that, for any given problem, a great hacker will finish in 4 hours what takes an average programmer 40 hours, but that over time, a great hacker will invest the time to create tools and a vocabulary that let him express things more easily. That leads to substantial savings in time over the long haul.

I hesitated writing about it, as it’s nothing I (nor you) haven’t heard before. But I noticed that until recently, I almost never lifted my level of abstraction beyond what the library gave me. I would always be programming at the level of the framework, not at the level of the domain. It wasn’t until I started writing plugins for Rails extracted from my own work and reading the Paul Graham article that a light went on for me. It was easier to plug things like acts_as_votable together than to keep messing around with associations (at the level of the framework). I still believe you should know how things work underneath the hood, but afterwards, by all means, go up to the level of abstraction appropriate for the problem domain.

DSLs (domain-specific languages) are really just tool-making and vernacular creation taken to the level of bending the language itself. It’s especially powerful if you can add implicit meaning to the vernacular in your DSL. It’s not only a way of giving your client power in their expression, but also a refactoring tool, so that you can better express your problem in the language of the problem domain. Instead of only adding methods to your vernacular, you can change how the language works. It was with this in mind that I did a talk on DSLs this past weekend at the local Ruby meetup. The first part is on Dwemthy’s Array, and the second uses pattern matching to parse Logo. Both seemed pretty neat when I first read about them. Enjoy!

DSL in Ruby through metaprogramming and pattern matching

Generating rdoc for gems that refuse to generate docs

I recently upgraded to Capistrano 2.1, and it’s woefully lacking in documentation. Jamis Buck had already picked a documentation coordinator about a month ago, but nothing seems to have happened since.

So it was time to go dumpster diving in cappy code. I at least wanted to see what the standard variables were. To my surprise, there were some docs in the code, but I couldn’t generate them with

gem rdoc capistrano

For those of us that have never made a gem, all you have to do to force it is to edit the associated specification for the gem (/usr/lib/ruby/gems/1.8/specifications/), and add

s.has_rdoc = true

Maybe if I dig enough stuff out of it, I’ll post some preliminary documentation for cappy 2.1 and donate it to the documentation effort.

As an aside and musing: ideally, the code itself would be the documentation. However, just because you’re reading code doesn’t mean you can get the overall picture of how to use it. Even if you filtered out the details and saw just the class and method declarations, that still wouldn’t be enough, since you can’t see how things fit together. I don’t know what a good solution would be. The simplest API is often complex enough to be nearly useless without good documentation.

Small tip!

The user owns this and that

In most any community-based website, your users are creating objects. And sometimes, you’d only like to display them when they’re owned by the user. Usually, it’s easy enough to do the check based on the association in the template views.


However, at the risk of perhaps over-reaching, we try for something more readable:

class Story < ActiveRecord::Base
  belongs_to :account

  def owned_by?(user)
    self.account == user
  end
end


But this gets to suck, because you have duplicate code in different classes that differ only slightly due to association names. One way to solve it is to put the method in a module and include it in the different classes. After all, that’s what mixins are for.

module Ownership
  def owned_by?(user)
    self.account == user
  end
end

class Story < ActiveRecord::Base
  include Ownership
  belongs_to :account
  # blah blah blah
end

class Post < ActiveRecord::Base
  include Ownership
  belongs_to :account
end

Or alternatively, you can put it in the account, but in reverse. But now, you have to search through the associations of the owned ActiveRecord object.

class Account < ActiveRecord::Base
  def owns?(ar_object)
    ar_object.class.reflect_on_all_associations(:belongs_to).detect { |assoc|
      ar_object.send( == self
    } ? true : false
  end
end
I find the ternary operator kinda ugly there, but it doesn’t make sense to return the reflection object. Regardless, this lets you do:


However, this doesn’t work for self-associated AR objects, or objects that have two or more belongs_to to the same account. It relies on a unique belongs_to association for every object belonging to an account. I’m not sure yet which way’s the better one to go, and in the end it probably doesn’t matter much, but I do like being able to say user.owns?(anything) for any object without really thinking about what the associations are. half-tip.

A simple distributed crawler in Ruby

A week ago, I took a break from Mobtropolis, and…of all things ended up writing a simple distributed crawler in Ruby. I hesitated posting it at first, since crawlers are conceptually pretty simple. But eh, what the heck.

This was just an exercise in DRb and Hpricot, so don’t use it for your production work, whatever that may be. An actual crawler is far more robust than what I wrote. And don’t leave it running, hammering at stuff, since that’ll get you banned.

First, this is how you use it:

WebCrawler.start("") do |doc|
  puts "#{"title").inner_html}"
end

And that’s it. It returns documents in an XPath traversable form, courtesy of Hpricot.

A web crawler is a program that simply downloads pages, takes note of the links on each page, and puts those links on its queue of links to crawl. Then it takes the next link off its queue, downloads that page, and does the same thing. Rinse and repeat.

First, we create a class method named start that creates an instance of a webcrawler and then starts it. Of course, we could have done without this helper method, but it makes it easier to call.

module Crawler
  class WebCrawler
    class << self
      def start(url)
        crawler =
        crawler.start(url) do |doc|
          yield doc
        end
        return crawler
      end
    end
  end
end

So next, we define the initialization method.

module Crawler
  class WebCrawler
    def initialize
      puts "Starting WebCrawler..."
      DRb.start_service "druby://localhost:9999"
      puts "Initializing first crawler"
      puts "Starting RingServer..."

      puts "Starting URL work queue"
      @urls_to_crawl =
      @work_provider =, @urls_to_crawl, "Queue of URLs to crawl")

      puts "Starting URL visited tuple"
      @urls_status = {}
      @visited_provider =, @urls_status, "Tuplespace of URLs visited")
    rescue Errno::EADDRINUSE => e
      puts "Initialize other crawlers"
      puts "Looking for RingServer..."
      DRb.start_service
      @ring_server = Rinda::RingFinger.primary

      @urls_to_crawl = @ring_server.read([:name, :urls_to_crawl, nil, nil])[2]
      @urls_status = @ring_server.read([:name, :urls_status, nil, nil])[2]
      @delay = 1
    end
  end
end

This bears a little explaining. The first webcrawler you start will create a DRb server, if one doesn’t already exist, and do the setup. Every subsequent webcrawler will then connect to the server and start picking URLs off the work queue.

So when you start a DRb server, you call start_service with a URI, then you start a RingServer. What a RingServer provides is a way for subsequent clients to find services provided by the server or other clients.

Next, we register a URL work queue and a URLs-visited hash as services. The URL work queue is a TupleSpace. If you haven’t heard of TupleSpaces, the easiest way to think of one is as a bulletin board. Clients post items on it, and other clients can take them off. This is what we’ll use as a work queue of URLs to crawl.

The URLs visited is a Hash so we can check which URLs we’ve already visited. Ideally, we’d use a TupleSpace for this too, but DRb seems to only provide blocking calls for reading/taking from the TupleSpace. That doesn’t make sense, but I couldn’t find a non-blocking call that day. Lemme know if I’m wrong.

module Crawler
  class WebCrawler
    def start(start_url)
      @urls_to_crawl.write([:url, URI(start_url)])
      crawl do |doc|
        yield doc
      end
    end

    def crawl
      loop do
        url = @urls_to_crawl.take([:url, nil])[1]
        @urls_status[url.to_s] = true

        doc = download_resource(url) do |file|
          # parse the downloaded file with Hpricot
        end or next
        yield doc

        time_begin =
        add_new_urls(extract_urls(doc, url))
        puts "Elapsed: #{ - time_begin}"
      end
    end
  end
end

Here are the guts of the crawler. It loops forever, taking a url off the work queue using take(), which looks for a pattern in the TupleSpace and finds the first tuple that matches. Then we mark the url as ‘visited’ in @urls_status, download the resource at the url, use Hpricot to parse it into a document, and yield it. If we can’t download it for whatever reason, we grab the next URL. Lastly, we extract all the urls in the document and add them to the work queue TupleSpace. Then we do it again.

The private methods download_resource(), extract_urls(), and add_new_urls() are merely details, and I won’t go over them. But if you want to check them out, you can download the entire file. There are weaknesses I haven’t solved, of course. If the first client goes down, everyone goes down. Also, there’s no central place to put the processing done by the clients. But like lazy textbook writers, I’ll say I leave that as an exercise for the readers. snippet!


Communicating your intent in Ruby

I’ve been using Ruby most every day for about two years now. While I’m no expert, I know enough to be fairly productive in it. And beyond liking the succinctness and power that you often hear other people talk about, it’s made me a better programmer. But there’s an aspect of Ruby that worries me somewhat.

To start, programming is rightfully recognized as a means to build something from pure thought. But it’s also a form of communication, both to other programmers that will touch your code later and to yourself when you look at it months from now. We’re at the point where, outside of embedded and spacecraft programming, we have the luxury of using programming languages that focus on ease for the programmer rather than ease for the machine. Fundamentally, that’s the philosophy Ruby takes.

And while Ruby’s nice in a lot of ways, I’m not sure about how it communicates an object’s interface. When you’re allowed to modify objects and classes on the fly, how do you communicate interfaces between modules you mix in and methods/modules you add? By interface, I mean: how do you use this class so that it does what it’s supposed to? Normally, it’s pretty obvious–you look at the names of the methods declared in the code. A well-written class has its public methods exposed, or you look at its ancestors’ public methods. You might need some documentation to figure out how to call them in the right order, but generally, you have some idea just by looking at the method signatures.

However, when you throw mixins and metaprogramming into the mix, it becomes harder to tell just from looking at the method signatures in the code–the structure of the code. You have to actually read the code, or you have to rely on someone who knew the intent to document it in detail.

An example of communicating interfaces for mixins: the module Enumerable contains a lot of the collection-related methods. The cool thing is that if you want these functions in your own class, all you have to do is define each() in your class, mix in the Enumerable module, and you get all of them “for free”. However, outside of documentation explicitly stating it, it’s not immediately obvious from the method signatures that this is what you have to do in order to use it. It’s only after scanning through the entire code that you notice each() being used by all the methods.

Of course, Ruby contains enough metaprogramming power to protect yourself against this. One can do something like this:

class MethodNeededError < RuntimeError
  def initialize(method_symbol, klass)
    super "Method #{method_symbol} needs to be in client class #{klass.inspect}"
  end
end

module Enumerable
  def self.included(mod)
    raise, mod) unless mod.method_defined?(:each)
  end
end

This only works if you put the include after you define each(). That’s just asking for trouble when the order of definitions in your class matters.

A fair number of people are writing mini-DSLs in Ruby using metaprogramming tricks. One of the common ones is using method_missing to define or execute methods on the fly. ActiveRecord’s dynamic finders are implemented this way. The cost to communicating the interface through the structure of the code is obvious: unless it’s documented well, you can’t tell just by looking at the method signatures.

Why do I harp on interface signatures? I mean, in the case of requiring each(), it works out by just letting the enumerated methods fail, since they’ll complain about each itself. In the case of method_missing, just read the regex in the body. While these are true, none of them allow rdoc to generate proper documentation. The whole point of documentation is to show you the interface–how to use that piece of code. I’m just afraid that, given Ruby’s philosophy of being able to write clear, powerful, and succinct code, it might fall short as people use metaprogramming tricks like alias_method_chain and method_missing more and more. Maybe rdoc needs to be more powerful and read code bodies for regexes in method_missing? It already documents yields in code bodies, but that seems awfully specific.

I’m not exactly a fan of dictating interfaces like in Java. When you’re first coding something up, you’re sketching, so things are bound to change. Having plumbing like interface declarations gets in the way, imo. However, when something’s a bit more nailed down, it’d be nice to be able to communicate your intent to other programmers without them having to read code bodies all the time.

In the end, I side with flexibility. However, I kinda wish Ruby had some type of pattern matching for methods so I didn’t have to read method_missing all the time. But then again, that would be messy in all but the simplest schemes. Can you imagine a class that responded to email addresses as method calls? I guess I’d have to file this one under “bad ideas”.

Don’t reopen ActiveRecord in another file

The power of Ruby lies partially in how one can reopen classes to redefine them. Namespace clashes aside, this is usually a good way to extend and refine classes for your own uses. However, last night, I got bitten in the ass trying to refactor a couple classes. In Rails, you’re allowed to extend associations by passing a module to the association call.

class User < ActiveRecord::Base
  has_many :stories, :through => :entries, :source => :story,
           :extend => StoryAssociationExtensions
end

where StoryAssociationExtensions is a module holding methods, like expired(), that I can perform on the stories association, so I can do stuff like

@user = User.find(:first)
@user.stories.expired # gives all expired stories

So while refactoring and cleaning up, I renamed StoryAssociationExtensions to AssociationExtensions, reopened the Story class, and put it in there. I just wanted to clean up the namespace and put the association extensions somewhere that made semantic sense. Naturally, I thought association extensions for a class belong in that class. Well, it doesn’t work. And don’t do it. Hopefully, I’m saving you some pain.

class Story < ActiveRecord::Base
  module AssociationExtensions
    def expired
      select { |c| c.expired? }
    end
  end
end

Well, this works if you’ve reopened the class within the same model file, story.rb in this case. However, if you reopen the class in another file elsewhere, your model definition won’t get loaded properly, which leads to associations and methods you defined not existing.

So imagine my bewilderment when associations didn’t work on only certain ActiveRecord models. In addition, they worked in the unit tests and script/console, but not when the server was running. All that at 3am. 😦

Good thing for source control, so I could revert (but I have to say, svn isn’t as easy to use as it could be).

I ended up creating a directory in models called collection_associations and putting the associations in there under a CollectionAssociations module namespace. Not exactly the best arrangement, but it’ll do for now.

I’m still not sure why ActiveRecord::Base subclasses don’t like being reopened, but I’m guessing it has something to do with models only getting loaded once. If anyone has an explanation, I’d like to read about it.

free warning!

State change observer for ActiveRecord

When I started writing some code recently, I noticed that my controllers were getting fat. There was a bunch of stuff in there that didn’t have anything to do with actually carrying out the action–things like sending notifications. ActiveRecord already has observers to take action on certain callbacks. However, what I needed was to take action on certain state transitions. Not seeing any immediate solution in the Rails API, I decided to test myself and try writing one. I was bored too. So while I’m not sure it was worth the time writing it, it certainly was kinda interesting.

Just as a contrived example, let’s say we’re modeling the transmission of a car. It has three modes: “park”, “reverse”, and “drive”. We want to send a notification when a user tries to change it from “reverse” to “drive”, but not when changing from “park” to “drive”. If the previous state didn’t matter, and we just wanted to send notifications whenever the state changed to “drive”, we’d use the observers that come with ActiveRecord. But since we do care where the state transition came from, here’s how it looks:

class CreateCarTransmission < ActiveRecord::Migration
  def self.up
    create_table :car_transmission do |t|
      t.column :engine_id, :integer, :null => false
      t.column :mode, :string, :null => false, :default => "park"
    end
  end

  def self.down
    drop_table :car_transmission
  end
end

class CarTransmission < ActiveRecord::Base
  include StateTransition::Observable
  state_observable CarTransmissionNotifier, :state_name => :mode
end

So then for my notifier I have:

class CarTransmissionNotifier < StateTransition::Observer
  def mode_from_reverse_to_drive(transmission)
    # send out mail and flash lights about how bad this is.
  end
end

And that’s it. Whenever the controller changes the state from “reverse” to “drive”, lights will flash and emails will be sent out condemning the action, and my controllers stay small and lean.

class CarController < ApplicationController
  def dismantle
    @transmission = CarTransmission.find(params[:id])
    @transmission.update_attribute :mode, "reverse"
    @transmission.update_attribute :mode, "drive"
  end
end

So where’s the magic? It took a bit of digging around. There were two major things I had to do: insert observers during initialization, and override the attribute setter so that it notifies observers when the state changes.

ActiveRecord doesn’t exactly let you override the constructor, and I didn’t try too hard to mess with it. Looking around the web, I happened upon has_many :through again, which has some good tips that helped me through Rails’s rough edges. While I didn’t exactly follow his advice, I did find out about the callback :after_initialize. It must be something new, because I don’t see it in the 2nd edition of the Rails book, and the current official API doesn’t list it. Other Rails API references seem to be more comprehensive, like RailsBrain and Rails Manual.
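Here’s a minimal pure-Ruby sketch of the idea behind :after_initialize. FauxRecord is a stand-in I made up for ActiveRecord::Base, which (as I understand it) calls after_initialize every time a record object comes to life, whether via new or via a find:

```ruby
require 'observer'

# FauxRecord stands in for ActiveRecord::Base (assumption) so this runs
# without Rails: it invokes after_initialize when an object is created.
class FauxRecord
  include ::Observable

  def initialize
    after_initialize if respond_to?(:after_initialize)
  end
end

class Transmission < FauxRecord
  attr_reader :observers_attached

  def after_initialize
    # in the real code, add_observer(CarTransmissionNotifier.new) goes here
    @observers_attached = true
  end
end

Transmission.new.observers_attached # => true
```

The point is that after_initialize gives you a per-instance hook without having to fight ActiveRecord’s constructor.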

Overriding attributes has also always been a bit of a mystery. I found a listing of the attribute update semantics, which helped me figure out what I was looking for, but it was wrong in one respect: you can’t use the first form (article.attributes[:attr_name] = value) to set an attribute. Looking in the Rails 1.2.3 code, attributes returns a read-only hash. But it’s right that you should override the second form (article.attr_name = value), since update_attribute() and update_attributes() depend on it.

Again, the method I was looking for isn’t listed in the official API, apart from a short mention in the ActiveRecord description under Overriding Attributes, which makes it harder to find. It turns out we can use write_attribute().
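To illustrate, here’s a sketch of overriding an attribute writer via write_attribute(). PlainRecord is a made-up stand-in for ActiveRecord::Base with the same read_attribute/write_attribute semantics, so the example runs without Rails; in a real model you’d just subclass ActiveRecord::Base and skip the stub:

```ruby
# Stand-in (assumption) for ActiveRecord::Base's attribute storage.
class PlainRecord
  def initialize
    @attributes = {}
  end

  def read_attribute(name)
    @attributes[name.to_s]
  end

  def write_attribute(name, value)
    @attributes[name.to_s] = value
  end
end

class Article < PlainRecord
  # Override the writer; in ActiveRecord, update_attribute() and
  # update_attributes() funnel through this method, so the override
  # applies to them too.
  def title=(value)
    write_attribute(:title, value.to_s.strip)
  end

  def title
    read_attribute(:title)
  end
end

a = Article.new
a.title = "  Hello  "
a.title # => "Hello"
```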

So that’s pretty much it. Using some standard meta-programming like how plugins do it, you wrap it up, and it’s pretty simple:

require 'observer'

module StateTransition
  module Observable
    class StateNameNotFoundError < RuntimeError
      def message
        "option :state_name needs to be set to the name of an attribute"
      end
    end

    def self.included(mod)
      mod.extend(ClassMethods)
    end

    module ClassMethods
      def state_observable(observer_class, options)
        raise StateNameNotFoundError if options[:state_name].nil?
        state_name = options[:state_name].to_s

        include Object::Observable

        define_method(:after_initialize) do
          # reconstructed body: attach a fresh observer to each record
          add_observer(observer_class.new)
        end

        define_method("#{state_name}=") do |new_state|
          old_state = read_attribute(state_name)
          if old_state != new_state
            write_attribute(state_name, new_state) # TODO yield the update method
            changed # mark the Observable as dirty so notify_observers fires
            notify_observers(self, state_name, old_state, new_state)
          end
        end
      end
    end
  end

  class Observer
    def update(observable, state_name, old_state, new_state)
      send("#{state_name}_from_#{old_state}_to_#{new_state}", observable)
    rescue NoMethodError
      # ignore transitions that don't have a handler method defined
    end
  end
end

I had a difficult time figuring out how to define methods on instances of a class from within a class method. The only approaches I came up with were to use define_method, or to include a module of instance methods. instance_eval() didn’t work. Metaprogramming in Ruby gets rather confusing when you’re doing it inside a method; it’s hard to keep track of which context you’re in.
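For the curious, here’s a small self-contained example of the define_method trick: because define_method takes a block, the block closes over local variables (like state_name) from the enclosing class method, which is exactly what state_observable relies on. The Machine class and state_accessor macro are made-up names for illustration:

```ruby
class Machine
  # A macro-style class method, like state_observable above (assumed names).
  def self.state_accessor(state_name)
    # The blocks close over state_name from this method's scope, something
    # a string passed to instance_eval/class_eval can't do as cleanly.
    define_method("#{state_name}=") do |value|
      instance_variable_set("@#{state_name}", value)
    end

    define_method(state_name) do
      instance_variable_get("@#{state_name}")
    end
  end

  state_accessor :mode
end

m = Machine.new
m.mode = "drive"
m.mode # => "drive"
```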

So if you can make use of this, great. If you think it’s worth turning into a plugin, let me know. And if you know of a better way, by all means, let me know that too. tip!