Barcodes encoded in the angular dimension

But the new system uses a whole new approach, encoding data in the angular dimension: Rays of light coming from the new tags vary in brightness depending on the angle at which they emerge. "Almost no one seems to have used" this method of encoding information, Raskar says. "There have been three ways to encode information optically, and now we have a new one."

One of the better stories I've seen on the front page of HN in a while.  

Barcodes are usually used to identify inanimate things. When things have processing power, we normally use other lines of communication, such as wireless, to talk between them. But as human-computer interfaces get better and more ubiquitous, we will sometimes want communications that are inherently proximity- or interaction-based. A good example of this type of interaction is something like Bump Technologies. With tags like these, standard cameras could be used, and the information wouldn't need to make a round trip to a server.

I imagine barcodes becoming more prevalent on everyday objects, not only to tell the world what something is, but how to interact with it and which protocols it accepts.

Prezi uses zoomable user interfaces for presentations

Hrm, a different take on presentations. This reminds me of photosynth and Jeff Han’s multitouch.

The advantage of this slide format is also its weakness. Our visual-perception system is pretty good at picking up spatial relationships between things, especially if there's an animated movement between one thing and another. This way, you can demonstrate relationships between things that you couldn't before.

However, with a single canvas that one pans and zooms around in, you still need to make everything fit on a 2D plane. Something like a network graph can easily overwhelm this format (and every other 2D format). However, this is a small price to pay. We've been cramming multi-dimensional information into 2D formats, like paper, for a long time now.

Building a culture of teaching and learning by Dr. Tae!

Bah.  Posterous's share thing isn't working.  It lost my post.  I'm going to stop using its bookmarklet because it's done that more than once, and it's woefully frustrating.

To recap, I didn’t realize that I had met Dr. Tae until the end of the talk.  But I met him when he was a physics grad student.


Regular Expression Matching and Postfix Notation

As the compiler scans the postfix expression, it maintains a stack of computed NFA fragments. Literals push new NFA fragments onto the stack, while operators pop fragments off the stack and then push a new fragment. For example, after compiling the abb in abb.+.a., the stack contains NFA fragments for a, b, and b. The compilation of the . that follows pops the two b NFA fragments from the stack and pushes an NFA fragment for the concatenation bb.. Each NFA fragment is defined by its start state and its outgoing arrows:

The snippet doesn't make much sense unless you read the article, but I thought this part was rather neat. Usually, when I wrote my crappy one-off parsers, I just used regexes to pull out the tokens I needed, and never thought much about how regexes themselves were implemented. But what's detailed here makes sense. A regex is just a state machine, and matching tracks whether the string you're matching against lets you traverse all the way through that machine. To build the machine, the compiler pushes each fragment of the regex onto a stack until it reaches an operator, which pops fragments off and combines them. While I've always felt postfix notation is ass-backwards from a user's perspective, I can see the elegance of the implementation. I suspect Forth and Factor are similar in this regard.
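The stack discipline from the excerpt can be sketched in a few lines of Python. To be clear, this is a simplified toy version of Thompson's construction, not the article's C implementation — the names (`post2nfa`, `State`, `match`) and the patching scheme here are my own. `.` is the explicit concatenation operator, `|` is alternation, and `+` is one-or-more, as in the article's postfix notation:

```python
# Toy Thompson construction over a postfix regex, plus an NFA simulator.
# Fragments on the stack are (start_state, dangling_arrows) pairs, where
# dangling_arrows is a list of out-arrow lists still waiting to be patched.

class State:
    def __init__(self, char=None):
        self.char = char   # literal to match, or None for an epsilon/split state
        self.out = []      # outgoing arrows, patched as fragments are combined

MATCH = State()            # the single shared accepting state

def post2nfa(postfix):
    """Compile a postfix regex into an NFA; return the start state."""
    stack = []
    for c in postfix:
        if c == '.':                       # concatenation: wire e1 into e2
            s2, out2 = stack.pop()
            s1, out1 = stack.pop()
            for arrows in out1:
                arrows.append(s2)
            stack.append((s1, out2))
        elif c == '|':                     # alternation: split to e1 or e2
            s2, out2 = stack.pop()
            s1, out1 = stack.pop()
            split = State()
            split.out = [s1, s2]
            stack.append((split, out1 + out2))
        elif c == '+':                     # one-or-more: loop back via a split
            s, outs = stack.pop()
            split = State()
            split.out = [s]
            for arrows in outs:
                arrows.append(split)
            stack.append((s, [split.out]))
        else:                              # literal: push a one-state fragment
            s = State(c)
            stack.append((s, [s.out]))
    start, outs = stack.pop()              # one fragment left: the whole NFA
    for arrows in outs:
        arrows.append(MATCH)
    return start

def _add(s, states):
    """Add a state, following epsilon/split states transparently."""
    if s in states:
        return
    if s.char is None and s is not MATCH:
        for nxt in s.out:
            _add(nxt, states)
    else:
        states.add(s)

def match(start, string):
    """Simulate the NFA over string, tracking the set of live states."""
    current = set()
    _add(start, current)
    for ch in string:
        nxt = set()
        for s in current:
            if s.char == ch:
                for t in s.out:
                    _add(t, nxt)
        current = nxt
    return MATCH in current
```

Running it on the article's example, `abb.+.a.` (infix `a(bb)+a`), `match(post2nfa("abb.+.a."), "abba")` succeeds while `"aba"` fails — and the stack evolves exactly as the excerpt describes: the two `b` fragments sit on the stack until the `.` pops and concatenates them.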