The possibility of a reshapable keyboard

I have a bunch of hard drives that failed. I suspect it’s the controller cards that broke, but either way, I can’t use them. What do you do with old hard drives? That’s when I started looking around, and it turns out there are rare-earth magnets inside (as well as a voice coil actuator that you can hook up to an amp to make hard drive speakers).

And thus, I found a long article on rare-earth magnets. Magnets have long fascinated people, since they make all sorts of things possible: speakers, hard drives, motors, and generators. But I didn’t know about magnetic braking, or that you can buy ferrofluids (also described in the article).


Ferrofluids are liquids that respond to a magnetic field. When you bring a magnetic field near one, it gets spiky; the stronger the field, the denser the spikes. I was able to play with some in an enclosed sac once. It’s kinda weird. You can actually feel resistance in the liquid when you put a magnet by it, like there’s something in the liquid.

While the Optimus keyboard lets you redisplay the keys any way you wish, I’ve always wanted a keyboard that I could reshape. I’d rather the keyboard actually be a membrane stretched over a flat rectangular plate, and depending on the application, the membrane would take on different shapes. So instead of having keys when I’m looking at a map, the “keyboard” would take the shape of the terrain I’m manipulating; then I could pan and tilt. If I’m flying a plane, I’d rather have a joystick I can grab. I suppose you could make a rudimentary version with ferrofluid in an enclosed membrane. Not only could you reshape the liquid with controlled electromagnetic fields, but you should also be able to detect human interaction with the membrane by how it changes the magnetic field.

If a reshapable keyboard existed, you could also hook two of them together over the internet. That way, you could interact with other people through touch, not just text. If I put my hand on my reshapable keyboard and push down, the connected keyboard at the other end should show an imprint of my hand pushing up out of the membrane. I’d also be able to augment my interactions so that my hand appears to be holding something it isn’t really holding on my end. An inane application would be playing paper, rock, scissors where, instead of hand gestures, you’d actually see a sheet of paper, a chunk of rock, or a pair of scissors rise out of the reshapable keyboard. A more useful application might be keeping family members or loved ones in touch–literally.

And if the membrane were embedded with OLEDs, it would be possible to add color to it, so the interface would be something you could directly manipulate.

When I dreamt this up, I was thinking of gaming applications or remote surgery. Imagine the kind of fun and good you can do with the technology! However, after a bit of thought, I think a more likely scenario is that geeks adopt it for that, and then the porn industry makes it widespread. Just wait and see.

Hackszine.com: Detecting and reducing power consumption in Linux

For a long while, power consumption was a secondary concern for desktop computer engineers. Not so today: when you’re cramming that many transistors into such a small space, operating at high speeds, power definitely becomes an issue. And now that mobile and embedded devices are slowly infiltrating our daily lives, power should be on the table for improvement. Google has been doing some work on this; they now build their own power supplies and even site their data centers where power is cheaper (old news, so I won’t link it).

Battery life is one area of computing that really is way behind. I’m still hoping for some feat of chemical engineering to save us…but hopefully this time of scarce energy for mobile devices will drive some creative hardware engineering.
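Just as an illustration, here’s a minimal sketch of detecting power draw on a Linux laptop by reading what the kernel already exposes through sysfs. The battery name BAT0 and which attributes exist vary by machine, so treat the paths as assumptions:

```python
# Minimal sketch: sample the battery's reported power draw on Linux.
# Assumes the battery is exposed as BAT0 under sysfs; the name and the
# available attributes vary by machine and kernel version.
import time
from pathlib import Path

BATTERY = Path("/sys/class/power_supply/BAT0")

def read_microwatts():
    power_file = BATTERY / "power_now"          # microwatts, if present
    if power_file.exists():
        return int(power_file.read_text())
    # Some batteries report current and voltage instead of power.
    current = int((BATTERY / "current_now").read_text())   # microamps
    voltage = int((BATTERY / "voltage_now").read_text())   # microvolts
    return current * voltage // 1_000_000

if __name__ == "__main__":
    for _ in range(5):
        print(f"{read_microwatts() / 1_000_000:.2f} W")
        time.sleep(2)
```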

Innovation is force fed; someone get the lube!

In an earlier post, I talked about what users know and what you know when it comes to listening to your users. That said, when it comes to building new products–either in another line, or something to replace your old product–you should go back to not listening to your users, at least on the first draft. The act of creation is effectively the effort of one (or the few). At least when it comes to first drafts, too many cooks do spoil the broth. That might be a bit Ayn Randian, but the only examples I’ve ever heard of where design by committee was successful are the Space Shuttle and the Lunar Lander. (If there are more examples, please enlighten me.)

When you’re building a product, you’re essentially forcing your world view onto others. You’re basically saying, “I find this to be a pain, and this is not the world as it should be. As a builder, I can correct it after mouthing off for a while.” And this is usually why people don’t warm up to innovative ideas readily–someone is shoving their world view in your face. Unless you’ve been looking for a solution to that same problem when it’s introduced to you, you won’t be receptive to it. Even innovative people suffer from this affliction of shortsightedness.

“Don’t worry about people stealing an idea. If it’s original, you will have to ram it down their throats.” – Howard Aiken

Because innovative products can be so jarring, they should soften the blow a bit–or, as others like to call it, lower the barriers. This is where influences from design, gaming, and etiquette can help.

Beyond the current trend of sleek lines, horn-rimmed glasses, and black turtlenecks among designers, design isn’t just about putting a gradient background on your web app or painting things in pastel colors. Hackers making a product should understand that design is the study of how best to solve communication and usability problems within limiting constraints. What information does the user need to know right this second, and how should you convey it so it’s as easy to understand as possible? From the answers to those questions, a form that is also pleasing to the eye will emerge.

Gaming is an avenue more familiar to hackers than design is. However, games are often seen as mere trifles of play reserved for kids–though this is changing. If you’ve played enough video games and thought about WHY they’re fun, that will help too, because bringing out the essence of fun in what’s normally perceived as tedium will give your product an edge. In his lecture about the ESP game, Luis von Ahn laments the fact that millions of cycles of human computation are wasted: an estimated 9 billion hours of solitaire were played last year. Considering that the Empire State Building took 7 million hours and the Panama Canal took 10 million hours, that’s a lot of wasted time. We should be able to put those cycles to good use by making people play games that solve problems computers can’t yet solve. That symbiosis of humans and computers can be thought of as a large distributed computer solving hard problems, such as object recognition in images. You might have even played the ESP game yourself.

In other web apps, the idea of a collection is a powerful mechanism of play. Social networking sites play on the idea of collecting friends, much in the same way that in Pokemon, you “gotta catch them all!”. In others, the idea of a scoreboard is a powerful motivator, as seen on Digg and Reddit.

And last of all, etiquette seems far removed from innovative products. However, no matter how much technology people surround themselves with, we are still social beings with social tendencies. Because of that, we expect certain behaviors in the interactions between ourselves and our machines. We get mad and frustrated at computers and devices because they’re usually not very polite. They stop responding when they’re busy doing something, but don’t tell you what they’re doing. They don’t remember what you told them last time and ask you over and over again. And they don’t know how to ask for help when something goes wrong, since the error messages are unintelligible to most users. These are all the hallmarks of an annoying person, and were it a real person, I’d have kicked them to the curb.
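To make the contrast concrete, here’s a small hypothetical sketch of the two cheapest courtesies: saying what you’re doing while you’re busy, and not asking the same question twice. The settings file path and the prompt are made up for illustration:

```python
# Hypothetical sketch of two small courtesies: report progress during a
# long task, and remember an answer instead of asking for it every run.
# The settings file location and the prompt are made up for illustration.
import json
import time
from pathlib import Path

SETTINGS = Path.home() / ".polite_app.json"

def remembered_answer(question, key):
    settings = json.loads(SETTINGS.read_text()) if SETTINGS.exists() else {}
    if key not in settings:                      # only ask the first time
        settings[key] = input(question + " ")
        SETTINGS.write_text(json.dumps(settings))
    return settings[key]

def long_task(steps=5):
    for i in range(steps):
        print(f"working... step {i + 1} of {steps}")   # say what you're doing
        time.sleep(1)

if __name__ == "__main__":
    name = remembered_answer("What should I call you?", "name")
    print(f"Hello, {name}.")
    long_task()
```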

The iPod, and Apple products in general, are known for their politeness. My first iPod was a 5th generation one. I was surprised that it stopped the music if the earbuds got unplugged, and that it turned itself off after it had been paused for a while. Basically, it knew what was going on and reacted in a fashion that made sense to its owner. That sounds like the promise of the agent-based software hyped so long ago. Maybe it should make a slow comeback.

The sad thing is, computer apps and devices have been annoying us for so long that we’ve kinda gotten used to it. As research on classifiers becomes more readily available to programmers–even embedded in the language–and as the influence of designers on applications rises, we should see a trend towards more polite products. If you can make a product that is polite, it’ll go a long way toward gathering fans.

In the end, you want people to use what you build, if it has value. And users want to GET THINGS DONE, so they can move on with their lives. All products should solve problems; there’s no doubt that’s essential, and all other points are moot if your product is useless. But given that it does solve a problem, if it is also beautiful, fun, and polite, that will go a long way in lowering barriers so that we can all have pearls before breakfast.

Comments on the death of computing

This article starts off as a complaint, or a lament, about the state of CS at its edges, and probably serves as a warning, though the conclusion isn’t as hopeful or optimistic as it could be. Or possibly it’s just a lack of imagination. To start:

There was excitement at making the computer do anything at all. Manipulating the code of information technology was the realm of experts: the complexities of hardware, the construction of compilers and the logic of programming were the basis of university degrees.

However, the basics of programming have not changed. The elements of computing are the same as fifty years ago, however we dress them up as object-oriented computing or service-oriented architecture. What has changed is the need to know low-level programming or any programming at all. Who needs C when there’s Ruby on Rails?

Well, part of it is probably a lament by the author–presumably a scholar–on the loss of status and the general dilution in the quality of people in the field. And the other part is about how there’s nowhere interesting left to explore in the field.

To address the first part, it’s well known that engineers and programmers (like any other profession) like to work with great, smart people. Usually, when a leading field explodes, it attracts those great, smart people. However, the nature of technology is to make doing something cheaper, faster, or easier, and as a technology matures, the barriers to entry in the field get lower. As a result, people who couldn’t have made it into the field before now can, and the average quality dilutes. People used to do all sorts of research on file access; now, any joe programmer doesn’t think about any of that and just calls ‘open’ to access files on disk. But that’s the nature of technology, and it’s as it should be.
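As a trivial illustration of how far that barrier has dropped, all the machinery underneath file access (buffering, caching, the filesystem itself) now hides behind a one-liner; ‘notes.txt’ is just a placeholder name:

```python
# All the machinery beneath file access -- buffering, caching, the
# filesystem itself -- hides behind a single call today.
# 'notes.txt' is just a placeholder filename for illustration.
with open("notes.txt", "w") as f:
    f.write("any joe programmer can do this now\n")

with open("notes.txt") as f:
    print(f.read())
```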

The environment within which computing operates in the 21st century is dramatically different to that of the 60s, 70s, 80s and even early 90s. Computers are an accepted part of the furniture of life, ubiquitous and commoditised.

And again, this is the expected effect of technology. Unlike other professions, in engineering one can build technology that gives its users leverage over those who don’t use it. That yields a scalable kind of acceleration and productivity you won’t find elsewhere. If you’re a dentist, there’s an upper limit to the number of patients you can see; to be more productive, you’d need to create a clinic–a dentist farm–to parallelize treating patients, and you’d need other dentists to do that. If you’re an engineer, the technology you build is a multiplier, and you don’t even need other people to operate the multiplier.

But at a certain point, mass adoption of a technology makes it cheap, and hence your leverage over other people isn’t that great anymore, and you begin looking for other technologies to make your life easier or give you an edge over your competition. But these are all arguments about the applications of CS; while important in attracting new talent, they don’t address what the field has left to explore at its edges.

As for whether CS is really dead or not, I think there’s still quite a bit of work to be done at the edges. Physicists in the late 1800s claimed there wasn’t much interesting going on, until General Relativity blew up in their faces. Biology had its big paradigm shift with Darwin, but there’s still a host of interesting unknown animals being discovered (like the giant squid), and I’m sure alien biology or a revival of Darwin’s sexual selection would help open up another shift. Engineering suffered the same thing in the early 1900s, when people with only a background in electromechanical and steam-powered devices thought there wasn’t much left to invent or explore, until the advent of computing, spurred on by the Second World War.

In terms of near-term computing problems, there’s still a lot of work to be done in AI and all its offshoot children, such as data mining, information retrieval, and information extraction. We still can’t build software systems reliably, so better programming constructs are ever being explored. And since multi-core processors are starting to emerge, better concurrent programming constructs are being developed (or rather, taken up again…Seymour Cray was building vector processors a long while back).
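As one small example of what those higher-level constructs look like, here’s a minimal sketch that farms a CPU-bound function out across cores using Python’s standard process pool; the prime-counting workload is just a stand-in:

```python
# Minimal sketch of a higher-level concurrency construct: farm a
# CPU-bound function out across cores without touching threads or
# locks directly. Shown with Python's standard process pool.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Naive CPU-bound work: count primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [20_000, 30_000, 40_000, 50_000]
    with ProcessPoolExecutor() as pool:          # one worker per core by default
        for limit, primes in zip(limits, pool.map(count_primes, limits)):
            print(f"{primes} primes below {limit}")
```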

But I’m guessing the author of the article is looking for something like a paradigm shift, something so grand that it’ll be prestigious again, and attract some bright minds again.

In the end, he is somewhat hopeful:

The new computing discipline will really be an inter-discipline, connecting with other spheres, working with diverse scientific and artistic departments to create new ideas. Its strength and value will be in its relationships.

There is a need for innovation, for creativity, for divergent thinking which pulls in ideas from many sources and connects them in different ways.

This, I don’t disagree with. I think far-term computing can draw from other disciplines as well as be applied to them. With physics, there’s current work on quantum computers. In biology, computing contributes through bioinformatics and gene sequencing, and draws from biology in turn, with things like ant colony optimization algorithms and DNA computers. In the social sciences, computing contributes concurrent, decentralized simulations of social phenomena, and draws from them ideas like particle swarm optimization.
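As one small taste of that cross-pollination, here’s a bare-bones sketch of particle swarm optimization. The objective function and parameter values are illustrative defaults, not drawn from any particular paper:

```python
# Bare-bones particle swarm optimization: a population of candidate
# solutions moves through the search space, each pulled toward its own
# best position and the swarm's best. Objective and parameters are
# illustrative defaults only.
import random

def pso(objective, dim=2, particles=30, iterations=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    best_pos = [p[:] for p in pos]           # each particle's personal best
    best_val = [objective(p) for p in pos]
    g_best = min(best_pos, key=objective)    # swarm-wide best

    for _ in range(iterations):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best_pos[i][d] - pos[i][d])
                             + c2 * r2 * (g_best[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < best_val[i]:             # update personal best
                best_val[i], best_pos[i] = val, pos[i][:]
                if val < objective(g_best):   # update swarm best
                    g_best = pos[i][:]
    return g_best

if __name__ == "__main__":
    sphere = lambda p: sum(x * x for x in p)  # toy objective: minimize x^2 + y^2
    print(pso(sphere))
```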

One day, maybe it will be feasible to hack your own bacteria and program them just as you would a computer. And then a professor might lament that any 14-year-old kid can hack his own lifeform when it used to be the realm of professors. But rest assured, there will always be other horizons in the field to pursue.