Twittering as a platform

Amazon is posting its deals on Twitter. I'm not quite sure people would want deal ads on their cell phones all the time…

I'm kind of amazed, as other people in the naysayer category have been, that Twitter has taken off the way it has. At its most basic, it just passes messages back and forth, a problem seemingly solved by email decades ago. But Twitter obviously isn't a question of the underlying technology; it's a question of how that technology is presented to and used by people. It's gotten people used to the idea of instant self-expression, no matter how inane, for better or worse. I would have chalked that role up to sensors that monitor and log ourselves, but Twitter demonstrated that people will report or say anything as long as there's an audience. Perhaps trolls have already paved the way in this regard.

That said, I think it's easy to write Twitter off as a fad, since the world's largest collection of quips doesn't quite seem to make the world a better place. My guess is that there's real value in Twitter, but only when it's married to other sorts of data or text processing. Just off the top of my head: running geospatial data and an emotion-detection algorithm over Twitter data could generate a heat map of how people are feeling from place to place, or time to time. I imagine advertisers would find that information valuable, since they could target advertising at the times and places where people are statistically most vulnerable to impulse buying.
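As a rough illustration of the idea, here's a minimal sketch of that heat map, assuming we already have geotagged tweets. The word-list "sentiment" scorer and the tweet format are hypothetical stand-ins (a real system would use a trained emotion classifier and Twitter's actual data feed):

```python
from collections import defaultdict

# Hypothetical stand-in for a real emotion-detection model.
POSITIVE = {"love", "great", "happy"}
NEGATIVE = {"hate", "awful", "sad"}

def sentiment(text):
    """Crude lexicon score in [-1, 1]; a trained classifier would go here."""
    words = text.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return max(-1.0, min(1.0, score / (len(words) or 1)))

def mood_heatmap(tweets, cell_size=0.5):
    """Bin geotagged tweets into lat/lon grid cells and average their sentiment."""
    totals = defaultdict(lambda: [0.0, 0])  # cell -> [score sum, tweet count]
    for text, lat, lon in tweets:
        cell = (round(lat / cell_size), round(lon / cell_size))
        totals[cell][0] += sentiment(text)
        totals[cell][1] += 1
    return {cell: s / n for cell, (s, n) in totals.items()}

tweets = [("I love this city", 37.77, -122.42),
          ("traffic is awful today", 37.78, -122.41),
          ("great coffee here", 40.71, -74.00)]
print(mood_heatmap(tweets))
```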

If Twitter can manage an API or platform to support this sort of thing, they'll be around for a while, I think. If not, well, at least we'll have the largest collection of quips for the archaeologists of the 22nd century.


Photosynth: stitching photos in 3D

Photosynth presentation | Venture Itch

I think this is a bit of old news, since I wasn't running Windows XP and so couldn't view the demo at their website. For the last 10 years or so, I've thought of computer vision as still trapped in the realm of research labs, but things are starting to bear fruit. Image registration (lining up images) isn't an easy task, since lighting, shape, and perspective all have to be taken into account. It becomes especially difficult if you have to do it in 3-space, as is done in the demo. However, it seems like everything is preprocessed, which is why it looks fast.
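For a sense of what the 2-D version of the problem looks like, here's a minimal registration sketch using OpenCV's feature matching. This is just an illustration of the general technique, not Photosynth's actual pipeline, and it sidesteps the much harder 3-space case:

```python
import cv2
import numpy as np

def register(img1, img2):
    """Estimate a homography mapping img1 onto img2 via feature matching.
    Expects grayscale images."""
    orb = cv2.ORB_create()  # detect and describe local features
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # RANSAC discards bad matches caused by lighting/perspective changes.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # 3x3 matrix; apply with cv2.warpPerspective(img1, H, ...)
```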

I don't think it's a stretch to say that you could also register people's faces, so you could find all the images with your face in them, taken from different perspectives.

I also wouldn't be surprised if all the tagging of people going on in Facebook photos is training a classifier to recognize and register people's faces.
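If that's true, the mechanics could be as simple as treating every tag as a labeled training example. Here's a toy sketch of that idea; the downsample-and-flatten "embedding" is a placeholder for whatever real face model such a system would use (I have no idea what Facebook actually does internally):

```python
import numpy as np

def embed_face(image):
    """Toy feature extractor: downsample a face crop and flatten it to a vector.
    A real system would use a trained face-embedding model instead."""
    img = np.asarray(image, dtype=np.float32)
    v = img[::8, ::8].ravel()                   # crude downsampling
    return v / (np.linalg.norm(v) + 1e-9)

class TagTrainedRecognizer:
    """Nearest-neighbor recognizer where each user tag is a training example.
    Assumes same-sized face crops; purely illustrative."""
    def __init__(self):
        self.vectors, self.names = [], []

    def add_tag(self, face_image, name):
        # Every tag a user makes becomes a labeled example.
        self.vectors.append(embed_face(face_image))
        self.names.append(name)

    def recognize(self, face_image):
        query = embed_face(face_image)
        dists = [np.linalg.norm(query - v) for v in self.vectors]
        return self.names[int(np.argmin(dists))]
```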

The companies that push innovation and create new markets are the ones that open up possibilities and show others what was previously thought impossible.

Google Gears Lets Developers Take Apps Offline

Google Gears Lets Developers Take Apps Offline

This is certainly newsworthy. Google announced Gears, something you install on your desktop so that online applications can work offline. I remember about three to five years ago when Google said: no, we're not interested in the desktop, because it's not what we're good at; we're doing search.

If anything, I think they learned from Netscape's mistake. Marc Andreessen, the co-founder of Netscape, announced that his startup was taking on Microsoft and was going to beat it into the ground. Of course, when you use strong words like that, you're going to get Bill Gates's attention, and it's always dangerous to wake a sleeping dragon when you're not bigger yourself.

Despite the ever-growing ubiquity of wireless connectivity, I think there's still a place for offline applications. To me, this sort of thing isn't really about being able to do your work on planes, though it's certainly useful for that. It's about caching results that the user might want to see or do next, so that the experience stays fast and responsive without network latency. While AJAX is fast, and tolerable for most things, I imagine there will be applications that can make good use of this type of offline caching mechanism, making possible what was impossible before.

Of course, caching is irrelevant when bandwidth is high, but you will find yourself either 1) in places where bandwidth is lower, or 2) with a dataset whose bandwidth requirement is higher than what you currently have. Mapping applications come to mind as benefiting a lot from caching mechanisms. And if bandwidth jumps enough to make caching in mapping applications obsolete, there will be other datasets too large to stream. Classifiers and their training sets come to mind, as does a record of the user's digital life.
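The basic pattern is simple: check a local store first, fall back to the network, and save what comes back. Here's a minimal sketch using SQLite as the local store (Gears itself ships a SQLite-backed database module, but this code is just an illustration of the pattern, not the Gears API):

```python
import sqlite3
import urllib.request

class OfflineCache:
    """Cache-then-network: serve local copies first, fetch only on a miss."""
    def __init__(self, path="cache.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (url TEXT PRIMARY KEY, body BLOB)")

    def get(self, url):
        row = self.db.execute(
            "SELECT body FROM cache WHERE url = ?", (url,)).fetchone()
        if row:
            return row[0]               # cache hit: no network latency at all
        body = urllib.request.urlopen(url).read()  # miss: go to the network
        self.db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (url, body))
        self.db.commit()                # persisted, so it survives going offline
        return body
```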

Update: I didn’t mention this, but I think it makes even more sense for mobile devices, per this opengardens post on it.

Exploring: reCAPTCHA: A new way to fight spam

Exploring: reCAPTCHA: A new way to fight spam

This particular piece of news has been floating around lately. It’s a CAPTCHA service that also uses the CAPTCHA information entered by users to teach computers how to digitize books.

It's so freakin' obvious, I slapped myself on the forehead. I even advocated for and watched Luis von Ahn's videos on human computation, and didn't think of it. It does seem a little odd, though, using a technique that computers can't solve to teach computers how to read, and hence how to solve CAPTCHAs. Not knowing enough details, I wonder if the success of reCAPTCHA will bring about the demise of the CAPTCHA.

The usual concerns about cheating were rampant in the reddit comments: "What if people just put in random stuff? Then you'll have a computer that spews out crap when digitizing books." If his lecture on the ESP game is any indication, he has a number of ways to fight this (not to mention that he specializes in online cheating). In the ESP game, he counteracts cheating by giving the player a couple of items he already knows the answers to and seeing how far off they are. He also keeps statistics for each image and throws away results randomly. It's a little hard to see how he'll track individual users, other than through their IP, but otherwise one could feasibly use the same methods for reCAPTCHA.
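To make that concrete, here's a sketch of the two-word scheme as I understand it: pair a known control word with an unknown word, gate access on the control word, and only accept a transcription for the unknown word once enough independent users agree. The threshold and data shapes here are my guesses, not reCAPTCHA's actual parameters:

```python
from collections import Counter, defaultdict

votes = defaultdict(Counter)   # unknown-word image id -> transcription counts
AGREEMENT = 3                  # guessed threshold, not reCAPTCHA's real number

def submit(control_answer, control_truth, unknown_id, unknown_answer):
    """Grade the known control word; record a vote for the unknown word."""
    if control_answer.strip().lower() != control_truth.strip().lower():
        return False               # failed the known word: reject the attempt
    votes[unknown_id][unknown_answer.strip().lower()] += 1
    return True                    # human verified; the vote may still be noise

def transcription(unknown_id):
    """Return the agreed-upon reading once enough independent users concur."""
    common = votes[unknown_id].most_common(1)
    if common and common[0][1] >= AGREEMENT:
        return common[0][0]
    return None
```

Random junk from one user just becomes a minority vote that never reaches the agreement threshold, which is why the "garbage in, garbage out" worry mostly washes out in aggregate.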

I have hope for Facebook being the new Google

Facebook just released Facebook Marketplace, where its members can post listings for things like housing, jobs, or textbooks. Strategically, this makes a lot of sense, since it's something that's actually useful to its members, especially if it ties your social network information into what you want to buy and sell. From the looks of it, though, it doesn't do that yet. But I'm sure someone at Facebook is thinking about it.

Facebook is social networking done right, or at least done better than any competitor I've seen. On the surface they might all seem the same: there's a personal profile page, there's a list of friends, and you can send messages back and forth. However, I think there are some critical differences.

MySpace has a larger user base, but its owners seem to see it largely as a platform for media advertising. That's an unsupported assertion, but given its mishmash feature set and large ads, it's hard to think otherwise.

Friendster was the leader for quite some time, but has since lost the attention of the under-25 demographic (anecdotal evidence). Their mistake was adding things that were technically neat but ultimately made the site too slow to use. It's a lot better now, and people are still using it. Based on the features they've put out, it seems like they're interested in helping people publish media, blogs, videos, and the like, to a personal network. However, no news trickles out of them attracting otaku developers, and I'm sure firing the engineer who went on to co-found Renkoo didn't help win over otaku developers' hearts and minds.

Facebook, on the other hand, is seen by its owners as a platform for technology-driven innovation that helps keep up social interactions between individuals. I'm not sure when the transition happened, but it became more evident to me after news feeds were released. Most people were vehemently opposed to them, but I saw them as two things.

First, it was a feedback mechanism to encourage sharing. The more you share about yourself with your friends, the more you appear on their radar, and the more they'll interact with and message you. This seems in line with the goal of keeping people talking to each other.

Second, it was the basis for publishing personal news without even having to push a button. We all gather news about the world, but beyond CNN there's another kind of news we're interested in: what our trusted friends are doing. Blogs let you publish by pushing a button. Facebook mini-feeds let you publish just by doing what you normally do on Facebook. It's not inconceivable that in the future you could publish from your mobile that you have free time to chill out, and people could join you because they saw you were available in the mini-feed on their own mobiles.

Facebook is pulling ahead in feature offerings because they attract the otaku of programmers, developers willing to innovate on something that's actually useful to users. Which other social network puts programming puzzles in its mini-feeds? Which other social network has an API? The alacrity with which they deploy features is stunning as well; they implemented Twitter's functionality pretty easily just by listing status updates. It's in this way that I see them becoming a 'new Google': they are setting themselves up as a hacker's paradise and attracting otaku programmers that way.

When Zuckerberg held out against getting bought out, he was either being greedy or he had plans for what he could do with a social network. Most of the press criticized him for the former, but it's looking like the latter. As long as Facebook is useful to its users, there's value in the social network data that future applications can build on. If they can establish themselves, through their API, as the standard platform from which all social information about an individual is gathered, this world will be a changed place, just as Google changed the world with its technology.

Comments on the death of computing

This article starts off as a complaint, or a lament, about the state of edge CS, and probably serves as a warning, though the conclusion is not as hopeful or optimistic as it could be. Or perhaps that's a lack of imagination. To start:

There was excitement at making the computer do anything at all. Manipulating the code of information technology was the realm of experts: the complexities of hardware, the construction of compilers and the logic of programming were the basis of university degrees.

However, the basics of programming have not changed. The elements of computing are the same as fifty years ago, however we dress them up as object-oriented computing or service-oriented architecture. What has changed is the need to know low-level programming or any programming at all. Who needs C when there's Ruby on Rails?

Well, part of it is probably a lament by the author, presumably a scholar, over the loss of status and the general dilution in the quality of people in the field. The other part is about how there's nothing interesting left to explore in the field.

To address the first part, it's well known that engineers, programmers, and people in any other profession like to work with great, smart people. When a leading field explodes, it attracts those great and smart people. However, the nature of technology is to make things cheaper, faster, or easier, and as a technology matures, the barriers to entry lower. As a result, people who couldn't make it in the field before now can, and the average quality of its people dilutes. People used to do all sorts of research on file access; now, any joe programmer just calls the 'open' method to access files on disk without thinking about any of that. But that's the nature of technology, and it's as it should be.

The environment within which computing operates in the 21st century is dramatically different to that of the 60s, 70s, 80s and even early 90s. Computers are an accepted part of the furniture of life, ubiquitous and commoditised.

And again, this is the expected effect of technology. Unlike other professions, in engineering one can build technology that gives people leverage over those who don't use it. This provides scalable acceleration and productivity that you won't find elsewhere. If you're a dentist, there's an upper limit to the number of patients you can see; to be more productive, you need to create a clinic, a dentist farm, to parallelize patient treatment, and you need other dentists to do that. If you're an engineer, the technology you build is a multiplier, and you don't even need other people to use the multiplier.

But at a certain point, mass adoption makes a technology cheaper, and hence your leverage over other people isn't that great, and you begin looking for other technologies to make your life easier or give you an edge over your competition. These are all arguments about applications of CS, though; while important in attracting new talent, they don't address where the field has left to go on the edge.

As for whether CS is really dead, I think there's still quite a bit of work to be done at the edges. Physicists in the late 1800s claimed there wasn't much interesting going on until General Relativity blew up in their faces. Biology had its big paradigm shift with Darwin, but there's still a host of interesting unknown animals being discovered (like the giant squid), and I'm sure alien biology or a revival of Darwin's sexual selection would help open up another shift. Engineering suffered the same thing in the early 1900s, when people with a background only in electromechanical and steam-powered devices thought there wasn't much left to invent or explore, until the advent of computing, spurred on by the Second World War.

In terms of near-term computing problems, there's still a lot of work to be done in AI and all its offshoots, such as data mining, information retrieval, and information extraction. We still can't build software systems reliably, so better programming constructs are ever being explored. And since multi-core processors are starting to emerge, better concurrent programming constructs are being developed (or rather, taken up again; Seymour Cray was building vector processors a long while back).
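As a small taste of what those concurrency constructs buy you, here's a sketch of farming CPU-bound work out across cores with a worker pool, purely as an illustration of the kind of construct I mean:

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    """Stand-in for a CPU-bound task worth parallelizing."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # The pool spreads tasks across available cores; the programmer
    # never touches threads or locks directly.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(crunch, [10**6] * 8))
    print(results)
```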

But I'm guessing the author of the article is looking for something like a paradigm shift, something so grand it'll make the field prestigious again and attract some bright minds again.

In the end, he is somewhat hopeful:

The new computing discipline will really be an inter-discipline, connecting with other spheres, working with diverse scientific and artistic departments to create new ideas. Its strength and value will be in its relationships.

There is a need for innovation, for creativity, for divergent thinking which pulls in ideas from many sources and connects them in different ways.

This, I don't disagree with. I think far-term computing can draw from other disciplines as well as be applied to them. With physics, there's current work on quantum computers. Computing contributes to biology through bioinformatics and the sequencing of genes, and draws from it in ant colony optimization algorithms and DNA computers. In the social sciences, computing contributes concurrent, decentralized simulations of social phenomena, and draws from them in techniques like particle swarm optimization.
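Particle swarm optimization is a nice example of that borrowing: flocking behavior from the social and biological world repurposed as a numerical optimizer. A bare-bones sketch, with simplified, roughly canonical parameters:

```python
import random

def pso(f, dim=2, particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimize f by letting a swarm of particles chase good solutions."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]          # each particle's best position so far
    gbest = min(pbest, key=f)            # the swarm's best position so far
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # Each particle is pulled toward its own memory
                # and toward the swarm's collective memory.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))  # should land near the origin
```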

One day, maybe it will be feasible to hack your own bacteria and program them just as you would a computer. And then a professor might lament that any 14-year-old kid can hack their own lifeform, when it used to be the realm of professors. But rest assured, there will always be other horizons in the field to pursue.

Adobe open sources Flex, it’d be nice for mobile too

Now that's news. I think it's a good strategy on Adobe's part, since there's still work to be done in the adoption phase of user interfaces, both on the web and on mobile devices. What's most interesting is whether Adobe plans to use some version of Flex as a platform for mobile devices. Currently, mobile development is done in Java ME, and after trying it out, I found it hard: the tools were still a bit inadequate, and it's still not easy to get applications onto phones.

With an open-sourced framework for rich/heavy front-ends, I wouldn't be surprised if this gains quick adoption, as I see only OpenLaszlo and Microsoft's Silverlight as alternatives. AJAX will have to come up with other tricks up its sleeve, like faster JavaScript engines. This whole scene will be something to keep an eye on; it'll be interesting to see how it plays out.