A disruptive technology I have been following is machine-created content: generating new content, such as a blog post, completely automatically. I see this as a key enabling technology for fully adaptive learning, which would mean having a computer teacher that can constantly create new lessons at exactly the required skill level, personalized for each individual student. To do this you need to be able to quantify the knowledge out there, and it turns out a lot of that information is locked up in pictures, so any technology able to access it is of interest.
Google has been working on this for a while and it seems to be making progress.
Image recognition was already good—but it’s getting way, way better. A research collaboration between Google and Stanford University is producing software that increasingly describes the entire scene portrayed in a picture, not just individual objects.
The New York Times reports that algorithms written by the team attempt to explain what’s happening in images—in language that actually makes sense. So it spits out sentences like “a group of young people playing a game of frisbee” or “a person riding a motorcycle on a dirt road.”
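The general idea behind systems like this is an encoder-decoder: a convolutional network summarizes the image as a feature vector, and a recurrent language model decodes that vector into a word sequence. Here is a minimal sketch of that decoding loop in plain NumPy. The weights are random stand-ins (a real model learns them from millions of captioned photos), so the output "caption" is gibberish; the point is only to show the shape of the greedy word-by-word generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in vocabulary; a real system has tens of thousands of words.
VOCAB = ["<start>", "<end>", "a", "person", "riding", "motorcycle", "dirt", "road", "on"]
V, H, F = len(VOCAB), 16, 32   # vocab size, hidden size, image-feature size

# Stand-in parameters -- in a trained model these come from learning.
W_img = rng.normal(size=(H, F)) * 0.1   # image features -> initial hidden state
W_emb = rng.normal(size=(V, H)) * 0.1   # word embeddings
W_hh  = rng.normal(size=(H, H)) * 0.1   # hidden state -> hidden state
W_out = rng.normal(size=(V, H)) * 0.1   # hidden state -> vocabulary scores

def caption(image_features, max_len=10):
    """Greedy decoding: feed the previous word in, pick the highest-scoring next word."""
    h = np.tanh(W_img @ image_features)   # condition the decoder on the image
    word = VOCAB.index("<start>")
    out = []
    for _ in range(max_len):
        h = np.tanh(W_hh @ h + W_emb[word])
        word = int(np.argmax(W_out @ h))
        if VOCAB[word] == "<end>":
            break
        out.append(VOCAB[word])
    return out

fake_image = rng.normal(size=F)  # stands in for CNN features of a photo
print(" ".join(caption(fake_image)))
```

With trained weights, the same loop is what turns a photo into a sentence like "a person riding a motorcycle on a dirt road".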
This is cool technology, but it is not perfect yet. Not even close. 🙂
Flickr is facing a user revolt after a new auto-tagging system labelled images of black people with tags such as “ape” and “animal” as well as tagging pictures of concentration camps with “sport” or “jungle gym”.
The system, which was introduced in early May, uses what Flickr describes as “advanced image recognition technology” to automatically categorise photos into a number of broad groups.
Heh, whoops. Still, this isn’t going away and will be commonplace in a few years.
Here’s a recent TED video showing how they are making this happen.