There was a story last week in the Vancouver Sun about a computer program that has been generating stories on the LA Times website.
Instead of composing the pieces personally, Schwencke, the program's creator, developed a set of step-by-step instructions that can take a stream of data (this particular algorithm works with earthquake statistics, since he lives in California), compile it into a pre-determined structure, and then format it for publication.
His fingers never have to touch a keyboard; he doesn’t have to look at a computer screen. He can be sleeping soundly when the story writes itself.
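The pipeline described above, data in, template filled, story out, can be sketched in a few lines. To be clear, this is my own minimal illustration of template-based story generation; the field names and template wording are assumptions, not the actual LA Times code.

```python
# A minimal sketch of template-based story generation, in the spirit of
# Schwencke's earthquake bot. The template and field names below are
# invented for illustration; the real system's structure is not public here.

TEMPLATE = (
    "A magnitude {mag} earthquake struck {dist} miles from {place} "
    "at {time}, according to the USGS."
)

def write_story(quake: dict) -> str:
    """Fill a pre-determined structure with values from one data record."""
    return TEMPLATE.format(
        mag=quake["magnitude"],
        dist=quake["distance_miles"],
        place=quake["nearest_town"],
        time=quake["time"],
    )

# Example record (made up), standing in for a feed of earthquake statistics.
story = write_story({
    "magnitude": 4.2,
    "distance_miles": 6,
    "nearest_town": "Westwood, California",
    "time": "6:25 a.m.",
})
print(story)
```

The point is how little machinery is needed: once the data arrives in a predictable shape, "writing" is just slotting values into prose.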
Reading the story, you can tell this is a very limited capability. For now. See, this is always how these things start: disruption begins small and on the edges, then gradually branches out into other areas. I'm betting it will get a boost from someone like Google, who wants to customize the web experience for every user. They are already working on natural language processing in a big way, so in my mind it's only a small jump from reading to writing.
So if we can automatically generate news stories in the future, why not learning content? Let's say a child is at a park and sees a cool-looking cloud. Couldn't they ask their smart toy how the cloud formed? Couldn't the back-end system then generate a lesson based on where they're standing, nearby bodies of water, and current weather patterns? That would be so much better than a generic lesson on cloud formation, because it would apply directly to what the child is looking at.
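In spirit, that lesson generator is the same template trick with contextual inputs. Here's a toy sketch of the idea; every name, place, and sentence in it is hypothetical, just to show how location and weather could parameterize a lesson.

```python
# Hypothetical sketch of the "cloud lesson" idea: a generic explanation
# parameterized by the child's context. All inputs and wording are invented
# for illustration, not a real product's API.

def cloud_lesson(location: str, nearby_water: str, weather: str) -> str:
    """Generate a short, location-aware lesson on cloud formation."""
    return (
        f"The cloud you see over {location} likely formed when moist air "
        f"rising off {nearby_water} cooled and condensed into tiny droplets. "
        f"Today's {weather} conditions helped lift that air upward."
    )

# Example context (made up): a park near a bay on a warm day.
lesson = cloud_lesson("Stanley Park", "English Bay", "warm, humid")
print(lesson)
```

A real system would pull these inputs from GPS, map data, and a weather feed rather than hard-coded strings, but the generation step itself could stay this simple.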
I think I'm going to have to add a new category to my blog for Machine Generated Content. The ability of computers to see and understand, and then synthesize new knowledge, will, I think, be huge in the next few years.