New research on processing video and images may open up a flood of content for education use.
Loosely inspired by a biological brain’s approach to making sense of visual information, a University of Michigan researcher is leading a project to build alternative computer hardware that could process images and video 1,000 times faster with 10,000 times less power than today’s systems—all without sacrificing accuracy.
In short, they are creating an adaptive neural network chip that can interpret images and video.
Of course, this is how Skynet was created. If this works, it will allow computers to interpret images and video much like our brains do. The implications are dramatic. Robots will get much better vision. Self-driving cars will be better able to adjust to changing conditions. Doctors will be able to have computers analyze medical scans for tiny problems. And all video content could be automatically tagged and indexed, which could then open it up for educational use. There are millions of videos available on the web, but they aren’t indexed very well. For example, let’s say you want to find a video of a woman in a blue sweater. How do you find it today? This technology could make that kind of search possible.
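To make the search idea concrete, here’s a toy sketch (not the researchers’ actual system) of what happens once videos carry machine-generated tags: finding “a woman in a blue sweater” becomes a simple inverted-index lookup. All the video IDs and tags below are hypothetical.

```python
# Toy sketch: searching videos by machine-generated tags.
# Video IDs and tags are made up for illustration.
from collections import defaultdict

videos = {
    "vid_001": {"woman", "blue sweater", "indoors"},
    "vid_002": {"man", "red jacket", "outdoors"},
    "vid_003": {"woman", "blue sweater", "outdoors"},
}

# Build an inverted index: tag -> set of video IDs carrying that tag.
index = defaultdict(set)
for vid, tags in videos.items():
    for tag in tags:
        index[tag].add(vid)

def search(*tags):
    """Return IDs of videos that carry every requested tag."""
    results = [index[t] for t in tags]
    return set.intersection(*results) if results else set()

print(sorted(search("woman", "blue sweater")))  # ['vid_001', 'vid_003']
```

The hard part, of course, is generating the tags in the first place; that is exactly what this kind of neural network hardware would make fast and cheap enough to run over millions of videos.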
So here’s where it gets a little darker. This research is funded by DARPA, the U.S. military’s research agency. Why does the government want this? Because it has a huge amount of video coming in from around the world that needs to be analyzed and indexed, all of which currently takes a lot of people. This tech will automate that work.
Of course, then the NSA will use this to watch and index everything else that happens. What little privacy we have left goes out the window. *sigh*.
Still, a cool new development.