[65 sec read]
In this TED talk, Stanford researcher Fei-Fei Li describes her quest to teach computers to understand images, which would help humanity in many ways. The smartest machines today are still blind. How can we make them see?
A three-year-old child describing what she sees in a photo: “Those are people going in an airplane. That’s a big airplane.” While a three-year-old can describe the scene easily, computers find it difficult to describe, let alone identify, the objects in it.
Taking a photo is easy: in a computer, we represent it as a two-dimensional array of numbers. However, taking a photo is different from seeing (just as hearing is different from listening); seeing involves a lot of brain power. It took Mother Nature 500 million years to get it right.
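To make the "two-dimensional array" idea concrete, here is a minimal sketch (with made-up pixel values, not from the talk) of how a tiny grayscale photo might be stored: each entry is a brightness level, and the array alone says nothing about what the picture shows.

```python
# A tiny 4x4 grayscale "photo" as a two-dimensional array.
# Each entry is a brightness value from 0 (black) to 255 (white).
# Illustrative values only; a real photo is far larger, with three
# such arrays for the red, green, and blue channels.
image = [
    [ 12,  40,  41,  10],
    [ 35, 200, 210,  30],
    [ 33, 205, 198,  28],
    [ 11,  38,  36,   9],
]

height = len(image)        # number of rows
width = len(image[0])      # number of columns
print(height, width)       # 4 4
print(image[1][2])         # brightness of the pixel at row 1, column 2: 210
```

The computer holds these numbers perfectly well; the hard part, as the talk stresses, is turning them into understanding.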
Teaching computers to identify a cat seems an easy task, but it is not! Cats come in all kinds of shapes and positions. We humans learn by processing millions of images as we grow up, and computers can likewise learn only from many labeled examples. Her team used Amazon Mechanical Turk to hire nearly 50,000 people to label nearly a billion images.
They used this labeled data to nourish the computer brain with neural network models, which learned to identify objects in a picture. However, we are still a long way from accurately describing a whole picture.
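The learn-from-labeled-examples idea above can be sketched with a single artificial neuron, the basic unit of the neural networks mentioned in the talk. This is a toy illustration, not the actual system: the tiny 2x2 "images" and the bright/dark labels are invented for the example.

```python
import math

# Hypothetical labeled dataset: 2x2 grayscale "images" flattened to
# 4 pixel values in [0, 1], labeled 1 if mostly bright, 0 if mostly dark.
# Stands in for the millions of human-labeled photos described in the talk.
data = [
    ([0.90, 0.80, 0.95, 0.70], 1),
    ([0.10, 0.20, 0.05, 0.30], 0),
    ([0.85, 0.90, 0.80, 0.90], 1),
    ([0.20, 0.10, 0.15, 0.25], 0),
]

def sigmoid(z):
    # Squash a weighted sum into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

# One artificial neuron: a weight per pixel, plus a bias term.
weights = [0.0] * 4
bias = 0.0
lr = 0.5  # learning rate

# Train with plain gradient descent: nudge the weights toward
# whatever reduces the error on each labeled example.
for _ in range(1000):
    for pixels, label in data:
        pred = sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)
        err = pred - label
        weights = [w - lr * err * p for w, p in zip(weights, pixels)]
        bias -= lr * err

def classify(pixels):
    # 1 = "bright", 0 = "dark"
    score = sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)
    return 1 if score >= 0.5 else 0

print(classify([0.95, 0.90, 0.85, 0.90]))  # an unseen bright image -> 1
print(classify([0.05, 0.10, 0.20, 0.10]))  # an unseen dark image   -> 0
```

Real image-recognition networks stack millions of such units in many layers and train on vastly larger labeled datasets, but the principle is the same: labeled examples in, adjusted weights out.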
TED talk by Fei-Fei Li