At first glance Google Glass seems like an unlikely candidate for assistive technology, but after watching the video I’m a believer. As part of the OpenGlass project, researchers created applications that aid visually impaired users. What a cool project.
The two applications demonstrated are Question-Answer and Memento. Question-Answer works by having the user take a picture of what’s in front of them; the image is posted via Mechanical Turk and Twitter, and someone answers it. The answer is then read back to the user as audio. Memento works a little differently. Verbal annotations are left in the environment by someone else. As the visually impaired person navigates the environment, images are streamed back to a server and matched against a database. When a match is found, the annotations are read to the user.
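The Memento loop described above — stream a frame, match it against a database of annotated scenes, and read back the annotation on a hit — can be sketched roughly as follows. This is a minimal illustration with hypothetical names and a toy image signature; the actual OpenGlass system presumably uses far more robust image features on its server.

```python
# Hypothetical sketch of a Memento-style matching loop.
# The "signature" here is a toy average-hash, not the real feature pipeline.

def signature(image):
    """Toy image signature: threshold each grayscale pixel against the
    image mean, yielding a bit vector. `image` is a 2D list of values."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def similarity(a, b):
    """Fraction of matching bits between two signatures."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def match_annotation(frame, database, threshold=0.9):
    """Return the stored annotation whose signature best matches the
    streamed frame, or None if nothing clears the threshold."""
    sig = signature(frame)
    best_score, best_note = 0.0, None
    for stored_sig, note in database:
        score = similarity(sig, stored_sig)
        if score > best_score:
            best_score, best_note = score, note
    return best_note if best_score >= threshold else None
```

On a match, the returned annotation would be handed to text-to-speech and read aloud to the wearer; on a miss, the client just streams the next frame.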
Here is a precursor to the video above, in which the developers test out the technology before trying it with actual visually impaired users.