4. Within five years, search engines will be based on an understanding of natural language. Google is creating a system that will read every document on the web and every book for meaning. Kurzweil says this will provide a rich search and question answering experience based on the true meaning of natural language. — CNN
Consider that IBM’s Watson scored higher on the American television game show Jeopardy! than the best two human players combined.
Jeopardy is a broad task involving complicated natural language queries which include puns, riddles, jokes and metaphors.
For example, Watson got this query correct in the rhyme category: “A long tiresome speech delivered by a frothy pie topping.” It correctly responded “What is a meringue harangue?”
What is not widely appreciated is that Watson got its knowledge by reading Wikipedia and several other encyclopedias, a total of 200 million pages of natural language documents.
It does not read each page as well as you or I do. It might read one page and conclude that there is only a 56% chance that Barack Obama is President of the United States.
You could read that page, and if you didn’t happen to know that ahead of time, conclude that there is a 98% chance.
So you did a better job than Watson at reading that page. But Watson makes up for this relatively weak reading by reading more pages, a lot more, and it can combine its inferences across everything it has read and conclude that there is a 99.9% chance that Obama is president.
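Watson’s actual inference machinery is far more sophisticated, but the arithmetic behind this idea can be sketched simply: if each page is treated as an independent piece of weak evidence against a neutral prior, multiplying the odds ratios shows how dozens of 56%-confidence readings compound into near certainty. The `combine` function below is an illustrative assumption, not Watson’s algorithm.

```python
from math import prod

def combine(estimates):
    """Pool per-page probability estimates, assuming each page is
    independent evidence against a neutral 50% prior, by multiplying
    the odds ratios and converting back to a probability."""
    odds = prod(p / (1 - p) for p in estimates)
    return odds / (1 + odds)

# One page read with only 56% confidence stays at 56%...
print(round(combine([0.56]), 4))       # → 0.56
# ...but 30 such pages together exceed 99.9% confidence.
print(round(combine([0.56] * 30), 4))  # → 0.9993
```

The independence assumption is what makes weak readers powerful at scale: each additional page nudges the pooled odds upward, so breadth of reading substitutes for depth of understanding.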
At Google, we are creating a system that will read every document on the web and every book for meaning and provide a rich search and question answering experience based on the true meaning of natural language.
For example, it will engage you in dialogue to clarify questions and discuss answers that are ambiguous or complex.
5. By the early 2020s we will be routinely working and playing with each other in full immersion visual-auditory virtual environments. By the 2030s, we will add the tactile sense to full immersion virtual reality. Humans will be able to augment real reality so it appears someone is present, even if that person is hundreds of miles away, Kurzweil says.
The telephone is virtual reality in that you can meet with someone as if you are together, at least for the auditory sense.
We’ve now added the visual sense with video conferencing — although not yet 3D and full immersion.
The visual sense will become full immersion over the next decade. We’ll also be able to augment real reality so that I could see you sitting on the couch in my living room and you could see me sitting on your back porch, even though we’re hundreds of miles apart.
Your augmented reality glasses will also be able to make suggestions to you for an interesting joke or anecdote that you could slip into a conversation you’re having.
There will be limited ways of adding the tactile sense to virtual and augmented reality by the early 2020s, but full immersion virtual tactile experiences will require tapping directly into the nervous system.
We’ll be able to do that in the 2030s with nanobots traveling noninvasively into the brain through the capillaries and augmenting the signals coming from our real senses.