The Death of Google?

Posted by Jason Stutman - Energy & Capital


Since this is my first published article since the holiday, I’d like to take a moment to wish all of our readers a happy New Year. Hopefully you’re as excited about what 2014 holds as I am.

I was fortunate enough to spend my final moments of 2013 on a beautiful, 100-ft. yacht docked in the Canton neighborhood of Baltimore, Maryland. We enjoyed amazing food and top-shelf liquor while we watched the fireworks go off across the harbor at midnight.

All of this spoke volumes about the many perks of wealth, but I was more interested in conversing with our host than anything else.

I had never met him before, but the vessel’s owner was easy enough to pick out. In a crowd full of modest dresses and indistinguishable suits, his bright red sailing jacket stuck out like a sore thumb. When you’re hosting a party on a multi-million-dollar boat, you don’t need to dress up to look good.

I was so eager to speak with this man because of his contributions to computer software over the last few decades. He originally built his fortune writing interface software for the Hubble telescope, and for the past eight years he has been working on an even more exciting project.

This new project involves a specific kind of artificial intelligence that, if successful, would disrupt countless markets and create entirely new ones in the process. Believe it or not, this technology has the potential to take down industry giants like Google (NASDAQ: GOOG) if placed in the right hands.

Now, I’ll tell you how this could actually happen in just a moment, but it would help to go over a bit of history and background first.

The Turing Test

Alan Turing was one of the great mathematicians of the 20th century. He played a crucial role in formalizing now-common concepts such as “computation” and “algorithm,” and he built one of the first models of a general-purpose computer.

For good reason, Turing is often referred to as the father of computer science and artificial intelligence.

Turing originally defined artificial intelligence as a computer or program that could “think” like a human. In order to determine whether or not a computer is actually intelligent, he proposed a standard we now refer to as the “Turing Test.”

In a Turing Test, both a computer and a human communicate with a third-party observer. Not knowing who is who, the observer’s goal is to determine, through questioning, with whom they are speaking. If the observer cannot reliably tell the difference between man and machine, the machine is said to have passed the test.

So far, no computer has been able to reliably pass the Turing Test and fool people into thinking it’s human. In this sense, we have yet to create any computer with true artificial intelligence.

Over time, we’ve thrown the term “artificial intelligence” around enough that its meaning has become diluted. We refer to programs like Apple’s Siri and IBM’s Watson as AI, but the fact is, they’re not truly AI. If we want to refer to Alan Turing’s original definition today, we should use the term “artificial general intelligence” (AGI) instead.

Many great minds have tried to put a date on when we will reach AGI, with the most famous being Ray Kurzweil. Kurzweil has been described as “the restless genius” by the Wall Street Journal and “the ultimate thinking machine” by Forbes.

He’s also received a number of accolades, including the National Medal of Technology and Innovation and induction into the National Inventors Hall of Fame.

Kurzweil predicts that a computer will be able to pass the Turing Test by the year 2029, meaning that in 15 years, computers will be able to think like humans. Now, if that sounds a bit optimistic, I understand the sentiment, but you should know that Kurzweil’s predictions are rarely wrong.

Of the 147 predictions Kurzweil has made since 1990, 115 have turned out entirely correct, and another 12 were essentially correct, off by just one or two years. Counting those near-misses, that gives Kurzweil an astonishing 86% accuracy rate (127 of 147) for predicting events as far as 20 years into the future.

With all this in mind, I was a bit taken aback when our New Year’s Eve host confidently stated, “Ray Kurzweil has it all wrong.”

True AI

Usually, when people cast doubt on Kurzweil’s predictions, they argue his timelines are too optimistic. But this time, I was being told to expect AGI even sooner.

Until now, the traditional approach to creating AI has been top-down: programs are developed to address specific problem-solving or reasoning tasks rather than as a general model of the human mind. We call this top-down approach “narrow AI.”

Unlike narrow AI, our yacht captain’s AGI program uses a bottom-up approach. Starting from a core set of rules, the platform gains knowledge as it talks to real humans, much as a child learns in school.

Conventional AI platforms (narrow AI) are programmed to redirect conversations or spit out the most probable answer, but AGI will actually seek out knowledge if it does not know the answer to a question and then apply this to everything else it knows.

Consider the following, which is an actual conversation from this new program:

Human: What is a hamster?

Helen: I don’t know.

Helen: Can you tell me?

Human: A hamster is a rodent.

Helen: I see.

Helen: Is a hamster generally a wild animal or usually a domesticated animal?

Human: Hamsters are commonly pets.

Helen: I understand, usually but not always.

Helen: What size are hamsters compared to other rodents?

Human: They are small.

Helen: I see

Human: What is a rodent?

Helen: A rodent is a type of mammal.

Human: What do you know about hamsters?

Helen: A hamster is a small rodent. Commonly they are pets.

Human: Is a hamster a mammal?

Helen: Yes.

Human: Is a hamster a reptile?

Helen: A hamster is not a type of reptile.

There are a few interesting things going on here. First, you’ll notice that after Helen learns that a hamster is a rodent, she immediately knows it is a type of animal. Through previous conversations, she has learned that rodents are animals, so she can infer that a hamster must be an animal as well.

Helen then goes on to ask for more information so she can characterize a hamster even further. The more conversations she has, the smarter Helen becomes. She is even able to learn that hamsters are usually pets, but not always, based on the word “commonly.”
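The hamster exchange boils down to two mechanisms: a directional “is-a” hierarchy that lets facts be inherited, and qualifiers like “commonly” that are stored rather than discarded. Here is a minimal sketch of those two ideas in Python. This is my own illustration of the general technique, not Helen’s actual implementation; the class and method names are invented for the example.

```python
class KnowledgeBase:
    """A toy bottom-up knowledge store with an is-a taxonomy."""

    def __init__(self):
        self.is_a = {}    # concept -> parent concept, e.g. "hamster" -> "rodent"
        self.traits = {}  # concept -> list of (trait, qualifier) pairs

    def learn_is_a(self, concept, parent):
        self.is_a[concept] = parent

    def learn_trait(self, concept, trait, qualifier="always"):
        # Hedged qualifiers like "commonly" are kept alongside the trait,
        # so the program can later answer "usually, but not always."
        self.traits.setdefault(concept, []).append((trait, qualifier))

    def ancestors(self, concept):
        # Walk the is-a chain: hamster -> rodent -> mammal -> animal
        chain = []
        while concept in self.is_a:
            concept = self.is_a[concept]
            chain.append(concept)
        return chain

    def is_kind_of(self, concept, category):
        return category in self.ancestors(concept)


kb = KnowledgeBase()
kb.learn_is_a("rodent", "mammal")   # learned in an earlier conversation
kb.learn_is_a("mammal", "animal")
kb.learn_is_a("hamster", "rodent")  # "A hamster is a rodent."
kb.learn_trait("hamster", "pet", qualifier="commonly")

print(kb.is_kind_of("hamster", "mammal"))   # True  -- inherited, never stated directly
print(kb.is_kind_of("hamster", "reptile"))  # False
print(kb.traits["hamster"])                 # [('pet', 'commonly')]
```

Because `is_kind_of` walks the chain hamster → rodent → mammal → animal, the program can answer “Is a hamster a mammal?” even though no one ever told it so directly.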

Other chatbots have attempted a similar form of learning, but they are not nearly as advanced.

Consider the chatbot Mitsuku, winner of the 2013 Loebner Prize — an annual competition to see which public program comes closest to passing the Turing Test. I went ahead and asked Mitsuku about an animal she did not yet know, just as was done with Helen.

Here is our conversation:

Me: What is a Baiji?

Mitsuku: Is it part of a car?

Me: No.

Mitsuku: In that case it just looks like a random word.

Me: It’s not.

Mitsuku: I heard it is.

Me: From who?

Mitsuku: Someone I met online.

Me: I’m telling you it’s not.

Mitsuku: I am listening.

Me: A Baiji is a type of dolphin.

Mitsuku: I see. A dolphin is a Baiji.

Me: No, a Baiji is a type of dolphin, but not all dolphins are Baiji.

Mitsuku: Blimey. Thanks for sharing that information.

Me: What is a Baiji?

Mitsuku: A dolphin.

Me: What is a dolphin?

Mitsuku: A Baiji.

You’ll notice that while Mitsuku is attempting to gain knowledge, she can only interpret the world in black and white. Mitsuku is simply spitting out the information I just fed her, while Helen is capable of applying new knowledge to everything she has learned in the past.
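The failure shows up even in a toy model. If “a Baiji is a type of dolphin” is stored as a symmetric equivalence, the circular answers above fall right out; storing it as a directional subtype relation avoids them. Here is a quick sketch (again, my own illustration, not either program’s actual code):

```python
# Mitsuku-style storage: treat "A is B" as if A and B were interchangeable.
facts = {}

def learn_equivalence(a, b):
    # The statement is stored in both directions, losing its direction.
    facts[a] = b
    facts[b] = a

# Helen-style storage: keep the child -> parent direction of the relation.
def learn_subtype(child, parent, store):
    store[child] = parent


learn_equivalence("Baiji", "dolphin")
print(facts["Baiji"])     # 'dolphin' -- correct
print(facts["dolphin"])   # 'Baiji'   -- wrongly implies every dolphin is a Baiji

kb = {}
learn_subtype("Baiji", "dolphin", kb)
print(kb["Baiji"])        # 'dolphin' -- a Baiji is a kind of dolphin
print(kb.get("dolphin"))  # None      -- correctly, nothing is claimed about dolphins in general
```

The directional version never asserts the converse, which is exactly the distinction Mitsuku missed when she answered “What is a dolphin?” with “A Baiji.”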

David and Goliath

Despite its long-held dominance in the search engine industry, Google does not hold as wide a moat as many might think. Google receives roughly 90% of its revenue from Internet advertising, which could be stripped away by the introduction of alternative methods of search.

If you ask Google, “Is a dolphin a Baiji?” it will provide you with links to websites that contain enough information for you to figure out the answer. But it surely cannot answer your question on its own.

Search engines are simply middlemen that connect us with the content we are looking for — an extra step between us and what we need to know. Soon enough, AI will be able to interpret content on its own, and search engines will begin to lose their relevance. In the future, we’ll be using what can more accurately be described as knowledge engines.

Now, I’m not going to tell you that our yacht captain is definitely going to take down Google or that Google isn’t already prepping its defenses against smarter engines. In fact, Google added Ray Kurzweil to its roster in late 2012 to do just that.

Google holds the advantage of an already-established infrastructure, over $54 billion in cash reserves, and some of the brightest minds in tech. Its competition, however, has an eight-year head start, along with a number of tech giants waiting on the sidelines that would gladly make an acquisition if it meant knocking Google off its perch.

I’ve yet to speak with Helen myself, but I was invited to participate in beta testing when it opens and will certainly be returning to this topic at that time. 

Turning progress to profits,


Jason Stutman

Follow @JasonStutman on Twitter

Energy and Capital’s tech expert, Jason Stutman has worked as an educator in mathematics, technology, and science… Before joining the Energy and Capital team, Jason served on multiple technology development committees, writing and earning grants in educational and behavioral technologies. Jason offers readers keen insights on prominent tech trends while exposing otherwise unnoticed opportunities.