
This article was published on May 17, 2022

Why the heck does big tech think human-level AI will emerge from binary systems?

It must suck to be a classical intelligence in a quantum universe



It’s time to stop training radiologists. AI can predict where and when crimes will occur. This neural network can tell if you’re gay. There will be a million Tesla robotaxis on the road by the end of 2020.

We’ve all seen the hyperbole. Big tech’s boldest claims make for the media’s most successful headlines, and the general public can’t get enough.

Ask 100 people on the street what they believe AI is capable of, and you’re guaranteed to get a cornucopia of nonsensical ideas.

To be perfectly clear: we definitely need more radiologists. AI can’t predict crimes; anyone who says otherwise is selling something. There’s also no AI that can tell whether a human is gay; the premise itself is flawed.

And, finally, there are exactly zero self-driving robotaxis in the world right now — unless you’re counting experimental test vehicles.

But there’s a pretty good chance you believe at least one of those myths is real.

For every sober prognosticator calling for a more moderate view on the future of artificial intelligence, there exist a dozen exuberant “just around the corner”-ists who believe the secret sauce has already been discovered. To them, the only thing holding back the artificial general intelligence industry is scale.

The big idea

What they’re preaching isn’t complex: if you scale a deep learning-based system large enough, feed it enough data, increase its parameter count by orders of magnitude, and build better algorithms, an artificial general intelligence will emerge.

Just like that! A computer capable of human-level intelligence will explode into existence from the flames of AI as a natural byproduct of the clever application of more power. Deep learning is the fireplace; compute the bellows.
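For a sense of what “just scale it” actually means, here’s a back-of-the-envelope sketch — my own illustration, not anyone’s roadmap — using the rough rule of thumb that a transformer-style network has about 12 × layers × width² parameters (embeddings and biases ignored). The configurations below are just illustrative round numbers:

```python
# A crude parameter-count estimate for a decoder-only transformer,
# ignoring embeddings and biases. The configurations are illustrative.

def approx_transformer_params(layers: int, width: int) -> int:
    """Rough estimate: ~12 * layers * width**2 parameters."""
    return 12 * layers * width ** 2

for layers, width in [(12, 768), (48, 1600), (96, 12288)]:
    params = approx_transformer_params(layers, width)
    print(f"{layers:>3} layers x {width:>5} wide -> ~{params / 1e9:.1f}B parameters")
```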

But we’ve heard that one before, haven’t we? It’s the infinite monkey theorem. If you let a monkey bang on a keyboard infinitely, it’s bound to randomly produce all possible texts including, for example, the works of William Shakespeare.

Only, for big tech’s purposes, it’s actually the monetization of the infinite monkey theorem as a business model.
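If you want to see why that’s a questionable business model, the arithmetic is easy enough to sketch. Here’s a minimal, hand-wavy example; the 27-key keyboard and the one-block-per-attempt setup are my own simplifications of the theorem:

```python
# Toy arithmetic behind the infinite monkey theorem. Assume a 27-key
# keyboard (26 letters plus space) and that the monkey types one random
# block of len(target) keys per attempt; the expected number of attempts
# before an exact match is 27**len(target).

def expected_attempts(target: str, keys: int = 27) -> float:
    """Expected number of random blocks before one matches the target."""
    return float(keys) ** len(target)

for phrase in ["to be", "to be or not to be"]:
    print(f"{phrase!r}: ~{expected_attempts(phrase):.2e} attempts on average")
```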

The big problem

There’s no governing body to officially declare that a given machine learning model is capable of artificial general intelligence.

You’d be hard-pressed to find a single record of open academic discussion on the subject wherein at least one apparent subject-matter expert doesn’t quibble over its definition.

Let’s say the folks at DeepMind suddenly shout “Eureka!” and declare they’ve witnessed the emergence of a general artificial intelligence.

What if the folks at Microsoft call bullshit? Or what if Ian Goodfellow says it’s real, but Geoffrey Hinton and Yann LeCun disagree?

What if President Biden declares the age of AGI to be upon us, but the EU says there’s no evidence to support it?

There’s currently no single metric by which any individual or governing body could declare an AGI to have arrived.

The dang Turing Test

Alan Turing is a hero who saved countless lives and a queer icon who suffered a tragic end, but the world would probably be a better place if he’d never suggested that prestidigitation was a display of intelligence sufficient to merit the label “human-level.”

Turing proposed a test called the “imitation game” in his seminal 1950 paper “Computing Machinery and Intelligence.” Basically, he said that a machine capable of fooling humans into thinking it was one of them should be considered intelligent.
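As a protocol, the imitation game is simple enough to sketch in a few lines. The following is a hypothetical outline, not a real benchmark; the judge, human, and machine objects and their methods are stand-ins:

```python
# A hypothetical sketch of the imitation game: a judge chats blindly with
# two hidden respondents and then guesses which one is the machine.
# The .ask, .reply, and .identify interfaces are assumptions, not a real API.

import random

def imitation_game(judge, human, machine, rounds: int = 5) -> bool:
    """Return True if the judge fails to pick out the machine."""
    # Randomly hide the two respondents behind the anonymous labels A and B.
    hidden = [human, machine]
    random.shuffle(hidden)
    respondents = dict(zip(["A", "B"], hidden))

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        for label in ("A", "B"):
            transcript.append((label, question, respondents[label].reply(question)))

    guess = judge.identify(transcript)        # judge answers "A" or "B"
    return respondents[guess] is not machine  # fooled if the guess is wrong
```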

Back in the 1950s, it made sense. The world was a long way off from natural language processing and computer vision. To a master programmer, world-class mathematician, and one of history’s greatest code-breakers, the path that would eventually lead to generative adversarial networks (GANs) and large language models (LLMs) must have seemed like a one-way street to artificial cognition.

But Turing and his ilk had no way of predicting just how good computer scientists and engineers would be at their jobs in the future.

Very few people could have foretold, for example, that Tesla could push the boundaries of autonomy as far as it has without creating a general intelligence. Or that DeepMind’s Gato, OpenAI’s DALL-E, or Google’s Duplex would be possible without inventing an AI capable of learning as humans do.

The only thing we can be sure of concerning our quest for general AI is that we’ve barely scratched the surface of narrow AI’s usefulness.

Opinions may vary

If Turing were still alive, I believe he would be very interested in knowing how humanity has achieved so much with machine learning systems using only narrow AI.

World-renowned AI expert Alex Dimakis recently proposed an update to the Turing test: an AI that could convincingly pass the test for 10 minutes with an expert judge should be considered capable of human-level intelligence.
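In rough code, that stricter version is the same blind protocol with a stopwatch attached. This is a hedged sketch that reuses the hypothetical imitation_game() helper from the earlier snippet:

```python
# The stricter variant: the machine only "passes" if it stays unidentified
# by an expert judge for a full ten-minute session. Purely illustrative.

import time

def ten_minute_test(expert_judge, human, machine) -> bool:
    deadline = time.monotonic() + 10 * 60
    while time.monotonic() < deadline:
        # One short blind exchange at a time; any correct identification fails the run.
        if not imitation_game(expert_judge, human, machine, rounds=1):
            return False
    return True
```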

But isn’t that just another way of saying that AGI will magically emerge if we just scale deep learning?

GPT-3 occasionally spits out snippets of text that are so coherent as to seem salient. Can we really be that far away from it being able to maintain the illusion of comprehension for 10, 20, or 30 minutes?
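You don’t need GPT-3 access to see the trick in action. Here’s a hedged example using an openly available smaller cousin, GPT-2, through the Hugging Face transformers library; the prompt is arbitrary and the output will vary from run to run:

```python
# Not GPT-3 itself, but an illustration of how these coherent-looking
# completions get produced. Requires `pip install transformers torch`.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The hardest part of building a human-level intelligence is"
completions = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=1)
print(completions[0]["generated_text"])
```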

It feels a bit like Dimakis might be putting the goal posts on the 49-yard line here.

Don’t stop believing

That doesn’t mean we’ll never get there. In fact, there’s no reason to believe DeepMind, OpenAI, or any of the other AGI-is-nigh camps won’t figure out the secret sauce today, tomorrow, or in a more reasonable time frame (such as somewhere around the 2100s).

But there’s also little reason to believe that the clever application of mathematics and yes/no statements will eventually lead to AGI.

Even if we end up building planetary-sized computer systems powered by Dyson Spheres, the idea that scaling is enough (even with coinciding advances in the code/algorithms) is still just an assumption.

Biological brains may actually be quantum systems. If that’s the case, it stands to reason that an artificial entity capable of exhibiting any form of intelligence distinguishable from the prestidigitation of clever programming would struggle to emerge from a classical, binary system.

That might sound like I’m rebuking the played-out battle cry of “scaling is all you need!” with the equally obnoxious “quantum all the things,” but at least there’s precedent for the fantasy I’m pushing.

Humans exist, and we’re pretty smart. And we can be 99% certain that our intelligence emerged as the result of quantum effects. Maybe we should look toward the realm of quantum computing for cues when it comes to the development of an artificial intelligence meant to imitate our own.
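To make the gap concrete without making any claims about brains: a classical register of n bits sits in exactly one of its states at a time, while even simulating n qubits on classical hardware means tracking 2^n complex amplitudes. A toy sketch in plain NumPy:

```python
# A toy contrast, not a claim about cognition: a classical n-bit register
# holds one definite state, while a simulated n-qubit register needs 2**n
# complex amplitudes. Plain NumPy, nothing quantum required.

import numpy as np

n = 20
classical_register = np.zeros(n, dtype=np.uint8)      # n bits, one definite state
statevector = np.zeros(2 ** n, dtype=np.complex128)   # 2**n amplitudes
statevector[0] = 1.0                                  # start in |00...0>

# Put the simulated qubits into an equal superposition of every basis state,
# something no configuration of 20 classical bits can represent directly.
statevector[:] = 1.0 / np.sqrt(2 ** n)

print(f"classical register: {classical_register.nbytes} bytes")
print(f"20-qubit statevector: {statevector.nbytes / 1e6:.1f} MB")
```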

Or, maybe, AGI won’t “emerge” from anything on its own. It’s possible it’ll actually require some intelligent design.
