
This article was published on July 27, 2021

Big tech tries to derail EU AI policy with ‘warnings’ from US think tank

Pro tip: check the board member bios for these think tanks before you take their reports at face value


EU policymakers recently proposed a sweeping set of regulations called the Artificial Intelligence Act (AIA). If made law, the AIA would offer European citizens the strictest, most comprehensive protections against predatory AI systems on the planet.

And big tech is terrified.

Up front: The Center for Data Innovation published a report on Sunday titled “How Much Will the AIA Cost Europe?”

According to the organization’s research:

The AIA will cost the European economy €31 billion over the next five years and reduce AI investments by almost 20 percent. A European SME that deploys a high-risk AI system will incur compliance costs of up to €400,000 which would cause profits to decline by 40 percent.

You say that like it’s a bad thing.

Background: The AIA proposes vast, sweeping regulations. But it certainly isn’t draconian or Luddite (in fact, we think it might be too soft). It’s a complex proposal, and there’s a lot more to it than we can get into in this article.

But the Center for Data Innovation actually does a great job of explaining how it works. The AIA breaks AI products down into three categories: “prohibited,” “limited risk,” and “high risk.”

Per the Center’s report, the AIA requires high-risk AI to be:

  • Trained on datasets that are complete, representative, and free of errors
  • Implemented on traceable and auditable systems in a transparent manner
  • Subject to human oversight at all times
  • Robust, accurate, and secure

The document continues, “Operators of high-risk AI systems have to abide by numerous technical and compliance features before and after they take their AI tool to market.”

They must:

  • Build a quality management system
  • Maintain detailed technical documentation
  • Conduct an assessment to ensure the system conforms to the AIA
  • Register the system in an EU database
  • Monitor the system once it is on the market
  • Update the documentation and conformity assessment if substantial changes are made
  • Collaborate with market surveillance authorities

Every single one of those line items offers basic, common-sense protections for citizens. Which explains exactly why big tech is terrified.

Here’s another snippet from the study:

The EU’s regulatory environment continues to let down European entrepreneurs who want to undertake risky and innovative investments. It is no surprise that the venture capital market in Europe is significantly smaller than in the United States or Asia.

Quick take: This report uses fuzzy, cherry-picked math to arrive at the assertion that passing the AIA will cost Europe tens of billions of euros. (Note that its headline €400,000 compliance cost only works out to a 40 percent profit decline if you assume an SME earning exactly €1 million in profit.) It warns against “brain drain” (that’s when all the smartest people leave their homeland so they can get rich abroad) and claims European innovation will die an expensive, painful death as US and Chinese corporations leave the EU behind.

To that, I say: Lol. Using the US or China as a benchmark for regulation is like using a UFC fight as an example of diplomatic negotiations.

The current state of AI policy in the US can only be described as absolutely ridiculous. There’s next to no regulation, the police are out of control, companies such as Facebook conduct psyops on the general public with impunity, and Tesla is literally testing autonomous vehicle software on public roads with no oversight whatsoever.

People regularly die in vehicle accidents because AI software lets them down, the police wrongfully arrest and shoot people because AI misidentifies them, and companies such as PredPol, Clearview AI, and Palantir are being paid billions in taxpayer dollars to strip away citizens’ constitutional rights.

And China’s even worse. The government controls every aspect of AI investment and encourages domestic technology companies to develop surveillance tech. The main difference between China’s AI program and Silicon Valley’s is that China isn’t using AI to strip away its citizens’ constitutional rights — they didn’t have any to begin with.

It’s hard to imagine why an organization like the Center for Data Innovation would take such a ridiculous stance on the AIA.

Or is it?

It turns out the Center for Data Innovation is actually an arm of the Information Technology and Innovation Foundation (ITIF). Both sound like really smart, fancy, nonpartisan, nonprofit groups. In fact, the ITIF is one of the world’s leading technology think tanks.

So, of course, we can trust them, right? Both groups have the word “nonpartisan” written on their respective websites’ “about” pages. That has to mean something, right?

But, just for fun, let’s take a look at the ITIF’s board members, shall we? After all, if we’re going to risk our privacy for profit, we should at least know who is pulling the strings.

Here are just a few of the board members listed on the ITIF’s site:

  • Peter Cleveland, Vice President of Global Government Affairs at TSMC, the world’s largest contract semiconductor manufacturer
  • Frederick S. Humphries Jr., Corporate Vice President of US Government Affairs at Microsoft
  • Shannon Kellogg, Director of Public Policy at Amazon
  • Jason Mahler, Vice President of Government Affairs at Oracle
  • Vonya McCann, formerly Sprint’s Senior Vice President of Government Affairs, a position she held from August 2009 until Sprint’s April 2020 acquisition by T-Mobile US
  • Sean E. Mickens, Manager of US Public Policy at Facebook
  • Laurie Self, Vice President and Counsel of Government Affairs at Qualcomm
  • Johanna Shelton, Director of Government Affairs and Public Policy at Google

Perhaps we should take their “warnings” with a Silicon Valley-sized grain of salt.
