A.I.: Who’s Driving?

The weekend board drama out of Silicon Valley put the spotlight on some big questions about the future of AI, the most impactful technology in a generation. To help think them through, perhaps it’s useful to revisit how we managed—or didn’t manage—another Promethean advance.

Let’s imagine it’s 1938 and scientists at one of the most elite universities in the world announce they have discovered a way to harness the power of the atom. This new technology can be used in all kinds of incredible ways, they say, potentially changing every aspect of modern life, from medical breakthroughs to clean, limitless electricity.

There are downsides, though: They warn that the same technology, with a chance that isn't zero, could end human civilization. Beyond that, an accident inside an atomic power plant could prove cataclysmic, rendering a vast area of land uninhabitable for centuries.

In this imaginary 1938, an aging, overwhelmed FDR releases a set of rules and principles as "guardrails" under which private industry will develop the technology independent of the U.S. government. All of Washington accepts this pragmatic approach because there's little faith, and for good reason, that a stalemated Congress can govern atomic energy when it can't even send weapons to England.

Investors pile in, with Standard Oil of New Jersey and Standard Oil of New York leading the way, looking to capture outsized returns and avoid irrelevance. The atom industry keeps acknowledging the big risks, over and over and over, even as its companies press forward at whiplash speed. And who can blame them? What if the Germans get it first? Valuations soar. Breakthroughs abound. The public is awed. And just wait until you see what we're cooking up in New Mexico!

But inside one of the key companies in the gold rush, there are divisions. How fast should development continue? Are we doing the right thing? Are we doing it at the right pace? Should we go as fast as we can to grow the business or hedge? The sides split off, and their debate—more heated and unsettled than anyone knew—goes public.

In a crossroads moment like that, you’d hope a “renegade team of outsiders” (in the Netflix version) would charge in, armed with questions like:

  • Who’s managing this company? Are they the right people?
  • Who is sitting on the board, and who should be?
  • What is the right skills mix for a board like this? What are the right qualifications? Should they all be “tech” people?
  • What is the appropriate governance structure? Are they building it?
  • And, ultimately, how do they weigh risks versus reward appropriately and incentivize for the right outcomes? (Blowing up the world would be the wrong outcome.)

OK, so only if the renegade team came from Bain or BCG would those be the questions. But still, in our own potentially Promethean reality, we don't hear those tactical, practical, essential queries from anyone. Maybe after this weekend, it's time to start asking them?

Governing AI, like atomic energy, isn’t about the tech. It’s about the people. Whatever is happening right now in Silicon Valley deserves more attention from the whole business community. Waiting for the press release signaling that a new product launch dubbed “Trinity” promises to be “a real game changer”? That’s probably the wrong way to go.
