Emerson Collective’s Raffi Krikorian explains why he’s technically optimistic about AI’s societal implications

Raffi Krikorian has a better read on the artificial intelligence landscape than most, including when it comes to potential regulation.

Not only is the Emerson Collective CTO also the CEO of conversational AI company SpeakEasy AI, but the former Twitter and Uber executive also served as CTO of the Democratic National Committee. And even Krikorian is unsure whether the U.S. Congress will be able to institute any guardrails around the new technology.

“We are still so far away from being able to understand the nuances. I think there’s only one person in the House of Representatives right now [Rep. Jay Obernolte] that has an advanced degree in artificial intelligence,” said Krikorian on the latest Digiday Podcast episode.

Nonetheless, Krikorian leans toward optimism, not only about the potential for Congress to regulate AI but about the potential for AI overall. His recently launched podcast is called “Technically Optimistic,” after all. The show debuted in late June with a five-part series centered on AI, exploring nuances of the subject that could prove helpful not only to members of Congress but to anyone trying to wrap their heads around the technology’s implications for society.

“The world divides itself in two ways when it comes to AI these days. There is the world [of] ‘We’re going to live in a sci-fi future where everything is miraculous,’ and then there’s the doom and gloom. And I think there’s a lot of gray in the middle,” Krikorian said. “However, I think that, as people learn to understand the gray, we can get to a place where we all can be optimistic.”

Here are a few highlights from the conversation, which have been edited for length and clarity.

The AI balance of power

Right now, I think, speaking in the general case, there aren’t very many guardrails being built [around AI] at all. I think that’s part of the issue, and part of the reason why I want people to focus on this gray zone. Right now there’s a shift in the balance of power, where technologists, or the people building these technologies, are actually holding a lot of the keys to the shape of what our society looks like.

The need for AI transparency

When I make arguments to lawmakers, policymakers, members of Congress, I tell them that their role isn’t necessarily to say you can or cannot do the following. It’s not solely about putting up harm mitigation or putting a box around what you should or should not do. Instead it’s: how do we change the incentives on these developers so that they’re not just focusing on how to build the next seductive feature, but also focusing on how do we explain how it’s making its decisions? How do we then make transparent what the inputs and the outputs are, so that all of us can see whether we are okay with this?

The reality of AI regulation

I have a lot of concerns over whether or not Congress can actually pull something off. The best idea I’ve heard is: Can Congress set up some form of separate commission, similar to the FTC or FCC, that can be staffed appropriately, that could then advise, set some guardrails and make recommendations that Congress could then actually make laws off of? Honestly, given the level of technical fluency in Congress, that might be our only shot if we actually expect the governing body to do anything.

The urgency for AI regulation

I actually worry that we might not be able to get to a place where we even think about appropriate regulation on AI until something bad actually happens. I’ll draw a parallel to the finance system. We didn’t really get our act together until the financial system crashed. And the thing that scares me even more is that there have actually been some good examples of AI already causing a little bit of havoc. There was this generative image of a supposed explosion in front of the Pentagon, and it literally moved the [stock] market, and yet that wasn’t enough to raise people’s eyebrows.
