Building ethical thinking into technology

In his essay introducing this year’s class of Innovators Under 35, Andrew Ng argues that AI is a general-purpose technology, much like electricity, that will be built into everything else. He’s right, and it’s already happening.

AI is rapidly becoming a tool that powers all sorts of other tools, a technological underpinning for a range of applications and devices. It can helpfully suggest a paella recipe in a web app. It can predict a protein structure from an amino acid sequence. It can paint. It can drive a car. It can relentlessly replicate itself, hijack the electrical grid for unlimited processing power, and wipe out all life on Earth. 

Okay, so that last one is just a nightmare scenario courtesy of the AI pioneer Geoffrey Hinton, who posed it at an EmTech Digital event of ours earlier this year. But it speaks to another of Ng’s points, and to the theme of this issue. Ng challenges the innovators to take responsibility for their work; he writes, “As we focus on AI as a driver of valuable innovation throughout society, social responsibility is more important than ever.”

In many ways, the young innovators we celebrate in this issue exemplify the ways we can build ethical thinking into technology development. That is certainly true for our Innovator of the Year, Sharon Li, who is working to make AI applications safer by causing them to abstain from acting when faced with something they have not been trained on. This could help prevent the AIs we build from taking all sorts of unexpected turns and causing untold harm.

This issue revolves around questions of ethics and how they can be addressed, understood, or mediated through technology.

Should relatively affluent Westerners have stopped lending money to small entrepreneurs in the developing world because the lending platform lavishly compensates its top executives? How much control should we have over what we give away? These are just a few of the thorny questions Mara Kardas-Nelson explores about a lenders’ revolt against the microfinance nonprofit Kiva.

Jessica Hamzelou interrogates the policies on access to experimental medical treatments that are sometimes a last resort for desperate patients and their families. Who should be able to use these unproven treatments, and what proof of efficacy and (more important) safety should be required?

In another life-and-death question, Arthur Holland Michel takes on computer-assisted warfare. How much should we base our lethal decision-making on analysis performed by artificial intelligence? How can we build those AI systems so that we are more likely to treat them as advisors than deciders? 

Rebecca Ackermann takes a look at the long evolution of the open-source movement and the ways it has redefined freedom (free as in beer, free as in speech, free as in puppies) again and again. If open source is to be something we all benefit from, and indeed that many even profit from, how should we think about its upkeep and advancement? Who should be responsible for it?

And on a more meta level, Gregory Epstein, a humanist chaplain at MIT and president of Harvard’s organization of chaplains whose work focuses on the intersection of technology and ethics, takes a deep look at All Tech Is Human, a nonprofit that promotes ethics and responsibility in tech. He wonders how its relationship with the technology industry should be defined as it grows and takes funding from giant corporations and multibillionaires. How can a group dedicated to openness and transparency, he asks, coexist with members and even leaders committed to tech secrecy?