Microsoft, OpenAI, Alphabet and the rest of big tech are ignoring the human cost behind the rise of ChatGPT and other AI-powered chatbots

Companies must not be allowed to ignore the human rights risks associated with AI-powered chatbots.

Kenyan worker exploitation

Time magazine recently published revelations about the horrific labour exploitation that has been central to developing ChatGPT’s safeguards.

Labour rights are human rights. United Nations Sustainable Development Goal No. 8 provides as much, stating that all people have a right to decent work.

While long hours and toxic working conditions go hand-in-hand with Silicon Valley – think Elon Musk emailing Twitter staff to demand they commit to being “extremely hardcore” or leave – it is still disturbing to see the severity of the exploitation of Kenyan workers.

OpenAI engaged workers in Kenya at less than $US2 an hour to review the data sets used to help train ChatGPT.

The AI that helps chatbots function is often “taught” how to respond to queries through the analysis of hundreds of thousands of pieces of data from the internet.

This data could be from blogs, websites or even fan fiction. However, when analysing all of this information, an AI will inevitably be exposed to offensive and repulsive content.

Such content often includes racism, depictions of sexual violence and even exploitative material involving children. Without workers to review this data and flag it as inappropriate, chatbots may produce answers and content that promote such abhorrent material.
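To make this labelling step concrete, here is a minimal sketch of how human-assigned labels might feed an automated safety filter. The example texts, categories and model below are illustrative assumptions only; nothing here reflects OpenAI’s actual pipeline.

```python
# Illustrative sketch only (not OpenAI's actual system): human reviewers
# label raw text, and those labels train a classifier that can later
# screen harmful material automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data: in reality, workers read each passage and record a
# label. These examples and categories are invented for illustration.
texts = [
    "a friendly answer about baking bread",
    "a neutral summary of a history article",
    "a passage a reviewer flagged as graphic violence",
    "a passage a reviewer flagged as hate speech",
]
labels = ["safe", "safe", "unsafe", "unsafe"]  # assigned by human reviewers

# Train a simple text classifier on the human-labelled examples.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The trained model can then screen new content before a chatbot uses it.
print(classifier.predict(["an answer about planting tomatoes"]))
```

The sketch underlines the point of this section: the automated safeguard exists only because human reviewers read the raw material first, so the quality of the filter rests directly on their labour.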

Psychological toll

Unsurprisingly, the workers who “train” AI are themselves exposed to horrific material. The job requires them to regularly review content involving, for example, non-consensual sex, bestiality and violence in order to create safer AI tools for the public.

Constantly viewing such content takes a serious psychological toll on any person and can have lasting effects. It is therefore alarming to learn just how little was done to help the Kenyan workers cope with the material their job exposed them to.

It is of the utmost importance that chatbots have appropriate safeguards built in through the effective training of AI. Google itself has acknowledged as much, noting in a recent statement that Bard needs to “meet a high bar for quality, safety and groundedness in real-world information”.

However, tech companies’ actions must match their words. Silicon Valley too often uses the mysticism of “wunderkind” executives and cult-like work practices to obscure exploitative labour practices in the pursuit of technological advancement.

Whether this is in the United States or Kenya, OpenAI and Alphabet must do better.

No excuse

All human beings, regardless of their country of origin, have a human right to decent working conditions. To be repeatedly subjected to torturous content as part of your daily job, with little pay and even less support, is an abhorrent abuse of human rights.

With OpenAI raising funds at a hefty $US29 billion valuation, and Microsoft investing $US10 billion in the organisation, there is no excuse for exploitative labour practices in which workers are paid less than $US2 an hour to identify and label reprehensible content in the databases used to train ChatGPT.

There has to be a different approach where human rights processes are built into the way technology is developed.

If these companies require human labelling of datasets, and those datasets contain content likely to cause psychological harm, they must put support processes and other safeguards in place to keep workers safe, and must pay them the equivalent of “danger money” given the risk they are asked to bear.

This behaviour by OpenAI is deeply disappointing, and both OpenAI and Microsoft, as its primary investor, should address the apparently exploitative nature of the ChatGPT dataset training process.

The technology being developed will be integral to our futures, as many pundits expect it to replace traditional search engines like Google. But these technological advancements must not be built off the suffering and exploitation of workers in any country.

Organisations such as OpenAI and Alphabet must conduct their business with human rights in mind. It is never acceptable to trade off human rights in the pursuit of profit.

But this doesn’t rest on just these companies. We all need to identify human rights risks in the advancement of technology, and make sure that we never allow technology to surpass our humanity.

Lorraine Finlay is Australia’s Human Rights Commissioner, Patrick Hooton is policy adviser for human rights and technology at the Australian Human Rights Commission and Dr Catriona Wallace is a director at the Gradient Institute and founder of the Responsible Metaverse Alliance.
