Is It Worth Having ChatGPT Janitors to Clean Up Its Toxic Content?

Ever since OpenAI’s ChatGPT chatbot burst into the limelight late last year, its popularity has grown by leaps and bounds. Unsurprisingly, in lockstep with its growing popularity, controversies have also started dogging the company. For instance, Time magazine published a bombshell report about how OpenAI sub-contracted Kenyan workers, earning less than US$2 per hour, to label toxic content — such as violence, sexual abuse and hate speech — for use in training the AI.


Cleaning up toxic data is a crucial step in maintaining the integrity and ethical use of artificial intelligence systems like ChatGPT, yet it raises difficult questions about how the work is done and who does it. Human moderation plays a central role in refining and contextualizing an AI's understanding of toxicity: human moderators, or "janitors," review flagged content and make the kind of nuanced judgments that algorithms still struggle with.

"The second aspect of the story is that OpenAI, an American company, sub-contracted this work to workers in a poorer country. Here, yet again, though we plunge into murkier ethical waters, there isn’t much that’s inherently controversial or new about this. Companies have been outsourcing low-value and relatively more labour-intensive work to poorer countries for as long as the global economy has been a thing." https://time.com/6247678/openai-chatgpt-kenya-workers/

These ongoing efforts and debates underscore the importance of a thoughtful, collaborative approach to handling toxic content in AI systems — one that weighs the benefits of human moderation against the conditions under which that work is performed.