What is the future of content moderation?

Why AI games still have a long way to go

Artificial intelligence could one day turn games into an individual adventure for every player. However, the first experiments are already revealing problems that are not the machines' fault.

By Martin Abgottspon


The spirit of adventure in video games has been somewhat lost these days. If you get stuck at some point, you ask Google for advice or go straight to a complete walkthrough on YouTube. Where you once spent nights experimenting with a puzzle, the solution is now just a click away on the internet.

One technology that could reverse this trend is AI. It can be used to fully individualize games: everyone experiences their own adventure, meets different villains and finds different useful items. Your own behavior shapes the story with every action. But what sounds like a dream to many gamers also has its downsides.

Text Adventure 2.0

Two years ago, the start-up Latitude from Utah launched “AI Dungeon”, the first major game experiment based on artificial intelligence. It is a simple text adventure, similar to those we know from the eighties, but with one difference: every action the player takes leads to completely new and unpredictable twists in the game. A first milestone in the world of AI games, and one that already works well technically.
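The core mechanic can be sketched in a few lines. This is a minimal, illustrative loop, not Latitude's actual code: the story so far and the player's latest action are combined into a prompt, and a language model writes the continuation. The real game calls a large model (GPT-3, via OpenAI's API); here a stub function stands in for it so the sketch is self-contained.

```python
# Minimal sketch of a text-adventure loop in the style of "AI Dungeon".
# The function names and the stub model are illustrative assumptions,
# not Latitude's implementation.

def build_prompt(history, action):
    """Concatenate the story so far with the player's latest action."""
    return "\n".join(history + ["> " + action])

def stub_model(prompt):
    """Stand-in for the language model: returns a canned continuation.
    In the real game, a large model generates this text from the prompt."""
    return "A trapdoor creaks open beneath your feet."

def play_turn(history, action, model=stub_model):
    """One game turn: record the action, ask the model to continue the story."""
    prompt = build_prompt(history, action)
    continuation = model(prompt)
    return history + ["> " + action, continuation]

story = ["You stand before a ruined tower."]
story = play_turn(story, "enter the tower")
```

Because the model sees the full history on every turn, each player's choices steer the narrative in a different direction — which is exactly what makes every playthrough unique, and what makes the generated content impossible to predict in advance.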

But there is another problem, as a newly introduced monitoring system has now shown. Players abuse the game to generate stories whose content would be punishable by law: sexual encounters with children can be found here, as can sexual acts with the dead.

An AI future that nobody wants

It wasn't long before OpenAI got involved in this discourse and urged the developers to take immediate action. "Decisions on content moderation are difficult in some cases, but not this one," said OpenAI CEO Sam Altman in a statement. "This is not the future for artificial intelligence that any of us want."

OpenAI is a non-profit company concerned with the question of the “existential threat from artificial intelligence”. The organization, co-financed by Elon Musk, aims to help AI and humans coexist in harmony.

The developers at Latitude finally complied with OpenAI's request without much objection, but this is what really set the controversy in motion.

The problem of content moderation

Suddenly the users of “AI Dungeon” understandably saw themselves as guinea pigs subjected to massive intrusions into their privacy. After all, the game's operators receive even the most intimate details from the mind of every individual player. A fact that is just as worrying as the absurd perversions that have found their way into the game.

"The community feels betrayed that Latitude is scanning private fictional literary content and manually accessing and reading it," says an AI dungeon player, who goes by the name of Mimi and claims to have written more than a million words using AI , including poems, parodies and erotic adventures. Mimi and other disgruntled users say they understand the company's desire to control publicly viewable content, but they say it went too far and ruined a powerful creative playground. “It allowed me to explore aspects of my psyche that I never knew existed,” says Mimi.

Mountains of data

So how, and to what extent, should content moderation take place? A central question that will probably be asked a lot in the future. Which sexual acts or crimes are justifiable in fiction, and which are not? Compiling block lists and filters currently accounts for a large part of the research and work in this area.
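The block-list approach mentioned above can be illustrated with a short sketch. This is a deliberately simplified, hypothetical filter, not the system Latitude deployed: generated text is checked against a list of prohibited terms, and matches are flagged for review rather than silently blocked.

```python
# Illustrative block-list filter (hypothetical; not Latitude's or OpenAI's
# actual moderation system). Flags generated text containing listed terms.

BLOCKED_TERMS = {"forbidden_term_a", "forbidden_term_b"}  # placeholder list

def flag_for_review(text, blocked=BLOCKED_TERMS):
    """Return the blocked terms found in the text (case-insensitive,
    whole-word match after stripping simple punctuation)."""
    words = {w.strip(".,;:!?\"'").lower() for w in text.split()}
    return sorted(words & blocked)
```

The weakness of such word lists is exactly what fuels the debate in the article: they flag harmless fiction that merely mentions a term (false positives) while missing paraphrases and context-dependent meaning (false negatives), which is why moderation cannot be reduced to list-matching alone.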

"However, it is always difficult to know how language models behave in the wild," says Suchin Gururangan, a researcher at the University of Washington in the field. He works out different language systems and also teaches the machines what content they are learning from.

Not an easy undertaking, given the hundreds of millions of words in the world. The Duden corpus, an electronic database, estimates that the German language alone contains around 23 million words. For the machines, this results in an astronomical number of possible combinations and mountains of data, and it will probably take a few more years before the end of this work comes into view without any conflicts of conscience.