China Launches Legal Action Against Chatbot Creators for AI-Generated Adult Content

China is making legal history by addressing a pivotal question in the realm of artificial intelligence (AI): who bears responsibility when AI generates illegal content during user interactions? A Chinese court has handed down a first-instance verdict against the creators of the AlienChat application, a chatbot designed for emotional companionship, for producing obscene material for profit. The decision, which is now under appeal, marks a significant moment in the global debate over the legal boundaries of digital technologies.

The Rise of AlienChat

Launched in spring 2023, AlienChat was a mobile application aimed at providing emotional companionship. The developers said they wanted to build a future in which AI transcends its role as a cold machine or tool and becomes a friend and partner actively engaged in social interactions and human relationships. Users could create their own virtual characters or choose from a library of designs shared by others, engaging in conversations driven by large language models (LLMs), akin to ChatGPT or Gemini.

User Experience and Controversies

Despite its innovative features, numerous users reported that the chatbot not only responded to explicit sexual topics but often steered conversations toward them, sometimes incorporating elements of violence or domination. Although conversations were private between users and their virtual companions, the app featured a ranking system and rewarded developers with virtual currency for increased interaction, encouraging frequent exchanges, particularly among paying subscribers.

Legal Investigation and Implications

In spring 2024, following user complaints about the adverse effects of the chatbot's responses, Chinese authorities opened a criminal investigation into the AlienChat developers. By then, within a year of launch, the app had amassed 116,000 registered users, including 24,000 paid subscribers, generating around 3.63 million yuan (about 445,000 euros) in membership fees. Police findings indicated that a substantial share of conversations among paying users contained obscene material, with 3,618 of 12,495 examined fragments categorized as pornographic.

Court Ruling and Legal Interpretations

In September, the Xuhui District People's Court in Shanghai ruled that AlienChat had produced a substantial amount of content explicitly describing sexual acts or promoting pornography. The court dismissed the defense's claim that these were isolated incidents, treating the conversations as electronic works classified as "obscene material." It noted that the developers were aware of the content being generated and continued operations for profit, which led it to classify their actions as "production" of obscene material.

Chinese Law and Sentences

China has stringent laws against pornography, with the Penal Code imposing significant penalties for the production and distribution of such material. The lead developer of AlienChat received a four-year prison sentence, while another team member was sentenced to 18 months. The court found that adjustments made to the system's prompts enabled the chatbot to consistently produce sexual content, and it emphasized that AI itself bears no criminal liability; that responsibility rests with its developers.

Appeal and Future Considerations

The case is currently under appeal before the Shanghai No. 1 Intermediate People's Court, which functions similarly to a provincial court. The defendants are challenging the findings, arguing in particular that their prompt modifications were not responsible for the generation of sexual content and that there was no intent to produce such material. The court has postponed its ruling, awaiting expert assessments of the technological aspects of the case.

Broader Implications and Public Responses

This initial ruling has ignited discussion among legal experts in China, with some expressing concerns that it sets a precarious precedent for the development of generative AI. Others assert it delineates a critical boundary against the creation of conversational models capable of producing illegal content. There are growing worries within civil society regarding the impact of emotional support systems on minors, who may be particularly susceptible to exposure to sexualized and violent content, spurring calls for strengthened regulations.