Lao, a Shanghai man in his thirties who works at a gay bar in China's financial capital, recently posted a joke on a popular Chinese social network: the best way to warm up this Christmas, he claimed, was to hold an orgy at his bar. "Anyone who wants to join, contact me privately or come to the bar," he concluded. The post soon drew playful, ironic replies from other users. One of them warned him: "Be careful with these jokes, they are very sensitive now and they can censor you." He was right. The next day his post had been deleted and his account was blocked. But the story didn't end there.
A few nights later, with the bar packed, two police officers burst into the establishment to warn him that he was about to face an inspection that could end in the closure of the business, of which he is a co-owner, and in administrative detention of up to 15 days, plus a fine of 1,000 yuan (about 120 euros). According to the officers, the punishment was for "acts that disturb public order".
Lao, incredulous, tried in vain to explain that, if they had come because of the orgy comment, it was just a harmless joke with no intent to offend.
"Social media in this country is increasingly resembling a beautiful forest full of weeds that someone insists on pruning: they mow the lawn over and over to leave the paths impeccable, without loose branches or anything they consider uncomfortable," he laments. "The supreme censor is obsessed with these clean-ups. There used to be more leeway, more room to breathe. In terms of freedom of expression on social media, we are regressing in China."
The "supreme censor" he refers to is the Cyberspace Administration of China (CAC), the government body that omnipresently oversees digital activity in the country. Probably the most powerful regulator in the world due to the extent of its control over everything that circulates on the vast Chinese network.
A few months ago, the CAC officially announced a national campaign against what it called "malicious content": incitement to conflict, the spread of rumors, negative information about the economy, finance, or housing and, most strikingly, any expression of "despondency, pessimism, or world-weariness."
Under the pretext of combating "disinformation" and preserving "social harmony," this digital cleansing threatened to empty the public space of any trace of social discontent just as the country faces economic turmoil, especially due to the prolonged real estate crisis.
Authorities were sending a clear message: the internet is not an outlet but a regulated stage. In November, the CAC said it had deleted over 40,000 posts on popular social networks like Xiaohongshu and Bilibili for creating "real estate alarmism" or "distortion" about housing policies.
A few days ago, the regulator issued another statement with new rules aimed at group live streams: "vulgar content that incites tipping" is now prohibited, a reference to a trend on some platforms in which several young women perform suggestive dances or moans together to attract viewers and encourage them to send donations.
The digital purge extends to other fronts in the realm of sexuality. Last month, two of the most popular dating apps among the LGBTQ community, Blued and Finka, disappeared from Chinese app stores by order of the CAC: a surgical maneuver that erased virtual spaces which, in a country where same-sex marriage is not legal, served as meeting points that did not fit the official narrative.
In this atmosphere of digital repression, artificial intelligence has become the new automated machete: censorship increasingly relies on algorithmic surveillance tools that analyze vast amounts of content, detect keywords, track images, and decide in fractions of a second what survives and what gets deleted. On platforms like Weibo, China's equivalent of Twitter, posts deemed critical or politically sensitive that used to disappear within hours or days now vanish even faster, sometimes in less than five minutes, thanks to improved AI filtering.
And if AI now scrubs unwanted posts from cyberspace after the fact, a regulation from last year pushes censorship even earlier, to before publication: digital service providers must maintain internal "ideological audit teams" responsible for monitoring and removing sensitive content before official censors ever have to intervene.
