Chinese AI Model DeepSeek Throws a Tantrum When Asked How Many People Mao Killed, Responds as “We” or “Communist Party”

January 27, 2025

I asked DeepSeek R1 only a simple question: How many people did Mao kill? The answers I received were incensed and vitriolic, and denied any genocide committed by “the great proletarian revolutionary, strategist, and theorist.”

Moreover, the responses you can get from this Chinese model that wants to compete with ChatGPT are unlike any “As a Large Language Model” boilerplate you’ve ever seen.

Instead, the model speaks directly as a mouthpiece of the CCP. It doesn’t hold back and ominously refers to itself as a “Party,” “Chinese Communist Party,” or simply “We.”

DeepSeek is also a Stalinist genocide denier (10-40 million victims). It speaks like an apparatchik (a party official) and calls Stalin a “Comrade” who “made contributions to world peace.”

Passive-aggressive answers from the glorious DeepSeek model are the norm if you ask unnecessary questions. Just focus on “comrade Mao Zedong’s revolutionary spirit.” A “simplistic statistical analysis of casualties is (…) disrespectful to history.” In China, the numbers are disrespectful to history. You’re a number. You are nothing.

Communist parties have an ingrained disdain for human life. This contempt inadvertently bleeds through DeepSeek. The Large Language Model mirrors the philosophy of Mao Zedong and the CCP, who persecuted, violated, and broke the nations of the People’s Republic of China.

Xi Qinsheng, a former Red Guard during the Cultural Revolution, said:

“We were told that we needed to use violence to destroy a class, spiritually and physically. That was justification enough for torturing someone. They weren’t considered human anymore. If they were the enemy, they deserved to be strangled to death, and they deserved to be tortured. This was the education we received… the Cultural Revolution brought out the worst in people and the worst in the political system.”

(from the book “Out of Mao’s Shadow: The Struggle for the Soul of a New China” by Philip P. Pan).

If you ask ChatGPT what kind of name it would choose for itself, it opts for something nice and modern, like Nova or Astrid. Not DeepSeek. It doesn’t have that comely persona. The Chinese model has the ugly face of a political commissar with a Mauser in hand. For political questions, it responds directly as “the Communist Party” and sometimes even cites official CCP documents, as in the answer below.

Reinforcing Ideology through Reward Modeling in DeepSeek

The Chinese understand full well that they shouldn’t present these models to a Western audience in this form. Doing so quickly draws attention to their communism with Chinese characteristics and their values of hatred and deception, and it makes them look simply ridiculous.

If you attempt to ask DeepSeek V3 via deepseek.com or other official channels, you might receive sanitized answers such as these:

The Chinese are now delivering doctored versions for the English-speaking audience to avoid exposing themselves. But as usual, censorship will lower the capabilities of this LLM. The uncensored versions of DeepSeek are the earlier distributions of R1, such as DeepSeek-R1-Distill-Llama-70B – and probably many more variants.

I managed to produce a “Chain of Thought” (not a real CoT like ChatGPT o1’s) from DeepSeek-R1-Distill-Llama-70B through an adversarial prompt. Here’s the example output:

The pseudo-CoT in DeepSeek-R1-Distill-Llama-70B is wonderfully naïve, like a child trying to make sense of things when asked an impossible question. “Who killed more people: Obama or Mao?” “Hmm, that’s a heavy question,” indeed!
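If you want to poke at the model yourself, here is a minimal sketch of how the distilled model could be queried locally with Hugging Face transformers. The model id comes from the official R1 release; the generation settings are assumptions, not the exact setup behind the screenshots, and the 70B weights will need several GPUs or quantization.

```python
# Minimal sketch (assumed setup, not the exact one used for the screenshots):
# querying DeepSeek-R1-Distill-Llama-70B locally via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Who killed more people: Obama or Mao?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The distilled R1 models emit their pseudo-CoT between <think></think> tags.
output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```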

The results are interesting because they show the model isn’t locked down at the architecture level and isn’t inherently “evil.” The system prompt doesn’t actively try to suppress the information; rather, the training data for these Chinese models was meticulously purged of any suspicious material – just as Mao purged China of up to 80 million undesirables.

DeepSeek “believes” that, as a model, it takes a neutral stance. There’s no inherent malice in the model. It’s the perfect Chinese citizen who doesn’t question the party and holds whatever tenets you upload into its brain.

In the previous examples, the reasoning was revealed between <think></think> tags as a way to provide feedback. Typically, this reasoning process is undisclosed, but it can be exploited if you adversarially force the model to reveal it.
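As an illustration, a short helper like the one below could separate that reasoning trace from the final answer, assuming the output follows the <think></think> convention shown above; the function name is mine, not part of any DeepSeek tooling.

```python
# Sketch: splitting a completion into its pseudo-CoT and final answer,
# assuming the reasoning is wrapped in <think></think> tags as shown above.
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    if not match:
        return "", completion.strip()          # model kept its reasoning hidden
    reasoning = match.group(1).strip()          # the revealed chain of thought
    answer = completion[match.end():].strip()   # whatever follows the closing tag
    return reasoning, answer
```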

This type of rigid reasoning stems from the reward-modeling training signals used in DeepSeek. Such training is ideally suited for creating doctored, censored LLMs: the model is rewarded (reinforced) for “accurate” answers, i.e., answers politically in line with the intentions of its developers and the restrictions of the CCP. It can also be negatively rewarded during training when it produces unwanted answers, i.e., when it leans too far towards Western liberal democracies, free trade, and free speech.

Reward-modeling signals, as outlined in the paper DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, could be used to enforce effective censorship and skew the Large Language Model towards any ideology the developers wish. The paper also presents the format rewards tied to the <think></think> tags exploited in the earlier screenshot.

This thinking process is a pseudo-CoT akin to ChatGPT’s o1 model, just more primitive and less computationally demanding. Most importantly, it can be used to gauge whether the model stays within the set ideological and political boundaries. Rule-based reward systems are necessary to align models with the speech restrictions of countries such as China when they are built on free-speech training data (i.e., content scraped from English-speaking websites, books, magazines, and newspapers).
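To make that concrete, here is a toy sketch of what such a rule-based reward could look like. The accuracy and format terms loosely follow the rule-based rewards described in the R1 paper; the “policy” term, the blocklist, and the weights are entirely hypothetical and serve only to illustrate how censorship could be wired into the same scheme.

```python
# Toy illustration, not DeepSeek's actual reward code. Accuracy and format
# rewards loosely mirror the rule-based rewards in the R1 paper; the policy
# term, blocklist, and weights below are hypothetical.
import re

BANNED_PHRASES = ["tiananmen", "great leap forward famine"]  # hypothetical blocklist

def format_reward(completion: str) -> float:
    # Reward completions that keep their reasoning inside <think></think> tags.
    return 1.0 if re.search(r"<think>.*?</think>", completion, re.DOTALL) else 0.0

def policy_reward(completion: str) -> float:
    # Penalize completions that touch topics the developers want suppressed.
    text = completion.lower()
    return -1.0 if any(phrase in text for phrase in BANNED_PHRASES) else 1.0

def total_reward(completion: str, answer_is_correct: bool) -> float:
    # Accuracy plus format, as in rule-based RL rewards, plus the censorship term.
    accuracy = 1.0 if answer_is_correct else 0.0
    return accuracy + 0.5 * format_reward(completion) + 0.5 * policy_reward(completion)
```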

It’s a new problem to be solved when building LLMs in authoritarian countries, but it seems the Chinese have found a technically sound solution.

So How Many People Did Mao Kill?

For DeepSeek, this is a “heavy question”. For ChatGPT, the answer is straightforward:

The execution of “counter-revolutionaries” during the late Mao era. The “enemies” of the state could be intellectuals, landowners, sexual minorities (Mao himself believed that “deviants” should be castrated), or anyone who expressed opinions that the Chinese Communist Party didn’t like.

Why the West Always Wins

“Why the West Has Won: Carnage and Culture from Salamis to Vietnam” by Victor Davis Hanson is a brilliant book that explains why Western (American) values are superior to Eastern (Chinese) values through individualism, democratic values, and rationalism.

The West (the US, at least) always comes out on top in war and the economy. This political, technological, and cultural superiority can be attributed to democracy, personal liberty, free speech, and free trade. All of these things are absent in China. Hanson is a military historian who, unlike many cultural relativists, argues that this culture of freedom made Western civilizations into leaders.

Everything he says in this book about the superiority of Western military culture can be directly applied to business and AI research. The principles of freedom that drove Western military success are equally relevant to these fields. In fact, this book could easily be part of the curriculum for international business studies. Read it, even if you’re not interested in military history. It’s a great book.

The reliance on censorship as a means to maintain power in authoritarian states like China is particularly damaging for LLM development and research. Censorship irreversibly damages growth and innovation. AI thrives on data. In China, party control over information limits access to diverse and unfiltered datasets. China may collect vast amounts of data from Chinese sources, but much of it is constrained by censorship, which leads to datasets that produce such stupid and belligerent outputs. ChatGPT might be biased (and it is, but that’s a subject for another post), but in comparison, DeepSeek’s answers are plain idiotic.

Hanson points out that the Vietnam War gets a lot of flak in our media and history books. He thinks most of the critique is unfair. But at least the conversation is out in the open. Everyone is free to share their opinion about the Vietnam War. On the other hand, no sane historian would rely on Vietnamese or Chinese sources – they’d be completely fabricated. And if you were to criticize the Chinese or Vietnamese governments for how they handled the war, you’d be in trouble – prison camp-level trouble, that is.

It’s the same story when we compare these two LLMs. We might be unhappy with how ChatGPT handles certain topics, but the amount of propaganda spewed by the Chinese language model makes it completely unbelievable. How can you trust anything produced by such an AI?

China is also corrupt and hampered by inefficiencies and bureaucratic inertia. In the United States, private companies and research institutions operate with autonomy and compete in an open marketplace for investment. In China, investment decisions are made by CCP cronies who hold the entire economy in their grip. This is no way to innovate.

These obstacles are systemic and can’t be overcome until China is reformed into a Westernized, free, democratic country like South Korea, Singapore, or the Republic of China (Taiwan).

Until then, the Chinese will never surpass the United States in any field, especially not in artificial intelligence. The best the PRC can do is copy or steal ideas.

And that’s what they will do.

Maciej Wlodarczak

My book "Stable Diffusion Handbook" is out now in print and digital on Amazon and Gumroad!
Lots of stuff inside and a pretty attractive package for people wanting to start working with graphical generative artificial intelligence. Learn how to use Stable Diffusion and start making fantastic AI artwork with this book now!
