
AI chatbots with Chinese characteristics: why Baidu’s ChatGPT rival may never measure up

The Conversation

On March 16, Baidu unveiled China’s latest rival to OpenAI’s ChatGPT – ERNIE Bot (short for “Enhanced Representation through kNowledge IntEgration”). The “multi-modal” AI-powered chatbot can generate text, images, audio and video from a text prompt.

Author


  • Fan Yang

    Research Associate at RMIT and Alfred Deakin Institute, Deakin University

However, ERNIE Bot was poorly received by the public. Baidu’s Hong Kong-listed shares fell by 10% during the press conference, and the beta test is open only to a group of organisations approved by the company.

ERNIE Bot will not be a Chinese substitute for ChatGPT, but that might be how the Chinese state wants it. As earlier efforts to make Chinese AI chatbots have shown, the Chinese Communist Party prefers to maintain strict censorship rules and government steering of research – even at the cost of innovation.

Digital sovereignty and ChatGPT

ChatGPT is not directly accessible in China due to the country’s protectionist approach to digital sovereignty. Chinese data are confined within China, and information that conflicts with government propaganda is censored.

Chinese tech companies including Baidu and Tencent prohibit third-party developers from plugging ChatGPT into their services.

However, the prominence of ChatGPT created a booming grey market. Until a crackdown, ChatGPT logins were sold on the ecommerce platform Taobao, and video tutorials demonstrating the chatbot’s abilities were published on Chinese social media.

XiaoIce and BabyQ

Baidu isn’t the first or only tech company in China trialling a generative AI chatbot.

In March 2017, Tencent launched two social chatbots, called XiaoIce and BabyQ, on its WeChat and QQ messaging apps respectively.

XiaoIce was developed by Microsoft, while BabyQ was created by Turing Robot, a Beijing-based AI company. Within months, the two chatbots were taken offline to be attuned to China’s censorship rules.

BabyQ never came back, but Microsoft’s XiaoIce returned and has been providing services to millions of users on major platforms including WeChat, QQ and Weibo.

Made in China 2025 and the push for AI

China’s government would be on the defensive if the country adopted only AI chatbots developed overseas. Because chatbots run on human feedback, it would be impossible to prevent transnational flows of data, and the political interests of the Chinese Communist Party might be threatened.

Since 2015, during the administration of former premier Li Keqiang, the Made in China 2025 scheme has endeavoured to bolster the country’s technological capacities. AI is a major focus.

Since February 2023, Chinese tech companies across AI, food delivery, e-commerce and gaming have scrambled to catch up with OpenAI and bring their own ChatGPT-like products to market.

Beijing’s municipal government is supporting this ambition, but only for some leading tech companies.

Censorship and culture

We can expect China to see a short-term proliferation of copycat ChatGPT services, many of which will vanish or be acquired by big tech companies.

Smaller companies, with little support from the government, are unlikely to be able to compete.

A small startup called YuanYu launched China’s first ChatGPT-like bot in January. Dubbed ChatYuan, the bot ran as a “mini-program” inside WeChat. It was suspended within weeks after users posted screenshots of its answers to political questions online.

However, Chinese users are still interested in large language models based on the Han Chinese linguistic system.

ERNIE Bot, for example, claims to be more capable than ChatGPT in Chinese, with a better understanding of Chinese histories, classical literature and dialects.

Government steering

Beijing has tightened its governance of the tech industry since a crackdown in 2021.

One upside for industry is a boost in funding and talent support. The flip side is that resources are steered towards technologies that serve Beijing’s priorities in domestic governance and military defence.

China’s ChatGPT imitators are more likely to be designed to benefit enterprises than individuals. For tech giants, the objective is to form a “full AI stack” by integrating generative AI products into every level of their business, from search engines and apps to industrial processes, digital devices, urban infrastructure and cloud computing.

Emotional surveillance and disinformation

AI-driven chatbots can also lead to adverse outcomes. Alongside the global concerns around job security, copyright and academic integrity, in China there are additional risks of emotional surveillance and disinformation.

Chatbots can identify users’ emotional states through conversations. This emotion-reading ability extends the power of big data and AI to invade people’s privacy.

In China, such emotional surveillance could further establish “emotional authoritarianism”. Any sentiments that could threaten the leadership of the Chinese Communist Party, even if not directly stated, have the potential to attract punishment for the user.

AI-powered chatbots and search engines are also likely to legitimise Chinese state-organised propaganda and disinformation. Users will come to trust and depend on these services, but their inputs, outputs and internal processes will be heavily censored.

Chinese politics and leadership will not be up for discussion. When it comes to controversial events or histories, only the perspectives of the Chinese Communist Party will be presented.


Fan Yang does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation.