May 5, 2023 | OPINION | By Saigopal Rangaraj
By now you have most likely heard about or used a generative artificial intelligence tool such as OpenAI’s ChatGPT, Google’s Bard, or Microsoft’s Bing Chat. These rapidly evolving tools have taken the world by storm, growing exponentially in popularity. To many professors’ dismay, AI usage on college campuses has rapidly expanded. It is not uncommon to find students using these tools to summarize readings, brainstorm ideas, or even write entire essays.
One does not need to look far to consider the social ramifications of such a tool. Goldman Sachs estimates that the recent advent of generative AI could displace as many as 300 million jobs. The impacts of such tools are already being felt on financial markets and in people’s lives. The education technology company Chegg – a one-stop shop for all sorts of study materials – saw its stock price fall by close to 50% due to ChatGPT eating into its revenues.
You may be asking how these issues are connected to China and the superpower rivalry that has emerged between it and the United States. The U.S. and China have been locked in a technological arms race for quite some time.
In addition to sending balloons flying through our airspace, China has been accused of stealing trillions of dollars of intellectual property from multinational firms through countless cyber-attacks. China’s meteoric rise and willingness to flout the ‘rules-based order’ put forward by the West have irked many lawmakers, who have sought to retaliate by punishing Chinese firms.
Last year, the Biden administration placed a broad swath of sanctions on China’s semiconductor industry; among other restrictions, this ruling banned U.S. firms from selling high-end semiconductors – the kind found in supercomputers – to China.
Considering that ChatGPT runs on hundreds of millions of dollars’ worth of Nvidia A100 and H100 enterprise-grade Graphics Processing Units, it is no surprise that trade restrictions on chips inhibit China’s ability to develop a ChatGPT alternative. Despite the roadblocks placed by the U.S. on China’s AI industry, the biggest hindrance to China’s ambitions may be of its own making.
The Chinese Communist Party’s desire to control the flow of information is at odds with two inherent features of Generative Pretrained Transformer models: they get better with more training data, and they tend to hallucinate. These challenges drastically increase the cost of developing and maintaining these services.
Before we can look at these limitations, it is important to understand how AI tools such as ChatGPT are trained. These tools are what are known as Large Language Models (LLMs). LLMs work by predicting an output based on a given input; think of these models as extremely complicated sentence finishers. These models come eerily close to replicating human speech due to the vast amounts of training data they are fed. In ChatGPT’s case, that number ran into the hundreds of billions of words.
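To make the “sentence finisher” idea concrete, here is a toy sketch in Python. It predicts the next word purely by counting which word most often follows another in a tiny sample text – a deliberately simplified stand-in for what real LLMs do with neural networks trained on billions of words, not a depiction of how ChatGPT is actually implemented.

```python
from collections import Counter, defaultdict

# Tiny sample "corpus" (hypothetical, for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the word most frequently seen after `word`, or None.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" – it follows "the" twice in the corpus
```

Real models replace these raw counts with billions of learned parameters and condition on whole passages rather than a single word, but the underlying task – predict the most plausible continuation – is the same.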
The Chinese government, over the course of decades, has implemented numerous policies and technological surveillance tools to limit the ability of Chinese internet users to access information online that is not vetted by the Chinese Communist Party. These are collectively known as ‘The Great Firewall.’ These policies mean that Chinese tech companies developing their own LLMs must be overly cautious in determining what sources of training data their models can use, significantly slowing down the development process and increasing costs.
The second major ‘feature’ of LLMs that places them at odds with ‘The Great Firewall’ is their propensity to hallucinate. AI hallucinations refer to instances where an artificial intelligence system generates output that is not grounded in its training data or in reality. Generative AI is behind the really cool images you see online, such as the Pope in a fancy puffer jacket, but hallucinations also lead LLMs to generate false, but plausible-sounding, information that compounds the fake news epidemic.
To limit the harm that could be caused by hallucinations, technology companies implement numerous guardrails that prevent LLMs from responding to certain prompts and generating misleading or hate-filled messages. Due to the CCP’s desire to control the flow of information, Chinese LLMs would need many more guardrails – think filters that prevent any mention of sensitive topics such as the Tiananmen Square massacre – to keep outputs within approved bounds. These additional guardrails would vastly increase the computational costs associated with the development of these LLMs.
While the future of AI and its role in society remains uncertain, for the time being we can rest assured that U.S. firms will be at the forefront of this technological revolution and can take the credit for any future AI-related layoffs. Through its heavy censorship, the Chinese government has hamstrung many of its firms’ ambitions regarding the development of AI tools. Unless the CCP greatly loosens its grip on the flow of information, generative AI is likely to remain an area where U.S. hegemony goes unchallenged.