Thanks America! How China’s Newest Software Could Track, Predict, and Crush Dissent
Armed with data from spying on its citizens, Beijing could turn ‘predictive policing’ into an AI tool of repression.
What if the Communist Party could have predicted Tiananmen Square? The Chinese government is deploying a new tool to keep the population from rising up. Beijing is building software to predict instability before it arises, based on volumes of data mined from Chinese citizens about their jobs, pastimes, and habits. It’s the latest advance in what goes by the name “predictive policing,” in which data is used to deploy law enforcement or even military units to places where crime (or, say, an anti-government political protest) is likely to occur. Don’t cringe: predictive policing was born in the United States. But China is poised to emerge as a leader in the field.
Here’s what that means.
First, some background. What is predictive policing? Back in 1994, New York City Police Commissioner William Bratton led a pioneering and deeply controversial effort to pre-deploy police units, on the basis of crime statistics, to places where crime was expected to occur.
Bratton, working with deputy police commissioner Jack Maple, showed that the resulting program, known as CompStat, cut crime by 37 percent in just three years. But it also fueled an unconstitutional practice called “stop-and-frisk,” in which minority youth in the wrong place at the wrong time were frequently targeted and harassed by police. The lesson: you can deploy police to hotspots before crime occurs, but you may cause more problems than you solve.
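At its core, that kind of hotspot policing amounts to ranking locations by recent incident data and sending patrols to the highest-scoring ones. The sketch below is a minimal illustration in Python, with invented incident data and a hypothetical `hotspot_scores` helper; it is not a description of CompStat’s actual methodology.

```python
# A minimal, illustrative sketch of hotspot-style predictive policing:
# weight recent incidents more heavily, score each area, and send patrols
# to the top-scoring areas. The data and weighting scheme are invented.
from collections import Counter

# (area_id, days_ago) pairs for hypothetical reported incidents
incidents = [("precinct_7", 2), ("precinct_7", 5), ("precinct_3", 1),
             ("precinct_7", 30), ("precinct_12", 3), ("precinct_3", 8)]

def hotspot_scores(reports, half_life_days=14):
    """Score each area, giving recent incidents exponentially more weight."""
    scores = Counter()
    for area, days_ago in reports:
        scores[area] += 0.5 ** (days_ago / half_life_days)
    return scores

# "Deploy" units to the two highest-scoring areas
for area, score in hotspot_scores(incidents).most_common(2):
    print(f"patrol -> {area} (score {score:.2f})")
```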
That was in New York.
Wu Manqing, a representative from China Electronics Technology, the company the Chinese government hired to design the predictive policing software, described the newest version as “a unified information environment,” Bloomberg reported last week. Its applications go well beyond simply sending police to a specific corner. Because Chinese authorities face far fewer privacy limits on the sorts of information they can gather on citizens, they can target police forces much more precisely. They might be able to target an individual who suddenly receives and deposits a large payment into their bank account, who reads pro-democracy news sites, or whose buying habits change, say, toward more expensive luxury items. The Chinese government’s control over the Internet in that country puts it in a unique position to extend the reach of surveillance and data collection into citizens’ lives. Chinese authorities plan to deploy the system in places where relations between ethnic minorities and the Communist Party are particularly strained, according to Bloomberg.
After the Arab Spring in 2011, Chinese leaders increased internal security spending by 13 percent to 624 billion yuan, outpacing spending on the military, which was 601 billion yuan. That year, the Chinese government compelled 650 cities to improve their ability to monitor public spaces via surveillance cameras and other technologies. “Hundreds of Chinese cities are rushing to construct their safe city platforms by fusing Internet, video surveillance cameras, cell phones, GPS location data and biometric technologies into central ICT meta-systems,” reads the introduction to a 2013 report on Chinese spending on homeland security technologies from the Homeland Security Research Council, a market research firm in Washington.
China soon emerged as the world’s largest market for surveillance equipment. Western companies, including Bain Capital, the private equity firm founded by former GOP presidential candidate Mitt Romney, all wanted a piece of a pie worth a potential $132 billion by 2022.
But collecting massive amounts of data leads inevitably to the question of how to analyze it at scale. China is fast becoming a world leader in the use of machine learning and artificial intelligence for national security. Chinese scientists recently presented two papers at the Association for the Advancement of Artificial Intelligence conference, and each points to the future of Chinese research into predictive policing.
One explains how to more easily recognize faces by compressing a Deep Neural Network, or DNN, down to a smaller size. “The expensive computation of DNNs make their deployment difficult on mobile and embedded devices,” it says. Read that to mean: here’s a mathematical formula for getting embedded cameras to recognize faces without calling up a distant database.
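The paper’s specific compression technique isn’t spelled out here, but one common way to shrink a network enough to run on an embedded camera is magnitude pruning: discard the smallest weights and store what remains sparsely. The Python sketch below is only an assumption-laden illustration of that general idea, using a single randomly initialized layer rather than a real face-recognition model.

```python
# Illustration only: magnitude pruning of one dense layer, then sparse
# storage, to show how a network's memory footprint can shrink for
# embedded deployment. Not the method used in the paper.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
weights = rng.normal(size=(1024, 1024)).astype(np.float32)  # one dense layer

def prune(w, keep_fraction=0.1):
    """Keep only the largest-magnitude weights; zero out the rest."""
    threshold = np.quantile(np.abs(w), 1 - keep_fraction)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = sparse.csr_matrix(prune(weights))
print("dense bytes: ", weights.nbytes)
print("sparse bytes:", pruned.data.nbytes + pruned.indices.nbytes + pruned.indptr.nbytes)
```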
The second paper proposes software to predict the likelihood of a “public security event” in different Chinese provinces within the next month. Defense One was able to obtain a short demonstration of the system. The “events” range from the legitimately terrifying, such as “campus attack” or “bus explosion,” to the more mundane-sounding “strike event” or “gather event” (the researchers say this was the “gather” incident in question), all rated on a severity scale of 1 to 5. To build it, the researchers relied on a dataset of more than 12,324 disruptive occurrences that took place across different provinces going back to 1998.
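That description suggests a fairly standard forecasting setup: historical event records are turned into per-province, per-month features, and a model estimates the probability of a disruptive event in the following month. The sketch below is a hedged illustration of that setup; the features, the synthetic data, and the choice of logistic regression are assumptions made for clarity, not the researchers’ actual model.

```python
# A sketch of next-month event forecasting from per-province monthly
# features. Everything here (features, data, model) is invented to show
# the general shape of such a system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500  # synthetic province-month records

# Hypothetical features: events last month, events in the prior year,
# and an index of recent online-chatter volume.
X = np.column_stack([
    rng.poisson(2, n),       # events last month
    rng.poisson(20, n),      # events in prior year
    rng.normal(0, 1, n),     # chatter index
])
# Synthetic label: did a disruptive event occur the following month?
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 2, n) > 6).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("P(event next month):", model.predict_proba([[3, 25, 0.5]])[0, 1].round(2))
```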
The research by itself is not alarming. What government doesn’t have an interest in stopping shootings or even predicting demonstrations?
It’s the Chinese government’s definition of “terrorism” that many in the West find troubling, since the government has used the phantom of public unrest to justify the arrests of peaceful dissidents, such as the Uighur rights advocate Rebiya Kadeer.
Those fears increased after the Chinese government passed new anti-terror legislation in December that expands government surveillance powers and compels foreign technology companies to assist Chinese authorities in data collection efforts against Chinese citizens. Specifically, the law says that telecommunication and technology companies “shall provide technical interfaces, decryption and other technical support and assistance to public security and state security agencies when they are following the law to avert and investigate terrorist activities.”
The U.S. objects: State Department spokesman Mark Toner said the law “could lead to greater restrictions on the exercise of freedoms of expression, association, and peaceful assembly.” The FBI’s push to compel Apple to provide a different sort of technical interface into Syed Farook’s iPhone is one reason leaders in China are watching the FBI-versus-Apple debate so closely, and it is the epitome of irony.
“Essentially, this law could give the authorities even more tools in censoring unwelcome information and crafting their own narrative in how the ‘war on terror’ is being waged,” human rights worker William Nee told the New York Times.
It could also compel foreign technology companies to help the Chinese government acquire more data to train its predictive policing software. That’s where China’s predictive policing powers enter the picture.
Predictive policing efforts are on the rise around the United States, with programs in Memphis, Tennessee; Chicago, Illinois; Santa Cruz and Los Angeles, California; and elsewhere. Police departments implement them in a variety of ways, many not particularly controversial. Beijing has the resources, the will, and the data to turn predictive policing into something incredibly powerful and, possibly, quite dreadful.
Read the Original Article at Defense One