In May 2018, news out of China that the government was implementing “brain-monitoring” technology in businesses and factories prompted a mix of leeriness and concern among activist groups, who said Chinese workers could be at risk of seeing their privacy further eroded.
“There are limitations in international law in what sort of information a local or national authority can gather and how it can be used, and it’s often meant to be done with extreme care proportionate to the potential threat,” said Sophie Richardson, China director for Human Rights Watch. “Very few of those protections are in China.”
The development of technology that could be used to gauge workers’ emotions was first reported by the South China Morning Post, which said that the government had “applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.”
According to the story, workers have been outfitted with wireless sensors hidden in their uniforms or hats. The devices “constantly monitor the wearer’s brainwaves,” and the data is collected by computers running algorithms that watch for “emotional spikes” caused by anger, sadness, or anxiety.
“When the system issues a warning, the manager asks the worker to take a day off or move to a less critical post,” a Ningbo University professor named Jin Jia is quoted as saying. “Some jobs require high concentration. There is no room for a mistake.”
Jin also said that after initial concerns, workers outfitted with the sensors wore them without fear of having their thoughts monitored. The Post also reported that one company, State Grid Zhejiang Electric Power, had seen its profits increase by around 2 billion yuan (approximately $314 million) since it began using the monitoring technology.
We contacted both Jin and Hangzhou Zhongheng Electric, one of the companies reportedly using the emotion-monitoring technology, seeking confirmation of the story. Human rights advocacy groups, meanwhile, remained skeptical.
“The only comment I would have is that the technology should only be used with the employees’ specific consent and that workers should be able to opt out without fear of reprisals,” said Geoff Crothall, a spokesperson for the China Labour Bulletin and a former reporter for the Post.
Richardson also said her group was worried about potential privacy violations:
It’s almost impossible to exist in China online anonymously — and now because there’s a national ID system that’s linked to all sorts of biometrics and different databases, it’s very easy for authorities to compile a huge amount of information about an individual very quickly.
In late 2017, China implemented a national digital identification system issued through the WeChat messaging app.
The purported accuracy of the workplace devices also came into question. Bennett Cyphers, a technologist with the Electronic Frontier Foundation, estimated that the devices employed an electroencephalography (EEG) reader to monitor for sleepiness or stress. While that kind of technology is plausible, he said, “there’s no technology that anyone is publicly aware of” that can read one’s mind.
While the Post reported that one company said it could measure brain activity with 90 percent accuracy, Cyphers told us he had reviewed papers connected to EEG readers and found that “in the best cases they say, ‘Ninety percent of the time we know from this data that the subject was more likely to be happy than they were to be sad.’ I don’t think it’s as good or as powerful as a lot of people might think it is.”
What did concern Cyphers was the possibility that using this kind of technology would have a “chilling and homogenizing effect” on employee behavior by encouraging workers not to stand out in the face of constant surveillance.
He also said that the data collection involved not only creates a “huge power imbalance” between companies and their employees, but that it is dangerous for companies to try to “outsource” decision-making to algorithms.
“When that happens, the decisions don’t get made better, they get made cheaper,” he said:
The prejudices and flaws that were in whatever human system the algorithm was replacing just end up getting codified in the algorithm.