
AI mass surveillance at Paris Olympics – a legal scholar on the security boon and privacy nightmare

The 2024 Paris Olympics is drawing the eyes of the world as thousands of athletes and support personnel and hundreds of thousands of visitors from around the globe converge in France. It’s not just the eyes of the world that will be watching. Artificial intelligence systems will be watching, too.

Author


  • Anne Toomey McKenna

    Visiting Professor of Law, University of Richmond

Government and private companies will be using advanced AI tools and other surveillance tech to conduct pervasive and persistent surveillance before, during and after the Games. The Olympic world stage and international crowds pose security risks so significant that in recent years authorities and critics have described the Olympics as among the world's largest security operations.

The French government, hand in hand with the private tech sector, has harnessed that legitimate need for increased security as grounds to deploy technologically advanced surveillance and data-gathering tools. Its surveillance plans to meet those risks, including the controversial use of experimental AI video surveillance, are so extensive that the country changed its laws to make the planned surveillance legal.

The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister's office has negotiated arrangements permitting the government to significantly ramp up traditional, surreptitious surveillance and information-gathering tools for the duration of the Games. These include wiretapping; collecting geolocation, communications and computer data; and capturing greater amounts of visual and audio data.

I am a law professor, and I research, teach and write about privacy, artificial intelligence and surveillance. I also provide legal and policy advice on these issues to legislators and others. Increased security risks can and do require increased surveillance, and this year France has faced particular concerns about its ability to ensure security around public sporting events.

Preventive measures should be proportional to the risks, however. Globally, critics have characterized France's plan as a surveillance power grab, warning that the government will use this "exceptional" surveillance justification to normalize expanded state surveillance after the Games.

At the same time, there are legitimate concerns about adequate and effective surveillance for security. In the U.S., for example, the nation is asking how the Secret Service's security planning failed to prevent an assassination attempt on former President Donald Trump on July 13, 2024.

AI-powered mass surveillance

Enabled by newly expanded surveillance laws, French authorities have been partnering with AI companies Videtics, Orange Business, ChapsVision and Wintics to deploy sweeping AI video surveillance. They have used the AI surveillance during major concerts and sporting events and in metro and train stations during periods of heavy use, including around a Taylor Swift concert and the Cannes Film Festival. French officials said these AI surveillance experiments went well and cleared the way for further use.

The AI software in use is generally designed to flag certain events like changes in crowd size and movement, abandoned objects, the presence or use of weapons, a body on the ground, smoke or flames, and certain traffic violations. The goal is for the surveillance systems to detect, in real time, events like a crowd surging toward a gate or a person leaving a backpack on a crowded street corner, and to alert security personnel. Flagging these events seems like a logical and sensible use of technology.
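To make the idea concrete, here is a minimal, purely hypothetical sketch of how per-frame detections might be turned into the kinds of alerts described above. The event names, the data structure and the crowd-surge threshold are all illustrative assumptions; the internals of the actual systems deployed in France are not public.

```python
from dataclasses import dataclass

# Hypothetical per-frame output of a video-analysis model.
# None of these fields or thresholds come from the real Olympic systems.
@dataclass
class FrameAnalysis:
    crowd_count: int
    abandoned_object: bool
    weapon_detected: bool
    person_on_ground: bool
    smoke_or_flames: bool

def flag_events(prev: FrameAnalysis, curr: FrameAnalysis,
                surge_threshold: float = 1.5) -> list:
    """Return alert labels for the event types the article lists."""
    alerts = []
    # A sharp jump in crowd size between frames, e.g. a surge toward a gate.
    if prev.crowd_count and curr.crowd_count / prev.crowd_count >= surge_threshold:
        alerts.append("crowd_surge")
    if curr.abandoned_object:
        alerts.append("abandoned_object")
    if curr.weapon_detected:
        alerts.append("weapon")
    if curr.person_on_ground:
        alerts.append("person_on_ground")
    if curr.smoke_or_flames:
        alerts.append("smoke_or_flames")
    return alerts
```

In a real deployment the analysis step would be a trained computer-vision model rather than simple booleans, which is exactly where the questions below about training data, error rates and biometric capture arise.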

But the real privacy and legal questions flow from how these systems function and are being used. How much and what types of data have to be collected and analyzed to flag these events? What are the systems’ training data, error rates and evidence of bias or inaccuracy? What is done with the data after it is collected, and who has access to it? There’s little in the way of transparency to answer these questions. Despite safeguards aimed at preventing the use of biometric data that can identify people, it’s possible the training data captures this information and the systems could be adjusted to use it.

By giving these private companies access to thousands of video cameras already located throughout France, along with the surveillance capabilities and camera networks of rail companies and transport operators, France is legally permitting and supporting these companies to test and train AI software on its citizens and visitors.

Legalized mass surveillance

Both the need for and the practice of government surveillance at the Olympics are nothing new. Security and privacy concerns at the 2022 Winter Olympics in Beijing were so high that the FBI urged attendees to leave personal cellphones at home and use only a burner phone while in China because of the extreme level of government surveillance.

France, however, is a member state of the European Union. The EU's General Data Protection Regulation is one of the strongest data privacy laws in the world, and the EU is leading efforts to regulate harmful uses of AI technologies through its AI Act. As a member of the EU, France must follow EU law.

Preparing for the Olympics, France in 2023 enacted Law No. 2023-380, a package of laws to provide enhanced security for the Games. It includes the controversial Article 7, a provision that allows French law enforcement and its tech contractors to experiment with intelligent video surveillance before, during and after the 2024 Olympics, and Article 10, which specifically permits the use of AI software to review video and camera feeds. These laws make France the first EU country to legalize such a wide-reaching AI-powered surveillance system.

Legal scholars, civil rights organizations and privacy advocates have pointed out that these articles are contrary to the General Data Protection Regulation and the EU's efforts to regulate AI. They argue that Article 7 specifically violates the General Data Protection Regulation's provisions protecting biometric data.

French officials and tech company representatives have said that the AI surveillance is capable of identifying and flagging those specific types of events without identifying people or running afoul of the General Data Protection Regulation's restrictions on processing biometric data. But European civil rights organizations have pointed out that if the purpose and function of the algorithms and AI-driven cameras are to detect specific suspicious events in public spaces, these systems will necessarily capture and analyze the physiological features and behaviors of people in those spaces. These include body positions, gait, movements, gestures and appearance. The critics argue that this is biometric data being captured and processed, and thus that France's law violates the General Data Protection Regulation.

AI-powered security – at a cost

For the French government and the AI companies so far, the AI surveillance has been a mutually beneficial success. The algorithmic watchers are fast, scalable and tireless, and they give governments and their tech collaborators much more data than humans alone could provide.

But these AI-enabled surveillance systems are poorly regulated and subject to little in the way of independent testing. Once the data is collected, the potential for further data analysis and privacy invasions is enormous.

The Conversation

Anne Toomey McKenna is Co-Chair of the Institute for Electrical and Electronics Engineers (IEEE)-USA’s Artificial Intelligence Policy Committee (AIPC), which involves subject matter and education-related interaction with U.S. Senate and House congressional staffers and the Congressional AI Caucus. McKenna has received funding from the ³Ô¹ÏÍøÕ¾ Security Agency for the development of legal educational materials about cyberlaw and funding from The ³Ô¹ÏÍøÕ¾ Police Foundation together with the U.S. Department of Justice-COPS division for legal analysis regarding the use of drones in domestic policing.

Courtesy of The Conversation.