A few days ago, the European Parliament called for a ban on police use of facial recognition technology in public spaces. The ban was proposed in response to controversial practices involving AI facial recognition and the databases private companies compile for it. While the intentions of protecting the public from unauthorized surveillance and ensuring data protection may seem commendable, one might question whether an outright ban is the appropriate approach.
Facial recognition technology is increasingly popular for profiling potential criminals and tracking existing ones. In this instance, the European Commission's bill limits the use of remote biometric identification, including facial recognition, in public spaces except in cases of "serious" crime, such as kidnappings and terrorism. The European Parliament is even considering a permanent ban on various recognition technologies. This measure also aims to prevent private companies from managing large volumes of private data used in facial recognition databases. Proponents of the ban argue that the technology should not be deployed until a proper regulatory framework is established to protect privacy and data.
Critics might wonder why a comprehensive AI framework has not yet been established, while others may argue that AI development is advancing so rapidly that legal frameworks cannot keep pace. The European Parliament's strategy, although well-intentioned, seems shortsighted and inadequately designed to address the challenges of AI utilization.
It appears that officials would prefer to ban the technology rather than learn how to use it effectively. Had this approach been applied to automobiles a century ago, we might still be using horses for transportation. A commonly repeated point in discussions about this legislative move is that AI facial recognition technology could still be deployed in cases of real threats, such as terrorism or serious crimes.
Envisioning this in practice raises questions. For instance, someone might alert the police about a potential crime at a specific location and time. Then, precisely when the crime is about to happen, someone could quickly install a facial recognition camera at the scene. Although this scenario is unlikely, it illustrates the practical issues with the legislative proposal. The strategy concerning AI needs clearer articulation and presentation beyond merely calling for a ban.

The ongoing debate does highlight the need to improve AI facial recognition and its usage. There are certainly flaws with the technology; for example, the facial recognition system used by the UK's Metropolitan Police in 2019 was found to be 81% inaccurate, often misidentifying innocent people as suspects, according to a University of Essex whitepaper. Moreover, a 2019 Pew Research Center poll revealed that only half of U.S. adults trust law enforcement to use facial recognition responsibly.
While more comprehensive frameworks are undoubtedly necessary to regulate the use of facial recognition, it is essential to focus on improving the technology rather than discarding it outright. Efforts should be aimed at enabling regulators to keep pace with technological advancements. We cannot afford to take two steps forward and one step back continually.