What is going on with facial recognition and biometric-based verification?

Joe Bloemendaal
4 min read · May 11, 2021

Image credits: Gerd Altmann for Pixabay

Lately, it seems that hardly a day goes by without news about yet another use of facial recognition technology.

While the ubiquity of high-resolution cameras on mobile and smart devices, and the convenience that facial recognition and other biometric technologies bring to our daily lives, might explain part of that increased interest, there is growing chatter worldwide about what constitutes acceptable, responsible and proportionate use of these technologies.

The fact that these questions arise can be read as a good sign, as it shows the potential for further benefits and use cases as biometrics-enabled solutions go mainstream.

Take facial recognition for passport checks at airports and major travel hubs, for example. One could argue that this touchless way of verifying travelers’ identities has contributed to keeping coronavirus at bay at a time when many healthcare workers were required to travel to assist in pandemic hotspots. Or that access to face and iris recognition at schools and universities has proven particularly helpful in letting more students return safely to in-person instruction. Public safety can also be enhanced by the appropriate use of biometrics. To that point, the European Commission has recently released new rules on biometrics that acknowledge the benefits of facial recognition technologies in helping find missing children and terrorists.

On the flip side, there are many examples of how damaging these technologies can be when used wrongly or with dubious intentions. Human rights activists and watchdogs have warned of the controversial role biometrics have played, even if unintentionally, in perpetuating discrimination and infringing civil liberties. They point to the misuse of facial recognition by authoritarian political regimes, its involvement in targeting minority groups, and the misidentification of people that has led to wrongful arrests.

Also worth considering is the rise of deepfakes, AI-generated videos that replace a real person with someone else’s likeness. At first these were inoffensive or even amusing, like the viral YouTube clip of Saturday Night Live’s Bill Hader in conversation with David Letterman on his late-night show in 2008, which became famous after a so-called deepfake master altered the footage so that Hader’s face subtly shifts into Tom Cruise’s as he does an impression of the actor. But this is now turning into a troubling issue, with more and more open-source tools making it possible for virtually anyone with images of a given person to create a convincing deepfake. I read in a recent VentureBeat article that the number of deepfakes on the web increased 330% from October 2019 to June 2020, reaching over 50,000 at their peak. Some were intended as fun videos on TikTok, but others have the potential to influence opinion during elections or to implicate a person in a crime.

What would it take to make biometric tech publicly acceptable?

Faced with mounting scrutiny and public pressure, governments and other public organisations worldwide are exploring ways to make biometric technology acceptable.

The European Commission has recommended that the use of real-time remote biometric identification systems by law enforcement be limited to preventing terrorist attacks, finding missing children, and resolving public security emergencies. Furthermore, the EU governing body will also require high-risk AI systems to use high-quality datasets and to ensure traceability and human oversight. Reuters reports that under these new procedures, biometrics and other artificial intelligence applications deemed ‘high-risk’ must undergo a conformity assessment before the technology or system is deployed. Companies that infringe the rules can face penalties of up to 6 percent of their global turnover.

Meanwhile, in Mexico, the governing party has proposed a new law that would make registration mandatory for any new SIM card or prepaid mobile phone line. The registry would hold personal data, including the owner’s name, phone number, and address, as well as biometric data such as fingerprint, face, voice, or iris records. While the regulation is intended to tackle the growing use of unregistered mobile phones in kidnapping and extortion crimes, critics argue that criminals, far from being deterred, will continue to use illegally acquired phones precisely to avoid exposing their own biometric data. Their main concern, though, is that the regulation would violate constitutional rights.

The Ada Lovelace Institute’s Citizens’ Biometrics Council has just released recommendations aimed at bringing public perspectives into debates about biometrics. After running a series of workshops attended by citizens and experts in biometrics, it concluded that biometric technologies require stronger regulation, tougher oversight and clearer standards for best practice. More importantly, the Ada Lovelace Institute argues that for biometric technologies to be trustworthy, they require public debate alongside legal and ethical inquiry. At the end of the day, to ensure these technologies work for people and society, decisions about how they are developed and deployed must be informed by and aligned with their (our!) concerns, needs, and values.
