
‘Conditioning an entire society’: the rise of biometric data technology

In a school canteen in Gateshead, cameras scan children’s faces, automatically taking payment after identifying each pupil with facial recognition. More than 300 kilometres away, workers at nursing homes in north London recently took part in a trial that used facial data to verify their Covid-19 vaccination status. And in convenience stores across the country, staff are alerted to potential shoplifters by a smart CCTV system that taps into a database of people flagged as suspicious.

In each case, biometric data is being used to save time and money. But the growing use of our bodies to unlock areas of the public and private sphere has raised questions about everything from privacy to data security and racial bias.

CRB Cunninghams, the US company whose facial recognition technology is used in the canteens, has said its systems could speed up payments and reduce the risk of Covid-19 spreading through contact with surfaces. The system was first trialled last year at Kingsmeadow School in Gateshead, and dozens of schools have signed up to follow suit.

However, enthusiasm for the system may be waning after North Ayrshire council suspended use of the technology in nine schools following a backlash. The decision to pull back came after parents and data ethics experts raised concerns that the trade-off between convenience and privacy may not have been fully considered.

“It’s about saving time,” says Prof Sandra Wachter, a data ethics expert at the Oxford Internet Institute. “Is it worth having a database of children’s faces somewhere?”

Stephanie Hare, author of Technology Ethics, sees the use of children’s biometric data as a “disproportionate” way of shortening lunch queues. “You’re normalising children to see their bodies as something they transact with,” she said. “This is how you condition an entire society to use facial recognition.”

Experts fear that biometric data systems are not only flawed in some cases, but are increasingly entering our lives under the radar, with limited public knowledge or understanding.

There are salutary examples of how such technology can be used in worryingly authoritarian ways, and China offers some of the more extreme precedents. After a spate of toilet paper thefts from public toilets in a Beijing park, users were asked to undergo a face scan before paper would be dispensed; and in Shenzhen, pedestrians who crossed the road at a red light had their faces beamed on to a billboard.

In the US, it emerged in 2020 that a little-known company called Clearview AI had scraped social media sites such as Facebook to harvest users’ facial data, collecting more than 3 billion images that could be shared with the police.

Some of the technologies being rolled out in the UK appear more innocuous at first glance. Eurostar is testing whether facial data could be used to board its cross-Channel trains, using technology from the US company Entrust.

In Manchester, the city’s mayor, Andy Burnham, has held talks with FinGo, a startup whose technology analyses the unique pattern of the veins in people’s fingers.

Applications under consideration include paying for bus travel, accessing university facilities and dispensing prescription medication, while the city’s licensing authority has approved its use in pubs.

FinGo says it stores an encoded version of the finger vein pattern that cannot be reverse-engineered by thieves, and that it stores different segments of the data in different locations for added security.
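FinGo has not published the details of its encoding, but the general approach it describes, deriving a one-way template from the raw scan and splitting it across separate stores, can be sketched in outline. In this illustration the function names, the salted hash and the two-shard XOR split are all assumptions made for the sake of the example, not FinGo’s actual design:

```python
import hashlib
import os

def encode_vein_pattern(raw_scan: bytes, salt: bytes) -> bytes:
    """Derive a fixed-length, one-way template from a raw scan.

    A production system would first extract stable features from the
    scan (raw biometric readings vary between captures); a salted,
    iterated hash is used here purely to illustrate irreversibility.
    """
    return hashlib.pbkdf2_hmac("sha256", raw_scan, salt, 100_000)

def split_template(template: bytes) -> tuple[bytes, bytes]:
    """Split a template into two shards stored in separate locations.

    Simple XOR secret-sharing: shard_a is random, shard_b is the
    template XORed with it, so neither shard alone reveals anything.
    """
    shard_a = os.urandom(len(template))
    shard_b = bytes(t ^ a for t, a in zip(template, shard_a))
    return shard_a, shard_b

def recombine(shard_a: bytes, shard_b: bytes) -> bytes:
    """Reassemble the template for matching; both shards are required."""
    return bytes(a ^ b for a, b in zip(shard_a, shard_b))
```

Under a scheme like this, a thief who compromises a single store obtains only random-looking bytes, and even the recombined template is a one-way derivative rather than an image of anyone’s veins.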

Earlier this year, the London-based face verification company iProov trialled systems in three north London nursing homes operated by Springdene, allowing employees to check their Covid status using their faces.

That technology is not currently in use anywhere, iProov said, but it is one of several companies whose systems are embedded in the NHS app, deployed when users choose to verify their identity with their face to access services such as their Covid status or GP appointment booking.

Such applications have prompted concerns among technology experts and civil rights groups about how long user data is stored, how secure that data is and whether foreign law enforcement agencies can demand access to it.

Ankur Banerjee, chief technology officer at the digital identity startup Cheqd, points out that biometric technology relies on our trust in the people operating it. In Moscow, users of the city’s famous metro system can now pay with their face, a system that is voluntary, for now at least.

“That’s handy for 99% of people, but if someone turns up to protest against the government, the authorities suddenly have the ability to track who has been in and out, unlike with an Oyster-style card that may not be registered,” said Banerjee.

Some technologies already widespread in the UK have raised civil liberties concerns. London-based FaceWatch sells security systems that alert shop staff to the presence of a “subject of interest”, typically someone who has behaved antisocially or been caught shoplifting in the past. It began as a system for catching pickpockets at Gordon’s wine bar in central London, which is owned by FaceWatch’s founder, Simon Gordon.

Cameras scan the face of everyone entering a premises and compare it against a database of people flagged for attention.
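FaceWatch does not publish its matching pipeline, but watchlist systems of this kind typically convert each face into a numerical embedding and compare it against the embeddings of listed individuals, alerting staff only above a similarity threshold. The sketch below illustrates that generic approach; the threshold values and the borderline “manual review” band are illustrative assumptions, not FaceWatch’s implementation:

```python
import numpy as np

# Hypothetical watchlist: names mapped to precomputed face embeddings
# (vectors assumed to come from some face-embedding model).
WATCHLIST: dict[str, np.ndarray] = {}

MATCH_THRESHOLD = 0.80   # similarity above this: alert staff
REVIEW_THRESHOLD = 0.65  # borderline zone: refer to a human checker

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_visitor(embedding: np.ndarray) -> tuple[str, str | None]:
    """Compare a visitor's embedding to every watchlist entry.

    Returns ("alert", name), ("manual_review", name) for borderline
    scores, or ("no_match", None).
    """
    best_name, best_score = None, -1.0
    for name, listed in WATCHLIST.items():
        score = cosine_similarity(embedding, listed)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= MATCH_THRESHOLD:
        return "alert", best_name
    if best_score >= REVIEW_THRESHOLD:
        return "manual_review", best_name  # human double-checks
    return "no_match", None
```

Where those thresholds sit governs the trade-off between missed matches and false alerts, which is exactly where concerns about uneven accuracy come into play.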

However, Wachter has concerns about the prospect of this technology becoming more widespread. “Research has shown that facial recognition software is less accurate on people of colour and on women.” She also points to the danger that existing human prejudices become hardwired into supposedly neutral technology. “How can you be confident that the right people are on the watchlist? There is bias in selective policing and in the justice system.”

Nor, in many cases, is it clear who such systems are accountable to, or how individuals can challenge their judgments. “What if I’m wrongly accused, or the algorithm incorrectly matches me with somebody else?” asks Banerjee. “It’s a private judiciary where you have no means of redress.”

FaceWatch says it does not share its stored facial data with the police, although they can access it if a crime is reported. The company said it minimises the risk of misidentification by ensuring cameras are positioned in good lighting to increase accuracy, with any borderline cases referred to a manual checker. Anyone placed on a watchlist can appeal against the decision.

FaceWatch added that facial data is stored for up to two years, and is both encrypted and protected by bank-grade security.

However, Wachter points out that the security systems protecting our biometric data are only state-of-the-art until the day they are breached.

“The idea of a data breach is not a question of if, but a question of when,” she said. “Welcome to the internet: everything is hackable.”

She argues we should be wary of adopting technology just because it promises to make our lives easier. “The idea is that as soon as something is developed, it has a place in society,” she said. “But sometimes the price we pay is too high.”