The IRS/ID.me debacle: A teaching moment for tech

Last year, when the Internal Revenue Service (IRS) signed an $86 million contract with identity verification provider ID.me to provide biometric identity verification services, it was a monumental vote of confidence for this technology. Taxpayers could now verify their identities online using facial biometrics, a move intended to make it more secure for American taxpayers to manage their federal tax matters.

However, following loud opposition from privacy groups and bipartisan legislators who voiced privacy concerns, the IRS did an about-face in February, renouncing its plan. These critics took issue with the requirement that taxpayers submit their biometrics in the form of a selfie as part of the new identity verification program. Since then, the IRS and ID.me have added options that let taxpayers either opt in to ID.me's service or authenticate their identity through a live, virtual video interview with an agent. While this move may appease the parties who voiced concerns, including Sen. Jeff Merkley (D-OR), who proposed the No Facial Recognition at the IRS Act (S. 3668) at the peak of the debate, the very public misunderstanding of the IRS' deal with ID.me has marred public opinion of biometric authentication technology and raised broader questions for the cybersecurity industry.

Though the IRS has since agreed to keep offering ID.me's facial-matching biometric technology as an identity verification option for taxpayers, with the ability to opt out, confusion persists. The high-profile complaints against the IRS deal have, at least for now, needlessly weakened public trust in biometric authentication technology and handed fraudsters a reprieve. Still, there are lessons for both government agencies and technology providers to consider as the ID.me debacle fades in the rearview mirror.

Don’t underestimate the political value of a controversy

This recent controversy highlights the need for better education about the nuances of biometric technology: the types of content potentially subject to facial recognition versus facial matching, the use cases and privacy issues that arise from these technologies, and the regulations needed to better protect consumer rights and interests.

For example, there is a huge difference between using biometrics with explicit informed user consent for a single, one-time purpose that benefits the user, such as identity verification and authentication that protects the user's identity from fraud, and scraping biometric data at each identity verification transaction without permission, or using it for unconsented purposes like surveillance or even marketing. Most consumers do not understand that their facial images on social media or other internet sites may be harvested for biometric databases without their explicit consent. When platforms like Facebook or Instagram do disclose such activity, the disclosure tends to be buried in the privacy policy, described in terms incomprehensible to the average user. As the ID.me case shows, companies implementing this kind of technology should be required to educate users and capture explicit informed consent for the specific use case they are enabling.
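To make that distinction concrete, here is a minimal Python sketch of purpose-bound consent. The names (ConsentRecord, BiometricStore) and the design are invented for illustration and reflect nothing about ID.me's actual system; the point is simply that a stored biometric template is usable only for the one purpose the user agreed to, and any other use is rejected.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ConsentRecord:
    """A user's explicit, informed consent, bound to one purpose."""
    user_id: str
    purpose: str            # e.g., "identity_verification"
    granted_at: datetime

class BiometricStore:
    """Stores templates keyed by the consent that authorized them."""
    def __init__(self):
        self._templates = {}   # user_id -> (template, ConsentRecord)

    def enroll(self, user_id: str, template: bytes, consent: ConsentRecord):
        if consent.user_id != user_id:
            raise PermissionError("consent does not belong to this user")
        self._templates[user_id] = (template, consent)

    def get_for(self, user_id: str, purpose: str) -> bytes:
        template, consent = self._templates[user_id]
        # Purpose binding: a surveillance or marketing lookup fails here,
        # because the user only consented to one named purpose.
        if consent.purpose != purpose:
            raise PermissionError(f"no consent for purpose {purpose!r}")
        return template
```

Under this scheme, repurposing data is not a policy footnote but a hard failure in code, which is closer to what "explicit informed consent" should mean in practice.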

In other cases, different biometric technologies that appear to perform the same function are not created equal. Benchmarks like the NIST Face Recognition Vendor Test (FRVT) provide a rigorous evaluation of biometric matching technologies and a standardized means of comparing their accuracy and their ability to avoid problematic demographic performance bias across attributes like skin tone, age or gender. Biometric technology companies should be held accountable not only for the ethical use of biometrics, but also for equitable use that works well for the entire population they serve.
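As a toy illustration of the kind of measurement such audits formalize (this is not NIST's protocol, and the threshold is an invented placeholder), one can compute the false match rate separately per demographic group and inspect the spread:

```python
from collections import defaultdict

def false_match_rate_by_group(trials, threshold=0.8):
    """Per-group false match rate (FMR) at a fixed threshold.

    trials: iterable of (group, similarity_score, is_same_person).
    Real evaluations sweep the threshold and report FMR/FNMR
    trade-offs; this sketch fixes it for simplicity.
    """
    impostor_total = defaultdict(int)
    impostor_accepted = defaultdict(int)
    for group, score, is_same_person in trials:
        if not is_same_person:          # impostor comparison
            impostor_total[group] += 1
            if score >= threshold:      # wrongly accepted as a match
                impostor_accepted[group] += 1
    return {g: impostor_accepted[g] / n for g, n in impostor_total.items()}

# An equitable matcher shows similar FMRs across groups; a wide
# spread is the demographic performance bias benchmarks aim to expose.
```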

Politicians and privacy activists are holding biometrics technology providers to a high standard, and they should: the stakes are high, and privacy matters. As such, these companies must be transparent, clear and, perhaps most importantly, proactive about communicating the nuances of their technology to those audiences. One misinformed, fiery speech from a politician trying to win hearts during a campaign can undermine an otherwise consistent and focused consumer education effort. Sen. Ron Wyden, a member of the Senate Finance Committee, proclaimed, “No one should be forced to submit to facial recognition to access critical government services.” In doing so, he mischaracterized facial matching as facial recognition, and the damage was done.

Perhaps Sen. Wyden did not realize that millions of Americans submit to facial recognition every day when using critical services at airports, government facilities and many workplaces. But by not engaging with this misunderstanding at the outset, ID.me and the IRS allowed the public to be misinformed and the agency's use of facial matching technology to be portrayed as unusual and nefarious.

Honesty is a business imperative

Against a deluge of third-party misinformation, ID.me's response was late and convoluted, if not misleading. In January, CEO Blake Hall said in a statement that ID.me does not use 1:many facial recognition technology, the comparison of one face against others stored in a central repository. Less than a week later, in the latest of a string of inconsistencies, Hall backtracked, stating that ID.me does use 1:many recognition, but only once, during enrollment. An ID.me engineer referenced that incongruity in a prescient Slack channel post:

“We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search. But it seems we can’t keep doing one thing and saying another, as that’s bound to land us in hot water.”
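The 1:1 versus 1:many distinction at the heart of this dispute is easy to show in code. Below is a minimal sketch using face embeddings and cosine similarity; the function names and the threshold are illustrative assumptions, not ID.me's implementation:

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(probe, enrolled_template, threshold=0.8) -> bool:
    """Facial *matching* (verification): is this the claimed person?
    The probe is compared against ONE template the user enrolled."""
    return similarity(probe, enrolled_template) >= threshold

def identify_1_to_many(probe, gallery, threshold=0.8):
    """Facial *recognition* (identification): who is this person?
    The probe is searched against a repository of MANY faces."""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id   # None if nobody clears the threshold
```

In Hall's eventual telling, routine logins use the first operation, verification against the user's own enrolled template, while the second runs only once, at enrollment, as the fraud-fighting check the engineer's post describes.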

Transparent and consistent communication with the public and key influencers, using print and digital media as well as other creative channels, will help counteract misinformation and provide assurance that facial biometric technology, when used with explicit informed consent to protect consumers, is more secure than the legacy alternatives it replaces.

Get ready for regulation

Rampant cybercrime has prompted more aggressive state and federal lawmaking, and policymakers have placed themselves at the center of the push-pull between privacy and security. From there, they must act. Agency heads can claim that their legislative endeavors are fueled by a commitment to constituents' safety, security and privacy, but Congress and the White House must decide which sweeping regulations will protect all Americans from the current cyber threat landscape.

There is no shortage of regulatory precedents to reference. The California Consumer Privacy Act (CCPA) and its landmark European cousin, the General Data Protection Regulation (GDPR), model how to ensure that users understand what kinds of data organizations collect from them, how that data is used, what measures exist to monitor and manage it, and how to opt out of collection. To date, officials in Washington have left data protection infrastructure to the states. The Biometric Information Privacy Act (BIPA) in Illinois, along with similar laws in Texas and Washington state, regulates the collection and use of biometric data. These rules stipulate that organizations must obtain consent before collecting or disclosing a person's likeness or biometric data, store biometric data securely and destroy it in a timely manner. BIPA imposes fines for violations.
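For implementers, BIPA-style storage and destruction rules translate into concrete retention logic. The sketch below is a hypothetical schema, not legal guidance; it encodes the statute's rule that biometric data must be destroyed once the purpose for collecting it is satisfied, or within three years of the individual's last interaction, whichever comes first:

```python
from datetime import datetime, timedelta, timezone

# BIPA: destroy when the collection purpose is satisfied, or within
# three years of the person's last interaction, whichever comes first.
RETENTION_LIMIT = timedelta(days=3 * 365)

class BiometricRecord:
    def __init__(self, user_id: str, template: bytes):
        self.user_id = user_id
        self.template = template
        self.purpose_satisfied = False
        self.last_interaction = datetime.now(timezone.utc)

    def must_destroy(self, now=None) -> bool:
        now = now or datetime.now(timezone.utc)
        return (self.purpose_satisfied
                or now - self.last_interaction >= RETENTION_LIMIT)

def purge(records):
    """Keep only records the retention rules still permit us to hold."""
    return [r for r in records if not r.must_destroy()]
```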

If legislators were to craft and pass a law combining the tenets of the CCPA and GDPR with the biometric-specific rules outlined in BIPA, public confidence in the security and convenience of biometric authentication technology could be firmly established.

The future of biometrics

Biometric authentication providers and government agencies need to be good shepherds of the technology they offer and procure, above all when it comes to educating the public. Some hide behind the ostensible fear of giving cybercriminals too much information about how the technology works. But these companies' fortunes rest on the success of every deployment, and wherever there is a lack of communication and transparency, opportunistic critics will be eager to publicly misrepresent biometric facial matching technology to advance their own agendas.

While multiple lawmakers have painted facial recognition and biometrics companies as bad actors, they have missed the opportunity to weed out the real offenders – cybercriminals and identity crooks. 

Tom Thimot is CEO of authID.ai.
