Is it feasible to implement remote biometric identification in the European Union?

  • April 12, 2021

Retailers across the globe are increasingly adopting AR technologies that use facial recognition, among other biometric data, to integrate in-store shopping with a digital experience.

However, European Union law has placed remote biometric identification in the “high-risk” category under the General Data Protection Regulation (GDPR), as part of the EU’s five-year human-centric digital strategy to set global standards for AI advancements across sectors.

These AI applications include data handling and record-keeping, human oversight, facial recognition, gender identification, and colour identification, among other elements associated with biometric identification.

Through its laws, the EU distinguishes biometrics used for authentication from remote biometric identification.

A guideline provided by the EU’s High-Level Expert committee describes biometric authentication as a security mechanism that leverages unique biological characteristics to verify a claimed identity, whereas remote identification uses fingerprints, facial images, iris scans, or vascular patterns to identify multiple persons at a distance in a public space, in a continuous, ongoing manner, while also storing the data.
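The core of this distinction is algorithmic: authentication is a one-to-one (1:1) comparison against a single claimed identity, while remote identification is a one-to-many (1:N) search against an entire enrolled gallery. A minimal sketch can illustrate it; all names are hypothetical, and a plain distance threshold stands in for a real biometric matcher:

```python
import math

# Hypothetical similarity cutoff; real matchers tune this against error rates.
THRESHOLD = 0.5

def distance(a, b):
    """Euclidean distance between two feature vectors (stand-in for a real matcher)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(captured, enrolled):
    """1:1 verification: does the captured template match ONE claimed identity?"""
    return distance(captured, enrolled) < THRESHOLD

def identify(captured, gallery):
    """1:N identification: search the captured template against MANY enrolled
    identities, as a remote system scanning a public space would."""
    return [name for name, template in gallery.items()
            if distance(captured, template) < THRESHOLD]

gallery = {"alice": [0.1, 0.2], "bob": [0.9, 0.8]}
probe = [0.12, 0.21]

print(authenticate(probe, gallery["alice"]))  # True: checks one claimed identity
print(identify(probe, gallery))               # ['alice']: scans everyone enrolled
```

The regulatory concern tracks the second function: `identify` must hold and search templates for every enrolled person, which is why continuous 1:N matching in public spaces draws the stricter treatment.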

The GDPR terms the latter high-risk and hence forbids the practice, on the grounds that it poses risks to people’s privacy and fundamental rights.

However, the GDPR also leaves scope for implementing remote biometric identification in the EU if the biometric data is processed on a “limited number of grounds”, for reasons of public interest.

The high-level committee report notes that not all remote biometric identification is the same; it can vary by purpose, context, and scope of use.

This gives businesses aiming to implement AI leverage to demonstrate their scope of use to the supervisory authority and seek approval for implementation.

Under Article 36, businesses can hold a prior consultation with the GDPR supervisory authority, putting forward a data protection impact assessment prepared under Article 35.

Article 35 of the GDPR mandates a data protection impact assessment where processing, in particular using new technologies, is likely, given its nature, scope, context, and purposes, to result in a high risk to the rights and freedoms of natural persons.

Once the impact assessment is completed, the supervisory authority, under its mandated powers, can either permit the business to move forward with implementation under supervision or deny permission.

The EU has introduced these stringent laws to set global standards, and more debate on the subject is likely in the days ahead.

Businesses must remember that violating the GDPR can result in stringent action. In Sweden, a school was fined for running a pilot facial recognition program even after obtaining “consent”.

This was because the supervisory authority found that, during the pilot project, the school had processed more personal data than was required for recording attendance.

Secondly, the supervisory authority also noted that even though the school had obtained consent from the students’ guardians, there was no level playing field between the parties, making it a one-sided approach to tracking attendance.

The authority also stated that the school had less intrusive alternative measures available for tracking attendance.

Thus, despite the high risks involved, companies need to prove that their remote biometric identification serves a significant public interest.