Face recognition is one of the modern biometric authentication methods used for various security purposes. However, it remains vulnerable because a face scan alone cannot confirm that the user actually intends to authenticate. To address this, researchers have developed C2FIV, a two-factor authentication method for face recognition that pairs the usual face scan with a deliberate facial movement, such as a smile or a wink.
About C2FIV Technology For Face Recognition
From unlocking phones to more sophisticated systems, facial recognition serves as a trusted validation method. However, the technology remains vulnerable to exploitation: an adversary could gain access by scanning a victim's face while the victim is asleep, or by presenting the victim's photos.
The same vulnerability affects fingerprint and iris scans as well.
Thus, researchers from Brigham Young University (BYU), USA, have devised a strategy to prevent such exploitation.
As BYU disclosed, Dr. D.J. Lee and his student have devised a technology that serves as a two-factor authentication method for facial recognition. Named Concurrent Two-Factor Identity Verification (C2FIV), the technology combines precise facial movements with the usual face scan to verify identity.
Such validation is important to prevent unintended access, for instance during a forced verification attempt. According to Dr. Lee,
The biggest problem we are trying to solve is to make sure the identity verification process is intentional. If someone is unconscious, you can still use their finger to unlock a phone and get access to their device or you can scan their retina. You see this a lot in the movies — think of Ethan Hunt in Mission Impossible even using masks to replicate someone else’s face.
C2FIV relies on an integrated neural network framework for learning and recognition. As the researchers explain,
C2FIV relies on an integrated neural network framework to learn facial features and actions concurrently. This framework models dynamic, sequential data like facial motions, where all the frames in a recording have to be considered (unlike a static photo with a figure that can be outlined).
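The network's details have not been published in full, but a common way to model "dynamic, sequential data" like facial motion is to encode each frame with a small CNN and feed the per-frame features to a recurrent layer, so that the order of the frames carries information. Below is a minimal PyTorch sketch under that assumption; the class name, layer sizes, and embedding dimension are illustrative, not BYU's actual design.

```python
import torch
import torch.nn as nn

class SequenceEmbedder(nn.Module):
    """Hypothetical sketch: embed a 1-2 second clip of frames into one vector.

    Each frame passes through a small CNN; the per-frame features then feed
    a GRU, so the order of frames (the facial motion) matters, not just
    each frame's appearance.
    """

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 64) per frame
        )
        self.temporal = nn.GRU(input_size=64, hidden_size=embed_dim,
                               batch_first=True)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, 3, height, width)
        b, t, c, h, w = clip.shape
        feats = self.frame_encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, hidden = self.temporal(feats)             # hidden: (1, batch, embed_dim)
        return nn.functional.normalize(hidden[-1], dim=-1)  # unit-length embedding
```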
The researchers trained the algorithm on 8,000 videos of 50 subjects performing different facial movements. Trained on this data set, the technology identifies positive pairs (two recordings of the same person performing the same movement) with over 90% accuracy.
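The article does not spell out the training objective, but identifying "positive pairs" suggests pairwise metric learning. Continuing the sketch above, a contrastive loss of the following shape would pull embeddings of matching recordings together and push mismatched ones apart; the function name and margin value are assumptions.

```python
def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                     same: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """Illustrative pairwise loss; `same` is 1.0 for positive pairs, 0.0 otherwise."""
    dist = 1.0 - (emb_a * emb_b).sum(dim=-1)  # cosine distance on unit vectors
    pos = same * dist.pow(2)                  # positive pairs: pull together
    neg = (1.0 - same) * torch.clamp(margin - dist, min=0.0).pow(2)  # push apart
    return (pos + neg).mean()
```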
How C2FIV Works
A user sets up C2FIV by recording a very short video clip (1 to 2 seconds) of a particular facial movement while facing the camera. This can be a wink, a smile, a particular lip movement while pronouncing a word or phrase, or any other expression. The system then stores both the facial features and the movement.
When triggered with new input, the technology matches the facial movements and features against the data stored on a server. If the match exceeds a set similarity threshold, access is granted, as sketched below.
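Reusing the hypothetical SequenceEmbedder above, enrollment and verification could look like the following sketch: the enrolled clip's embedding is stored (on a server, per the article), and a new clip is accepted only if its embedding's cosine similarity to the stored one clears a threshold. The 0.8 threshold and the in-memory dictionary are placeholders.

```python
enrolled = {}  # user_id -> stored embedding; kept server-side in practice

def enroll(user_id: str, clip: torch.Tensor, model: SequenceEmbedder) -> None:
    """Record the user's chosen facial movement and store its embedding."""
    with torch.no_grad():
        enrolled[user_id] = model(clip.unsqueeze(0))[0]

def verify(user_id: str, clip: torch.Tensor, model: SequenceEmbedder,
           threshold: float = 0.8) -> bool:
    """Grant access only if the new clip is similar enough to the enrolled one."""
    with torch.no_grad():
        emb = model(clip.unsqueeze(0))[0]
    similarity = torch.dot(emb, enrolled[user_id]).item()  # cosine on unit vectors
    return similarity >= threshold
```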
Dr. Lee has filed a patent for the technology, which he hopes will see broader application across different purposes.
Let us know your thoughts in the comments.