As organisations have tried to stay ahead of cybercriminals, faces and other biometrics have become the passwords for many everyday functions. From unlocking phones to accessing financial and government services, important tasks are completed by verifying that the correct face is the one behind (or in front of) the camera.
However, just as they did with traditional passwords, fraudsters have become increasingly creative in circumventing facial recognition security. Many cybercriminals now rely on a technique known as “camera injection”, which leverages deepfake technology to beat facial recognition systems.
The reality is that deepfake attacks, in which bad actors impersonate others by manipulating video and audio, have been around for a long time. They are, however, a rapidly growing threat: widely available AI tools have made them both more sophisticated and more accessible to fraudsters, who can now carry out camera injection attacks with relative ease to bypass facial recognition systems.
This leads to a burning question: To what extent do deepfakes pose a threat to the fields of biometrics and cybersecurity?
Understanding camera injection attacks
Beating facial recognition software requires considerable sophistication, and attackers have devised tactics specifically designed to trick biometric and liveness detection tools, which verify that a live user is physically present in front of the camera. Camera injection is one such tactic.
Camera injection occurs when a fraudster bypasses the charge-coupled device (CCD) of a camera, essentially the “eye” of the camera, to directly inject pre-recorded content, a real-time face-swap video stream, or completely fabricated (deepfake) content. The pre-recorded content could be an unaltered video of a real person the bad actor is attempting to defraud, a clip in which someone’s face has been altered in some way, or footage of a completely synthetic face that does not exist.
Cybersecurity has been compromised
Once hackers have successfully passed this stage of verification, they can wreak havoc on individuals and organisations alike. With unauthorised access to accounts, they can commit identity theft and fraud and create fake profiles, with devastating financial and reputational consequences.
The main concern with the camera injection technique is that a successful attack may go entirely unnoticed. If the facial recognition technology in place believes it has properly verified a user’s identity when it has in fact been fooled by camera injection, fraudsters can sneak in repeatedly without detection.
Only when an account shows some kind of suspicious activity, like an unusual bank transaction, would an organisation realise it may have fallen victim to this kind of attack. In many cases, by the time an organisation detects the threat, the damage to the user’s account has already been done.
How can organisations bolster their defence against camera injection attacks?
While fraudsters’ tactics continue to evolve, so do the mechanisms designed to keep them out. Robust identity verification technologies, built on sophisticated liveness detection, can protect organisations from fraudsters employing the camera injection technique.
Here are some of the strategies that organisations can implement to detect fake videos:
- Establishing controls that detect compromise and recognise manipulation through forensic examination of video streams. Such controls include checking the camera’s technical details to determine whether a virtual camera device is being used instead of a real camera with a CCD sensor (a device-check sketch follows this list).
- Comparing natural motion, such as eye movement, expression changes, or regular blinking patterns, to the motion observed in the captured video (see the blink-detection sketch after this list).
- Injecting artifacts that alter the expected images in detectable ways, which can unveil fraudulent content. This includes changing camera parameters (such as ISO, aperture, frame rate, and resolution) and observing whether the expected changes occur in the capture, or altering the colour or illumination intensity of the device’s screen and checking for a corresponding reflection on the captured face (see the screen-tint sketch after this list).
- Utilising the device’s built-in accelerometer during the selfie verification process and checking that observed motion in the scene (e.g., faces or backgrounds) tracks the handset’s movement, which can reveal whether the camera feed has been compromised (see the motion-correlation sketch after this list).
- Conducting forensic analysis of individual video frames for signs of tampering (see the error-level-analysis sketch after this list).
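
To illustrate the first strategy, here is a minimal Python sketch that flags capture devices whose driver names match known virtual-camera software. It assumes a Linux host exposing cameras through V4L2 under /sys/class/video4linux; the list of names is illustrative rather than exhaustive, and a production system would pair this with deeper hardware attestation.

```python
# Minimal sketch: flag capture devices whose driver names suggest a
# virtual camera. Assumes a Linux host with V4L2 devices under
# /sys/class/video4linux; the hint list below is illustrative only.
from pathlib import Path

VIRTUAL_CAMERA_HINTS = ("obs", "v4l2loopback", "virtual", "droidcam", "manycam")

def find_suspect_cameras() -> list[tuple[str, str]]:
    suspects = []
    for dev in Path("/sys/class/video4linux").glob("video*"):
        name = (dev / "name").read_text().strip().lower()
        if any(hint in name for hint in VIRTUAL_CAMERA_HINTS):
            suspects.append((dev.name, name))
    return suspects

if __name__ == "__main__":
    for device, name in find_suspect_cameras():
        print(f"possible virtual camera: /dev/{device} ({name})")
```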
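For the motion-comparison strategy, one widely used liveness cue is the eye aspect ratio (EAR), which drops sharply during a blink. The sketch below assumes eye landmark coordinates are supplied by an external face-landmark detector (for example, dlib or MediaPipe); the 0.2 threshold and frame counts are conventional starting points, not tuned values.

```python
# Blink detection via the eye aspect ratio (EAR). Landmarks are assumed
# to come from an external face-landmark detector; thresholds are
# illustrative defaults, not production-tuned values.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of six (x, y) landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series: list[float], threshold: float = 0.2,
                 min_frames: int = 2) -> int:
    """Count blinks as runs of at least `min_frames` frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # count a blink still in progress at the end
        blinks += 1
    return blinks
```

A pre-recorded or synthetic stream often shows no blinks at all, or blink timing that fails to respond to on-screen prompts.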
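The screen-illumination challenge described above can be approximated as follows. In this sketch, show_screen_tint and the face bounding box are hypothetical stand-ins for app-specific display and face-detection code, and the brightness delta is illustrative.

```python
# Active challenge sketch: tint the screen and verify the captured face
# reflects the change. `show_screen_tint` and `box` are hypothetical
# placeholders for app-specific display and face-detection logic.
import cv2
import numpy as np

def mean_channel(frame: np.ndarray, box, channel: int) -> float:
    x, y, w, h = box
    return float(frame[y:y+h, x:x+w, channel].mean())

def passes_reflection_check(cam: cv2.VideoCapture, box, delta: float = 3.0) -> bool:
    _, baseline = cam.read()
    show_screen_tint((0, 0, 255))  # hypothetical helper: fill the screen red
    _, tinted = cam.read()
    # OpenCV frames are BGR, so channel 2 is red; a live face lit by the
    # screen should reflect measurably more red than the baseline frame.
    return mean_channel(tinted, box, 2) - mean_channel(baseline, box, 2) > delta
```

An injected video stream cannot see the device's screen, so it will fail to show the expected reflection.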
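For the accelerometer strategy, one simple approach is to correlate device motion with frame-to-frame image motion: an injected feed will not track the handset's movement. The sketch assumes both time series have already been captured and resampled to a common rate, and the correlation threshold is illustrative.

```python
# Sketch: when a handheld phone moves, the captured scene should move
# with it. Correlating accelerometer magnitude with frame-to-frame image
# motion is one way to spot a feed that ignores the real camera.
import numpy as np

def motion_correlation(accel_mag: np.ndarray, frame_motion: np.ndarray) -> float:
    """Pearson correlation between device motion and observed image motion."""
    a = (accel_mag - accel_mag.mean()) / (accel_mag.std() + 1e-9)
    f = (frame_motion - frame_motion.mean()) / (frame_motion.std() + 1e-9)
    return float(np.mean(a * f))

def feed_looks_injected(accel_mag, frame_motion, min_corr: float = 0.3) -> bool:
    # A pre-recorded or synthetic stream will not track the handset's
    # motion, so near-zero correlation is a red flag (threshold illustrative).
    return motion_correlation(np.asarray(accel_mag, dtype=float),
                              np.asarray(frame_motion, dtype=float)) < min_corr
```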
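Finally, a coarse example of per-frame forensics is error level analysis (ELA), which recompresses a frame as JPEG and looks for regions that respond unevenly, as locally edited areas often do. This is a heuristic sketch, not a complete tamper detector, and the thresholds are illustrative.

```python
# Error level analysis (ELA) sketch: recompress a frame as JPEG and
# measure how unevenly regions respond; edited areas often stand out.
import io
import numpy as np
from PIL import Image

def ela_map(frame: Image.Image, quality: int = 90) -> np.ndarray:
    buf = io.BytesIO()
    frame.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    return np.abs(np.asarray(frame.convert("RGB"), dtype=np.int16)
                  - np.asarray(recompressed, dtype=np.int16)).astype(np.uint8)

def frame_looks_tampered(frame: Image.Image, threshold: float = 25.0) -> bool:
    ela = ela_map(frame)
    # Compare the hottest 1% of ELA responses against the median; a large
    # gap suggests locally edited regions (threshold is illustrative).
    return float(np.percentile(ela, 99)) - float(np.median(ela)) > threshold
```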
Tapping on liveness detection
Facial recognition tools are meant to provide an added layer of security for organisations, and the emergence of the camera injection technique poses a legitimate threat to that extra protection.
According to a recent Jumio study, 53% of consumers in Singapore believe they can accurately detect a deepfake video, but the reality is that synthetic content is growing more sophisticated and harder to distinguish from the real thing. In a recent incident in China, an AI-powered video impersonator assumed the identity of the victim’s friend and scammed the victim out of more than $800,000.
As prevalent as the threat of synthetic content may be, sophisticated liveness detection during the identity verification process enables businesses to stay ahead of hackers attempting techniques like camera injection. With these resources at their disposal, organisations can be confident that malicious actors are being kept at bay while legitimate users can still access their accounts.
Stuart Wells is the chief technology officer of Jumio Corporation