Calvin Rutherford

Safeguarding Against The Threat of AI-Generated Deepfake Fraud

In what may be a first-of-its-kind heist, a scammer used AI to pose as the CFO of a multinational firm on a video call. The purpose of the call was to direct the company's Hong Kong subsidiary to transfer more than $25 million to a series of bank accounts. Despite initial suspicions, the transfer went through because the deepfake video and audio were so convincing.

 

Deepfakes are digitally manipulated videos, images, or audio recordings made possible by AI. Initially developed for entertainment purposes, such as creating realistic special effects in movies, deepfake technology has since been adopted by malicious actors to perpetrate fraud, misinformation, and identity theft. Deepfakes have the potential to undermine trust, spread disinformation, and manipulate public opinion. In the context of financial fraud, deepfakes can be used to deceive individuals into transferring funds, divulging sensitive information, or engaging in fraudulent transactions under false pretenses. 

 

In the above example, the deepfake exploited two weaknesses. The first is the nature of video calls and the technology behind them. Video calls are typically low fidelity, which means the deepfake does not need to be perfect; it just needs to look and sound the way a video call is expected to. It also means that any audio or video glitches in the deepfake itself can be written off as a byproduct of the video conferencing software.

 

The second is human nature. Employees are encouraged to respect and defer to authority, so when a C-level executive makes a request or gives an order, the natural response is to make it happen. If the request seems reasonable, doubts may be overcome and the fraud may succeed.

 

So, how can organizations protect themselves against the growing threat of AI-generated deepfake fraud? Here are a few considerations. 

 

Don’t leave identity to human verification. Social pressure can overcome legitimate doubts, technology in the hands of fraudsters can obscure signs of true intent, and your frontline people are not identity experts. In fact, fraudsters rely heavily on social engineering precisely because people are a weak point. This is why we receive so many phishing emails, and it is what happened in the recent MGM ransomware attack and the Caesars attack before it.

 

Use a multi-factor method to verify identity. We all know about multi-factor authentication when it comes to logging into an account. Typically, the additional authentication step is used to bolster a weak password-based system, and it often takes the form of a code sent to a phone. Multi-factor authentication for identity, on the other hand, is different. It is used to ensure both that the identity is valid and that the person presenting it is the person the identity actually belongs to.
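As a rough illustration of the distinction (the function names and checks below are hypothetical, not any vendor's API), a code-based second factor only proves possession of the code, while an identity-bound factor requires a live biometric match against what was enrolled when the identity was first validated:

# Conceptual sketch only: contrasts a possession-based second factor
# with an identity-bound biometric factor. Names are hypothetical.
import hmac
import secrets


def code_based_mfa(expected_code: str, submitted_code: str) -> bool:
    """Classic second factor: anyone holding the code passes."""
    return hmac.compare_digest(expected_code, submitted_code)


def identity_bound_mfa(enrolled_biometric: bytes,
                       live_sample: bytes,
                       liveness_confirmed: bool) -> bool:
    """Identity-bound factor: the live sample must match the biometric
    enrolled at onboarding, and liveness must be confirmed (for example,
    by the device camera), so the factor cannot be relayed or shared
    the way a code or password can."""
    if not liveness_confirmed:
        return False
    # Placeholder comparison; a real system would use a biometric
    # matcher with a similarity threshold, not an exact byte match.
    return hmac.compare_digest(enrolled_biometric, live_sample)


if __name__ == "__main__":
    otp = secrets.token_hex(3)
    print(code_based_mfa(otp, otp))  # True: possession alone is enough
    print(identity_bound_mfa(b"enrolled", b"enrolled", liveness_confirmed=False))  # False without liveness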

 

In the case of a solution like Asignio, identities are validated at onboarding and then bound to a multi-factor biometric authentication step that is resistant to phishing and hacking. That step is a dynamic handwriting biometric: a unique series of symbols entered on a touchscreen. As the user is Signing, the device's camera also confirms that the owner of the identity is doing the signing live. Unlike a password, which works like a bearer bond (whoever has it can use it), an Asignio Sign cannot be verbally relayed or otherwise shared. The result is a validated identity that gives companies confidence that the person in front of them is who they say they are. All you need to do is have them Sign in with Asignio.

 

Click here to find out more. 
