Santander is addressing the growth of deepfakes, which internet criminals use to deceive customers into handing over private data or even transferring money into bogus accounts. Deepfakes are also used to misinform, impersonate, or damage the reputation of people and businesses.

A deepfake is a video, image or audio file that has been manipulated to look real, whether to impersonate someone or even to create the likeness of a person who does not actually exist.

Criminals use a method of artificial intelligence (AI) called deep learning to superimpose a person’s face on someone else’s body and to make it look like they are saying or doing something they, in fact, did not say or do.

Deepfakes have been used mainly for entertainment until now, but they are increasingly deployed as part of sophisticated hoaxes, or scams, to deceive or manipulate.

Criminals' use of deepfakes poses a direct risk to cybersecurity, with people and businesses in the firing line. Deepfakes created to spread fake news also carry a reputational risk for those they target.

Listen to this podcast (in Spanish) about deepfakes with Lisette Guittard, Global Head of Cyber Secure User Experience, and José Palacio, Global Head of Threat Detection and Cyber Security Operations.

Santander is prioritizing the detection of deepfakes and implementing measures to fight against them. According to José Palacio, Global Head of Threat Detection and Security Operations at Grupo Santander, there are three types of deepfake:

  • Voice cloning: AI simulates a person’s voice, which can be used, for example, to impersonate a relative or your bank representative on a phone call. “A 15-minute video is enough to clone anyone,” says Palacio.
  • Lip sync: The movement of a person’s lips is altered and the audio replaced so that a genuine video appears to deliver a false message. For instance, a video can be faked to show a politician or business leader recommending or warning against something they never said.
  • Face swap: This is the most complex technique of all. Algorithms swap one face for another in an image or video, so the “person” you see looks like someone you know and trust.

How Santander is tackling deepfakes

We provide our customers with tools to protect themselves against cyber risk in their day-to-day lives. We also have advanced monitoring and detection systems to prevent scams and fraud.

We regularly share tips and recommendations to help customers protect their digital banking credentials and other sensitive information.

If you suspect something, report it

Reporting a deepfake is essential so it can be taken down before it affects more people. If you receive a suspicious message from Santander, take a screenshot and either send it to reportphishing@gruposantander.com or share it on the available channels in the markets where we operate.

If you see a fake post or announcement by Santander or another company on a social network, you can also report it directly to the network.  
