January 25, 2021

456 words 3 mins read

The Cybersecurity Fight Against Deepfakes

As the title suggests, cybersecurity and deepfakes sit on opposite sides of a fight. Deepfakes are close cousins of fake news articles and other misinformation, but with one distinguishing factor: they are fabricated videos and audio recordings that look and sound like genuine originals. Such manipulation was once mainly the preserve of intelligence agencies, whereas deepfake software is now freely available for anyone to download and use at their leisure.

Amateur video editors make fake videos for memes, editing speeches of politicians or famous actors so they appear to say something funny. Nevertheless, deepfakes are dangerous in general and could cause real chaos, for example if fake recordings of a candidate were released right before an election. They can also spark needless scandals and rumours. Marco Rubio, the 2016 presidential candidate, described deepfakes as the 'modern equivalent of nuclear weapons'.

Deepfakes have grown even more advanced with the help of AI, and that can cause serious trouble. A UK-based energy company learned this the hard way when a scam artist used a voice-altering AI tool to imitate the CEO of its German parent company and convinced an employee to transfer almost $243,000 to a Hungarian supplier.

Cybersecurity specialists have been predicting the rise of AI in cybercrime for some time now. The alarming part is the confusion deepfakes create: it is becoming hard to tell original video or audio apart from the fake files. Forrester predicts that deepfake scams could cost businesses up to $250 million.

Cybersecurity companies are now developing products to identify deepfake scams, and multinationals such as Google and Facebook are taking these scams very seriously. Tech companies have started building AI tools to counter AI-driven threats. Google, for instance, has released over a thousand deepfake videos to help researchers identify and combat deepfakes.
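To make the idea of automated detection a bit more concrete, here is a minimal sketch of how such a tool is commonly wired up: sample frames from a suspect clip and average a per-frame "fake" score. The `frame_score` stub and the file name are placeholders for illustration only; a real product would plug a trained classifier in at that point, and nothing below represents an actual Google or Facebook system.

```python
import cv2  # OpenCV, used here only to read video frames


def frame_score(frame) -> float:
    """Placeholder for a real deepfake classifier.

    A production detector would run a trained model on the frame and
    return a probability that it is manipulated. This stub simply
    returns 0.0 so the sketch stays runnable.
    """
    return 0.0


def score_video(path: str, sample_every: int = 30) -> float:
    """Average the per-frame fake scores over a sample of frames."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(frame_score(frame))
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0


# Hypothetical usage: flag a clip if its average score crosses a threshold.
print(score_video("suspect_clip.mp4"))
```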

Apart from these safety measures, what else can be done?

  • Get the training right. Employees must be made aware of deepfakes during cybersecurity training, for example by walking them through simulated deepfake scenarios such as handling a scam call.
  • Monitor the business’s online presence. Employees must immediately report suspicious activity, stay on the lookout for fake content concerning the organisation, and take steps to have that content removed.
  • Take verification seriously. Businesses and organisations must verify every individual user by introducing strong authentication methods, so that only authorised users can access information or the website (a minimal example of one such check follows below).
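As a concrete illustration of the verification point, here is a minimal sketch of a time-based one-time password (TOTP) check, one common second-factor method (RFC 6238). The shared secret and the login prompt are made-up placeholders for illustration, not part of any specific product.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second window
    msg = struct.pack(">Q", counter)                # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# Hypothetical usage: verify a code submitted as a second factor at login.
shared_secret = "JBSWY3DPEHPK3PXP"  # per-user secret, normally stored server-side
submitted = input("Enter the 6-digit code from your authenticator app: ")
if hmac.compare_digest(submitted, totp(shared_secret)):
    print("Second factor accepted")
else:
    print("Verification failed")
```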
Categories: Security