All Is Not What It Seems
You are the CEO of a large energy firm.
One day you receive a phone call from the boss of your German-based parent company. He requests a favour. He needs you to make an urgent payment of US$243,000 to a supplier on his behalf.
He promises that the parent company will reimburse you the funds later the same day.
You have spoken to the boss of your parent company many times in the past. You immediately recognise his German accent. The call even comes from his usual mobile number.
Do you process the payment?
This scenario actually happened.
The CEO of the energy firm transferred the requested funds to the supplier.
Later that day he received a second phone call from the parent company’s boss stating that the reimbursement had been sent. A subsequent third phone call, this time from an Austrian phone number, requested an additional urgent payment to the supplier.
Finally, the CEO became suspicious.
The reimbursement for the first payment had not yet been received. The fact that the third phone call came from an unrecognised Austrian number was another red flag.
The CEO refused to process the second payment request and started investigating.
After considerable digging, it emerged that the CEO had been scammed.
The fraudsters had apparently used artificial intelligence (AI) software to successfully mimic the German boss’s voice.
Once the funds had been transferred to the “supplier,” they had then been sent on to a bank account in Mexico, before being dispersed to a range of other accounts. This made tracing the fraudsters all but impossible.
This incident is an example of a new attack vector being employed by scammers.
Known as “deep fakes,” these scams use sophisticated AI technology to accurately mimic people. By creating deep fakes, fraudsters have the ability to realistically impersonate individuals in both video and audio formats.
According to the Australian Strategic Policy Institute:
“A deep fake is a digital forgery created through deep learning (a subset of AI). Deep fakes can create entirely new content or manipulate existing content, including video, images, audio and text. They could be used to defame targets, impersonate or blackmail elected officials and be used in conjunction with cybercrime operations.”
As the technology behind deep fakes becomes increasingly sophisticated, not to mention cheaper and more accessible, we are likely to see an uptick in this type of fraud. Presently, producing a deep fake authentic enough to deceive people requires a significant investment of time and labour.
However, that is rapidly changing.
Lyrebird is a subscription-based voice generation tool. It allows users to create a synthesised voice based on a small audio sample of an authentic voice. It is even possible to use a tool called Overdub to replace the words a person has spoken in a recorded message with new words.
Neither humans nor machines can currently detect a deep fake reliably.
How can eftsure help?
The use of deep fakes by fraudsters presents a significant challenge for any organisation’s Accounts Payable team.
Typical internal controls, such as conducting call backs before processing EFT payments, are clearly no match for such technologies. With no easy way to detect a deep fake, every organisation needs to seriously consider implementing additional layers of security.
Eftsure’s unique platform aggregates banking data from nearly 2 million Australian organisations. Before your Accounts Payable team processes any EFT payments, the banking details are cross-referenced against this database to confirm they are legitimate.
Eftsure gives you a level of knowledge that call backs can’t achieve.
The seamless process automatically confirms when the banking details match our database, and alerts you when you should pause to investigate further.
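In essence, the check compares the banking details on an invoice against an independently verified record before any payment goes out. A minimal sketch of that idea, assuming a simple in-memory store of verified supplier details (all names and data here are illustrative, not eftsure's actual platform or API):

```python
# Hypothetical pre-payment verification step: look up the supplier's
# verified banking details and compare them to those on the invoice.

# Illustrative store of verified details, keyed by supplier name:
# (BSB, account number). A real system would draw on a large,
# independently maintained database.
VERIFIED_SUPPLIERS = {
    "Acme Pty Ltd": ("062-000", "12345678"),
}

def verify_payment(supplier: str, bsb: str, account: str) -> str:
    """Return 'match', 'mismatch', or 'unknown' for a proposed EFT payment."""
    record = VERIFIED_SUPPLIERS.get(supplier)
    if record is None:
        return "unknown"       # supplier not on file: pause and investigate
    if record == (bsb, account):
        return "match"         # details agree with the verified record
    return "mismatch"          # details differ: a classic fraud red flag

print(verify_payment("Acme Pty Ltd", "062-000", "12345678"))  # match
print(verify_payment("Acme Pty Ltd", "062-000", "99999999"))  # mismatch
```

The point of the design is that a convincing voice on the phone never enters the decision: the payment is checked against data the fraudster cannot fake.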
Contact eftsure today for a no-obligation demonstration of the power of aggregated knowledge-sharing.