Deepfakes — also known as synthetic media — can be used for more than impersonating celebrities and making disinformation more believable. They can also be used for financial fraud.
Fraudsters can use deepfake technology to trick employees at financial institutions into changing account numbers and initiating money transfer requests for substantial amounts, says Satish Lalchand, principal at Deloitte Transaction and Business Analytics. He notes that these transactions are often difficult, if not impossible, to reverse.
Cybercriminals are constantly adopting new techniques to evade know-your-customer verification processes and fraud detection controls. In response, many businesses are exploring ways machine learning (ML) can detect fraudulent transactions involving synthetic media, synthetic identity fraud, or other suspicious behaviors. However, security teams should be mindful of the limitations of using ML to identify fraud at scale.
Finding Fraud at Scale
Fraud in the financial services sector over the past two years has been fueled by the shift of many transactions to digital channels during the COVID-19 pandemic, Lalchand says. He cites three risk factors driving the adoption of ML technologies for customer and business verification: customers, employees, and fraudsters.
Though employees at financial services firms are typically monitored via cameras and digital chats at the office, remote workers are not surveilled as much, Lalchand says. With more customers signing up for financial services virtually, financial services firms are increasingly incorporating ML into their customer verification and authentication processes to close that surveillance gap for both employees and customers. ML can also be used to identify fraudulent applications for government assistance or identity fraud, Lalchand says.
In addition to spotting fraudulent Paycheck Protection Program loans, ML models can be trained to recognize transaction patterns that could signal human trafficking or elder abuse scams, says Gary Shiffman, co-founder of Consilient, an IT firm specializing in financial crime prevention.
Financial institutions are now seeing fraud emerge across multiple products, but they tend to search for fraudulent transactions in silos. Artificial intelligence and ML technology can help bring together fraud signals from across multiple areas, Shiffman says.
“Institutions continue to do the whack-a-mole, and continue to try and identify where fraud was increasing, but it was just happening from all over the place,” Lalchand says. “The fusion of information … is called CyFi, bringing cyber and financial data together.”
ML tools can assist in positively identifying customers, detecting identity fraud, and spotting the likelihood of risk, says Jose Caldera, chief product officer of global products for Acuant at GBG. ML can examine past behavior and risk signals and apply those lessons in the future, he says.
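The idea of weighing past behavior and risk signals can be illustrated with a minimal sketch. Everything here is invented for the example: the signal names, the weights, and the review threshold are assumptions, not any vendor's actual scoring model.

```python
# Illustrative sketch only: folding several behavioral risk signals into a
# single score that decides whether a transaction gets manual review.
# Signal names, weights, and the threshold are hypothetical.

RISK_WEIGHTS = {
    "new_device": 0.3,       # login from a device never seen before
    "unusual_amount": 0.4,   # amount far outside the customer's history
    "velocity_spike": 0.3,   # burst of transactions in a short window
}

def risk_score(signals):
    """Weighted sum of the recognized risk signals present for a transaction."""
    return sum(RISK_WEIGHTS[s] for s in signals if s in RISK_WEIGHTS)

def flag_for_review(signals, threshold=0.5):
    """Route the transaction to a fraud analyst when the score is high enough."""
    return risk_score(signals) >= threshold
```

A real system would learn such weights from labeled historical data rather than hand-code them, which is exactly the "apply those lessons in the future" step Caldera describes.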
The Limits of Machine Learning
Though ML models can analyze data points to detect fraud at scale, there will always be false positives and false negatives, and the models will degrade over time, Caldera says. Therefore, cybersecurity teams training the algorithm to spot fraud must update their models and monitor their findings regularly, not just every six months or every year, he says.
“You have to make sure that you understand that the process is not a one-time [task]. And … you need to have the proper staffing that would allow you to maintain that process over time,” Caldera says. “You’re always going to get more information, and … you need to be able to use it constantly on improving your models and improving your systems.”
For IT and cybersecurity teams evaluating the effectiveness of ML algorithms, Shiffman says they will need to establish ground truth — the correct or “true” answer to a query or problem. To do so, teams run a model against a test data set, using an answer key to tally its false negatives, false positives, true positives, and true negatives, he says. Once these errors and correct answers are accounted for, companies can recalibrate their ML models to identify fraudulent activity in the future, he explains.
Besides updating their algorithms to detect fraud, IT and cybersecurity teams using ML technology must also be aware of legal restrictions on sharing data with other entities, even to identify fraud, Shiffman says. If you’re handling data from another country, you may not be legally able to transfer it to the US, he says.
For teams looking to use ML technology for fraud detection, Caldera cautions that such tools are just one component of a fraud prevention strategy and that there is no single solution to the problem. After onboarding new customers, cybersecurity and IT professionals must keep track of how those customers' behaviors change over time.
“The use or not of technology or machine learning is just one component of your toolset,” Caldera says. “You as a business, you have to understand: What is the cost that you are putting to this, what is the risk tolerance that you have, and then what is the customer position that you want?”