One goal of the Federal Reserve is to facilitate a secure, safe payment system, but a Boston Federal Reserve Bank official this week pointed out a major, and growing, obstacle to that mission: synthetic identity fraud.
During the Boston Fed’s Six Hundred Atlantic podcast interview released Monday, Mike Timoney, vice president of secure payments and fintech at the Boston Fed, discussed the growing risk of synthetic identity fraud, the role credit cards play in constructing fake identities and the use of generative artificial intelligence to accelerate synthetic identity fraud.
Like the Federal Reserve, risk officers at financial institutions have also been paying greater attention to synthetic identity fraud, according to the annual Federal Reserve Financial Services survey released Tuesday.
Synthetic identity fraud involves threat actors stealing information, such as people’s driver’s licenses, checking accounts or Social Security numbers, from multiple real people and using it to construct a false identity, Timoney said.
Two key components of synthetic identity fraud are opening an account in the name of the fake person and establishing good credit for the false identity using a credit card. Those credit cards and accounts establish credibility for the fake identity and give fraudsters access to the financial system, he said.
“What happens when your credit rating goes up? Banks want to give you more money. They extend your credit, give you higher credit limits, right?” Timoney explained during the interview. “So, the fraudsters could continue to do that.”
And that’s “not a short-term scam, it’s really a long-term” one, he added.
A common tactic for accessing fraudulently obtained funds is using mule accounts, but some threat actors have been able to open their own accounts using synthetic identities, bypassing the need for a mule, Timoney said. Mule schemes involve manipulating a legitimate account holder into accepting the ill-gotten funds into their account, but that requires recruiting account holders and paying them a portion of those funds, he explained.
“By opening their own accounts, they don’t have to do that,” Timoney said. “What we’ve seen is this huge shift into now opening accounts through online portals, and so, therefore, the fraudsters have shifted.”
Generative artificial intelligence has helped accelerate synthetic identity fraud by giving threat actors a sophisticated tool to comb through stolen data and create fake identities faster, Timoney said. Using generative AI, cybercriminals can create collections of counterfeit identities, determine which fake profiles succeed and which do not, and then target the financial institutions where they have the most success, he added.
“It can make sure that it’s not duplicating things,” Timoney said. “It can try to leverage the learnings that it has from the data sets to make sure that it’s being as varied as possible, so that they have [a higher] success rate.”
To alert financial institutions about the rising risks, the Federal Reserve continues to update its synthetic identity fraud toolkit with new articles, Timoney said.
Meanwhile, the Federal Reserve Financial Services unit’s survey of more than 360 institutions suggests that synthetic identity fraud is becoming a bigger headache for financial firms.
Nearly four in ten survey respondents said synthetic identity fraud was a persistent (25%) or increasing (14%) problem for them, the Fed’s survey results showed. Also, nearly half described mule account activity as either an increasing (18%) or persistent (29%) issue, a concern that grew 12% from 2023. Mule accounts were among the primary drivers of fraud for financial institutions, the survey said.
Along with the Federal Reserve, financial firms have been warning each other of the role artificial intelligence has played in accelerating fraud. During the annual Money 20/20 conference in Las Vegas last fall, financial institutions sounded the alarm on the use of artificial intelligence tools that enable fraudsters to circumvent their monitoring for unusual activity.
One way financial firms can combat cybercriminals who are using AI is to partner with one another, Nicole Lauredan, partnerships leader for the payments processing software company Stripe, said during the conference. Stripe’s card issuer network enables financial firms to share data on fraudulent transactions and gives them shared access to the company’s anti-fraud services, Lauredan added.
Despite the use of generative AI to execute synthetic identity fraud more effectively, Timoney expressed optimism about combating the threat in the future.
“I’m not overly concerned that synthetic’s going to run away,” Timoney said during the interview released Monday. “I think that Gen AI is going to affect fraud in general on an upward swing. And then, ultimately, we’ll use it to fight it.”