How is AI Enabling Financial Crime?

I’ve been focusing my attention on fraud matters recently. First, I am researching check fraud solutions, as the theft of checks from mailboxes, followed by alteration and deposit, has skyrocketed. My theory is that our industry has done a pretty good job of controlling or remediating fraud in electronic payments, so organized crime has refocused its attention on checks. Combine that with the fact that most checks today are corporate or business checks that tend to carry much higher dollar amounts; crooks can wash and replace the payee information without having to alter the courtesy or legal amounts, thereby avoiding controls such as positive pay and getting away with their fraudulent activity. Yet it’s the potential fraud from AI that has our industry in an even greater lather.

AI has so much promise, and I will continue to highlight those positive uses in future blog posts. But AI can also create realistic, though completely synthetic, images and audio. A bank that relies on voice authorizations for large-value wires now has to wonder: how do I really know I am speaking to the authorized person at my customer and not a criminal using AI to fake that person’s voice? My thinking on this issue was informed by a white paper from BioCatch, based on an online survey of 600 professionals in fraud management, anti-money laundering, and risk and compliance. All respondents were at manager level or above at banking or financial services companies in the United States, Canada, France, Germany, Spain, the United Kingdom, the United Arab Emirates, the Netherlands, Sweden, Australia, and India.

The study highlighted that 91% of respondents are rethinking the use of voice verification for large-deposit and business customers given the risks of AI voice cloning, and 88% said more information should be shared among banks, financial institutions, and government or regulatory authorities to combat financial crime and fraud. That all makes perfect sense to me, and it is certainly commercially reasonable to require something other than just the sound of one’s voice to authorize transactions, particularly high-value ones. This might be a voice password or, better, a code calculated from the date, time, or other variables, so that it differs on every call yet is one the real customer and their bank would both recognize as a valid authorization.
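To make that concrete, below is a minimal sketch of what such a time-based verbal code could look like, assuming the customer and the bank have agreed on a shared secret through a separate, verified channel. The function name, 60-second step, and six-digit length are my own illustrative choices rather than any vendor’s product; the pattern mirrors the familiar time-based one-time password approach.

```typescript
// Minimal sketch of a time-based verbal authorization code (TOTP-style).
// Assumes a shared secret provisioned in advance between customer and bank;
// all names and parameters here are illustrative, not a specific product.
import { createHmac } from "crypto";

function verbalCode(sharedSecret: string, stepSeconds = 60): string {
  // Count 60-second windows since the Unix epoch and encode as 8 bytes.
  const counter = Math.floor(Date.now() / 1000 / stepSeconds);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));

  // HMAC the counter with the shared secret.
  const digest = createHmac("sha256", sharedSecret).update(msg).digest();

  // Dynamic truncation: pick 4 bytes from an offset derived from the digest,
  // then reduce to a six-digit code the customer can read over the phone.
  const offset = digest[digest.length - 1] & 0x0f;
  const binary =
    ((digest[offset] & 0x7f) << 24) |
    (digest[offset + 1] << 16) |
    (digest[offset + 2] << 8) |
    digest[offset + 3];
  return (binary % 1_000_000).toString().padStart(6, "0");
}

// Both parties compute the code during the call; the bank would typically
// also accept the previous window to allow for clock drift.
console.log(verbalCode("customer-and-bank-shared-secret"));
```

Because the code changes every window and never travels outside the call, cloning the customer’s voice alone would not be enough to pass the check; the criminal would also need the shared secret.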

There was another data point from the white paper that was a bit of a head-scratcher for me. The report stated that “sixty-nine percent of respondents said that criminals are more advanced at using AI for financial crime than banks are at using AI to fight crime, according to the survey. Seventy-two percent said their organizations faced cases of synthetic identities when onboarding new clients.” I don’t think it’s any secret that fake accounts are being set up at financial institutions, but I am not sure how AI is supercharging this effort. If I walk into a bank to open an account and show them my ID, is the bank then going to search for me online and check my LinkedIn and Facebook profiles to determine that I am a real person? I understand that AI can create online accounts and build a synthetic persona, but so can a real person. I don’t see why AI is being blamed for fake accounts being opened.

And how does opening an account with a synthetic identity create losses for the banks? If I create a fake account and then conduct business in that account, how does that negatively affect the bank? There has to be some other fraud taking place, such as depositing fraudulent items into that account. So why can’t the bank detect that fraud just as it would detect any other fraudulent activity on a non-synthetic account? I am not trying to minimize or belittle financial crime; it is serious business and deserves our attention and efforts to remediate it. But there seems to be an effort to demonize AI as somehow making it much easier to defraud a bank. The net result is that many banks are shutting down all AI tools within the bank, perhaps negating the very tools that might be deployed to reduce financial crime.

I have long advocated that banks offering an online account opening experience should require the applicant to take a contemporaneous picture with their phone, holding their ID right beside their face. That picture, along with its associated metadata, would go a long way toward thwarting much of the synthetic account opening process. I know there are naysayers who will claim that the picture and its metadata can be deepfaked, and they are correct. But the vast majority of bad actors creating fake accounts, whether human or AI-driven, will not get past this element of the process, especially if the FI makes taking the picture part of its account opening flow, taking control of the user’s camera and thereby eliminating the ability to insert an AI-created photo, as sketched below.
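For illustration, here is a minimal browser-side sketch of that capture step, assuming the account opening page requests the device camera directly rather than accepting an uploaded file. The element handling, endpoint path, and field names are hypothetical.

```typescript
// Minimal sketch: capture a live photo from the applicant's camera during
// online account opening. Endpoint and field names are illustrative.
async function captureIdSelfie(): Promise<Blob> {
  // Request the front-facing camera; the user must grant permission.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "user" },
  });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  // Draw the current frame to a canvas and encode it as a JPEG.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  stream.getTracks().forEach((track) => track.stop()); // release the camera

  return new Promise((resolve) =>
    canvas.toBlob((blob) => resolve(blob!), "image/jpeg", 0.9)
  );
}

// Hypothetical submission: the image goes up inside the same session, so the
// institution can record its own timestamp and device details server-side.
async function submitSelfie(applicationId: string): Promise<void> {
  const photo = await captureIdSelfie();
  const form = new FormData();
  form.append("applicationId", applicationId);
  form.append("photo", photo, "id-selfie.jpg");
  await fetch("/account-opening/selfie", { method: "POST", body: form });
}
```

Because the frame is captured inside the session rather than pulled from the device’s photo library, the institution can attach its own server-side timestamp and device details, which are harder to spoof than client-supplied photo metadata.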

Let’s not let unfounded concerns over AI keep us from examining how and where AI can be implemented as an important tool in providing financial services. Marketing, lending, fraud prevention and, yes, even new account opening can all benefit from AI-powered tools if they are properly vetted and implemented. Preventing the strategic use of AI is like making all your employees do addition and multiplication by hand because you are concerned about the risk associated with calculators. Being strategic about AI is smart, and you should be proactive in determining how and where it can be successfully used. If you have concerns or questions about AI’s use in banking, reach out to me at dpeterson@bankers-bank.com and let’s start a conversation about AI and its impact on financial services.

 

Resources

https://www.biocatch.com/ai-fraud-financial-crime-survey

 

The views expressed in this blog are for informational purposes. All information shared should be independently evaluated as to its applicability or efficacy.  FNBB does not endorse, recommend or promote any specific service or company that may be named or implied in any blog post.