
Google Cloud Launches Anti-Money-Laundering Tool for Banks, Betting on the Power of AI

Alphabet’s cloud business on Wednesday announced the launch of a new AI-driven anti-money-laundering product. Like many other tools already on the market, the company’s technology uses machine learning to help clients in the financial sector comply with regulations that require them to screen for and report potentially suspicious activity.

Where Google Cloud aims to set itself apart is by doing away with the rules-based programming that is typically an integral part of setting up and maintaining an anti-money-laundering surveillance program—a design choice that goes against the prevailing approach to such tools and could be subject to skepticism from some quarters of the industry.

The product, an application programming interface dubbed Anti Money Laundering AI, already has some notable users, including London-based HSBC, Brazil’s Banco Bradesco and Lunar, a Denmark-based digital bank.

Its launch comes as leading U.S. tech companies are flexing their artificial intelligence capabilities following the success of generative AI app ChatGPT and a race by many in the corporate world to integrate such technology into a range of businesses and industries.

Financial institutions for years have relied on more traditional forms of artificial intelligence to help them sort through the billions of transactions some of them facilitate every day. The process typically starts with a series of human judgment calls, then machine learning technology is layered in to create a system that enables banks to spot and review activity that might need to be flagged to regulators for further investigation.

Google Cloud’s decision to do away with rules-based inputs to guide what its surveillance tool should be looking for is a bet on AI’s power to solve a problem that has dogged the financial sector for years.

Depending on how they are calibrated, a financial institution’s anti-money-laundering tools can flag too little or too much activity. Too few alerts can lead to questions—or worse—from regulators. Too many can overwhelm a bank’s compliance staff, which is tasked with reviewing each hit and deciding whether to file a report to regulators.

Manually inputted rules drive up those numbers, Google Cloud executives argue. A user, for example, could tell the program to flag customers who deposit more than $10,000 or who send multiple transactions of the same amount to more than 10 accounts.
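For illustration only, here is a minimal sketch of what such hand-coded rules tend to look like; the thresholds, field names and helper function are assumptions modeled on the examples above, not part of Google Cloud's product or any bank's real system.

```python
# Hypothetical rules-based AML screen, for illustration only.
# Thresholds and field names mirror the examples in the text; they are not
# taken from Google Cloud's product or any real compliance program.
from collections import defaultdict

LARGE_DEPOSIT_THRESHOLD = 10_000  # flag deposits above $10,000
FAN_OUT_ACCOUNT_LIMIT = 10        # flag same-amount transfers to more than 10 accounts

def flag_customer(transactions):
    """Return the names of any rules triggered by one customer's transactions.

    Each transaction is a dict such as:
    {"type": "deposit", "amount": 12_500.0, "counterparty": "acct-123"}
    """
    alerts = set()
    fan_out = defaultdict(set)  # amount -> distinct counterparty accounts

    for tx in transactions:
        if tx["type"] == "deposit" and tx["amount"] > LARGE_DEPOSIT_THRESHOLD:
            alerts.add("large_cash_deposit")
        elif tx["type"] == "transfer":
            fan_out[tx["amount"]].add(tx["counterparty"])

    if any(len(accounts) > FAN_OUT_ACCOUNT_LIMIT for accounts in fan_out.values()):
        alerts.add("same_amount_fan_out")

    return sorted(alerts)
```

Every customer who trips a rule becomes an alert for a human analyst to review, which is why loosely tuned thresholds can quickly flood a compliance team.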

As a result, the number of system-generated alerts that turn out to be bad leads, or what the industry calls “false positives,” tends to be high. Research by Thomson Reuters Regulatory Intelligence puts the percentage of false positives generated by such systems at as high as 95%.

With Google Cloud’s product, users won’t be able to input rules, but they will be able to customize the tool using their own risk indicators or typologies, executives said.

By using an AI-first approach, Google Cloud says its technology cut the number of alerts HSBC received by as much as 60%, while increasing their accuracy. HSBC’s “true positives” went up by as much as two to four times, according to data cited by Google.
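To make that arithmetic concrete, here is a back-of-the-envelope illustration combining the figures cited in this article (a roughly 95% false-positive rate, a 60% drop in alert volume, and a two-to-fourfold rise in true positives) with an assumed baseline alert count; the starting numbers are hypothetical, not HSBC data.

```python
# Back-of-the-envelope precision estimate using hypothetical baseline numbers.
baseline_alerts = 10_000                 # assumed monthly alert volume (not HSBC data)
baseline_fp_rate = 0.95                  # ~95% false positives, per the research cited above
baseline_true_positives = baseline_alerts * (1 - baseline_fp_rate)   # 500

new_alerts = baseline_alerts * (1 - 0.60)          # 60% fewer alerts -> 4,000
new_true_positives = baseline_true_positives * 3   # "two to four times" more; assume 3x -> 1,500

baseline_precision = baseline_true_positives / baseline_alerts   # 5.0%
new_precision = new_true_positives / new_alerts                  # 37.5%

print(f"precision: {baseline_precision:.1%} -> {new_precision:.1%}")
```

Under those assumed numbers, analysts would review fewer than half as many alerts while surfacing roughly three times as many genuinely suspicious cases.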

Jennifer Shasky Calvery, the group head of financial crime risk and compliance at HSBC and the former top U.S. anti-money-laundering official, said the technology developed by Google Cloud represented a “fundamental paradigm shift in how we detect unusual activity in our customers and their accounts.”

For many financial institutions, ceding control to a machine-learning model could be a tough sell. For one, regulators typically want institutions to be able to clearly explain the rationale behind the design of their compliance program, including how they calibrated their alert systems. The usual line of thinking among banks and their regulators is that such systems should be tailor-made to the specific institution and its risk profile.

And while compliance experts say machine-learning-driven anti-money-laundering tools have improved over the years, their limitations have made some in the industry skeptical of their ability to substitute for a human’s capacity to figure out where the risks actually lie.

“There’s so much contextual information that isn’t accounted for by these systems,” Sarah Beth Felix, a consultant who helps banks vet and calibrate their anti-money-laundering tools, said of the existing tools on the market. “AI is only as good as the humans who train it.”

Google Cloud executives said they hope to ease these concerns, both by showing better results and through another feature of their product—what they called its “explainability.”

Instead of focusing on providing transaction alerts, the company’s product draws on a range of data to identify instances and groups of high-risk retail and commercial customers. Anytime the product flags a particular customer, it also provides information about the underlying transactions and contextual factors that led to the high-risk score, said Zac Maufe, global head of regulated industries solutions at Google Cloud.
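As a purely hypothetical sketch of what such a customer-level, explainable alert could contain, based only on the description above and not on Google Cloud's actual API or output schema:

```python
# Hypothetical shape of an explainable customer-level risk alert.
# Every field name here is invented for illustration; this is not the product's real schema.
example_alert = {
    "customer_id": "C-001",
    "risk_score": 0.91,
    "contributing_transactions": [
        {"id": "T-48210", "amount": 9_800, "type": "cash_deposit"},
        {"id": "T-48311", "amount": 9_750, "type": "cash_deposit"},
    ],
    "contextual_factors": [
        "deposits clustered just below a common reporting threshold",
        "activity inconsistent with the customer's stated business profile",
    ],
}
```

The point of a structure like this is that an analyst sees not just a score but the evidence behind it, the "homework" Maufe describes below.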

“We spent a lot of time making sure that the language that the model was able to provide to the analysts spoke their words,” Maufe said. “It’s not just ‘give them the answer,’ it’s also ‘show them the homework.’”

For her part, Calvery said HSBC won regulators’ acceptance of its new approach by testing and validating the new tool.

“As soon as we saw that [Google Anti Money Laundering AI] was finding more, and was doing it with significantly less noise…we started asking ourselves, ‘What’s the case for not using it?’” she said.

Write to Dylan Tokar at [email protected]

