Despite Deepfake and Bias Risks, AI Is Still Useful in Finance, Firms Told

A bank uses biased artificial intelligence outputs in a mortgage lending decision. An insurance firm’s AI produces racially homogeneous advertising images. Users of an AI system complain about a bad experience.

These are just a few of the potential risks AI poses for financial institutions that want to embrace the emerging technology, according to a series of papers released on Thursday. The papers, by FS-ISAC, a nonprofit that shares cyber intelligence among financial institutions around the world, highlight additional pitfalls as well, including deepfakes and “hallucinations,” in which large language models present incorrect information as fact.

Despite those risks, FS-ISAC outlines many potential uses of AI for financial firms, such as improving cyber defenses. The group’s work details the risks, threats and opportunities that artificial intelligence offers banks, asset managers, insurance firms and others in the industry.

“It was taking our best practices, our experiences, our knowledge, and putting it all together, leveraging the insights from other papers as well,” said Mike Silverman, vice president of strategy and innovation at FS-ISAC, which stands for Financial Services Information Sharing and Analysis Center.

AI is being used for malicious purposes in the financial sector, though in a fairly limited way. For instance, FS-ISAC said hackers have crafted more effective phishing emails, often refined through large language models like ChatGPT, that are intended to fool employees into leaking sensitive data. In addition, deepfake audio has tricked customers into transferring funds, Silverman said.

FS-ISAC also warned of data poisoning, in which data fed into AI models is manipulated to produce incorrect or biased decisions, and the emergence of malicious large language models that can be used for criminal purposes.
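
To make the data-poisoning risk concrete, below is a minimal sketch of a label-flipping attack, in which an attacker corrupts a fraction of the training labels before a model is fit. The dataset, model and flip rate are illustrative assumptions for the example, not details from the FS-ISAC papers.

```python
# Minimal sketch of data poisoning via label flipping.
# Dataset and model are synthetic and illustrative, not from the FS-ISAC papers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips 30% of the training labels before the model is fit.
y_poisoned = y_tr.copy()
n_flip = int(0.3 * len(y_poisoned))
y_poisoned[:n_flip] = 1 - y_poisoned[:n_flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"clean accuracy:    {clean.score(X_te, y_te):.2f}")
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.2f}")
```

The poisoned model typically scores noticeably worse on held-out data, the kind of silent degradation in model decisions the papers warn about.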

Still, the technology can also be used to strengthen the cybersecurity of these firms, according to the reports. Already, AI has proven effective in anomaly detection, or singling out suspicious, abnormal behavior in computer systems, Silverman said. In addition, the technology can automate routine tasks such as log analysis, predict potential future attacks and analyze “unstructured data” from social media, news articles and other public sources to identify potential threats and vulnerabilities, according to the papers.
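
As a rough illustration of the anomaly-detection use case, the sketch below trains scikit-learn’s IsolationForest on synthetic login records and flags an off-hours event with an unusually large transfer. The features and values are assumptions invented for the example.

```python
# Minimal sketch of AI-based anomaly detection on synthetic login records.
# Feature names and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical features per login event: hour of day, bytes transferred.
normal = np.column_stack([
    rng.normal(13, 2, 500),       # logins cluster around business hours
    rng.normal(2_000, 400, 500),  # typical transfer sizes
])
suspicious = np.array([[3.0, 50_000.0]])  # 3 a.m. login, huge transfer

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspicious))  # [-1] -> flagged for review
```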

To safely implement AI, FS-ISAC recommends testing these systems rigorously, continually monitoring them and having a recovery plan in case of an incident. The report offers policy guidance on two paths companies can take: a permissive approach that embraces the technology, or a more cautious one that places stringent restrictions on how AI can be used. It also includes a vendor risk assessment with a questionnaire that can help firms decide which vendors to choose, based on their potential use of AI.
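
For a sense of how such a questionnaire might be put into practice, here is a hypothetical sketch that encodes weighted yes/no questions and totals the weight of every “no” answer into a risk score. The questions, weights and scoring rule are invented for illustration and are not FS-ISAC’s actual assessment.

```python
# Hypothetical encoding of a vendor AI risk questionnaire.
# Questions, weights and scoring are invented; not FS-ISAC's assessment.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    weight: int  # higher weight = more risk if answered "no"

QUESTIONS = [
    Question("Does the vendor disclose where AI is used in its product?", 3),
    Question("Is customer data excluded from model training by default?", 5),
    Question("Are AI outputs monitored for drift and bias?", 4),
]

def risk_score(answers: list[bool]) -> int:
    """Sum the weights of every 'no' answer; lower scores are better."""
    return sum(q.weight for q, ok in zip(QUESTIONS, answers) if not ok)

print(risk_score([True, False, True]))  # -> 5
```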

As the technology evolves, Silverman expects the papers to be updated as well, providing an industry standard in a time of concern and uncertainty.

“The whole system is built on trust. So the recommendations that the working group has come up with are things that keep that trust going,” Silverman said. 



