Adam Lieberman, Head of Artificial Intelligence and Machine Learning, Finastra

With ChatGPT reaching 100 million users within two months of its release, generative AI has become one of the hottest topics as people and industries weigh its benefits and ramifications. This has been further fueled by the fact that ChatGPT has inspired a host of new generative AI projects across industries, including the financial services ecosystem. Recently, it was reported that JPMorgan Chase is developing a ChatGPT-like software service for use by its clients.

On the other hand, as new stories about generative AI tools and applications spread, so do conversations about the potential risks of AI. On May 30, the Center for AI Safety released a statement, signed by more than 400 AI scientists and notable leaders, including Bill Gates, OpenAI CEO Sam Altman and “the godfather of AI” Geoffrey Hinton, expressing concern about serious potential risks.

Finastra has been closely following developments in AI for many years, and our team is optimistic about what the future holds, particularly for the application of this technology in financial services. In fact, AI-related efforts at Finastra are widespread, spanning areas ranging from financial product recommendations to mortgage process document summaries and more.

However, while AI can do much good, bank leaders—responsible for keeping customers’ money safe, a job they don’t take lightly—also need a clear understanding of what sets tools like ChatGPT apart from previous chatbot offerings, the initial generative AI use cases for financial institutions, and the risks that can arise with artificial intelligence, particularly as the technology continues to advance rapidly.

Not your grandmother’s chatbots

AI is no stranger to financial services, as artificial intelligence was already implemented in functions like customer interaction, fraud detection and analysis long before the launch of ChatGPT.

Unlike today’s large language models (LLMs), however, previous financial services chatbots were much simpler and rule-based. In response to a query, these earlier iterations would essentially search for a similar logged question, and if no such question was found, they would return an irrelevant answer, an experience many of us have no doubt had.

It takes a much larger language model to understand the semantics of what a person is asking and then provide a useful answer. ChatGPT and its peers combine domain expertise with a human-like ability to discuss topics. Bots like these, trained at massive scale, provide a much smoother experience for users than previous offerings.
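To make the contrast concrete, here is a minimal, purely illustrative sketch (not Finastra code) of how an old rule-based chatbot matches queries: it picks the logged question with the highest word overlap and, below a threshold, falls back to the familiar irrelevant answer.

```python
# Illustrative rule-based chatbot: match by word overlap, not semantics.
def keyword_overlap(a: str, b: str) -> float:
    """Fraction of words in `a` that also appear in `b`."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa) if wa else 0.0

# A tiny logged-question store (hypothetical entries).
FAQ = {
    "what is my account balance": "Your balance is shown on the dashboard.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def rule_based_answer(query: str, threshold: float = 0.5) -> str:
    best_q = max(FAQ, key=lambda q: keyword_overlap(query, q))
    if keyword_overlap(query, best_q) >= threshold:
        return FAQ[best_q]
    # The canned fallback users know all too well.
    return "Sorry, I don't understand the question."

print(rule_based_answer("how do i reset my password"))
print(rule_based_answer("why was my mortgage application declined"))
```

Because matching is lexical, a rephrased or out-of-scope question ("why was my mortgage application declined") falls straight through to the fallback; an LLM, by contrast, can interpret the intent behind wording it has never seen.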

Possible use cases

With a better understanding of how new generative AI tools differ from old ones, bank leaders need to understand the potential use cases for these innovations in their own work. No doubt the applications will expand exponentially as the technology develops further, but initial use cases include:

Document workloads: Financial documents can be hundreds of pages long and often take one person at least three days to review manually. With AI technology, this is reduced to seconds. Also, as the technology evolves, AI models could be developed that not only review documents but actually draft them, once trained to incorporate all the necessary terms and concepts.

Administrative work: Tools like ChatGPT can save bank employees significant time by taking over tasks like sorting and responding to emails and handling incoming support tickets.

Domain expertise: To give one example, consumers navigating the mortgage process often have questions because they may not understand all of the complex terms on applications and forms. Advanced chatbots can be integrated into the digital customer experience to answer these questions in real time.
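One way such a mortgage assistant could stay grounded is to assemble the model prompt from a vetted glossary rather than letting the model answer freely. The sketch below is hypothetical (the glossary, function names, and prompt wording are all assumptions, and the actual model call is omitted); it only shows the prompt-assembly step.

```python
# Illustrative sketch: build a grounded prompt for a mortgage Q&A assistant.
# All names and entries are hypothetical; the LLM call itself is not shown.
GLOSSARY = {
    "escrow": "An account held by a third party on behalf of buyer and seller.",
    "apr": "Annual percentage rate: the yearly cost of a loan including fees.",
}

def build_prompt(question: str) -> str:
    # Pull only the glossary entries whose term appears in the question,
    # so the model is steered toward vetted definitions.
    context = [f"{term}: {definition}"
               for term, definition in GLOSSARY.items()
               if term in question.lower()]
    return (
        "You are a mortgage assistant. Answer using only the definitions below.\n"
        + "\n".join(context)
        + f"\n\nCustomer question: {question}"
    )

print(build_prompt("What does APR mean on my application?"))
```

Constraining the prompt to approved definitions is one common mitigation for the control and data-integrity risks discussed below, since the model is discouraged from improvising answers about regulated products.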


While this technology has many interesting potential use cases, much is still unknown. Many of Finastra’s clients, whose job it is to be risk-aware, have questions about the risks AI presents. And indeed, many in the financial services industry are already moving to restrict the use of ChatGPT among employees. Based on our experience as a provider to banks, Finastra focuses on a number of key risks that bank leaders need to be aware of.

Data integrity is at stake in financial services. Customers trust their banks to keep their personal data safe. However, at this stage, it is not clear what ChatGPT does with the data it receives. This raises an even more troubling question: Could ChatGPT generate a response that shares sensitive customer data? With old-style chatbots, the questions and answers are predefined and govern what is returned. But what is prompted and returned with new LLMs can be difficult to control. This is an important consideration that bank leaders need to weigh and monitor closely.

Ensuring fairness and lack of bias is another critical consideration. AI bias is a well-known problem in financial services. If there is bias in the historical data, it will contaminate the AI solutions built on it. Data scientists in the financial industry and beyond must continue to explore and understand the available data and look for any bias. Finastra and its clients have spent years working and developing products to counteract biases. Knowing how important this is to the industry, Finastra named Bloinx, a decentralized application designed to build an unbiased fintech future, as a winner of our 2021 hackathon.

The road ahead

Balancing innovation and regulation is not a new dance for financial services. The AI revolution is here, and as with previous innovations, the industry will continue to evaluate this technology as it evolves to consider applications that benefit customers, always with an eye on customer security.
