Those of you regularly reading these posts know that I am researching multiple Generative AI solutions. It is my firm belief that advanced AI tools will greatly benefit financial institutions, both for internal applications and customer-facing services. Yet concern over the use of GenAI has caused many institutions to block it entirely at the network level. I completely understand the question of whether GenAI content can be fully trusted, but it may also be a mistake to ignore GenAI as a valuable tool that can greatly increase the efficiency of bank employees or deliver valuable wisdom to customers.

What are the concerns that have bank IT and compliance professionals so worried? As a recent Financial Brand article highlights, there are potential downsides to using Generative AI without the appropriate safeguards, especially since most GenAI projects employ a Large Language Model (LLM). These issues include:

High Cost – Creating a GenAI solution is not a "one and done" exercise; it requires constant maintenance and "tuning" to ensure that it provides the expected outcomes.
Hallucinations – GenAI solutions based on LLMs can and do "create" information that is inaccurate.
Bias – LLMs have been shown to carry internal biases that may run counter to what a financial institution wishes to project.
Data Usage – Proprietary information can get into the solution and then be used inappropriately.

But are these potential issues enough for a financial institution to reject any use of Generative AI? I don't believe they are. The challenge lies in determining exactly what you are attempting to achieve with a GenAI tool. That first crucial step sets up additional "moments of truth" in a GenAI project lifecycle, ensuring that the proper safeguards are in place and the intended outcome is achieved. So how will you define an outcome? Let's separate this into two buckets: internal and external GenAI projects.
An internal outcome might consist of using GenAI to create better marketing posts. A Large Language Model request might be to create a series of social media posts touting the benefits of opening a primary checking account, targeted at a young millennial or Gen Z audience. This is exactly the type of creative outcome that LLMs were built to achieve. Do you take the output it creates and immediately post it to Instagram? Of course not! However, giving your marketing team content to vet, edit and subsequently post will ensure a more meaningful social media presence.

Another internal outcome might be to provide lending officers and loan operations with a systematic compendium of knowledge regarding the institution's loan policies, procedures and financial regulations. Unlike the marketing example above, this outcome would use a Small Language Model (SLM) type of GenAI service. The result would be a resource that those working in lending could query in plain English and get an immediate response, limited to only the content on which it was trained. This largely eliminates hallucinations and yields a powerful tool for lenders and loan ops personnel.

What about customer-facing outcomes? How about creating a financial resource to help younger customers and prospects learn more about financial services? One of the projects I am working on is a GenAI-powered "agent" that can provide financial wisdom. The version I am experimenting with is based on an LLM but is being trained to offer only financial wisdom, and only in a manner that follows the laws, rules and regulations governing what type of financial advice can be provided without specific licensing. Thus my agent, whom I have named Buckley, can explain why having a primary checking account is beneficial over just having Venmo, but won't opine on whether you should buy Apple stock.
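To make that topic restriction concrete, here is a minimal sketch of what one simple layer of such a guardrail could look like. Everything here is an illustrative assumption, not how Buckley is actually built: the function name, the keyword lists and the refusal wording are all hypothetical, and a production system would layer trained classifiers and output-side filtering on top of anything this basic.

```python
# Hypothetical sketch: a pre-response guardrail that screens a user's
# question BEFORE it ever reaches the language model. Keyword lists and
# names below are illustrative assumptions only.

BLOCKED_TOPICS = {
    # Specific, licensable advice the agent must refuse to give.
    "specific_securities": ["buy apple stock", "sell tesla", "which stock", "stock pick"],
    "tax_advice": ["tax shelter", "write off", "deduct"],
}

REFUSAL = ("I can explain general financial concepts, but I'm not able to "
           "recommend specific investments. Consider a licensed advisor.")

def guardrail(question: str) -> tuple[bool, str]:
    """Return (allowed, message). If blocked, message is the canned refusal."""
    q = question.lower()
    for topic, phrases in BLOCKED_TOPICS.items():
        if any(phrase in q for phrase in phrases):
            return False, REFUSAL
    return True, ""  # safe to pass the question through to the model

# General financial education passes; a specific stock pick is refused.
print(guardrail("Why is a checking account better than just using Venmo?"))
print(guardrail("Should I buy Apple stock?"))
```

The point of the sketch is the architecture, not the keyword matching: the compliance boundary lives in deterministic code you can test and audit, rather than relying solely on the model's training to stay inside the lines.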
His delivery can be fun and whimsical, such as when you ask him to provide general investment advice in the style of Snoop Dogg. Hey, if it is a fun tool to use, then it will get used, it will get shared and it might even go viral. That said, I am not confident that Buckley is ready for prime time. Since he is based on a Large Language Model, I am not yet assured that Buckley won't go off the rails and share inappropriate wisdom or, worse, deliver content that runs afoul of regulations. More work needs to be done to build and test guardrails before I would have Buckley interacting with customers, but I expect that to happen within the next eighteen months or so.

I recently had several calls with bank executives asking my opinion on how they should deploy GenAI tools and what their policies on its use should contain. My recommendations to them have turned into a series of steps that would govern any Generative AI project. Step 1 is to define the desired outcome of the project. When you can articulate that outcome with enough detail to make it unambiguous, you will have set in motion the rest of the project details that follow. I will cover more of these project elements in future posts, but for now, focus on the start of the process: defining the desired outcome.

Resources

https://thefinancialbrand.com/news/data-analytics-banking/artificial-intelligence-banking/an-ai-system-built-for-everything-is-an-ai-built-for-nothing-180290/

The views expressed in this blog are for informational purposes. All information shared should be independently evaluated as to its applicability or efficacy. FNBB does not endorse, recommend or promote any specific service or company that may be named or implied in any blog post.