At this point, most of us are familiar with the Hollywood narrative about artificial intelligence: As AI becomes more widespread and grows smarter, it will eventually turn its power against us.
Luckily, this dark fiction is still just that—a technological scary story.
Though AI is already delivering on many of its promises, simplifying myriad aspects of business for companies and consumers alike, the transition brings its own set of considerable challenges, particularly for the financial services industry.
No wonder, then, that AI was a topic of intense interest among business leaders and the student audience at 2019’s Analytics Colloquium. This year’s edition of the annual conference, hosted by the McIntire School of Commerce’s Center for Business Analytics (CBA), delved into the rewarding but demanding practice of integrating AI to enhance processes and operations across sectors.
CBA Director and IT Professor Ahmed Abbasi, who moderated the Sept. 6 Colloquium’s first panel on “Leadership in an AI-enabled World,” discussed the subject at length with CapTech Principal and Regional President Joanna Bergeron (A&S ’98, M.S. in MIT ’03), Deloitte Chief Data Scientist for DSJ Sectors Tom Kramer, Ipsos Public Affairs Senior Vice President Mark Polyak, and EY Principal Sacheen Punater.
During the session, panelists noted that when compared to even the recent past, AI is being implemented with a different set of concerns—the very issue taken up by EY Partner and CBA Board Member Yang Shim (McIntire ’96) and Abbasi in the white paper “How Do You Steer the Business When AI Is Running the Ship?”
The article examines the subject and highlights the CBA’s strong thought leadership partnership with EY in this important area. Written together with EY co-authors Chandra Devarkonda, Rita Kirzhner, and Tom Reilly, Shim and Abbasi’s paper details key considerations driving the demand to evaluate the potential impact of AI’s growing use.
As the technology becomes a ubiquitous component of business, we recently spoke to Shim and Abbasi about five critical factors financial services firms should take into account when using AI:
1. Machine learning models do more than ever before—and introduce more risk.
As AI is being relied upon for everything from anticipating human behavior and financial markets to managing security-related events, the potential for failure and misuse also rises.
Abbasi says that while AI and machine learning models offer organizations boosted efficiency and increased revenue, risk management remains a pressing concern at every level, from individual employees to entire industries.
“There are many types of risk. Privacy is a growing issue, but there’s also the problem of models behaving badly, and firms need to be resilient when they’re relying so heavily on autonomous, black-box models running in real time,” he says. “Data monetization also creates situations in which organizations must formally design frameworks from the outset. Then they need to think about where and how they will account for all of these types of risks.”
Shim expands on the idea: “Because of the level and amount of risk involved, companies have to introduce AI the right way, with foundations to support trust, privacy, security, and resiliency. Over the last six months, the industry perspective has moved past simply building and using models toward taking care of these foundational issues so that AI is truly robust and ready to go.”
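To make the resiliency concern concrete, here is a minimal sketch of one safeguard such a foundation might include: monitoring a live model’s inputs for drift, so an autonomous model running in real time doesn’t quietly degrade. The feature layout, data, and significance threshold below are illustrative assumptions, not details from the white paper.

```python
# A minimal sketch of input-drift monitoring for a deployed model.
# All names, data, and thresholds here are hypothetical.
import numpy as np
from scipy.stats import ks_2samp


def check_feature_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when live inputs no longer match the training-time distribution.

    Runs a two-sample Kolmogorov-Smirnov test per feature column; a small
    p-value means live data differs significantly from the reference sample.
    """
    drifted = False
    for j in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, j], live[:, j])
        if p_value < alpha:
            print(f"Feature {j} drifted (KS={stat:.3f}, p={p_value:.4f}) -- review model")
            drifted = True
    return drifted


# Hypothetical usage: compare a holdout sample from training to today's traffic.
rng = np.random.default_rng(0)
training_sample = rng.normal(0.0, 1.0, size=(5000, 3))
todays_traffic = rng.normal(0.4, 1.0, size=(5000, 3))  # shifted, so it should be flagged
check_feature_drift(training_sample, todays_traffic)
```

In practice, a risk framework of the kind Abbasi describes would route such a flag to a review process rather than a print statement, but the underlying check is this simple.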
2. Being able to explain how complex machine learning models work is crucial.
Along with assuming more complex tasks, AI and the machine learning models it employs are becoming ever more hands-off. All of this self-directed decision making can breed mistrust in the event of a misfire, especially if the processes and rules governing the models’ decisions aren’t well understood.
Financial services companies need to ensure that their models can be understood as well as trusted. Questions worth answering to ensure model transparency include:
- Do I know what tools are being built or leveraged throughout the organization?
- Do we have sufficient standards to control AI risk?
- What are the potential implications of using complex machine learning models for our business?
“Ultimately, firms must be ready with clear explanations and detailed plans,” Abbasi says.
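As one illustration of what such an explanation might draw on, the sketch below uses permutation importance, a model-agnostic technique that measures how much each input drives a model’s predictions by shuffling it and watching accuracy fall. The dataset and model choice are hypothetical, not prescribed by the paper.

```python
# A minimal sketch of explaining a black-box model via permutation importance.
# The synthetic dataset and gradient-boosted model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```

An importance report like this is one building block of the “clear explanations and detailed plans” Abbasi describes: it tells a reviewer which inputs actually move a model’s output, without requiring the model itself to be simple.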
3. Machine learning models must be taught how to play fair.
Controversial chatbot conversations. Biased recruiting tools. Discriminatory advertising.
Recent headlines have called out big-name tech companies for these types of gaffes, but the financial services industry isn’t immune, either.
Shim notes that as firms rely on AI to expand credit access, models can unfairly benefit some consumer groups and systematically disadvantage others. “Bias could rear its ugly head during data collection, preparation, modeling, evaluation, or deployment,” he says.
Though best practices for combating bias vary by region, organizations are best served by defining the purpose of the data they’re using, knowing how and from whom the data was sourced, being aware of how the data was prepared, and regularly testing model results for bias.
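As a minimal sketch of that last step, regularly testing model results for bias, the code below runs a demographic parity check on approval rates across groups. The data, group labels, and four-fifths threshold are illustrative assumptions, not a standard from the paper.

```python
# A minimal sketch of a demographic parity check on a model's decisions.
# Groups, data, and the 0.80 threshold are hypothetical.
import numpy as np


def demographic_parity_report(predictions: np.ndarray, groups: np.ndarray) -> None:
    """Compare the positive-outcome (e.g., credit approval) rate per group."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    for g, rate in rates.items():
        print(f"group {g}: approval rate {rate:.1%}")
    # One common rule of thumb: flag a disparity if any group's rate falls
    # below 80% of the most-favored group's rate (the "four-fifths rule").
    worst, best = min(rates.values()), max(rates.values())
    if best > 0 and worst / best < 0.8:
        print(f"WARNING: disparate impact ratio {worst / best:.2f} < 0.80 -- investigate")


# Hypothetical usage with a model's binary approve/deny decisions:
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=10_000)
preds = (rng.random(10_000) < np.where(groups == "A", 0.55, 0.40)).astype(int)
demographic_parity_report(preds, groups)
```

Run routinely at each of the stages Shim lists, from data collection through deployment, a check like this surfaces disparities before they become headlines.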
4. To benefit from AI, rethink workforce strategy.
No matter how you slice it, increased AI use has a tendency to stir up job security anxiety. Organizations poised to reap the most rewards from automation are wise to consider upskilling or retraining employees, reassessing organizational roles, providing open access to learning tools, and cultivating sustainable talent in house.
“AI also often dictates that teams or functions collaborate more closely, and effective communication between different teams, such as business units and technology teams, becomes vital,” Abbasi explains. “This is especially true at times when the lines become blurred about which team owns certain responsibilities. As organizations reevaluate existing roles and responsibilities, they’ll need fluidity to adapt to a future-state, AI-enabled environment.”
5. Third-party AI solutions need extra attention.
Predictive AI solutions made by vendors may work perfectly in one context but fail miserably in others. The gap between expectation and performance for third-party and open-source solutions means that organizations need new management guidelines for dealing with potential issues.
Compounding the problem are a lack of transparency, rapid and improperly tested tool development, potential bias, and a slew of privacy risks. These should motivate firms to set up a centralized structure for approving vendors, detail requirements for the tools they use, do due diligence on vendors and their software, and ensure compliance with emerging regulations by evaluating every vendor’s data protection standards.
“It’s about responsible growth,” says Shim. “Because at the end of the day, if you look at the banking industry, they are not saying, ‘We have the coolest experience or tech for you.’ It’s more important that they can say, ‘We are building trust for you and we’re protecting your privacy. That’s why you need to have this business relationship with us.’ The concept of trust is superseding the technology hype about what businesses can accomplish with AI.”