AI is becoming a larger part of our lives by the day. As such, the need to study and understand its potential, its impact, and its pitfalls is behind UVA’s Darden-Data Science Collaboratory (DCADS) Fellowships for AI Research, awards that are, according to the academic center, designed to “foster collaborative, multidisciplinary academic research activity at UVA on topics at the intersection of data science and business.”
The recipients of the awards, UVA Data Science Professors Tom Hartvigsen and Mona Sloane, are working on separate projects with their own research teams that include McIntire’s Professor Steven L. Johnson and Professor Sarah Lebovitz, respectively, both of whom have extensive experience in AI-related scholarship and shared their expertise on the subject during McIntire’s Fall Forum in October 2023.
Johnson joins Hartvigsen and Professor Maarten Sap of Carnegie Mellon University to focus on detecting implicit bias in natural language with large language models. Lebovitz is collaborating with Sloane and Darden Business Administration Professor Roshni Raveendhran on the use of AI in recruiting.
Hartvigsen’s team intends to categorize how large language models represent linguistic biases, such as racial, gender-based, and ethnic stereotypes, and in turn, produce models that can continually update as biases evolve. The goal of the project is to develop models capable of quickly and efficiently finding toxic and biased language—something that will be of great benefit to a multitude of industries and the many users they serve.
“Most people would agree that being ‘fair and unbiased’ is a good thing. But what does that really mean? Bias and fairness are concepts we intuitively understand, but it’s impossible to create any simple rules to eliminate bias and ensure fairness,” says Johnson. “This is a big problem in online communication.”
He explains that the question of how platforms can cut down on toxic content in the absence of universally accepted rules is a problem rooted not only in technology but in society itself. “In this project, we will study how large language models, like ChatGPT, can provide more context-sensitive identification of toxic content. By combining both data science and information technology approaches, we can find more innovative solutions that combine the perspectives of both disciplines,” says Johnson.
Lebovitz has long admired the work of her research partners Sloane and Raveendhran, as they have each been studying AI technologies from their own disciplinary and methodological perspectives. After Sloane reached out to Lebovitz and Raveendhran, they began writing the grant proposal together, emphasizing their complementary perspectives, methods, and prior experiences studying the role of AI technologies used in recruiting.
“The proposed research will study how human resources professionals utilize AI in recruiting and investigate the existing landscape of recruiting AI tools,” says Lebovitz. “HR is a high-stakes decision-making context that has major implications for the hiring, employment, promotion, and evaluation of working individuals across the labor force. We aim to systematically examine various outcomes for recruiters that should enable the development of effective regulations and compliance mechanisms for deploying AI in HR.”
Another aim of the work is to produce a public database detailing the properties of recruiting AI tools, “a prerequisite for auditing at scale and other accountability and compliance mechanisms,” says Lebovitz, noting it will serve as “an essential contribution” to both the data science discipline and the growing body of research on AI-driven decision-making in business.