Faculty

AI and Business Ethics: Professor Dorothy Leidner Finds the Human Through Line in the Maze of New Tech

While the impacts of AI can seem mind-boggling, Leidner, who has spent decades evaluating information systems, simplifies things by looking through the lens of human dignity.

Dorothy Leidner

By Haley Nolde

“Whenever things are possible, they usually happen.”

This observation feels somehow both exciting and a bit sinister—much like the new tools of artificial intelligence that are dominating public discourse and changing business and society with lightning speed. The words belong to Dorothy Leidner, who joined the McIntire School of Commerce faculty in January as the Leslie H. Goldberg Jefferson Scholars Foundation Distinguished Professor in Business Ethics.

In a recent interview, Leidner discussed the emergence of generative AI tools such as OpenAI’s ChatGPT and Microsoft’s new AI-powered Bing search engine. Her extensive background in information systems and award-winning contributions to the field uniquely position her to examine the new technologies through an ethical lens. With game-changing developments popping up in the news cycle almost daily, it can feel, for the moment, as if we’re hanging onto the leash of an untrained dog, running just to keep pace.

“You’re never going to be able to control new technologies that come out, because they always occur before the legal institutions and policy have time to catch up,” Leidner says. “What we have to do is try to govern how it’s used.” The lag, she explains, makes it crucial for organizations to plan how they will oversee their use of the new tools, and who will be responsible when things go wrong. Discussing the profound capabilities and potential pitfalls ahead, Leidner addresses unwanted outcomes of AI use; ways generative AI will change the nature of work; its influence on the use and misuse of employee data; and transparency regarding not only the sources AI systems are trained on, but also decisions that are made using the information they deliver.

We Can’t Avoid Unwanted Outcomes

Capable of generating original essays, emails, poems, and such instantaneously, the language model ChatGPT also spews out so-called hallucinations—confidently delivered, erroneous responses with no basis in fact. While these models will improve with use, Leidner maintains that some degree of inaccurate information is unavoidable. Likewise, biases that may be further propagated by generative AI models will be hard to excise, because those biases exist within the vast swaths of online content on which the models are trained. In short, there is no making the new technology perfect, especially before it has been put into widespread use.

“I don’t think you can possibly prepare ahead of time to make everything that comes out of an AI generation accurate or factual,” Leidner says pragmatically. “It’s not realistic. So, then, how are we going to govern the mistakes? What procedures of oversight are we going to set in place?” It will be up to individual organizations to develop governing policies.

The cart-before-the-horse pattern is nothing new, Leidner points out. “Unintended consequences have been happening for hundreds of years in technology development,” she says. Sometimes they lead to further development; sometimes they are quite bad. “We can’t stop that train,” Leidner says. “We can only try our best to govern what we see as the potential, and make adjustments as necessary.”

Unfortunately, we don’t have the best track record in governing the use of emerging technologies. Serving on faculties of universities abroad, Leidner has noticed stark differences between the laws and practices protecting personal data in other countries and those in the United States. She views the digitalization of personal data in the U.S. as not just a privacy issue, but an existential threat to our agency to define our identities. “Our country has always had this frontier mentality, where people can pick themselves up again, move, start over, and rebuild their lives…and we will be unable to do that,” Leidner warns. “You’re getting trapped by your own past based on data you don’t know is there, data that is forming a digital identity of you that companies have—and you can’t erase it.”

Examples are everywhere: employee surveillance systems, stored social media posts, recordings of conversations in our homes and cars captured by smartphones and smart speakers. Losing the ability to define oneself, and being defined instead by digital traces that don’t accurately depict who we are, is one of Leidner’s greatest concerns. She recommends that we all begin asking more questions: “First, is it factual what’s being gathered about me? Second, do I know it’s being gathered? And third, do I know what judgments, what conclusions are being drawn about me based on that data?”

We Still Need Humans

Generative AI tools certainly will change the nature of work, particularly in fields centered on the creation of content through text, images, and code, such as marketing, journalism, real estate, law, art, and architecture. However, as past technological innovations have done, AI will create a need for different types of work, sparking a shift in how we spend our time and effort. Time once spent writing, for example, will be spent fine-tuning queries in order to produce the desired content. Time will be spent evaluating AI-generated work. Is it complete? Is it accurate? Is it good?

Despite the jaw-dropping capabilities of AI, it’s worth remembering that it’s not all-powerful, Leidner reminds us: “It can’t give advice, it cannot motivate people, it cannot encourage people, and it cannot counsel people; it’s amoral, so it can’t tell you right from wrong. There’s so much it can’t do.” AI will be helpful in coming up with ideas, but even its ideas will not be entirely unique, as it’s pulling them from somewhere. Maybe we’ll eventually use citations to disclose that work is AI-generated, and perhaps even identify the model used, but for now, AI pulls and delivers content without reference, creating risks of copyright infringement and plagiarism that can’t be assessed.

Nonetheless, for education, Leidner finds AI’s potential exciting. It has sparked controversy already, with schools, universities, and individual educators divided over whether to allow its use. Recalling similar concerns when the internet became broadly accessible, and when Wikipedia launched, Leidner is again pragmatic, pointing out that lasting takeaways from a good education are not found in granular test answers, but in the holistic experience. So, she says, “Yes, let them use it.”

“If the questions we’re asking can be answered by ChatGPT, then we’re not asking the right questions,” she says with a laugh.

One of the greatest questions for the coming era may be whether the source matters. If one paid an architect for a design, does it matter whether the architect created it or skillfully coaxed an AI model to generate it? How about a novel? Does it matter whether a machine wrote it or a person with a wealth of lived experience? If work is only about performance, without regard for the relationships behind it, Leidner predicts “these systems will take over as much as they can. Anything they can do more efficiently than a human, they will.” If, however, work is not simply “about getting something done,” she says, “but about getting it done with someone we enjoy working with, whose relationship we value—if that matters, then AI will become a support, but it will not take over completely the job. We’ll see. It may be different in different industries.”

What Happens When a Computer Judges How You Feel?

What’s coming next, Leidner shares, is AI systems that read our state of mind, deploying facial recognition software to infer our emotions. These developments raise meaningful questions about the use and potential abuse of data, along with concerns about privacy and, once again, unintended consequences.

The intentions behind this type of AI system are generally good, and the outcomes can be positive. Already in use, Leidner notes, are AI systems that can detect when a truck driver is getting drowsy. Systems designed for marketing, which compare how a customer feels when walking into a store with how they feel when leaving, may also be used to scan workers in hopes of preventing workplace violence. Perhaps such systems could be leveraged within schools to identify (and ideally reduce) anxiety and depression at moments when students might be unlikely to turn to someone for help.

But, how do we address consent? What happens when AI gets us wrong? How do we prevent our data from getting out and being misused? How long is it kept? We need to establish policies, Leidner says. “There’s a delicate balance with technologies that are gathering our private data,” she adds. “There’s the potential to do something really helpful, but over time, there’s also the danger that you’re never able to escape it. We need to think through these things.”

The rising use of surveillance technologies to monitor employee productivity, for both in-person and virtual work, already has taken a toll on employees’ dignity. Newer AI systems that scrape organizational data to locate talent internally and issue assignments, Leidner says, can have the unintended result of trapping a worker in what he or she already knows, preventing advancement or growth. Health data has even more potential for outliving its usefulness, as AI systems can turn a temporary condition into an indelible label. Here, too, is our tendency to put the cart ahead of the horse. “If it’s possible to gather it, we gather it,” Leidner says, “and then figure out what to do with it.” If we don’t want our data to define us, she adds, “we need to come up with policies about how long it is kept.”

Three Things to Ask Yourself When Using AI

What is the source? Does it matter? Do I want to know what it can tell me?

When we do want a machine to generate something for us, we should consider its sources—the data on which an AI system was trained—and whether that matters for our use. Some AI language models may be able to reference the source of the answers they give, but when they are scraping data from all over the web, that may not be possible or accurate. If we’re asking a model to find us something fun to do for the weekend, of course, the source doesn’t matter so much.

For business applications, Leidner says, internal generative AI models will probably serve companies best, scraping data only from within the organization. A leader might use an AI tool to generate a list of people in the company who have knowledge of international mergers and acquisitions law, for example. The potential for knowledge management is great, and “the potential for accuracy is going to be higher when we limit the boundaries of [the source],” she says.

For personal matters, the use of AI gets more unwieldy. Leidner believes we will have models that can predict things that would influence our lives and our decision-making in significant ways. A series of conversations held in the presence of an AI model, for example, could allow it to predict a couple’s likelihood of divorcing with considerable accuracy. “If it’s going to give us that information,” Leidner asks, “wouldn’t we want to know how it came up with it?” In some cases, we may not want to know what AI can tell us. When we have tools that are able to use our DNA to tell us if we’re predisposed to certain illnesses, will we want to know? If it’s something we can’t prevent, or if knowing leads us to change the course of our lives, Leidner ponders, “might it actually burden us much more than it frees us?”

Simpler Than It Seems

While the impacts of AI can seem mind-boggling, Leidner, who has spent decades evaluating information systems, simplifies things by looking through the lens of human dignity.

“Human dignity,” she says, “transcends ethics, religious backgrounds, and ethnicities, because each one of us has inherent value as an individual, and that inherent value needs to be respected and recognized.” Homing in on the dignity and well-being of the humans who make and use technology will clarify the landscape, even as it continually changes and expands.

At UVA, Leidner is conducting research on maintaining dignity within employee surveillance practices and on the rise and ramifications of online public shaming. This fall, she’ll teach a class at McIntire on technology ethics that will cover AI, as well as some history of technology. “A lot of the issues we face today,” she insists, “are not so dramatically different than what we’ve faced before.”
