Following several recent high-profile examples of “machine learning gone wrong,” such as a Twitter chatbot shut down after a single day for its obscene and inflammatory tweets, the problem of bias has loomed large over machine learning.
In a recently published Harvard Business Review article, McIntire Murray Research Professor and Director of the Center for Business Analytics Ahmed Abbasi, IT Professor Jingjing Li, and their colleagues Gari Clifford and Herman Taylor discuss how bias introduced at various stages of the machine-learning process can affect how models perform.
Recognizing the need to alleviate such bias in the models of their federally funded “Stroke Belt” project (with McIntire Professors Rick Netemeyer and David Dobolyi), the researchers borrowed the concept of “privacy by design,” popularized by the EU’s General Data Protection Regulation (GDPR), to implement a “fairness by design” strategy in their patient-centric mobile/IoT (“Internet of Things”) platform, which was created to help those at early risk of cardiovascular disease.
Drawing on their experiences, the McIntire team offers five actionable recommendations for designing for fairness. Read the full article in Harvard Business Review to learn how companies and data scientists can overcome the types of bias that can interfere with analytics models and the machine-learning process.