There are dangers arising from artificial intelligence (AI) that might not be the kind that first comes to mind. Many involve autoML, automated machine learning that uses machine learning itself to generate better versions of its own models.
While autoML advertises the promise of “democratizing machine learning” by giving firms with limited enterprise analytics capabilities greater access to tools for solving complex business problems, these advancements have also brought the idea of “AI hubris” to light. Where do people and human expertise fit in an increasingly AI-enabled world?
McIntire Professors Ahmed Abbasi and Brent Kitchens tackle the problem in an Oct. 24, 2019, article for Harvard Business Review titled “The Risks of AutoML and How to Avoid Them.” Along with their co-author, Graduate Research Assistant Faizan Ahmad, the Commerce School faculty members propose methods for avoiding the hazards of using advanced machine learning methods with large-scale search query data. The result is what they term “augmented machine learning,” or augML, which supports the autoML concept by integrating experts, providing context for the results, and relying on complementary sources of data.
The research team (which also includes McIntire Professor Rick Netemeyer and Professor Donald Adjeroh of West Virginia University) also offers important considerations for managers and data scientists interested in making use of machine learning, including semi-automating the model development process, contextualizing machine learning with representation engineering (the intentional mapping of data into a meaningful custom architecture), and balancing depth with breadth through data triangulation.
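The first of those considerations, semi-automating model development, can be illustrated with a minimal sketch. The code below is not from the authors' article; it is a hypothetical, stdlib-only example in which an automated search scores candidate configurations (the autoML step), but the top candidates are surfaced for expert review rather than the single best scorer being deployed automatically (the human-in-the-loop step augML advocates). The `toy_score` function is a stand-in for a real evaluation such as cross-validated accuracy.

```python
# Hypothetical sketch of semi-automated model development:
# automation proposes candidates; a human expert makes the final call.
from itertools import product


def automated_search(score_fn, grid):
    """Score every hyperparameter combination in the grid (the 'autoML' step)."""
    candidates = []
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        candidates.append((score_fn(params), params))
    # Best-scoring configurations first.
    return sorted(candidates, key=lambda c: c[0], reverse=True)


def expert_review(candidates, k=3):
    """Surface the top-k candidates for human review instead of
    silently deploying the single best-scoring model (the 'augML' step)."""
    shortlist = candidates[:k]
    for score, params in shortlist:
        print(f"score={score:.3f} params={params}")
    return shortlist


# Toy scoring function standing in for cross-validated model performance.
def toy_score(params):
    return 1.0 - abs(params["depth"] - 4) * 0.1 - params["lr"]


grid = {"depth": [2, 4, 8], "lr": [0.01, 0.1]}
shortlist = expert_review(automated_search(toy_score, grid))
```

The design point is the boundary between the two functions: the search is fully automated, but its output is a reviewable shortlist, so domain experts can veto a model that scores well for spurious reasons.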