How to Monitor Your Human-AI Systems and Avoid AI Fails

By 2023, companies worldwide are projected to generate $500 billion in revenue thanks to AI.1 Yet only 60% of AI projects are profitable.2

There are many reasons why AI projects fail, but lack of monitoring tops the list. Enterprises spend enormous amounts of time and resources on prepping their data, building and iterating on models, and even on deployment. But the post-production phase of a human-AI system, how it performs with real-world data and usage, is often left to chance.

Download our free ebook, Extending Business Intelligence With Human-AI Systems

Learn how to make your human-AI systems collaborative, sustainable and high-performing over the long term.

The truth is that human-AI systems are forever a work in progress; no AI model is final. As the AI ingests new data, as people use it (perhaps introducing input errors or breaking data pipelines) and as broader factors such as changing consumer behavior take effect, the data, the model and its predictions will begin to drift.
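To make drift concrete: a common first check is to compare the distribution of an input feature in live traffic against the distribution the model was trained on. Below is a minimal sketch in Python using a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 threshold are illustrative assumptions, not recommendations.

```python
import numpy as np
from scipy import stats

def detect_feature_drift(train_values, live_values, alpha=0.05):
    """Flag drift when live data no longer matches the training distribution.

    Uses a two-sample Kolmogorov-Smirnov test; `alpha` is an illustrative
    significance threshold, not a universal default.
    """
    statistic, p_value = stats.ks_2samp(train_values, live_values)
    return {"ks_statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}

# Hypothetical example: a feature whose distribution shifts in production.
rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)   # what the model saw
production_feature = rng.normal(loc=0.4, scale=1.0, size=2_000)  # what it sees now

print(detect_feature_drift(training_feature, production_feature))
```

In practice, a check like this runs per feature on a schedule, and persistent drift triggers a review or retraining rather than a one-off alarm.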

The Pitfalls of Human-AI Fails

Drifting from the original model is an innate part of working with AI. Sometimes the consequences are minor, like a mislabeled photo or a poorly targeted ad. But when the stakes are high, as in hiring or mortgage decisions, a miscalculation can cause serious harm.

In the last couple of years, consumer behavior has changed dramatically, and AI predictions built on older patterns have failed badly. Instacart, for example, saw the accuracy of its AI model for predicting in-store product availability drop from 93% to 61% because of changes in shopper habits.

Shifting behavior is only one failure mode. Other common reasons AI outcomes go wrong include:
  • The model was trained on incorrect or unrepresentative data, resulting in poor decision-making.
  • The training data was biased and the human team was not diverse enough, so the AI's outcomes were skewed (a simple check for this is sketched after the list).
  • There is no long-term data management in place to keep data sets aligned with real-world behavior.
  • There are no standards or governance in place to make the AI's outcomes explainable.
  • Business leaders are not committed to the AI project or are ill-qualified to monitor it.
  • There is no transparency or collaboration between the different teams using the AI system.
  • People using the system lack the skills and competencies to work with AI.
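On the bias point above, skewed outcomes can often be caught with a simple disaggregated check. The sketch below computes each group's rate of positive decisions and the ratio of the lowest rate to the highest; the four-fifths rule from US hiring guidance is one common yardstick for that ratio. The group labels and numbers here are hypothetical.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Compare positive-outcome rates across groups.

    `decisions` is a list of (group, approved) pairs. Returns each group's
    selection rate and the ratio of the lowest rate to the highest; ratios
    below roughly 0.8 are a common red flag (the "four-fifths rule").
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    rates = {group: positives[group] / totals[group] for group in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical mortgage-approval decisions logged with a group attribute.
decision_log = ([("A", True)] * 80 + [("A", False)] * 20
                + [("B", True)] * 55 + [("B", False)] * 45)
rates, ratio = disparate_impact(decision_log)
print(rates, round(ratio, 2))  # {'A': 0.8, 'B': 0.55} 0.69 -> worth investigating
```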

How to Monitor Your Human-AI Systems

Human-AI systems require more than basic health checks. The co-founder and CEO of Evidently AI notes that for human-AI collaboration to succeed, all stakeholders have to know how to work with the AI.3 She lists the question each stakeholder should ask:

  • Data scientist: is the model drifting?
  • User: is the prediction trustworthy?
  • Data science manager: should the AI be retrained?
  • Business user: how much value is the AI bringing?
  • Product manager: what are the model’s limitations?
  • Compliance manager: is the model safe?
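Several of these questions, drift in particular, can be answered from a single shared artifact. As an illustration of what that might look like, here is a sketch using Evidently's open-source library to compare reference data against production data. It assumes the Report and DataDriftPreset API introduced around Evidently 0.2 (the API has changed across releases, so check your installed version), and the CSV file names are placeholders.

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Placeholder data: `reference` is a sample the model was trained/validated on,
# `current` is what the model has seen recently in production.
reference = pd.read_csv("training_sample.csv")    # hypothetical file
current = pd.read_csv("last_week_requests.csv")   # hypothetical file

# One drift report that the whole cross-functional team can read.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("drift_report.html")
```

A shared report like this gives the data scientist drift statistics, the manager a retraining signal, and the business user something legible, without each role building its own tooling.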

 

Enterprises have to monitor their human-AI systems at both the functional and operational levels, and business leaders have to institute best practices for monitoring the following (a minimal sketch of such checks follows the list):

 

  • Data
  • Model
  • Predictions or other outputs
  • Alerts
  • Logs
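As promised above, here is a minimal sketch of what monitoring these items can look like in practice: each metric value is written to a log, and an alert fires when a threshold is crossed. The metric names, thresholds and alerting behavior are all illustrative assumptions.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("model-monitor")

# Illustrative thresholds; real values come from your own baselines.
# "min" alerts when the value falls below the limit, "max" when it rises above.
THRESHOLDS = {
    "accuracy": ("min", 0.85),       # model quality
    "missing_rate": ("max", 0.05),   # data quality
    "drift_p_value": ("min", 0.05),  # prediction drift (significant test result)
}

def check_metrics(metrics: dict) -> None:
    """Write every metric to the log; raise an alert when a threshold is breached."""
    for name, value in metrics.items():
        log.info("metric %s=%.4f", name, value)  # the audit trail (logs)
        rule = THRESHOLDS.get(name)
        if rule is None:
            continue
        mode, limit = rule
        breached = value < limit if mode == "min" else value > limit
        if breached:
            # In production this would page a human, e.g. via Slack or PagerDuty.
            log.warning("ALERT: %s=%.4f breached %s threshold %.4f", name, value, mode, limit)

# Hypothetical snapshot from one monitoring cycle.
check_metrics({"accuracy": 0.61, "missing_rate": 0.02, "drift_p_value": 0.001})
```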

 

Further, they have to put in place practices such as:

 

  • Building a culture of open data.
  • Establishing a cross-functional team for monitoring human-AI systems.
  • Keeping the monitoring process lean by centralizing tools and decentralizing people.
  • Initiating the monitoring process as early as the design phase to ensure greater accuracy.
  • Keeping the focus on your employees and what will benefit them during decision-making.
  • Training your workforce in how to use the AI, how to train it and how to document their usage.

 

Business leaders need to know that deployment is not the final step of an AI project. They need clarity on how to monitor their human-AI systems post-production.

Sign up for The Future of Leadership: Human and AI Collaboration in the Workforce from MIT Media Lab to learn how to deploy and monitor human-AI systems successfully in your organization.
