How to Build an Ethical Human-AI System

In 2018, news broke that Amazon had scrapped its AI-based recruitment tool because it showed bias against women. The system had taught itself to favor male candidates, penalizing résumés that included the word "women's."

Building an ethical AI means rooting out bias, an uphill battle considering that algorithmic bias often traces back to numerous human cognitive biases. These challenges are magnified in human-AI systems, which must contend with both people and machines.

Download our free ebook, Extending Business Intelligence With Human-AI Systems

Understand how to safeguard your collaborative human-AI system from manipulation, bias and other ethical challenges.

Misuses of AI

For all the potential good it can do, AI can also be biased and manipulative. As yet, only 47% of organizations test for bias in data and human-AI systems.1

AI has been used for outright malicious ends, such as cyberwarfare. But beyond the obvious, many companies abuse the power of AI without considering the ethical repercussions, such as:

  • Surveillance practices that encroach on privacy: the AI Global Surveillance Index found that at least 75 of the 176 countries it studied use AI surveillance, often without considering whether the technology is being abused or violates human rights.
  • Manipulating human judgment: AI analytics can be twisted to manipulate human behavior. Cambridge Analytica harvested millions of American Facebook users' data and used it to target voters during the 2016 presidential campaign.
  • Spreading deepfakes: deepfakes can be used to manipulate and misrepresent. During Russia's invasion of Ukraine, AI-generated deepfakes were used to spread false narratives about the war, including a fabricated video of President Volodymyr Zelenskyy appearing to order his troops to surrender. Besides being morally wrong, deepfakes also erode people's trust in the media.

6 Points Organizations Should Consider Before Building Human-AI Systems

The emergence and proliferation of this technology have given rise to a new frontier in ethics and risk assessment. Amid the touted benefits of AI, there are growing fears of joblessness, bias and even AI dominance or singularity.

Businesses should have the following goals as they adopt AI into their daily operations:

  • Train their workforce in AI to safeguard their futures.
  • Safeguard against using AI to manipulate human behavior.
  • Verify during the training phase that AI systems perform as planned.
  • Eliminate bias from the data sets AI learns from (a minimal audit sketch follows this list).
  • Secure AI systems against cyberattacks.
  • Default to the most human outcome to avoid unintended consequences.
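
To make the bias goal concrete, here is a minimal sketch, in Python, of a pre-deployment audit of past decision data. The pandas DataFrame, the column names "gender" and "hired" and the use of the "four-fifths" heuristic from US employment guidelines are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of a data-set bias audit before any model training.
import pandas as pd

def selection_rate_audit(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return each group's selection rate relative to the best-off group.

    Ratios below 0.8 fail the common "four-fifths" disparate-impact
    heuristic and warrant deeper investigation of the training data.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Toy example: male candidates were selected far more often.
decisions = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "F", "F"],
    "hired":  [1,   1,   0,   0,   1,   0],
})
print(selection_rate_audit(decisions, "gender", "hired"))
# The "F" ratio here is 0.5, well below 0.8: a red flag in the data.
```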

Building an Ethically Safe Human-AI System

AI systems should be human-centered and focus on human autonomy and fundamental rights. The European Union's Ethics Guidelines for Trustworthy AI prescribe requirements to ensure a human-AI system is safe:

  • The AI system should not infringe on fundamental human rights.
  • Human agency should prevail.
  • AI systems should operate under human oversight, and a human should be able to override a system decision (see the sketch after this list).
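
As a minimal sketch of that oversight principle (not the EU's prescribed implementation), the routing logic below sends low-confidence model decisions to a human reviewer whose judgment is final. The functions model_predict and human_review, and the 0.9 threshold, are hypothetical stand-ins.

```python
# A minimal human-oversight sketch: low-confidence decisions escalate
# to a human reviewer, who can override the model's suggestion.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # the outcome applied
    confidence: float  # the model's self-reported confidence
    decided_by: str    # "model" or "human"

def decide(features, model_predict, human_review, threshold: float = 0.9) -> Decision:
    """Apply the model's decision only when confidence is high enough."""
    label, confidence = model_predict(features)
    if confidence < threshold:
        # Human agency prevails: the reviewer sees the model's
        # suggestion but makes the final call.
        return Decision(human_review(features, label), 1.0, "human")
    return Decision(label, confidence, "model")

# Usage with toy stand-ins for the model and the reviewer:
auto = decide({"score": 42}, lambda f: ("approve", 0.95), lambda f, l: "reject")
escalated = decide({"score": 7}, lambda f: ("approve", 0.55), lambda f, l: "reject")
print(auto.decided_by, escalated.decided_by)  # model human
```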

Some of the best practices to navigate ethical issues in human-AI collaboration are:

  • Transparency: some private and for-profit companies have gone so far as to publish their AI algorithms for public scrutiny.
  • Explainability: AI systems should be able to explain how they reached their outcomes and what data sets they used to get there (a minimal sketch follows this list).
  • Inclusivity: the biases in AI systems are a direct result of human biases; to counter them, companies need to ensure inclusivity at every step of AI design, build, implementation and use.
  • Technical robustness: the AI system should be built to withstand cyberattacks, data hacking and data poisoning, and should protect the privacy of human users.
  • User control of data: AI systems should use data encryption and other data-protection methods, and should ensure the quality of their data sets (an encryption sketch also follows).
  • Alignment: in the absence of a global AI framework, companies should build their AI systems within stringent guidelines that prioritize human benefit and safeguard against unintended consequences.
  • Accountability: AI systems should have processes in place to ensure responsibility and accountability through internal and external audits and impact-assessment tools.
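
The explainability practice above can take many forms; one minimal sketch uses scikit-learn's permutation importance to report which inputs a model actually relies on. The random-forest model and the synthetic data are stand-ins, not a prescribed method.

```python
# A minimal explainability sketch: shuffle each feature and measure
# how much held-out accuracy drops; large drops mark the features
# the model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```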

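For the user-control practice, one minimal sketch encrypts user records at rest using the Python cryptography package's Fernet recipe; the record format is illustrative, and real key management (a secrets manager, key rotation) is omitted here.

```python
# A minimal data-protection sketch: symmetric encryption of a user
# record at rest with the "cryptography" package's Fernet recipe.
from cryptography.fernet import Fernet

# In production the key would live in a secrets manager, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a user record before storing it; decrypt only when needed.
record = b'{"user_id": 42, "preference": "opt-out"}'
token = cipher.encrypt(record)
assert cipher.decrypt(token) == record
print("stored ciphertext:", token[:16], b"...")
```
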
While ethical issues surrounding AI mount, business leaders need a clear understanding of how to develop human-AI systems that are practical and understandable, perform as designed and don't cause harm.

Sign up for The Future of Leadership: Human and AI Collaboration in the Workforce from MIT Media Lab, co-directed by AI ethicist Dr. Kate Darling, to learn how to make the right choices to ensure ethical human-AI collaboration in your company.

The Future of Leadership: Human and AI Collaboration in the Workforce is delivered as part of a collaboration with MIT Media Lab and Esme Learning. All personal data collected on this page is primarily subject to the Esme Learning Privacy Policy.

© 2022 Esme Learning Solutions. All Rights Reserved.