Bedford Group AI Webinar – Why Boards and CEOs Need to Elevate AI Governance to Manage Risk & Accelerate AI Adoption


On October 6 and October 20, 2020, The Bedford Group, in partnership with SalesChoice Inc. and EY, conducted a compelling webinar featuring guest speakers Dr. Cindy Gordon, CEO of SalesChoice, an award-winning SaaS company specializing in AI and Data Sciences, and Cathy Cobey, a technology risk partner and trusted AI advisor with EY. The webinar was moderated by Howard Pezim, Managing Director at The Bedford Group.

 

To view a Video Recording of this Webinar, please click here (YouTube).

To view an Executive Summary of the Webinar, please click below:

Bedford Group AI Webinar Summary – October 20 2020 Why Boards and CEOs Need to Elevate AI Governance to Manage Risk & Accelerate AI Adoption

 

Event Summary:

  • AI is growing at a pace of 30-45% per annum and has become a $126 billion industry in less than 5 years.
  • Companies across all sectors will either succeed or be left behind, depending on whether they invest in AI, develop a robust AI strategy and governance operating process, and build successful AI implementations that demonstrate ROI.
  • Adoption rates are currently low, at 17-30%. Major contributors to low adoption include a lack of executive knowledge of what AI is and how to advance it toward value outcomes; a shortage of (and the high cost of) AI skills and talent; and the challenges of designing and deploying successful use cases and sustaining operating practices with governance rigour.

 

Risks:

  • Lack of trust in AI is the #1 barrier to deployment; there is a general mistrust of, and aversion to, AI among consumers.
  • Mistakes and errors are also key AI risks. For example, through 2022 an estimated 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or development teams, as managing for data bias remains a challenge.
  • If businesses don’t move forward with AI, they risk forgoing the competitive advantages AI offers, such as better and more personalized services.
  • Reputation. If AI products/services cause problems for consumers, they can voice their displeasure via social media, eroding the company’s brand and reputation.
  • Compliance. Companies need to actively shape AI regulations and also understand related ethical principles. If they don’t, they risk developing products that don’t comply and increase regulatory/legal risks.
  • Legal. If companies don’t comply with regulations, lawsuits and financial penalties will follow. For example, a husband and wife with similar credit profiles applied for an Apple Card, and the husband received a credit limit 10 times higher than his wife’s. They sued, citing gender bias in the AI.

 

How to build a user’s trust in AI:

  • Ethics – AI must have an ethical foundation built into it, with ethical norms such as respect, fairness and transparency. Questions remain as to whether ethics can be encoded into AI and how ethical behaviour can be monitored.
  • Social Responsibility – From its design onward, AI technology must consider its impact on people’s well-being, its potential to help address societal issues and biases, and its consistency with the company brand.
  • Accountability – There must be clear lines of accountability, especially if/when there are problems, from the developer right up to executives overseeing the governance and operating practices.
  • Reliability – AI must be high performing on an ongoing basis so users know they can rely on it and trust it.

 

How to establish a trusted AI governance and control framework:

  • Ensure business purposes, governance and stakeholder engagement are aligned
  • Ensure AI solutions are scalable and deployable
  • Review data sourcing, profiling, processing, quality and ethical issues
  • Ensure models are fit for purpose, explainable and reproducible

AI risk management practices:

  • Establish a multi-disciplinary AI governance and ethics advisory board
  • Maintain an inventory of all algorithms, subject to impact/risk assessments
  • Use validation tools and rigorous bias-detection methods to ensure algorithms are fair and unbiased
  • Educate executives and AI developers on legal and ethical considerations and their responsibility to safeguard users’ rights and freedoms
  • Recruit third-party experts with proven track records to ease into the AI journey and build internal capabilities
  • Undergo independent AI ethics and design audits

 

For any further information, please feel free to contact: