Florens Gressner | Trust in AI - Is your AI Compliant and Explainable? | Rise of AI Conference 2022

"Explore the essential requirements for AI compliance and explainability, including testing, data diversity, and systematic risk mitigation, to ensure trust in AI solutions and their applications."

Key takeaways
  • Automated testing and enforcement of quality requirements are crucial in high-risk applications.
  • AI solutions must be built on explainable and transparent models.
  • Performance metrics alone are not sufficient to capture edge cases, outliers, and adversarial threats (see the sketch after this list).
  • To mitigate risk, solutions must be trained on diverse and abundant data.
  • Adversarial attacks must be considered, along with corruption, noise, and other unwanted changes in the data.
  • There needs to be a systematic approach to testing AI systems and mitigating risk: prioritizing edge cases and updating models in response to defects and failures.
  • AI solutions must provide concrete risk-minimizing optimization, defect triage, and transparency during development.
  • A taxonomy for risk identification and prioritization is necessary for successful AI deployments.
  • Quality and consistency are paramount for trust in AI applications, including data governance and continuous testing and updating.
  • Human oversight and intervention are crucial in high-risk use cases such as autonomous systems.
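
The following is a minimal, hypothetical sketch (not from the talk) of why aggregate performance metrics can hide edge-case and robustness failures: the dataset, model choice, and the "edge slice" definition are all illustrative placeholders. It compares overall accuracy with accuracy on a rare slice and on noise-corrupted inputs.

# Illustrative sketch: aggregate accuracy vs. edge-case slices and corrupted inputs.
# All data, labels, and slice definitions below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: the bulk of the data follows one decision rule,
# while a small, rare "edge case" slice follows a different one.
X_bulk = rng.normal(0.0, 1.0, size=(1000, 2))
y_bulk = (X_bulk[:, 0] + X_bulk[:, 1] > 0).astype(int)
X_edge = rng.normal(4.0, 0.5, size=(50, 2))              # rare, out-of-distribution slice
y_edge = (X_edge[:, 0] - X_edge[:, 1] > 0).astype(int)   # labels follow a different rule here

model = LogisticRegression().fit(X_bulk, y_bulk)

X_test = np.vstack([X_bulk[:200], X_edge])
y_test = np.concatenate([y_bulk[:200], y_edge])
is_edge = np.concatenate([np.zeros(200, bool), np.ones(50, bool)])

# 1) Aggregate metric: looks acceptable because the edge slice is rare.
print("overall accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 2) Slice metric: the same model fails badly on the edge-case slice.
print("edge-slice accuracy:", accuracy_score(y_test[is_edge], model.predict(X_test[is_edge])))

# 3) Robustness check: re-evaluate under simple input corruption (Gaussian noise),
#    a stand-in for the corruptions and perturbations mentioned above.
X_noisy = X_test + rng.normal(0.0, 1.5, size=X_test.shape)
print("accuracy under noise:", accuracy_score(y_test, model.predict(X_noisy)))

The same pattern, evaluating per slice and under corruption rather than only in aggregate, is one simple way to make the edge-case and robustness requirements above testable.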