Keynote - Ten Key Questions that a Company Should Ask to have Responsible AI

Learn key principles for building ethical AI systems, from data collection and bias mitigation to human oversight and environmental impact. Essential guidance for responsible AI development.

Key takeaways
  • AI systems should be proportional to the problem they aim to solve - avoid using complex solutions when simpler ones would suffice

  • Data is only a proxy for reality and often fails to capture qualitative aspects - avoid over-relying on data-driven decisions without human judgment

  • Systems must allow for human contestability, auditability, and the right to appeal automated decisions (see the audit-logging sketch after this list)

  • Practice data minimization: collect only the data you need and retain it only as long as necessary, following good data protection principles

  • AI systems should empower and augment human capabilities rather than fully replace humans

  • Bias cannot be fully removed, but it should be identified, measured and mitigated - transparency about limitations is crucial (a minimal measurement sketch follows this list)

  • Avoid creating fictitious categories or using arbitrary thresholds that oversimplify complex realities

  • Solutions should be technology-independent: regulate the use cases rather than the specific technologies behind them

  • Consider environmental impact - AI systems can have significant carbon footprints (see the emissions-tracking sketch below)

  • Regular validation, testing and maintenance are required to ensure systems work as intended over time (see the drift-check sketch below)

  • Ethics and responsibility should be considered from the design phase, not as an afterthought

  • Digital divides and inequality can be amplified by AI - consider impacts on underserved populations

  • Allow for “I don’t know” responses rather than forcing predictions with low confidence (see the abstention sketch after this list)

  • Explanations and transparency are crucial, especially for high-stakes decisions

  • Work with domain experts and ensure you have the right expertise for the problem you’re trying to solve
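
To make automated decisions contestable, each decision needs an auditable record that a reviewer can retrieve on appeal. Below is a minimal sketch of such a record; the schema and field names are hypothetical, not from the talk:

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry per automated decision (hypothetical schema)."""
    decision_id: str
    model_version: str
    inputs: dict          # the features the model actually saw
    output: str           # the decision that was made
    confidence: float     # model confidence, kept for later review
    timestamp: str

def log_decision(inputs: dict, output: str, confidence: float,
                 model_version: str, path: str = "decisions.jsonl") -> str:
    """Append the decision to a JSON-lines audit log and return its id,
    so the id can be given to the affected person for appeals."""
    record = DecisionRecord(
        decision_id=str(uuid.uuid4()),
        model_version=model_version,
        inputs=inputs,
        output=output,
        confidence=confidence,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record.decision_id
```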
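Bias cannot be removed outright, but it can be measured. One common illustrative metric is demographic parity difference: the gap in positive-outcome rates between groups. A minimal numpy sketch, assuming binary predictions and a binary group label:

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    """Gap in positive-prediction rates between two groups.
    0.0 means equal rates; larger values mean greater disparity."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: predictions for 8 people, 4 per group.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A single number like this does not prove fairness, but tracking it over time makes the "identify and measure" step concrete and supports being transparent about limitations.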
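Carbon footprint can be estimated rather than guessed. One option is the open-source codecarbon package, which meters energy use during a run; a minimal sketch, assuming codecarbon is installed and with a stub in place of the real training loop:

```python
from codecarbon import EmissionsTracker  # pip install codecarbon

def train_model() -> None:
    # Placeholder for the actual training loop.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="model-training")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```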
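Ongoing validation can start with a simple question: do live inputs still look like the training data? A minimal per-feature drift check using a two-sample Kolmogorov-Smirnov test (the significance threshold here is an illustrative choice):

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values: np.ndarray,
                    live_values: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """Flag drift when live data is unlikely to share the training
    distribution, per a two-sample KS test."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live  = rng.normal(loc=0.5, scale=1.0, size=5_000)  # shifted distribution
print(feature_drifted(train, live))  # True: the mean shift is detected
```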
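Letting a model say "I don't know" can be implemented as confidence-based abstention: only return a label when the top class probability clears a threshold. A minimal scikit-learn sketch on synthetic data (the 0.8 threshold is an illustrative choice, not a recommendation from the talk):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
clf = LogisticRegression().fit(X[:400], y[:400])

def predict_or_abstain(model, X: np.ndarray, threshold: float = 0.8):
    """Return class labels, or None where confidence is below threshold."""
    proba = model.predict_proba(X)
    labels = proba.argmax(axis=1)
    confident = proba.max(axis=1) >= threshold
    return [int(lbl) if ok else None for lbl, ok in zip(labels, confident)]

predictions = predict_or_abstain(clf, X[400:])
print(predictions[:10])  # None entries are the model's "I don't know"
```

Abstained cases can then be routed to a human reviewer, which also serves the goal of augmenting rather than replacing human judgment.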