Prof. Dr. Joanna Bryson | No one should trust AI | Rise of AI Conference 2023

No one should trust AI; instead, we should focus on designing systems that are transparent, accountable, and respectful of human autonomy, understanding that AI is not a moral agent.

Key takeaways
  • Trust is a relationship: Trust holds between peers who recognize and understand each other’s intentions, limitations, and biases.
  • Responsibility is key: Responsibility is the fundamental concept to address in AI, because it is what allows us to acknowledge our actions and be held accountable for them.
  • No one should trust AI: AI systems are not moral agents, and we should not trust them as if they were. They are designed to benefit their creators, not to act in the best interest of humans.
  • Auditing and transparency: AI systems should be designed with auditing and transparency in mind to ensure accountability and detect bias and errors.
  • AI is not a moral agent: Because AI systems are not moral agents, it is unfair to hold them responsible for their actions. We should design systems that are transparent and accountable rather than try to make them moral agents.
  • Consent and agency: Consent and agency are crucial concepts in AI ethics. We need to design systems that respect users’ autonomy and allow them to make informed decisions about how their data is used.
  • Critical importance of documentation: Proper documentation is what makes auditing and transparency possible: the data, algorithms, and decision-making processes used in an AI system should all be documented.
  • AI should not be trusted blindly: Rather than trusting AI systems blindly, we should design them with transparency, accountability, and a clear understanding of their limitations and biases.
  • No one agrees on what makes a moral agent: There is no consensus on what constitutes a moral agent, and AI systems are not one. Rather than trying to engineer moral agency into AI, we should design systems that are transparent and accountable.
  • AI is not a substitute for human judgment: AI systems should not be used as a substitute for human judgment, but rather as a tool to augment and assist human decision-making.
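The documentation point above can be made concrete with a minimal sketch of a machine-readable "model card", one common way to record the data, algorithm, and intended decision process of a system so it can be audited later. All names and field values here are hypothetical illustrations, not anything from the talk.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical, minimal model-card record: one possible way to document
# the data, algorithm, and decision-making process behind an AI system.
@dataclass
class ModelCard:
    model_name: str
    training_data: str                  # provenance of the training data
    algorithm: str                      # learning method / architecture
    intended_use: str                   # what decision the system supports
    known_limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        # Serialise the card so it can be stored alongside audit logs
        # and each deployed model's decisions can be traced back to it.
        return json.dumps(asdict(self), indent=2)

# Example card (all values invented for illustration).
card = ModelCard(
    model_name="loan-screening-v1",
    training_data="2015-2020 loan applications, EU region",
    algorithm="gradient-boosted trees",
    intended_use="flag applications for human review",
    known_limitations=["underrepresents applicants under 25"],
)
print(card.to_json())
```

Keeping such a record per model version is one way to operationalise the talk's point: auditing and accountability require that what went into a system, and what it is for, is written down rather than reconstructed after the fact.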