Online Safety Bill: How Global Platforms Use MLOps to Keep People Safe • Phil Winder • GOTO 2023

AI

Learn how global platforms use MLOps to keep people safe as the UK's Online Safety Bill proposes regulations and penalties for non-compliance, amid the challenges of data quality, annotation, and harmful content.

Key takeaways
  • The Online Safety Bill aims to keep people safe by regulating tech companies and proposes criminal penalties for non-compliance.
  • Companies must invest in MLOps (machine learning operations) to develop and deploy AI systems at scale.
  • Foundation models are large, general-purpose AI models reused by many companies, but they can be manipulated into producing harmful content.
  • The Online Safety Bill defines harmful content as content that causes or may cause harm to individuals, including material relating to self-harm, suicide, and child sexual exploitation.
  • The EU AI Act focuses on applications rather than content, and proposes banning the development and deployment of AI systems that pose an unacceptable risk to individuals and society.
  • Data quality and annotation are major challenges in developing and deploying AI systems (a minimal sketch of both appears after this list).
  • Regulators such as Ofcom will be responsible for investigating and enforcing compliance with the Online Safety Bill.
  • Social media companies are struggling to keep up with the sheer volume of content they need to moderate.
  • Government departments are working together to develop a unified approach to regulating tech companies.
  • Tech companies are investing in data platforms and data annotation to improve the quality of their AI models.
  • There is a need for more research and development in MLOps to improve the quality and safety of AI systems.
  • The Online Safety Bill proposes fines for non-compliance of up to 10% of annual turnover.
  • Regulators will also have the power to shut down a non-compliant company's funding mechanisms.
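The data-quality and annotation challenges above lend themselves to a concrete illustration. Below is a minimal, self-contained sketch, not the speaker's actual pipeline: it checks annotation quality with Cohen's kappa, trains a toy harmful-content classifier on the agreed labels, and routes uncertain predictions to human review. All data, thresholds, and model choices here are illustrative assumptions.

```python
# Sketch of two MLOps concerns raised in the talk: annotation quality
# and automated content moderation. Everything below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

# Two annotators label the same sample of posts (1 = harmful, 0 = benign).
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0]

# Cohen's kappa measures agreement beyond chance; a low score usually
# points to ambiguous labelling guidelines rather than a modelling problem.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Inter-annotator agreement (kappa): {kappa:.2f}")

# Toy adjudicated training data (hypothetical examples).
texts = [
    "you should hurt yourself",   # harmful
    "great weather today",        # benign
    "ways to harm someone",       # harmful
    "loved the new restaurant",   # benign
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new content; escalate uncertain cases to human moderators
# instead of auto-removing them.
for post in ["ways to hurt yourself", "lovely weather"]:
    p_harmful = model.predict_proba([post])[0][1]
    if p_harmful > 0.9:
        action = "auto-flag"
    elif p_harmful > 0.5:
        action = "human review"
    else:
        action = "allow"
    print(f"{post!r}: p(harmful)={p_harmful:.2f} -> {action}")
```

The key design choice is the review band: rather than trusting the model at scale, moderation pipelines typically auto-action only high-confidence scores and escalate the rest to human moderators, which is where annotation quality and data platforms become operational concerns.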