With great power comes great responsibility
Learn critical practices for building safer software systems: from threat assessment and risk mitigation to user-centric design and fostering a culture of proactive safety.
- Safety must be fundamental to software engineering work, not an afterthought or specialist concern; it includes security, privacy, reliability, and abuse prevention
- Consider users part of your system: UX failures and “operator error” are system design problems. Make it easy to do the right thing and hard to make dangerous mistakes
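One way to make the right thing easy and the dangerous thing hard is API design that is safe by default. A minimal sketch (function and parameter names are hypothetical, not from the talk):

```python
# Hypothetical sketch: the casual call path is harmless; destruction
# requires the caller to spell out their intent explicitly.
def delete_records(ids, *, dry_run=True, confirm_token=None):
    """Delete records by id.

    Safe by default: runs as a dry run unless the caller passes an
    explicit confirmation token, so an accidental call cannot destroy data.
    """
    if dry_run:
        return {"would_delete": list(ids), "deleted": []}
    if confirm_token != "DELETE":
        raise ValueError("refusing to delete without explicit confirmation")
    return {"would_delete": [], "deleted": list(ids)}

# The easy call is a dry run and deletes nothing:
print(delete_records([1, 2, 3]))
# Destructive use must be stated twice, by keyword:
print(delete_records([1, 2, 3], dry_run=False, confirm_token="DELETE"))
```

Keyword-only arguments force call sites to name the dangerous flags, which makes risky usage visible in code review.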
- Use a three-pronged approach to identify risks:
  - System-first: Examine component failures and integration points
  - Actor-first: Consider adversarial uses and misaligned incentives
  - Target-first: Identify who could be harmed and how
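The three lenses above can be baked into a lightweight review checklist so no prong is silently skipped. A hypothetical sketch (the prompts and helper are illustrative, not a prescribed tool):

```python
# Hypothetical sketch: a three-lens risk brainstorm for a design review.
LENSES = {
    "system-first": "Which components can fail, and what happens at integration points?",
    "actor-first": "Who might abuse this, and which incentives are misaligned?",
    "target-first": "Who could be harmed by this feature, and how?",
}

def risk_review(feature, answers):
    """Return the lenses that still lack an answer for this feature."""
    return [lens for lens in LENSES if not answers.get(lens)]

# Only the system-first lens has been considered so far:
missing = risk_review("bulk export", {"system-first": "DB timeout mid-export"})
print(missing)  # the lenses the review still owes
```

Surfacing the unanswered lenses turns "did we think about abuse?" from a memory exercise into a visible gap in the review artifact.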
- Diversity in teams is critical for identifying potential harms: people with different lived experiences spot different risks
- As staff+ engineers, you set the cultural standard for safety:
  - Model threat assessment from initial system design
  - Raise safety questions consistently in reviews
  - Get teams thinking proactively about risks
  - Build safety checking into regular processes
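Building safety checking into regular processes can be as simple as a CI gate that fails when a change skips the team's agreed safety sections. A minimal sketch, assuming a team template with these section headings (the headings and helper are hypothetical):

```python
# Hypothetical sketch: a tiny CI check that reports which required
# safety sections are missing from a pull-request description.
REQUIRED_SECTIONS = ("## Threat assessment", "## Rollback plan")

def missing_safety_sections(pr_body):
    """Return the required safety sections absent from the PR body."""
    return [s for s in REQUIRED_SECTIONS if s not in pr_body]

body = (
    "## Summary\nAdds bulk delete.\n"
    "## Threat assessment\nCovered abuse via leaked API keys."
)
print(missing_safety_sections(body))  # sections the author still owes
```

A check like this keeps the safety conversation from depending on one vigilant reviewer remembering to ask.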
- Tensions between safety and business needs are common: build senior allies and be ready to escalate when needed
- Unknown failure modes are often more dangerous than known ones; plan for surprises
- The impact of software failures can be massive, measured in lives affected, not just dollars
- Take responsibility for indirect harms: just because someone used a system unexpectedly doesn’t mean it’s not your problem
- Safety reviews should be ongoing, not one-time assessments: threats and usage patterns evolve