Substrate engineering: Engineering foundations in a world of LLMs
Discover why building robust engineering systems and foundations is crucial for successful LLM integration, and learn key strategies for safer AI implementation.
- LLMs are fundamentally predictors over tokens, and hallucination is an inherent characteristic rather than a solvable problem; no amount of prompt engineering can overcome this
- The quality of engineering systems and foundations (the “substrate”) will always dominate prompt engineering results: you can’t get better outputs than your system architecture and guardrails allow
- Better tooling and configuration languages are needed: YAML and JSON lack the type safety and validation capabilities that could prevent errors before deployment (see the sketch below)
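As a minimal sketch of what a typed configuration layer buys you, the Rust snippet below parses a JSON config into a strongly typed struct with serde. The `DeployConfig` struct, its fields, and the sample values are hypothetical, but the mechanism is real: a mistyped value or unknown key fails at load time, not in production. The same approach works for YAML via a serde-compatible YAML crate.

```rust
// Cargo.toml dependencies (assumed): serde = { version = "1", features = ["derive"] }, serde_json = "1"
use serde::Deserialize;

// A hypothetical deployment config; the struct and field names are illustrative.
#[derive(Debug, Deserialize)]
#[serde(deny_unknown_fields)] // unknown keys become errors, not silently ignored settings
struct DeployConfig {
    replicas: u32,      // rejects "three", 3.5, or -1 at parse time
    region: String,
    enable_tls: bool,
}

fn main() {
    // A mistyped value is caught the moment the config is loaded,
    // long before it can reach a deployment pipeline.
    let raw = r#"{ "replicas": "three", "region": "us-east-1", "enable_tls": true }"#;
    match serde_json::from_str::<DeployConfig>(raw) {
        Ok(cfg) => println!("loaded: {cfg:?}"),
        Err(e) => eprintln!("rejected config: {e}"),
    }
}
```

The `deny_unknown_fields` attribute is the kind of guardrail a plain YAML/JSON parse never gives you: a typo’d key becomes a hard error instead of a silently ignored setting.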
- Focus should be on “defense in depth” through multiple complementary approaches (one layer is sketched after this list):
  - Type systems and compile-time checks
  - Restricted/non-Turing-complete configuration languages
  - Automated validation and testing
  - Clear boundaries on LLM usage scope
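As one hedged illustration of the “automated validation and testing” layer, here is a small Rust guardrail with a unit test. The function, its limits, and its invariants are invented for the example; the point is that the check runs in CI on every change, regardless of whether a human or an LLM wrote the config it protects.

```rust
/// A hypothetical guardrail: validate a replica count against hard limits
/// before it is ever applied. The name and limits are illustrative only.
fn validate_replicas(requested: u32) -> Result<u32, String> {
    const MAX_REPLICAS: u32 = 64;
    if requested == 0 {
        return Err("replicas must be at least 1".into());
    }
    if requested > MAX_REPLICAS {
        return Err(format!("replicas {requested} exceeds limit {MAX_REPLICAS}"));
    }
    Ok(requested)
}

#[cfg(test)]
mod tests {
    use super::*;

    // An automated test that enforces the invariant on every CI run.
    #[test]
    fn rejects_out_of_range_values() {
        assert!(validate_replicas(0).is_err());
        assert!(validate_replicas(65).is_err());
        assert_eq!(validate_replicas(3), Ok(3));
    }
}
```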
- The better automation works, the less attention humans tend to pay; systems need to be designed to account for this reality
- Memory safety and thread safety cannot be guaranteed through prompting alone; language and tooling choices matter significantly (sketched below)
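A minimal sketch of a guarantee that lives in the language rather than in the prompt: the Rust program below compiles only because the shared counter is wrapped in `Arc<Mutex<_>>`. Remove the `Mutex` and mutate the integer from both threads, and the compiler rejects the program outright instead of letting a data race ship. The counts are arbitrary illustration values.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared mutable state must go through a synchronization primitive;
    // the type system enforces this at compile time.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```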
- Configuration and tooling code is especially likely to be written with LLMs, which makes strong foundational safeguards critical
- Engineering systems need totality (a guaranteed, well-defined output for every input) and sound type systems, without becoming overly complex (see the sketch below)
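A small sketch of totality in practice, using a hypothetical `Environment` enum and made-up URLs: the `match` below must cover every variant, so adding a new environment without updating the function is a compile-time error rather than a runtime surprise, and the function returns a well-defined value for every possible input.

```rust
// A hypothetical environment enum; variants and URLs are illustrative.
#[derive(Debug)]
enum Environment {
    Dev,
    Staging,
    Prod,
}

// A total function: every input variant maps to an output, and the compiler
// rejects the program if a variant is added without a matching arm.
fn api_base_url(env: &Environment) -> &'static str {
    match env {
        Environment::Dev => "http://localhost:8080",
        Environment::Staging => "https://staging.example.com",
        Environment::Prod => "https://api.example.com",
    }
}

fn main() {
    for env in [Environment::Dev, Environment::Staging, Environment::Prod] {
        println!("{env:?} -> {}", api_base_url(&env));
    }
}
```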
- Code review and testing remain essential; LLMs don’t remove the need for human oversight and validation
- Success with LLMs requires understanding their limitations and building appropriate guardrails and foundations rather than relying solely on prompt engineering