Talks - Tuana Celik: Everything is a graph, including LLM Applications (and that’s handy)
Learn how graph-based architectures can simplify LLM applications, with tips on building modular pipelines, validating outputs, and optimizing component interactions.
- AI/LLM applications can be effectively modeled and implemented as interconnected graphs composed of specialized components
- Components in the graph are responsible for individual tasks (fetching, embedding, generating, classifying, etc.) and pass data from one node to another
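The idea of single-task components passing data along graph edges can be sketched in plain Python (this is an illustrative mock, not Haystack's API; the component names and payloads are made up):

```python
# Minimal sketch: each component does one task and returns data
# that becomes the input of the next node in the graph.

class Fetcher:
    def run(self, query: str) -> dict:
        # Stand-in for a real retrieval step.
        return {"docs": [f"doc about {query}"]}

class Generator:
    def run(self, docs: list) -> dict:
        # Stand-in for a real LLM generation step.
        return {"answer": f"Answer based on {len(docs)} document(s)"}

def run_pipeline(query: str) -> str:
    fetched = Fetcher().run(query)                # node 1: fetch
    generated = Generator().run(fetched["docs"])  # node 2: generate
    return generated["answer"]

print(run_pipeline("graphs"))
```

The point is that each node only needs to know its own inputs and outputs; the pipeline wiring decides where the data flows.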
- Haystack 2.0 provides a framework for building these graph-based pipelines while allowing custom components based on specific needs
- Pipeline graphs can be:
  - Linear (a simple sequence of tasks)
  - Branching (parallel processing paths)
  - Cyclical (loops for refinement/validation)
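A branching graph can be sketched with a router node that sends the input down one of two paths (all names here are illustrative, and the "classifier" is a toy rule, not a model):

```python
# Sketch of a branching pipeline: a classifier node routes
# each input to one of two downstream branches.

def classify(text: str) -> str:
    # Toy router: questions go one way, statements the other.
    return "question" if text.strip().endswith("?") else "statement"

def answer_question(text: str) -> str:
    return f"Looking up an answer for: {text}"

def summarize(text: str) -> str:
    return f"Summary of: {text}"

def run(text: str) -> str:
    if classify(text) == "question":
        return answer_question(text)  # branch A
    return summarize(text)            # branch B

print(run("What is a graph?"))
print(run("Graphs are everywhere."))
```

A linear graph is the degenerate case with no router; a cyclical graph adds an edge that routes failing outputs back to an earlier node.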
- Large Language Models don't need to handle every task; specialized components can be more efficient for specific operations like translation or classification
- Structured output validation (using tools like Pydantic) helps ensure LLM responses match expected formats
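The talk mentions Pydantic for this; the sketch below mimics the same idea with only the standard library, so the schema and field names are assumptions made for illustration:

```python
import json

# Expected shape of the model's JSON response (illustrative schema).
REQUIRED_FIELDS = {"answer": str, "confidence": float}

def validate_llm_output(raw: str) -> dict:
    """Parse the model's raw text and check it matches the expected shape."""
    data = json.loads(raw)  # raises ValueError if not valid JSON
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"wrong type for field: {field}")
    return data

ok = validate_llm_output('{"answer": "42", "confidence": 0.9}')
print(ok["answer"])
```

Pydantic does this declaratively (a model class per schema, with coercion and error reporting), but the contract is the same: invalid output fails loudly before it propagates downstream.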
- Components should have clear input/output types that match the requirements of connected nodes in the pipeline
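One way to enforce matching input/output types is to check annotations at wiring time, so mismatched connections fail before anything runs. A stdlib sketch (the components and the single-input assumption are illustrative):

```python
from typing import get_type_hints

def embed(text: str) -> list:
    # Stand-in embedder: returns a 1-dim "embedding".
    return [float(len(text))]

def generate(embedding: list) -> str:
    return f"generated from {len(embedding)}-dim embedding"

def can_connect(producer, consumer) -> bool:
    """True if producer's return type matches consumer's first parameter type."""
    out_type = get_type_hints(producer).get("return")
    in_types = [t for name, t in get_type_hints(consumer).items()
                if name != "return"]
    return bool(in_types) and out_type == in_types[0]

print(can_connect(embed, generate))     # list -> list: compatible
print(can_connect(generate, generate))  # str -> list: incompatible
```

Frameworks like Haystack perform this kind of compatibility check when pipeline connections are declared, rather than leaving type errors to surface mid-run.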
- Complex AI applications can be broken down into smaller, manageable tasks connected through a pipeline graph
- The pipeline approach makes components easy to swap (e.g., switching between different LLM providers)
- Error handling and validation can be built into the pipeline, enabling automatic retries and refinement of results
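An automatic retry corresponds to a cyclical edge in the graph: failed validation routes back to the generator. A minimal sketch, with a stub "model" that fails on its first attempt (all names are illustrative):

```python
import json

def flaky_generate(attempt: int) -> str:
    # Stub model: returns malformed output first, valid JSON afterwards.
    return "not json" if attempt == 0 else '{"answer": "ok"}'

def is_valid(output: str) -> bool:
    try:
        return "answer" in json.loads(output)
    except ValueError:
        return False

def run_with_retries(max_retries: int = 3) -> str:
    for attempt in range(max_retries + 1):
        output = flaky_generate(attempt)
        if is_valid(output):       # validation node
            return output
    raise RuntimeError("validation failed after retries")  # give up

print(run_with_retries())
```

Bounding the loop with a retry limit keeps the cycle from running forever when the model never produces valid output.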