Javier - Build a Personalized Commute Virtual Assistant in Python with Hopsworks & LLM Function Calling
Learn how to build a personalized commute virtual assistant using Python, Hopsworks, and LLM function calling to provide real-time transport predictions and updates.
-
LLM function calling enables virtual assistants to understand user intent and execute specific functions from a predefined set to retrieve external data or perform operations.
-
The project demonstrates building a commute assistant using:
- Hopsworks AI Lakehouse as the feature store
- Public transport API data from Stockholm
- Functions for predicting delays and retrieving historical departure data
- LLM (fine-tuned Mistral model) for natural language understanding
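The delay-prediction and historical-departure functions above can be sketched as plain Python callables the assistant dispatches to. The names, signatures, and return shapes here are illustrative assumptions, not the exact ones from the project; a real version would query the feature store and a served model instead of the fixed heuristic used here.

```python
# Hypothetical function set the assistant can call. Names and signatures
# are illustrative; the real versions read from Hopsworks.
def predict_delay(stop_name: str, hour: int) -> dict:
    """Return a (mocked) delay prediction for a stop at a given hour."""
    # Stand-in logic: assume rush-hour traffic adds delay.
    expected = 4.0 if 7 <= hour <= 9 or 16 <= hour <= 18 else 1.0
    return {"stop": stop_name, "hour": hour, "expected_delay_min": expected}

def get_historical_departures(stop_name: str, days: int = 7) -> list[dict]:
    """Return (mocked) historical departure records for a stop."""
    return [{"stop": stop_name, "day_offset": d, "on_time_pct": 92.5}
            for d in range(days)]
```

Keeping each function small and single-purpose matters later: the LLM only has to pick a function and fill its parameters, not reason about implementation details.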
-
The FTI (feature/training/inference) pipeline architecture consists of:
- Feature pipeline for raw data processing
- Training pipeline for model training/fine-tuning
- Inference pipeline for real-time predictions
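The FTI split can be sketched with a toy mean-delay model standing in for the real fine-tuned components; the point is the separation of concerns, not the model itself (everything here is an illustrative assumption):

```python
# Minimal FTI sketch: each pipeline is an independent function with a
# clear input/output contract, as in the architecture described above.

def feature_pipeline(raw_events: list[dict]) -> list[dict]:
    """Turn raw transport events into feature rows."""
    return [{"hour": e["hour"], "delay_min": e["delay_min"]} for e in raw_events]

def training_pipeline(features: list[dict]) -> dict:
    """'Train' a toy model: mean delay per hour of day."""
    buckets: dict[int, list[float]] = {}
    for row in features:
        buckets.setdefault(row["hour"], []).append(row["delay_min"])
    return {h: sum(v) / len(v) for h, v in buckets.items()}

def inference_pipeline(model: dict, hour: int) -> float:
    """Predict delay for an hour, falling back to the global mean."""
    if hour in model:
        return model[hour]
    values = list(model.values())
    return sum(values) / len(values)
```

Because the three stages only communicate through data (raw events, feature rows, a model artifact), each can be scheduled, scaled, and redeployed independently, which is the motivation for the FTI design.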
-
Hopsworks provides key abstractions:
- Feature groups for organizing related features
- Feature views for low-latency feature vector retrieval
- Model registry and serving capabilities
- Streaming data processing with Quix Streams
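The feature group / feature view pattern can be illustrated with a small conceptual mock: a feature group stores rows keyed by a primary key, and a feature view selects a subset of those columns for low-latency vector retrieval at inference time. The real Hopsworks API is different; this only shows the abstraction.

```python
# Conceptual mock of the feature-group / feature-view abstractions,
# NOT the Hopsworks client API.

class FeatureGroup:
    """Stores feature rows keyed by a primary key."""
    def __init__(self, primary_key: str):
        self.primary_key = primary_key
        self.rows: dict = {}

    def insert(self, row: dict) -> None:
        self.rows[row[self.primary_key]] = row

class FeatureView:
    """Selects a fixed set of features for fast vector lookup."""
    def __init__(self, group: FeatureGroup, features: list[str]):
        self.group = group
        self.features = features

    def get_feature_vector(self, key) -> list:
        row = self.group.rows[key]
        return [row[f] for f in self.features]

fg = FeatureGroup(primary_key="stop_id")
fg.insert({"stop_id": "9001", "avg_delay": 3.2, "departures_per_hour": 12, "line": "13"})
fv = FeatureView(fg, features=["avg_delay", "departures_per_hour"])
```

The key idea is that training and inference read through the same view, so the model always sees features in the same order and shape.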
-
LLM function calling workflow:
- User sends question to assistant
- LLM determines which function(s) to call
- System executes functions to get external data
- Data is incorporated into prompt
- LLM generates final response
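The five steps above can be sketched end to end with the LLM's two calls mocked out: on the first call it emits a function call, and on the second it writes the final answer from the returned data. The function name, argument format, and prompt layout are assumptions for illustration.

```python
import json

def get_departures(stop: str) -> dict:
    """Mocked external-data function (step 3)."""
    return {"stop": stop, "next_departure": "08:12", "delay_min": 3}

FUNCTIONS = {"get_departures": get_departures}

def mock_llm(prompt: str) -> str:
    """Stand-in for the fine-tuned model's two roles."""
    if "FUNCTION RESULT" not in prompt:
        # Step 2: decide which function to call, emitted as JSON.
        return json.dumps({"function": "get_departures",
                           "arguments": {"stop": "Odenplan"}})
    # Step 5: answer using the retrieved data now in the prompt.
    return "The next departure from Odenplan is at 08:12, running about 3 minutes late."

def assistant(question: str) -> str:
    call = json.loads(mock_llm(question))                      # step 2
    result = FUNCTIONS[call["function"]](**call["arguments"])  # step 3
    prompt = f"{question}\nFUNCTION RESULT: {json.dumps(result)}"  # step 4
    return mock_llm(prompt)                                    # step 5
```

Note the model itself never executes anything; the surrounding code parses its JSON, runs the function, and feeds the result back, which is what keeps the loop safe and auditable.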
-
The system addresses LLM limitations by:
- Adding real-time context to static model knowledge
- Using function calling for current data retrieval
- Maintaining conversation history
- Breaking complex queries into function-specific steps
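For the conversation-history point above, one common pattern (an assumption here, not necessarily the project's exact approach) is to keep the system prompt plus only the most recent exchanges, so the prompt stays inside the model's context window:

```python
# Bounded conversation history: keep the system message and the last
# `max_turns` user/assistant pairs. Illustrative sketch only.

def trim_history(messages: list[dict], max_turns: int = 3) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-2 * max_turns:]
```

Dropping old turns trades long-range memory for a predictable prompt size; anything that must persist (e.g. the user's home stop) belongs in retrieved features, not in the chat transcript.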
-
Implementation includes:
- Prompt templates for function calling
- JSON function definitions with parameters
- Feature pipelines for real-time and batch processing
- Streaming aggregations for historical analysis
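A JSON function definition of the kind listed above might look like the following OpenAI-style schema for the delay predictor. The exact schema used with the fine-tuned Mistral model may differ; the function name and parameters are carried over from the earlier sketch and are assumptions.

```python
# OpenAI-style function schema the LLM sees in its prompt, plus a
# minimal check that a proposed call supplies the required parameters.
PREDICT_DELAY_SPEC = {
    "name": "predict_delay",
    "description": "Predict the expected delay in minutes for a stop at a given hour.",
    "parameters": {
        "type": "object",
        "properties": {
            "stop_name": {"type": "string", "description": "Name of the transit stop"},
            "hour": {"type": "integer", "description": "Hour of day, 0-23"},
        },
        "required": ["stop_name", "hour"],
    },
}

def validate_call(spec: dict, arguments: dict) -> bool:
    """Check that all required parameters are present in a proposed call."""
    return all(k in arguments for k in spec["parameters"]["required"])
```

Validating the model's emitted call before executing it catches hallucinated or incomplete arguments cheaply, instead of letting them surface as runtime errors inside the function.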