Generative AI - Architectures and applications in depth, by K. Mavrodimitraki & D. Papageorgiou
Learn how foundation models power generative AI, explore customization techniques like RAG, and discover Amazon Bedrock's capabilities for building AI applications effectively.
- Foundation models are large deep learning neural networks trained on massive datasets, forming the core of generative AI applications
- Retrieval Augmented Generation (RAG) helps overcome limitations of foundation models by:
  - Retrieving relevant context from knowledge bases
  - Augmenting prompts with domain-specific information
  - Providing source attribution and traceability
  - Being more cost-effective than fine-tuning
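The RAG flow above can be sketched in a few lines. This is a toy: the knowledge base contents are made up, and the word-overlap scoring stands in for the embedding-based retrieval a real vector store would use.

```python
# Minimal RAG sketch: retrieve the most relevant snippet from a tiny
# in-memory knowledge base, then augment the prompt with it.
# Documents and scoring are illustrative, not a real retrieval system.

KNOWLEDGE_BASE = [
    "Amazon Bedrock offers managed access to multiple foundation models.",
    "RAG augments prompts with retrieved, domain-specific context.",
    "Fine-tuning adapts a foundation model's weights to a specific task.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the query (toy scoring)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def augment(query: str, context: str) -> str:
    """Build a prompt that grounds the model in the retrieved context."""
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context."

question = "How does RAG use domain-specific context?"
context = retrieve(question, KNOWLEDGE_BASE)
prompt = augment(question, context)
```

Because the retrieved snippet is carried into the prompt, the final answer can cite its source, which is where the attribution and traceability benefits come from.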
- Three main approaches to customizing foundation models:
  - Prompt engineering
  - Fine-tuning
  - Continued pre-training
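Prompt engineering is the cheapest of the three because the model's weights are untouched; behavior is steered entirely through the input text. A minimal few-shot sketch (the instruction wording and example pairs are illustrative, not from the talk):

```python
# Few-shot prompt engineering sketch: steer the model with examples
# embedded in the prompt rather than by changing any weights.

FEW_SHOT = [
    ("The battery drains in an hour.", "negative"),
    ("Setup took two minutes, great docs.", "positive"),
]

def build_prompt(review: str) -> str:
    """Assemble an instruction plus worked examples, ending at the slot
    the model is expected to fill in."""
    examples = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in FEW_SHOT)
    return (
        "Classify the sentiment of each review as positive or negative.\n\n"
        f"{examples}\nReview: {review}\nSentiment:"
    )

prompt = build_prompt("Screen is gorgeous but support was unhelpful.")
```

Fine-tuning and continued pre-training, by contrast, both update weights: the former on labeled task data, the latter on large unlabeled domain corpora.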
- Amazon Bedrock provides:
  - Managed access to multiple foundation models
  - Built-in knowledge base functionality
  - Agent orchestration capabilities
  - Vector database integration
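A hedged sketch of the managed-access point: invoking a model through the Bedrock runtime with boto3. The request shape follows the Anthropic Messages format and the model ID is an assumption for illustration; check the Bedrock model catalogue for current IDs. Running `invoke` requires AWS credentials and the boto3 SDK.

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed model ID

def build_request(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a Messages-style request body (shape is an assumption)."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str) -> str:
    """Send the prompt to Bedrock; needs AWS credentials to actually run."""
    import boto3  # imported lazily so the sketch loads without the SDK
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    return json.loads(resp["body"].read())["content"][0]["text"]
```

Swapping providers is largely a matter of changing `MODEL_ID` and the request body shape, which is the practical payoff of Bedrock's single managed API.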
- Agents work through four key phases:
  - Pre-processing
  - Orchestration
  - Knowledge base resource retrieval
  - Post-processing
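The four phases can be made concrete as a toy pipeline. Each phase is a plain function so the flow is visible; in a real Bedrock agent, a foundation model drives these steps and orchestration may loop over multiple actions.

```python
# Toy walk-through of the four agent phases: pre-processing, orchestration,
# knowledge base retrieval, post-processing. All logic is hard-coded
# for illustration.

def pre_process(user_input: str) -> str:
    """Validate and normalize the request before orchestration."""
    return user_input.strip().lower()

def orchestrate(task: str) -> list[str]:
    """Break the task into retrieval steps (a single step here)."""
    return [f"lookup: {task}"]

def retrieve_from_kb(step: str) -> str:
    """Stand-in for knowledge base resource retrieval."""
    return f"result for '{step}'"

def post_process(results: list[str]) -> str:
    """Assemble the final response from intermediate results."""
    return "; ".join(results)

task = pre_process("  What is RAG?  ")
answer = post_process([retrieve_from_kb(s) for s in orchestrate(task)])
```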
- Word embeddings are crucial for LLMs, as they:
  - Represent words as multidimensional vectors
  - Capture contextual relationships
  - Enable semantic similarity matching
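Semantic similarity matching usually means cosine similarity between embedding vectors. A toy illustration with made-up 3-dimensional vectors (real models use hundreds or thousands of dimensions):

```python
import math

# Toy word embeddings: related words get nearby vectors, so their
# cosine similarity is high; unrelated words point in different directions.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

royal = cosine(EMBEDDINGS["king"], EMBEDDINGS["queen"])
fruit = cosine(EMBEDDINGS["king"], EMBEDDINGS["apple"])
```

This is the same operation a vector database performs at scale when a knowledge base retrieves context for RAG.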
- Model parameter counts have grown rapidly:
  - BERT (2019): 340M parameters
  - GPT-2 (2019): 1.5B parameters
  - GPT-3 (2020): 175B parameters
- Knowledge bases should be used when:
  - Domain-specific accuracy is required
  - Real-time data access is needed
  - Source attribution is important
  - Cost optimization is a priority