Optimizing Amazon EventBridge Pipes with Apache Kafka • Daniele Frasca & Roman Boiko • GOTO 2024

Discover how to optimize Amazon EventBridge Pipes with Apache Kafka: serverless integration, payload handling, best practices, and event bus mesh architecture.

Key takeaways
  • EventBridge Pipes provides a serverless way to connect Kafka with AWS services, offering built-in features like filtering, enrichment, and batch processing

  • The claim check pattern helps handle large payloads (>256KB) by storing data in S3 and passing references through EventBridge Pipes
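
The claim check pattern can be sketched in a few lines. This is a minimal illustration, not production code: a plain dict stands in for the S3 bucket, and the helper names (`to_claim_check`, `resolve_claim_check`) are hypothetical; a real implementation would use boto3's `put_object`/`get_object` against an actual bucket.

```python
import json
import uuid

# In-memory dict standing in for an S3 bucket in this sketch;
# a real implementation would call boto3 put_object/get_object.
object_store = {}

MAX_EVENT_BYTES = 256 * 1024  # EventBridge's 256KB entry limit

def to_claim_check(event: dict) -> dict:
    """Producer side: if the serialized event exceeds the EventBridge
    limit, store the payload and pass only a reference through the pipe."""
    raw = json.dumps(event).encode("utf-8")
    if len(raw) <= MAX_EVENT_BYTES:
        return event  # small enough to send as-is
    key = f"payloads/{uuid.uuid4()}"
    object_store[key] = raw  # stand-in for s3.put_object
    return {"detailType": event.get("detailType"), "payloadRef": key}

def resolve_claim_check(event: dict) -> dict:
    """Consumer side: fetch the full payload when the event is a reference."""
    if "payloadRef" in event:
        return json.loads(object_store[event["payloadRef"]])  # stand-in for s3.get_object
    return event

# A payload well over 256KB triggers the claim check.
big = {"detailType": "order.created", "blob": "x" * (300 * 1024)}
sent = to_claim_check(big)
assert "payloadRef" in sent and len(json.dumps(sent)) < MAX_EVENT_BYTES
assert resolve_claim_check(sent) == big
```

The reference event stays tiny regardless of payload size, so it fits comfortably within EventBridge's limit while consumers can still recover the full data.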

  • Key limitations to consider:

    • 256KB maximum payload size for EventBridge
    • 3,000 requests per second quota for Confluent REST API
    • 300 requests per second default for API destinations
  • Benefits of EventBridge Pipes over custom Kafka connectors:

    • Built-in error handling and dead letter queues
    • Native AWS service integration
    • Less code to maintain
    • Better monitoring and metrics
    • Automatic scaling
  • Event bus mesh architecture allows:

    • Decoupling between producers and consumers
    • Domain isolation
    • Standardized event routing
    • Local event buses per service/domain
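
The mesh idea can be sketched as local buses with forwarding rules between them. This is an in-memory toy model (the `Bus` class and domain names are illustrative, not an AWS API): each domain publishes to its own bus, and only the detail-types other domains subscribe to cross the boundary.

```python
# Minimal in-memory sketch of an event bus mesh: each domain owns a
# local bus; routing rules forward selected detail-types to other
# domains' buses, keeping producers and consumers decoupled.
class Bus:
    def __init__(self, name: str):
        self.name = name
        self.rules = []      # (detail_type, target_bus) pairs
        self.delivered = []  # events delivered on this bus

    def add_forwarding_rule(self, detail_type: str, target_bus: "Bus"):
        self.rules.append((detail_type, target_bus))

    def put_event(self, event: dict):
        self.delivered.append(event)
        for detail_type, target in self.rules:
            if event["detailType"] == detail_type:
                target.delivered.append(event)

orders = Bus("orders-local")
shipping = Bus("shipping-local")
# The orders domain publishes locally; only order.created crosses domains.
orders.add_forwarding_rule("order.created", shipping)

orders.put_event({"detailType": "order.created", "detail": {"id": 1}})
orders.put_event({"detailType": "order.audited", "detail": {"id": 1}})
assert len(shipping.delivered) == 1  # shipping sees only what it subscribed to
```

In actual EventBridge terms, the forwarding rules correspond to rules on one bus whose target is another bus, which is what gives each domain isolation plus standardized routing.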
  • Best practices:

    • Use sparse events for notifications
    • Implement proper error handling
    • Consider multi-region requirements
    • Leverage serverless capabilities when possible
    • Standardize event schemas and routing patterns
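
A sparse ("notification") event carries only identifiers and a reference, not the full entity state; consumers fetch details on demand. The sketch below shows the shape of such an event; the field names and the lookup URL are illustrative assumptions, not a standard schema.

```python
# Build a sparse notification event: identifiers plus a callback
# reference, never the full entity state. Field names are illustrative.
def make_sparse_event(entity: dict) -> dict:
    return {
        "detailType": "order.updated",
        "detail": {
            "id": entity["id"],
            "version": entity["version"],
            # hypothetical lookup URL; consumers call back for full state
            "href": f"https://api.example.com/orders/{entity['id']}",
        },
    }

order = {"id": 42, "version": 7, "lines": [{"sku": "A1", "qty": 2}]}
event = make_sparse_event(order)
assert "lines" not in event["detail"]  # full state stays out of the event
assert event["detail"]["id"] == 42
```

Keeping events sparse sidesteps the 256KB limit for most notifications and avoids leaking one domain's internal data model to every downstream consumer.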
  • For Kafka integration scenarios, evaluate:

    • On-premises vs managed Kafka considerations
    • Network security requirements
    • Event size and volume requirements
    • Required integration patterns (enrichment, filtering, etc.)