Integrated AI and Enterprise Intelligence with LLMs and RAG by Felix Huchzermeyer

Learn how Felix Huchzermeyer implemented a secure, RAG-based AI solution for enterprise audits, reducing audit planning time from hours or days to seconds with 60-70% accuracy in initial proposals.

Key takeaways
  • Fine-tuning LLMs wasn’t effective for their use case; RAG (Retrieval-Augmented Generation) proved to be the better solution (see the retrieval sketch after this list)

  • Created a one-click AI solution for internal auditors that requires no AI or technical expertise:

    • Integrated directly into existing audit management workflows
    • No manual copying and pasting between systems
    • Automated prompt generation in the background
  • System provides transparency and trust through:

    • Showing which documents were used as context
    • Displaying confidence scores and sources
    • Allowing users to verify source documents
  • Implementation focused on data privacy and security:

    • Self-hosted solution keeping data within company infrastructure
    • Support for different LLM options (e.g., Llama 7B and 70B)
    • Role-based access control for document usage (see the access-control sketch below)
  • Key business benefits:

    • Reduces audit planning time from hours or days to seconds
    • Achieves 60-70% accuracy in initial proposals
    • Saves organizations 3-4 hours per audit
    • Affordable implementation using standard hardware
  • Architecture features (see the ingestion sketch below):

    • Multiple RAG databases for different knowledge domains
    • Document management system with automated embedding generation
    • User management and access control
    • Configurable AI model backend
  • System helps prevent unauthorized AI tool usage by providing a compliant alternative that employees can officially use
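
The retrieval step behind the one-click experience can be pictured with a minimal sketch. The code below is illustrative rather than the implementation described here: it assumes documents have already been embedded, and the names Document, retrieve, and build_prompt are hypothetical. It shows how top-k retrieval naturally yields the sources and similarity scores that are surfaced to the auditor for transparency.

```python
# Minimal sketch of the retrieval step, assuming documents have already been
# embedded by whichever embedding model is configured (names are illustrative).
from dataclasses import dataclass

import numpy as np


@dataclass
class Document:
    title: str
    text: str
    embedding: np.ndarray


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def retrieve(query_embedding: np.ndarray, docs: list[Document], k: int = 3):
    """Return the top-k documents with their similarity scores, so the auditor
    can see which sources were used and how confident the retrieval was."""
    scored = [(cosine_similarity(query_embedding, d.embedding), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]


def build_prompt(question: str, retrieved) -> str:
    """Assemble the prompt in the background; the auditor only clicks a button."""
    context = "\n\n".join(f"[{d.title}]\n{d.text}" for _, d in retrieved)
    return (
        "Answer the auditor's question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```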
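Role-based access control and the configurable model backend can also be sketched briefly. This is a hedged illustration, not the article's code: the allowed_roles field, the model identifiers, and the endpoints are assumptions. The point is that only documents a user is already entitled to see can reach the model's context window, and that switching between self-hosted Llama variants is a configuration choice rather than an application change.

```python
# Hedged sketch of role-based filtering before retrieval; role names, model
# identifiers, and endpoints below are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class SecureDocument:
    title: str
    text: str
    allowed_roles: set[str] = field(default_factory=set)


def documents_for_user(user_roles: set[str],
                       docs: list[SecureDocument]) -> list[SecureDocument]:
    """Only documents the user may already see can be used as LLM context."""
    return [d for d in docs if d.allowed_roles & user_roles]


# Configurable, self-hosted model backend: the same pipeline can point at a
# smaller or larger Llama variant without changing application code.
MODEL_CONFIG = {
    "small": {"model": "llama-2-7b", "endpoint": "http://llm.internal:8000"},
    "large": {"model": "llama-2-70b", "endpoint": "http://llm.internal:8001"},
}
```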
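Finally, the ingestion side of the architecture: a minimal sketch, assuming one vector store per knowledge domain and a placeholder embed_text function standing in for whatever self-hosted embedding model is configured. It illustrates how embeddings can be generated automatically when a document is uploaded, so auditors never handle them manually.

```python
# Illustrative sketch of document ingestion: one store per knowledge domain,
# with embeddings generated automatically on upload. embed_text is a stand-in
# for the configured embedding model, not a real implementation.
from collections import defaultdict

import numpy as np


def embed_text(text: str) -> np.ndarray:
    """Placeholder: in practice this would call the self-hosted embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)


class DomainStores:
    """One vector store per knowledge domain (e.g. audit reports, policies)."""

    def __init__(self):
        self._stores = defaultdict(list)  # domain -> list of (title, text, embedding)

    def add_document(self, domain: str, title: str, text: str) -> None:
        # Embedding happens automatically on upload; no manual step for the auditor.
        self._stores[domain].append((title, text, embed_text(text)))

    def search(self, domain: str, query: str, k: int = 3):
        """Return (score, title) pairs for the top-k matches in one domain."""
        query_emb = embed_text(query)
        scored = [
            (float(np.dot(query_emb, emb)
                   / (np.linalg.norm(query_emb) * np.linalg.norm(emb))), title)
            for title, text, emb in self._stores[domain]
        ]
        return sorted(scored, reverse=True)[:k]
```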