Micro Benchmarking - The Art of Realizing One is Wrong by René Schwietzke

Learn the pitfalls of Java performance testing: JIT effects, GC impact, hardware variables, and tools like JMH. Master the art of reliable microbenchmarking for real results.

Key takeaways
  • Don’t trust the first benchmark result - always run tests multiple times to account for JIT compilation and warmup effects

  • Microbenchmarks often measure the wrong thing - for example, garbage collector performance instead of the code under test

  • The JIT compiler is unpredictable and can drastically change code optimization between runs

  • Hardware matters significantly - laptop benchmarks are unreliable; use proper server hardware without power or thermal throttling

  • Data access patterns heavily impact performance - random access is much slower than sequential access due to cache misses (see the access-pattern sketch after this list)

  • Hyper-threading can negatively impact benchmark reliability - isolate benchmarks to physical cores when possible

  • Branch prediction and jumps significantly impact performance - unpredictable branches hurt CPU efficiency (see the branch-prediction sketch after this list)

  • Memory allocation and garbage collection strongly affect results - consider escape analysis and object lifetime

  • Use proper tools like JMH (Java Microbenchmark Harness) rather than handwritten benchmarks - a minimal example follows this list

  • Include real-world data patterns and edge cases - don’t benchmark only the “happy path”

  • Account for warmup phases and tiered compilation effects when interpreting results

  • Return or consume all computed values to prevent dead code elimination (see the dead-code sketch after this list)

  • Profile first to identify actual bottlenecks before microbenchmarking

  • Results from microbenchmarks may not translate to production performance

  • Verify assumptions and theories through multiple benchmark variations
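
The sketch below shows what a minimal JMH benchmark can look like; the class name and the string-concatenation workload are hypothetical, and it assumes the org.openjdk.jmh artifacts are on the classpath. The annotations address several of the takeaways at once: warmup iterations let tiered compilation settle before measurement starts, and multiple forks average out run-to-run JIT and GC variance.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

// Minimal JMH benchmark sketch; class name and workload are illustrative.
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1)      // let the JIT and tiered compilation settle
@Measurement(iterations = 5, time = 1) // measure only after warmup
@Fork(3)                               // fresh JVM per fork; never trust a single run
public class ConcatBenchmark {

    private String a = "Hello, ";
    private String b = "World";

    @Benchmark
    public String concat() {
        // the result is returned so the JIT cannot eliminate the work
        return a + b;
    }
}
```

With the official JMH Maven archetype, such a class is packaged into a self-contained jar and run with java -jar target/benchmarks.jar.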
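Dead code elimination is why unused results are dangerous. Here is a sketch in the spirit of the JMH samples (names are made up): a value that is never consumed can be optimized away entirely, so the benchmark ends up timing an empty method; returning the value or passing it to JMH's Blackhole keeps the computation alive.

```java
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class DeadCodeBenchmark {

    private double x = 42.0; // a state field, so the JIT cannot constant-fold it

    @Benchmark
    public void wrong() {
        // the result is never consumed: the JIT may remove the whole
        // computation, effectively measuring an empty method
        Math.log(x);
    }

    @Benchmark
    public double returnIt() {
        // returning the value keeps the computation alive
        return Math.log(x);
    }

    @Benchmark
    public void consumeIt(Blackhole bh) {
        // Blackhole consumes values that would otherwise be unused
        bh.consume(Math.log(x));
    }
}
```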
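The access-pattern effect can be demonstrated with two benchmarks that perform the same number of reads over the same data (all names and sizes below are illustrative): one walks an index array in order, the other follows shuffled indexes, which defeats the hardware prefetcher and turns most reads into cache misses.

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class AccessPatternBenchmark {

    private static final int SIZE = 16 * 1024 * 1024; // larger than a typical L3 cache

    private int[] data;
    private int[] ordered;
    private int[] shuffled;

    @Setup
    public void setup() {
        Random rnd = new Random(42);
        data = new int[SIZE];
        ordered = new int[SIZE];
        shuffled = new int[SIZE];
        for (int i = 0; i < SIZE; i++) {
            data[i] = rnd.nextInt();
            ordered[i] = i;                  // walk the array front to back
            shuffled[i] = rnd.nextInt(SIZE); // jump around unpredictably
        }
    }

    private long sum(int[] indexes) {
        long sum = 0;
        for (int i : indexes) {
            sum += data[i];
        }
        return sum; // returned by the callers, so nothing is eliminated
    }

    @Benchmark
    public long sequentialAccess() {
        // prefetcher-friendly: mostly cache hits
        return sum(ordered);
    }

    @Benchmark
    public long randomAccess() {
        // identical work, but nearly every read misses the cache
        return sum(shuffled);
    }
}
```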
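Finally, a sketch of branch (mis)prediction, again with made-up names: the same loop runs over sorted and unsorted copies of identical data, so the instructions executed are the same, but the sorted input keeps the branch predictor almost always right. Depending on how the JIT compiles the comparison (it may emit a conditional move instead of a branch), the gap can shrink or vanish, which is itself a reminder to verify theories through multiple benchmark variations.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class BranchBenchmark {

    private int[] sorted;
    private int[] unsorted;

    @Setup
    public void setup() {
        Random rnd = new Random(42);
        unsorted = rnd.ints(100_000, 0, 256).toArray();
        sorted = unsorted.clone();
        Arrays.sort(sorted);
    }

    private long countLarge(int[] values) {
        long count = 0;
        for (int v : values) {
            if (v >= 128) { // the branch under test
                count++;
            }
        }
        return count;
    }

    @Benchmark
    public long predictable() {
        // sorted input: the branch flips direction only once
        return countLarge(sorted);
    }

    @Benchmark
    public long unpredictable() {
        // random input: roughly half the branches are mispredicted
        return countLarge(unsorted);
    }
}
```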