feat: unified benchmark runner with composable config [will not merge]#3534
Closed
andygrove wants to merge 7 commits into apache:main from
Conversation
Replace scattered benchmark scripts (10 duplicated shell scripts in `dev/benchmarks/` and separate pyspark shuffle benchmarks) with a single composable framework under `benchmarks/`.

Key changes:
- Config system: profile (cluster shape) + engine (plugin/JARs) + CLI overrides with clear merge precedence
- Python entry point (`run.py`) that builds and executes spark-submit
- TPC-H/TPC-DS and shuffle benchmark suites
- Level 1 JVM profiling via Spark REST API
- Analysis tools for comparison charts and memory reports
- Docker and Kubernetes infrastructure
- 8 engine configs (spark, comet, comet-iceberg, gluten, blaze, plus 3 shuffle variants) and 5 profile configs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
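The merge precedence named in the commit message (profile supplies the cluster shape, the engine config layers plugin/JAR settings on top, and CLI overrides win over both) could be sketched roughly as below. The function and key names are illustrative assumptions, not the PR's actual `run.py` code; only the Comet plugin class is a real Spark conf value.

```python
def merge_config(profile: dict, engine: dict, overrides: dict) -> dict:
    """Merge benchmark config with clear precedence:
    profile (cluster shape) < engine (plugin/JARs) < CLI overrides."""
    merged = dict(profile)      # lowest precedence: cluster shape defaults
    merged.update(engine)       # engine settings win over the profile
    merged.update(overrides)    # explicit CLI flags win over everything
    return merged

# Example: a CLI override of executor memory beats the profile default,
# while the engine config contributes the plugin setting untouched.
profile = {"spark.executor.memory": "16g", "spark.executor.cores": "8"}
engine = {"spark.plugins": "org.apache.spark.CometPlugin"}
overrides = {"spark.executor.memory": "32g"}

conf = merge_config(profile, engine, overrides)
```

A flat `dict.update` chain is the simplest way to get "last writer wins" semantics; a real implementation might also validate that override keys are known Spark conf names.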
Port CometStringExpressionBenchmark as a micro suite in the unified benchmark runner. All 29 string expressions are supported, and the suite works with any engine (Comet, Gluten, Blaze, vanilla Spark). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Switch base image to Spark 3.5.2 for Gluten compatibility
- Install both Java 8 (Gluten) and Java 17 (Comet) in Docker image
- Fix run.py injecting --name before subcommand, breaking argparse
- Document Gluten Java 8 requirement and JAVA_HOME override workflow

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
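The Java 8 (Gluten) vs Java 17 (Comet) split above implies the runner must pick a `JAVA_HOME` per engine while still honoring an explicit user override. A hypothetical sketch, where the JVM paths and the mapping itself are assumptions about the Docker image, not the PR's actual code:

```python
# Hypothetical per-engine JAVA_HOME mapping for an image with both JDKs.
JAVA_HOMES = {
    "gluten": "/usr/lib/jvm/java-8-openjdk-amd64",   # Gluten requires Java 8
    "comet": "/usr/lib/jvm/java-17-openjdk-amd64",   # Comet runs on Java 17
}

def env_for_engine(engine: str, base_env: dict) -> dict:
    """Return a copy of the environment with JAVA_HOME set for the engine.
    An explicit JAVA_HOME already present in base_env always wins."""
    env = dict(base_env)
    if "JAVA_HOME" not in env and engine in JAVA_HOMES:
        env["JAVA_HOME"] = JAVA_HOMES[engine]
    return env
```

Keeping the caller's `JAVA_HOME` when one is already set matches the "JAVA_HOME override workflow" the commit message says is documented.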
TPC-H q21 Memory Profile: Comet vs Gluten (SF100, Docker, 1 executor / 8 cores / 16 GiB)

Ran TPC-H query 21 with JVM memory profiling enabled.

Configuration

Results (Executor 0)

maxMemory (executor 0): Comet = 25.42 GB | Gluten = 24.36 GB

Key Takeaways

Notes
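The maxMemory numbers above come from the PR's "Level 1" JVM profiling, which reads executor metrics over Spark's monitoring REST API. A minimal sketch: the `/api/v1/applications/<app-id>/executors` endpoint and its `maxMemory` field are part of Spark's documented monitoring API, but the function names and GB conversion here are illustrative, not the PR's code.

```python
import json
from urllib.request import urlopen

def fetch_executors(ui_url: str, app_id: str) -> list:
    """Fetch executor summaries from a running Spark application's UI,
    e.g. fetch_executors("http://localhost:4040", app_id)."""
    with urlopen(f"{ui_url}/api/v1/applications/{app_id}/executors") as resp:
        return json.load(resp)

def peak_memory_gb(executors: list, executor_id: str = "0") -> float:
    """Extract maxMemory (reported in bytes) for one executor, in GB."""
    for ex in executors:
        if ex["id"] == executor_id:
            return ex["maxMemory"] / 1e9
    raise KeyError(f"executor {executor_id} not found")
```

Note that Spark's `maxMemory` is the memory available for storage on that executor, so a comparison like the one above reflects JVM-side headroom rather than total process RSS.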
- Remove K8s manifests and k8s.conf profile (to be added in a follow-up issue once the core runner is merged)
- Add TPC-H (22) and TPC-DS (100) SQLBench query files under benchmarks/queries/ and embed them in the Docker image
- Update Dockerfile to COPY queries into /opt/benchmarks/queries/

TODO: file upstream issue for K8s support once PR apache#3534 is merged.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The TPC-H and TPC-DS query files use TPC Fair Use Policy headers, not Apache License headers, so they must be excluded from both the Maven RAT plugin and the release RAT script. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Part of #3440
I am going to break this PR down into some smaller PRs for easier review
Prior to this PR, we have three types of benchmark:

- `dev/benchmark` for TPC-* benchmarks
- `benchmarks/pyspark` for an ETL use case
- Scala microbenchmarks (e.g. `CometStringExpressionBenchmark`)

This PR implements a unified benchmark runner Python script, making it easy to run benchmarks with Spark, Spark+Comet, and Spark+Gluten.
It also adds memory profiling features, plus support for local execution and docker-compose. We can add k8s support in a future PR.
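At its core, a runner like this composes a `spark-submit` invocation from the merged config and either executes it or, with `--dry-run`, just prints it. A rough sketch under those assumptions; the function names and the exact flag handling are illustrative, not the PR's actual `run.py`:

```python
import shlex
import subprocess

def build_spark_submit(conf: dict, app: str, app_args: list) -> list:
    """Turn a merged config dict into a spark-submit argv list."""
    cmd = ["spark-submit"]
    for key, value in sorted(conf.items()):
        cmd += ["--conf", f"{key}={value}"]
    return cmd + [app] + app_args

def run(conf: dict, app: str, app_args: list, dry_run: bool = False) -> None:
    cmd = build_spark_submit(conf, app, app_args)
    if dry_run:
        # --dry-run: show the exact command instead of executing it
        print(shlex.join(cmd))
        return
    subprocess.run(cmd, check=True)
```

Building an argv list (rather than a shell string) avoids quoting bugs, and the dry-run path is what makes the "correct spark-submit command for each suite" check in the test plan below cheap to verify.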
I ported one microbenchmark over as an example, and will create follow-on PRs to port the remaining microbenchmarks, once this is merged.
The rest of this description was written by Claude Code.
Claude's Summary
Replace scattered benchmark scripts (10 duplicated shell scripts in `dev/benchmarks/` and separate pyspark shuffle benchmarks) with a single composable framework under `benchmarks/`.

- Python entry point (`run.py`): builds and executes `spark-submit` with `--dry-run` support
- Micro suite (ported `CometStringExpressionBenchmark.scala`)
- Analysis tools for comparison charts (`compare.py`) and memory reports (`memory_report.py`)

Test plan
- `--dry-run` produces correct spark-submit command for each suite
- `--engine comet --profile local`
- `--expression ascii` runs only a single expression
- `analysis/compare.py`
- `--profile` flag produces metrics CSV

🤖 Generated with Claude Code