diff --git a/.claude/skills/ci-e2e-debug/SKILL.md b/.claude/skills/ci-e2e-debug/SKILL.md new file mode 100644 index 000000000000..23723d85e983 --- /dev/null +++ b/.claude/skills/ci-e2e-debug/SKILL.md @@ -0,0 +1,41 @@ +--- +name: ci-e2e-debug +description: Download and inspect CI e2e test logs from GitHub Actions artifacts. Use when investigating e2e test failures in CI. +argument-hint: "<run-id>" +--- + +# CI E2E Debug Skill + +Download test log artifacts from a GitHub Actions CI run and inspect them for errors. + +## Steps + +1. **Find the artifact**: Use `gh api` to list artifacts for the given CI run: + ```bash + gh api repos/apache/skywalking/actions/runs/<run-id>/artifacts --jq '.artifacts[] | {id: .id, name: .name}' + ``` + +2. **Download and extract**: Download the artifact zip and extract it: + ```bash + cd /tmp && rm -rf e2e-debug-logs && mkdir e2e-debug-logs && cd e2e-debug-logs + gh api repos/apache/skywalking/actions/artifacts/<artifact-id>/zip > artifact.zip + unzip -o artifact.zip + ``` + +3. **Inspect OAP logs**: Look for errors in the OAP server logs: + ```bash + # Find OAP log files + find /tmp/e2e-debug-logs -name "skywalking-oap-*.log" -o -name "oap.log" + # Check for errors + grep -rE "ERROR|Exception|FATAL|CannotCompileException" /tmp/e2e-debug-logs | head -30 + ``` + +4. **Inspect other component logs**: Check BanyanDB, UI, and other pod logs as needed. + +5. **Report findings**: Summarize the root cause error from the logs.
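The steps above can be dry-run against a synthetic log tree to check the find/grep patterns before pointing them at a real artifact — the namespace, pod name, and log lines below are made up for illustration:

```shell
# Fake an extracted-artifact layout: <namespace>/<pod-name>/<log-file>
workdir="$(mktemp -d)"
mkdir -p "$workdir/istio-system/oap-pod-0"
cat > "$workdir/istio-system/oap-pod-0/skywalking-oap-server.log" <<'EOF'
2024-01-01 00:00:00 INFO  module starting
2024-01-01 00:00:01 ERROR CannotCompileException: [source error] no such field
EOF

# Same find/grep filters as step 3, pointed at the fake tree
hits="$(find "$workdir" -name 'skywalking-oap-*.log' \
  | xargs grep -hE 'ERROR|Exception|FATAL|CannotCompileException' | head -30)"
echo "$hits"
rm -rf "$workdir"
```

On a real investigation, the same pipeline runs over the tree extracted into `/tmp/e2e-debug-logs` in step 2.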
+ +## Notes +- CI artifacts are automatically uploaded by the e2e test framework to `$SW_INFRA_E2E_LOG_DIR` +- Log files are organized by namespace/pod name +- OAP init pods may have different errors than the main OAP pod — check all of them +- Common errors: MAL/LAL/OAL compilation failures, storage connection issues, module initialization errors diff --git a/.claude/skills/compile/SKILL.md b/.claude/skills/compile/SKILL.md new file mode 100644 index 000000000000..b13537057a95 --- /dev/null +++ b/.claude/skills/compile/SKILL.md @@ -0,0 +1,109 @@ +--- +name: compile +description: Build SkyWalking OAP server, run javadoc checks, and verify checkstyle. Use to validate changes before submitting a PR. +argument-hint: "[all|backend|javadoc|checkstyle|module-name]" +--- + +# Compile & Verify + +Build the project and run static checks matching CI. + +## Prerequisites + +- JDK 11, 17, or 21 (LTS versions) +- Maven 3.6+ (use `./mvnw` wrapper) + +## Maven profiles + +- `backend` (default): Builds OAP server modules +- `ui` (default): Builds web application +- `dist` (default): Creates distribution packages +- `all`: Builds everything including submodule initialization + +## Commands by argument + +### `all` or no argument — full CI build + +```bash +./mvnw clean flatten:flatten install javadoc:javadoc -B -q -Pall \ + -Dmaven.test.skip \ + -Dcheckstyle.skip \ + -Dgpg.skip +``` + +### `backend` — backend only (faster) + +```bash +./mvnw clean flatten:flatten package -Pbackend,dist -Dmaven.test.skip +``` + +### `javadoc` — javadoc check only + +Javadoc requires delombok output, so `install` must run first: + +```bash +./mvnw clean flatten:flatten install javadoc:javadoc -B -q -Pall \ + -Dmaven.test.skip \ + -Dcheckstyle.skip \ + -Dgpg.skip +``` + +Running `javadoc:javadoc` alone without `install` will miss errors because `${delombok.output.dir}` won't be populated. 
+ +### `checkstyle` — checkstyle only + +```bash +./mvnw -B -q clean flatten:flatten checkstyle:check +``` + +### Module name — single module build + +```bash +./mvnw clean flatten:flatten package -pl oap-server/analyzer/ -Dmaven.test.skip +``` + +## Reading javadoc output + +Maven prefixes all javadoc output with `[ERROR]`, but the actual severity is in the message after the line number. Only lines containing `error:` fail the build; lines with `warning:` do not. + +``` +[ERROR] Foo.java:42: error: bad use of '>' ← ACTUAL ERROR (must fix) +[ERROR] Foo.java:50: warning: no @param for ← WARNING (does not fail build) +``` + +### Common javadoc errors + +| Error | Cause | Fix | +|-------|-------|-----| +| `bad use of '>'` | Bare `>` in javadoc HTML (e.g., `->` in `
<pre>` blocks) | Use `{@code ->}` or escape as `-&gt;` |
+| heading out of sequence | Heading level skips the expected hierarchy | See heading rules below |
+| reference not found | `{@link Foo#bar()}` with wrong signature | Match exact parameter types: `{@link Foo#bar(ArgType)}` |
+
+### Javadoc heading rules (JDK 13+)
+
+Strict heading validation was introduced in JDK 13. JDK 11 does **not** enforce it, but JDK 17/21/25 do. Write headings correctly for forward compatibility:
+
+| Javadoc location | Start heading at |
+|---|---|
| Class, interface, enum, package, module | `<h2>` |
| Constructor, method, field | `<h4>` |
| Standalone HTML files (`doc-files/`) | `<h1>` |

The generated javadoc page uses `<h1>` for the class name and `<h3>` for member sections (Methods, Fields, etc.), so class-level subsections must use `<h2>` and method-level subsections must use `<h4>
` to maintain proper nesting. + +## CI reference + +CI uses JDK 11 on Linux. The `dist-tar` job runs: + +```bash +./mvnw clean flatten:flatten install javadoc:javadoc -B -q -Pall \ + -Dmaven.test.skip \ + -Dcheckstyle.skip \ + -Dgpg.skip +``` + +The `code-style` job runs: + +```bash +./mvnw -B -q clean flatten:flatten checkstyle:check +``` diff --git a/.claude/skills/generate-classes/SKILL.md b/.claude/skills/generate-classes/SKILL.md new file mode 100644 index 000000000000..578ed5041801 --- /dev/null +++ b/.claude/skills/generate-classes/SKILL.md @@ -0,0 +1,65 @@ +--- +name: generate-classes +description: Generate bytecode classes from DSL scripts (MAL, OAL, LAL, Hierarchy). Runs the compiler and dumps .class files for inspection. +argument-hint: "" +--- + +# Generate DSL Classes + +Run the v2 compiler (ANTLR4 + Javassist) to generate bytecode classes from DSL scripts and dump `.class` files to disk for inspection. + +## Commands by argument + +### `mal` — MAL expression classes + +```bash +./mvnw test -pl test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker \ + -Dtest=MalComparisonTest -DfailIfNoTests=false -Dcheckstyle.skip +``` + +Output location: `test/script-cases/scripts/mal/**/*.generated-classes/` + +### `oal` — OAL metrics/dispatcher/builder classes + +```bash +./mvnw test -pl oap-server/oal-rt \ + -Dtest=RuntimeOALGenerationTest -DfailIfNoTests=false -Dcheckstyle.skip +``` + +Output location: `oap-server/oal-rt/target/test-classes/metrics/`, `metrics/builder/`, `dispatcher/` + +### `lal` — LAL filter/extractor classes + +```bash +./mvnw test -pl test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker \ + -Dtest=LalComparisonTest -DfailIfNoTests=false -Dcheckstyle.skip +``` + +Output location: `test/script-cases/scripts/lal/**/*.generated-classes/` + +### `hierarchy` — Hierarchy rule classes + +```bash +./mvnw test -pl test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker \ + -Dtest=HierarchyRuleComparisonTest 
-DfailIfNoTests=false -Dcheckstyle.skip +``` + +Output location: `test/script-cases/scripts/hierarchy-rule/*.generated-classes/` + +### `all` or no argument — generate all DSLs + +Run all four commands above sequentially. + +## After generation + +Print the output location for the requested DSL so the user knows where to find the generated `.class` files. Use `javap` to decompile: + +```bash +javap -c -p +``` + +## Cleaning generated classes + +```bash +./mvnw clean -pl test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker,test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker +``` diff --git a/.claude/skills/gh-pull-request/SKILL.md b/.claude/skills/gh-pull-request/SKILL.md new file mode 100644 index 000000000000..fa08cd598fc5 --- /dev/null +++ b/.claude/skills/gh-pull-request/SKILL.md @@ -0,0 +1,112 @@ +--- +name: gh-pull-request +description: Verify, commit, and push changes on a PR branch. Runs pre-flight checks (compile, checkstyle, license headers) before every push. Also creates the PR if one doesn't exist yet. +--- + +# PR Branch Workflow + +Run pre-flight checks, commit, push, and optionally create a PR. + +## Pre-flight checks + +Run these checks before every commit+push and fix any failures: + +### 1. Compile and checkstyle + +```bash +# Checkstyle +./mvnw -B -q clean checkstyle:check + +# Full build (compile + javadoc) +./mvnw clean flatten:flatten install javadoc:javadoc -B -q -Pall \ + -Dmaven.test.skip \ + -Dcheckstyle.skip \ + -Dgpg.skip +``` + +### 2. License header check + +```bash +license-eye header check +``` + +If invalid files are found, fix with `license-eye header fix` and re-check. 
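The checks above are meant to run fail-fast, in order, before any push. A dry-run sketch with the real commands stubbed out as functions (swap the stub bodies for the actual `./mvnw` and `license-eye` invocations shown above):

```shell
# Stubbed pre-flight chain: && stops at the first failing check,
# mirroring how the real commands should gate a commit + push.
checkstyle()      { echo "1/3 checkstyle: OK"; }       # stub for: ./mvnw -B -q clean checkstyle:check
full_build()      { echo "2/3 build + javadoc: OK"; }  # stub for: ./mvnw ... install javadoc:javadoc ...
license_headers() { echo "3/3 license headers: OK"; }  # stub for: license-eye header check

summary="$(checkstyle && full_build && license_headers && echo 'pre-flight passed')"
echo "$summary"
```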
+ +## Commit and push + +After checks pass, commit and push: + +```bash +git add <changed-files> +git commit -m "<commit message>" +git push -u origin <branch-name> +``` + +### Branch strategy +- **Never work directly on master branch** +- If on master, create a new branch first: `git checkout -b feature/<feature-name>` or `git checkout -b fix/<fix-name>` + +## Create PR (if not yet created) + +Check whether a PR already exists for the current branch: + +```bash +gh pr view --json number 2>/dev/null +``` + +If no PR exists, create one: + +### PR title +Summarize the changes concisely. Examples: +- `Fix BanyanDB query timeout issue` +- `Add support for OpenTelemetry metrics` + +### PR description + +Read `.github/PULL_REQUEST_TEMPLATE` and use its **exact format with checkboxes**. Do NOT use a custom summary format. + +Key template sections — uncomment the relevant one: + +**For Bug Fixes:** +``` +### Fix +- [ ] Add a unit test to verify that the fix works. +- [ ] Explain briefly why the bug exists and how to fix it. +``` + +**For New Features:** +``` +### <Feature description> +- [ ] If this is non-trivial feature, paste the links/URLs to the design doc. +- [ ] Update the documentation to include this new feature. +- [ ] Tests(including UT, IT, E2E) are added to verify the new feature. +- [ ] If it's UI related, attach the screenshots below. +``` + +**For Performance Improvements:** +``` +### Improve the performance of <class or module> +- [ ] Add a benchmark for the improvement. +- [ ] The benchmark result. +- [ ] Links/URLs to the theory proof or discussion articles/blogs. +``` + +**Always include:** +``` +- [ ] If this pull request closes/resolves/fixes an existing issue, replace the issue number. Closes #<issue number>. +- [ ] Update the [`CHANGES` log](https://github.com/apache/skywalking/blob/master/docs/en/changes/changes.md).
+``` + +### Create command + +```bash +gh pr create --title "" --body "$(cat <<'EOF' +<PR body from template> +EOF +)" +``` + +### Post-creation +- Add `copilot` as a reviewer: `gh pr edit <number> --add-reviewer copilot` +- Do NOT add AI assistant as co-author. Code responsibility is on the committer's hands. +- Return the PR URL when done. diff --git a/.claude/skills/license/SKILL.md b/.claude/skills/license/SKILL.md new file mode 100644 index 000000000000..e1fc9a1913bc --- /dev/null +++ b/.claude/skills/license/SKILL.md @@ -0,0 +1,62 @@ +--- +name: license +description: Check and fix Apache 2.0 license headers and dependency licenses using skywalking-eyes. Use before submitting a PR. +argument-hint: "[check|fix|deps]" +--- + +# License Checks + +Check and fix license compliance using [skywalking-eyes](https://github.com/apache/skywalking-eyes). Two CI jobs use this tool: + +1. **license-header** — verifies all source files have Apache 2.0 headers +2. **dependency-license** — verifies the LICENSE file matches current dependencies + +## Steps + +### Header check (default, or `check` argument) + +```bash +license-eye header check +``` + +- **All valid**: Output shows `valid: N, invalid: 0` — nothing to do. +- **Invalid files found**: Fix with `license-eye header fix`, then re-check. + +### Header fix (`fix` argument) + +```bash +license-eye header fix +license-eye header check +``` + +### Dependency license check (`deps` argument) + +This regenerates the LICENSE file from dependency metadata and checks for drift: + +```bash +license-eye dependency resolve --summary ./dist-material/release-docs/LICENSE.tpl || exit 1 +if [ ! -z "$(git diff -U0 ./dist-material/release-docs/LICENSE)" ]; then + echo "LICENSE file is not updated correctly" + git diff -U0 ./dist-material/release-docs/LICENSE +fi +``` + +If the LICENSE file changed, review the diff and commit it. **Important**: CI runs on Linux — some dependencies have platform-specific variants. 
If you're on macOS/Windows, the LICENSE diff may be a platform artifact. Verify before committing. + +## Rules + +Configuration is in `.licenserc.yaml`: +- Java, XML, YAML/YML files require Apache 2.0 headers +- JSON and Markdown files are excluded (JSON doesn't support comments) +- Generated files and certain vendor paths are excluded +- SPI service files (`META-INF/services/`) require headers (use `#` comment style) + +## Installation + +```bash +# Same version as CI (pinned commit) +go install github.com/apache/skywalking-eyes/cmd/license-eye@5b7ee1731d036b5aac68f8bd3fc9e6f98ada082e + +# Or via Homebrew (macOS) +brew install license-eye +``` diff --git a/.claude/skills/run-e2e/SKILL.md b/.claude/skills/run-e2e/SKILL.md new file mode 100644 index 000000000000..6f6999de85ce --- /dev/null +++ b/.claude/skills/run-e2e/SKILL.md @@ -0,0 +1,146 @@ +--- +name: run-e2e +description: Run SkyWalking E2E tests locally +disable-model-invocation: true +argument-hint: "[test-case-path]" +--- + +# Run SkyWalking E2E Test + +Run an E2E test case using `skywalking-infra-e2e`. The user provides a test case path (e.g., `simple/jdk`, `storage/banyandb`, `alarm`). + +## Prerequisites + +All tools require **Go** installed. Check `.github/workflows/` for the exact `e2e` commit used in CI. + +### e2e CLI + +Built from [apache/skywalking-infra-e2e](https://github.com/apache/skywalking-infra-e2e), pinned by commit in CI: + +```bash +# Install the pinned commit +go install github.com/apache/skywalking-infra-e2e/cmd/e2e@<commit-id> + +# Or clone and build locally (useful when debugging the e2e tool itself) +git clone https://github.com/apache/skywalking-infra-e2e.git +cd skywalking-infra-e2e +git checkout <commit-id> +make build +# binary is in bin/e2e — add to PATH or copy to $GOPATH/bin +``` + +### swctl, yq, and other tools + +E2E test cases run pre-install steps (see `setup.steps` in each `e2e.yaml`) that install tools into `/tmp/skywalking-infra-e2e/bin`. 
When running locally, you need these tools on your PATH. + +**swctl** — SkyWalking CLI, used in verify cases to query OAP's GraphQL API. Pinned at `SW_CTL_COMMIT` in `test/e2e-v2/script/env`: + +```bash +# Option 1: Use the install script (same as CI) +bash test/e2e-v2/script/prepare/setup-e2e-shell/install.sh swctl +export PATH=/tmp/skywalking-infra-e2e/bin:$PATH + +# Option 2: Build from source +go install github.com/apache/skywalking-cli/cmd/swctl@<SW_CTL_COMMIT> +``` + +**yq** — YAML processor, used in verify cases: + +```bash +# Option 1: Use the install script +bash test/e2e-v2/script/prepare/setup-e2e-shell/install.sh yq +export PATH=/tmp/skywalking-infra-e2e/bin:$PATH + +# Option 2: brew install yq (macOS) +``` + +**Other tools** (only needed for specific test cases): + +| Tool | Install script | Used by | +|------|---------------|---------| +| `kubectl` | `install.sh kubectl` | Kubernetes-based tests | +| `helm` | `install.sh helm` | Helm chart tests | +| `istioctl` | `install.sh istioctl` | Istio/service mesh tests | +| `etcdctl` | `install.sh etcdctl` | etcd cluster tests | + +All install scripts are at `test/e2e-v2/script/prepare/setup-e2e-shell/`. + +## Steps + +### 1. Determine the test case + +Resolve the user's argument to a full path under `test/e2e-v2/cases/`. If ambiguous, list matching directories and ask. + +```bash +ls test/e2e-v2/cases/<argument>/e2e.yaml +``` + +### 2. 
Check if rebuild is needed + +Compare source file timestamps against the last build: + +```bash +# OAP server changes since last build +find oap-server apm-protocol -type f \( \ + -name "*.java" -o -name "*.yaml" -o -name "*.yml" -o \ + -name "*.json" -o -name "*.xml" -o -name "*.properties" -o \ + -name "*.proto" \ +\) -newer dist/apache-skywalking-apm-bin.tar.gz 2>/dev/null | head -5 + +# Test service changes since last build +find test/e2e-v2/java-test-service -type f \( \ + -name "*.java" -o -name "*.xml" -o -name "*.yaml" -o -name "*.yml" \ +\) -newer test/e2e-v2/java-test-service/e2e-service-provider/target/*.jar 2>/dev/null | head -5 +``` + +If files are found, warn the user and suggest rebuilding before running. + +### 3. Rebuild if needed (only with user confirmation) + +```bash +# Rebuild OAP +./mvnw clean flatten:flatten package -Pall -Dmaven.test.skip && make docker + +# Rebuild test services +./mvnw -f test/e2e-v2/java-test-service/pom.xml clean flatten:flatten package +``` + +### 4. Run the E2E test + +Set required environment variables and run: + +```bash +export SW_AGENT_JDK_VERSION=8 +e2e run -c test/e2e-v2/cases/<case-path>/e2e.yaml +``` + +### 5. If the test fails + +Do NOT run cleanup immediately. Instead: + +1. Check container logs: + ```bash + docker compose -f test/e2e-v2/cases/<case-path>/docker-compose.yml logs oap + docker compose -f test/e2e-v2/cases/<case-path>/docker-compose.yml logs provider + ``` + +2. Run verify separately (can retry after investigation): + ```bash + e2e verify -c test/e2e-v2/cases/<case-path>/e2e.yaml + ``` + +3. 
Only cleanup when done debugging: + ```bash + e2e cleanup -c test/e2e-v2/cases/<case-path>/e2e.yaml + ``` + +## Common test cases + +| Shorthand | Path | +|-----------|------| +| `simple/jdk` | `test/e2e-v2/cases/simple/jdk/` | +| `storage/banyandb` | `test/e2e-v2/cases/storage/banyandb/` | +| `storage/elasticsearch` | `test/e2e-v2/cases/storage/elasticsearch/` | +| `alarm` | `test/e2e-v2/cases/alarm/` | +| `log` | `test/e2e-v2/cases/log/` | +| `profiling/trace` | `test/e2e-v2/cases/profiling/trace/` | diff --git a/.claude/skills/test/SKILL.md b/.claude/skills/test/SKILL.md new file mode 100644 index 000000000000..cca8166bfcf3 --- /dev/null +++ b/.claude/skills/test/SKILL.md @@ -0,0 +1,81 @@ +--- +name: test +description: Run unit tests, integration tests, or slow integration tests matching CI. Use to validate changes before submitting a PR. +argument-hint: "[unit|integration|slow|module-name]" +--- + +# Tests + +Run tests matching CI configuration. + +## Commands by argument + +### `unit` or no argument — unit tests + +```bash +./mvnw clean test -q -B -D"checkstyle.skip" +``` + +CI runs this on: +- JDK 11: ubuntu, macOS, Windows +- JDK 17, 21, 25: ubuntu only + +### `integration` — integration tests (excludes slow) + +```bash +./mvnw -B clean integration-test -Dcheckstyle.skip -DskipUTs=true -DexcludedGroups=slow +``` + +CI runs this on JDK 11, 17, 21, 25 (ubuntu only). + +### `slow` — slow integration tests + +```bash +./mvnw -B clean integration-test -Dcheckstyle.skip -DskipUTs=true \ + -Dit.test=org.apache.skywalking.library.elasticsearch.ElasticSearchIT \ + -Dfailsafe.failIfNoSpecifiedTests=false +``` + +CI runs on JDK 11 (ubuntu only). Currently only ElasticSearch/OpenSearch IT is in the slow matrix. 
+ +### Module name — single module tests + +```bash +# Unit tests for a specific module +./mvnw clean test -pl oap-server/analyzer/<module-name> -D"checkstyle.skip" + +# Integration tests for a specific module +./mvnw -B clean integration-test -pl oap-server/analyzer/<module-name> -Dcheckstyle.skip -DskipUTs=true +``` + +## Test frameworks + +- JUnit 5 (`org.junit.jupiter`) +- Mockito for mocking +- AssertJ for assertions +- PowerMock for reflection utilities + +## Test naming conventions + +| Type | Pattern | Maven phase | +|------|---------|-------------| +| Unit tests | `*Test.java` | `test` | +| Integration tests | `IT*.java` or `*IT.java` | `integration-test` | + +## Slow test tagging + +Tests tagged with `@Tag("slow")` are excluded from the normal integration-test job and run separately in the slow-integration-test job. Use this tag for tests that spin up external services (Elasticsearch, etc.) and take significant time. + +## CI retry behavior + +All three CI jobs retry on failure (run the same command twice with `||`). This handles flaky tests but masks intermittent issues — if a test fails locally, investigate rather than relying on retries. 
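The `||` retry pattern can be illustrated with a command that fails only on its first attempt — the marker file below just simulates flakiness:

```shell
# CI-style retry: `cmd || cmd` reruns the identical command once on failure.
marker="$(mktemp -u)"            # a path that does not exist yet
flaky_test() {
  if [ ! -e "$marker" ]; then
    : > "$marker"                # leave state behind so the retry passes
    echo "attempt 1: FAIL"
    return 1
  fi
  echo "attempt 2: PASS"
}
result="$(flaky_test || flaky_test)"
echo "$result"
rm -f "$marker"
```

CI joins the real test command to itself with `||` in exactly this shape; a test that only ever passes on attempt 2 is the intermittent issue this masks.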
+ +## CI reference + +CI workflow: `.github/workflows/skywalking.yaml` + +| Job | JDK | OS | Timeout | +|-----|-----|----|---------| +| `unit-test` | 11 (3 OS) + 17, 21, 25 (ubuntu) | ubuntu, macOS, Windows | 30 min | +| `integration-test` | 11, 17, 21, 25 | ubuntu | 60 min | +| `slow-integration-test` | 11 | ubuntu | 60 min | diff --git a/.github/workflows/skywalking.yaml b/.github/workflows/skywalking.yaml index cfa5adb4b89d..76abddc09ef6 100644 --- a/.github/workflows/skywalking.yaml +++ b/.github/workflows/skywalking.yaml @@ -82,7 +82,8 @@ jobs: - name: Check Dependencies Licenses run: | go install github.com/apache/skywalking-eyes/cmd/license-eye@5b7ee1731d036b5aac68f8bd3fc9e6f98ada082e - license-eye dependency resolve --summary ./dist-material/release-docs/LICENSE.tpl || exit 1 + ./mvnw flatten:flatten install -Dmaven.test.skip -Dcheckstyle.skip -Dgpg.skip -q + license-eye -v debug dependency resolve --summary ./dist-material/release-docs/LICENSE.tpl || exit 1 if [ ! -z "$(git diff -U0 ./dist-material/release-docs/LICENSE)" ]; then echo "LICENSE file is not updated correctly" git diff -U0 ./dist-material/release-docs/LICENSE diff --git a/.gitignore b/.gitignore index 38241c2e1b25..f1e59ed33fcb 100644 --- a/.gitignore +++ b/.gitignore @@ -1,7 +1,6 @@ /build/ target/ .idea/ -.flattened-pom.xml *.iml .classpath .project @@ -33,3 +32,12 @@ oap-server/server-starter/src/main/resources/version.properties # Benchmark reports and downloaded tools benchmarks/reports/ benchmarks/.istio/ + +# Generated .class files from v2 compiler (created during v1-v2 checker tests) +test/script-cases/scripts/**/*.generated-classes/ + +# Flattened pom files generated by flatten-maven-plugin +.flattened-pom.xml + +# Claude Code local settings +.claude/settings.local.json diff --git a/CLAUDE.md b/CLAUDE.md index dfa342ad0b87..d26fe88a97fe 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -38,49 +38,6 @@ skywalking/ └── tools/ # Development tools ``` -## Build System - -### Prerequisites -- JDK 
11, 17, or 21 (LTS versions) -- Maven 3.6+ -- Git (with submodule support) - -### Common Build Commands - -```bash -# Clone with submodules -git clone --recurse-submodules https://github.com/apache/skywalking.git - -# Or initialize submodules after clone -git submodule init && git submodule update - -# Full build (skip tests) -./mvnw clean package -Dmaven.test.skip - -# Build backend only -./mvnw package -Pbackend,dist -# or: make build.backend - -# Build UI only -./mvnw package -Pui,dist -# or: make build.ui - -# Run tests -./mvnw test - -# Run integration tests -./mvnw integration-test - -# Build with all profiles -./mvnw clean package -Pall -Dmaven.test.skip -``` - -### Maven Profiles -- `backend` (default): Builds OAP server modules -- `ui` (default): Builds web application -- `dist` (default): Creates distribution packages -- `all`: Builds everything including submodule initialization - ## Architecture & Key Concepts ### Module System @@ -216,321 +173,6 @@ grep -r "\.toList()" src/ --include="*.java" grep -r '"""' src/ --include="*.java" ``` -## Testing - -### Test Frameworks -- JUnit 5 (`org.junit.jupiter`) -- Mockito for mocking -- AssertJ for assertions -- PowerMock for reflection utilities - -### Test Naming -- Unit tests: `*Test.java` -- Integration tests: `IT*.java` or `*IT.java` - -### Running Tests -```bash -# Unit tests only -./mvnw test - -# Integration tests -./mvnw integration-test - -# Skip tests during build -./mvnw package -Dmaven.test.skip -``` - -## E2E Testing - -SkyWalking uses [Apache SkyWalking Infra E2E](https://github.com/apache/skywalking-infra-e2e) for end-to-end testing. E2E tests validate the entire system including OAP server, storage backends, agents, and integrations. 
- -### E2E Tool Installation - -```bash -# Install the same version used in CI (recommended) -go install github.com/apache/skywalking-infra-e2e/cmd/e2e@e7138da4f9b7a25a169c9f8d995795d4d2e34bde - -# Verify installation -e2e --help -``` - -### E2E Test Structure - -``` -test/e2e-v2/ -├── cases/ # 50+ test case directories -│ ├── simple/jdk/ # Basic Java agent test -│ ├── storage/ # Storage backend tests (BanyanDB, ES, MySQL, PostgreSQL) -│ ├── alarm/ # Alerting tests -│ ├── profiling/ # Profiling tests (trace, eBPF, async) -│ ├── kafka/ # Kafka integration -│ ├── istio/ # Service mesh tests -│ └── ... -├── script/ -│ ├── env # Environment variables (agent commits, versions) -│ ├── docker-compose/ -│ │ └── base-compose.yml # Base service definitions (oap, banyandb, provider, consumer) -│ └── prepare/ -│ └── setup-e2e-shell/ # Tool installers (swctl, yq, kubectl, helm) -└── java-test-service/ # Test service implementations - ├── e2e-service-provider/ - ├── e2e-service-consumer/ - └── ... -``` - -### E2E Configuration (e2e.yaml) - -Each test case has an `e2e.yaml` with four sections: - -```yaml -setup: - env: compose # Environment: compose or kind (Kubernetes) - file: docker-compose.yml # Docker compose file - timeout: 20m # Setup timeout - init-system-environment: ../../../script/env # Shared env variables - steps: # Initialization steps - - name: install swctl - command: bash test/e2e-v2/script/prepare/setup-e2e-shell/install.sh swctl - -trigger: - action: http # Generate test traffic - interval: 3s - times: -1 # -1 = run until verify succeeds - url: http://${consumer_host}:${consumer_9092}/users - method: POST - body: '{"id":"123","name":"skywalking"}' - -verify: - retry: - count: 20 - interval: 10s - cases: - - includes: - - ../simple-cases.yaml # Reusable verification cases - - query: swctl --display yaml --base-url=http://${oap_host}:${oap_12800}/graphql metrics exec ... 
- expected: expected/metrics.yml - -cleanup: - on: always # always|success|failure|never -``` - -### Running E2E Tests Locally - -**Prerequisites:** -- Docker and Docker Compose -- Go (for e2e tool installation) - -**Quick Start (run simple/jdk test):** -```bash -# 1. Build distribution and Docker image -./mvnw clean package -Pall -Dmaven.test.skip -make docker - -# 2. Build test services -./mvnw -f test/e2e-v2/java-test-service/pom.xml clean package - -# 3. Run e2e test (SW_AGENT_JDK_VERSION is required) -SW_AGENT_JDK_VERSION=8 e2e run -c test/e2e-v2/cases/simple/jdk/e2e.yaml -``` - -**Step-by-step debugging:** -```bash -# Set required environment variable -export SW_AGENT_JDK_VERSION=8 - -# Run individual steps instead of full test -e2e setup -c test/e2e-v2/cases/simple/jdk/e2e.yaml # Start containers -e2e trigger -c test/e2e-v2/cases/simple/jdk/e2e.yaml # Generate traffic -e2e verify -c test/e2e-v2/cases/simple/jdk/e2e.yaml # Validate results -e2e cleanup -c test/e2e-v2/cases/simple/jdk/e2e.yaml # Stop containers -``` - -### E2E CLI Commands - -| Command | Description | -|---------|-------------| -| `e2e run -c <path>` | Run complete test (setup → trigger → verify → cleanup) | -| `e2e setup -c <path>` | Start containers and initialize environment | -| `e2e trigger -c <path>` | Generate test traffic | -| `e2e verify -c <path>` | Validate results against expected output | -| `e2e cleanup -c <path>` | Stop and remove containers | - -### Common Test Cases - -| Category | Path | Description | -|----------|------|-------------| -| `simple/jdk` | `test/e2e-v2/cases/simple/jdk/` | Basic Java agent with BanyanDB | -| `storage/banyandb` | `test/e2e-v2/cases/storage/banyandb/` | BanyanDB storage backend | -| `storage/elasticsearch` | `test/e2e-v2/cases/storage/elasticsearch/` | Elasticsearch storage | -| `alarm/` | `test/e2e-v2/cases/alarm/` | Alerting functionality | -| `profiling/trace` | `test/e2e-v2/cases/profiling/trace/` | Trace profiling | -| `log/` | 
`test/e2e-v2/cases/log/` | Log analysis (LAL) | - -### Writing E2E Tests - -1. **Create test directory** under `test/e2e-v2/cases/<category>/<name>/` - -2. **Create docker-compose.yml** extending base services: - ```yaml - version: '2.1' - services: - oap: - extends: - file: ../../../script/docker-compose/base-compose.yml - service: oap - banyandb: - extends: - file: ../../../script/docker-compose/base-compose.yml - service: banyandb - ``` - -3. **Create e2e.yaml** with setup, trigger, verify sections - -4. **Create expected/ directory** with expected YAML outputs for verification - -5. **Create verification cases** (e.g., `simple-cases.yaml`) with swctl queries - -### Verification with swctl - -The `swctl` CLI queries OAP's GraphQL API: - -```bash -# Query service metrics -swctl --display yaml --base-url=http://${oap_host}:${oap_12800}/graphql \ - metrics exec --expression=service_resp_time --service-name=e2e-service-provider - -# List services -swctl --display yaml --base-url=http://${oap_host}:${oap_12800}/graphql \ - service ls - -# Query traces -swctl --display yaml --base-url=http://${oap_host}:${oap_12800}/graphql \ - trace ls --service-name=e2e-service-provider -``` - -### Environment Variables - -Key version commits in `test/e2e-v2/script/env`: -- `SW_AGENT_JAVA_COMMIT` - Java agent version -- `SW_BANYANDB_COMMIT` - BanyanDB version -- `SW_CTL_COMMIT` - swctl CLI version -- `SW_AGENT_*_COMMIT` - Other agent versions (Go, Python, NodeJS, PHP) - -### Debugging E2E Tests - -**If a test fails, do NOT run cleanup immediately.** Keep containers running to debug: - -```bash -# 1. Setup containers (only once) -e2e setup -c test/e2e-v2/cases/simple/jdk/e2e.yaml - -# 2. Generate traffic -e2e trigger -c test/e2e-v2/cases/simple/jdk/e2e.yaml - -# 3. 
Verify (can re-run multiple times after fixing issues) -e2e verify -c test/e2e-v2/cases/simple/jdk/e2e.yaml - -# Check container logs to debug failures -docker compose -f test/e2e-v2/cases/simple/jdk/docker-compose.yml logs oap -docker compose -f test/e2e-v2/cases/simple/jdk/docker-compose.yml logs provider - -# Only cleanup when done debugging -e2e cleanup -c test/e2e-v2/cases/simple/jdk/e2e.yaml -``` - -**Determining if rebuild is needed:** - -Compare file timestamps against last package build. If any files changed after package, rebuild is needed: -```bash -# Find OAP runtime files modified after package was built -find oap-server apm-protocol -type f \( \ - -name "*.java" -o -name "*.yaml" -o -name "*.yml" -o \ - -name "*.json" -o -name "*.xml" -o -name "*.properties" -o \ - -name "*.proto" \ -\) -newer dist/apache-skywalking-apm-bin.tar.gz 2>/dev/null - -# Find test service files modified after last build (needs service rebuild) -find test/e2e-v2/java-test-service -type f \( \ - -name "*.java" -o -name "*.xml" -o -name "*.yaml" -o -name "*.yml" \ -\) -newer test/e2e-v2/java-test-service/e2e-service-provider/target/*.jar 2>/dev/null - -# Find test case config files modified after package was built -find test/e2e-v2/cases -type f \( \ - -name "*.yaml" -o -name "*.yml" -o -name "*.json" \ -\) -newer dist/apache-skywalking-apm-bin.tar.gz 2>/dev/null -``` - -Also compare git commit ID in binary vs current HEAD: -```bash -# Commit ID in packaged binary -unzip -p dist/apache-skywalking-apm-bin/oap-libs/server-starter-*.jar version.properties | grep git.commit.id - -# Current HEAD -git rev-parse HEAD -``` - -**If rebuild is needed, stop e2e first:** -```bash -# 1. Cleanup running containers -e2e cleanup -c test/e2e-v2/cases/simple/jdk/e2e.yaml - -# 2. Rebuild OAP (if oap-server/apm-protocol files changed) -./mvnw clean package -Pall -Dmaven.test.skip && make docker - -# 3. 
Rebuild test services (if java-test-service files changed) -./mvnw -f test/e2e-v2/java-test-service/pom.xml clean package - -# 4. Restart e2e -e2e setup -c test/e2e-v2/cases/simple/jdk/e2e.yaml -e2e trigger -c test/e2e-v2/cases/simple/jdk/e2e.yaml -e2e verify -c test/e2e-v2/cases/simple/jdk/e2e.yaml -``` - -## License Checks (skywalking-eyes) - -SkyWalking uses [Apache SkyWalking Eyes](https://github.com/apache/skywalking-eyes) for license header and dependency license checks. **License checks must pass before submitting a PR.** - -### Installation - -```bash -# Install the same version used in CI -go install github.com/apache/skywalking-eyes/cmd/license-eye@5b7ee1731d036b5aac68f8bd3fc9e6f98ada082e - -# Or via Homebrew (macOS) -brew install license-eye -``` - -### Commands - -```bash -# Check license headers in source files (fast, run before PR) -license-eye header check - -# Fix missing license headers automatically -license-eye header fix - -# Generate LICENSE file from dependencies -license-eye dependency resolve --summary ./dist-material/release-docs/LICENSE.tpl - -# Check if LICENSE file needs update -git diff -U0 ./dist-material/release-docs/LICENSE -``` - -### Configuration - -Configuration is in `.licenserc.yaml`: -- Defines Apache-2.0 license header -- Lists paths to ignore (e.g., `**/*.md`, `**/*.json`, generated files) -- Configures dependency license mappings - -### CI Behavior - -The CI runs on **Linux (ubuntu-latest)** with two checks: -1. **license-header**: Verifies all source files have proper Apache-2.0 headers -2. **dependency-license**: Regenerates LICENSE file and fails if it differs from committed version - -**Important:** Some dependencies have platform-specific variants (Windows/macOS suffixes). The LICENSE file should reflect Linux dependencies since CI runs on Linux. If `dependency-license` fails and you're on macOS/Windows, ask maintainers to verify before committing LICENSE changes. 
- ## Git Submodules The project uses submodules for protocol definitions and UI: @@ -586,45 +228,13 @@ Always use `--recurse-submodules` when cloning or update submodules manually. ## Submitting Pull Requests -### Branch Strategy -- **Never work directly on master branch** -- Create a new branch for your changes: `git checkout -b feature/your-feature-name` or `git checkout -b fix/your-fix-name` -- Keep branch names descriptive and concise - -### PR Title -Summarize the changes in the PR title. Examples: -- `Fix BanyanDB query timeout issue` -- `Add support for OpenTelemetry metrics` -- `Improve documentation structure` - -### PR Description -Follow the PR template in `.github/PULL_REQUEST_TEMPLATE`. Key requirements: - -**For Bug Fixes:** -- Add unit test to verify the fix -- Explain briefly why the bug exists and how to fix it - -**For New Features:** -- Link to design doc if non-trivial -- Update documentation -- Add tests (UT, IT, E2E) -- Attach screenshots if UI related - -**For Performance Improvements:** -- Add benchmark for the improvement -- Include benchmark results -- Link to theory proof or discussion articles - -**Always:** -- Reference related issue: `Closes #<issue number>` -- Update [`CHANGES` log](https://github.com/apache/skywalking/blob/master/docs/en/changes/changes.md) -- Add `copilot` as a reviewer for AI-assisted code review -- Do NOT add AI assistant as co-author. Code responsibility is on the committer's hands. +Use the `/gh-pull-request` skill for committing and pushing to a PR branch. It runs pre-flight checks (compile, checkstyle, license headers) before every push, and creates the PR if one doesn't exist yet. ## Tips for AI Assistants 1. **Always check submodules**: Protocol changes may require submodule updates 2. **Generate sources first**: Run `mvnw compile` before analyzing generated code +3. **Install package**: Use `mvnw flatten:flatten install` to build the precompiler and export generated classes before running tests. 
Refer to the [compile skill doc](.claude/skills/compile/SKILL.md). 3. **Respect checkstyle**: No System.out, no @author, no Chinese characters 4. **Follow module patterns**: Use existing modules as templates 5. **Check multiple storage implementations**: Logic may vary by storage type @@ -633,4 +243,3 @@ Follow the PR template in `.github/PULL_REQUEST_TEMPLATE`. Key requirements: 8. **Test both unit and integration**: Different test patterns for different scopes 9. **Documentation is rendered via markdown**: When reviewing docs, consider how they will be rendered by a markdown engine 10. **Relative paths in docs are valid**: Relative file paths (e.g., `../../../oap-server/...`) in documentation work both in the repo and on the documentation website, supported by website build tooling -11. **Read PR template before creating PR**: Always read `.github/PULL_REQUEST_TEMPLATE` and use its exact format with checkboxes, not a custom summary diff --git a/dist-material/release-docs/LICENSE b/dist-material/release-docs/LICENSE index a23629e1ed5f..af44243e39c9 100644 --- a/dist-material/release-docs/LICENSE +++ b/dist-material/release-docs/LICENSE @@ -347,7 +347,6 @@ The text of each license is the standard Apache 2.0 license.
https://mvnrepository.com/artifact/org.apache.curator/curator-framework/4.3.0 Apache-2.0 https://mvnrepository.com/artifact/org.apache.curator/curator-recipes/4.3.0 Apache-2.0 https://mvnrepository.com/artifact/org.apache.curator/curator-x-discovery/4.3.0 Apache-2.0 - https://mvnrepository.com/artifact/org.apache.groovy/groovy/5.0.3 Apache-2.0 https://mvnrepository.com/artifact/org.apache.httpcomponents/httpasyncclient/4.1.5 Apache-2.0 https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient/4.5.13 Apache-2.0 https://mvnrepository.com/artifact/org.apache.httpcomponents/httpcore/4.4.16 Apache-2.0 diff --git a/docker/.env b/docker/.env index d4fa1add3897..8ca45571cd8f 100644 --- a/docker/.env +++ b/docker/.env @@ -6,6 +6,6 @@ # docker compose up ELASTICSEARCH_IMAGE=docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.2 -BANYANDB_IMAGE=ghcr.io/apache/skywalking-banyandb:84f32b3969cdcc676aaee428383b34b3b67dbdf5 +BANYANDB_IMAGE=ghcr.io/apache/skywalking-banyandb:7568a326bb7b10b6aa804bf0f4239904c347d9d5 OAP_IMAGE=ghcr.io/apache/skywalking/oap:latest UI_IMAGE=ghcr.io/apache/skywalking/ui:latest diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml index cecefb30d876..7f6e56474dfc 100644 --- a/docker/docker-compose.yml +++ b/docker/docker-compose.yml @@ -43,7 +43,7 @@ services: banyandb: profiles: - "banyandb" - image: ${BANYANDB_IMAGE:-ghcr.io/apache/skywalking-banyandb:a091ac0c3efa7305288ae9fb8853bffb2186583a} + image: ${BANYANDB_IMAGE} container_name: banyandb networks: - demo diff --git a/docs/en/academy/dsl-compiler-design.md b/docs/en/academy/dsl-compiler-design.md new file mode 100644 index 000000000000..7d415cd8f0d6 --- /dev/null +++ b/docs/en/academy/dsl-compiler-design.md @@ -0,0 +1,163 @@ +# DSL Compiler Design: ANTLR4 + Javassist + +## Overview + +SkyWalking OAP server uses four domain-specific languages (DSLs) for telemetry analysis. 
+All four share the same compilation tech stack: **ANTLR4** for grammar parsing and **Javassist** for +runtime bytecode generation. + +| DSL | Purpose | Input | Generated Output | +|-----|---------|-------|-----------------| +| **OAL** (Observability Analysis Language) | Trace/mesh metrics aggregation | `.oal` script files | Metrics classes, builders, dispatchers | +| **MAL** (Meter Analysis Language) | Meter/metrics expression evaluation | YAML config `exp` fields | `MalExpression` implementations | +| **LAL** (Log Analysis Language) | Log processing pipelines | YAML config `filter` blocks | `LalExpression` implementations | +| **Hierarchy Matching Rules** | Service hierarchy relationship matching | YAML config expressions | `BiFunction<Service, Service, Boolean>` implementations | + +## Compilation Pipeline + +All four DSLs follow the same three-phase compilation pipeline at OAP startup: + +``` +DSL string (from .oal script or YAML config) + | + v +Phase 1: ANTLR4 Parsing + Lexer + Parser (generated from .g4 grammars at build time) + → Immutable AST model + | + v +Phase 2: Java Source Generation + Walk AST model, emit Java source code as strings + | + v +Phase 3: Javassist Bytecode Generation + ClassPool.makeClass() → CtClass → addMethod(source) → toClass() + → Ready-to-use class instance loaded into JVM +``` + +### What Each DSL Generates + +| DSL | Interface / Base Class | Key Method | +|-----|----------------------|------------| +| OAL | Extends metrics function class (e.g., `LongAvgMetrics`) | `id()`, `serialize()`, `deserialize()`, plus dispatcher `dispatch(source)` | +| MAL metric | `MalExpression` | `SampleFamily run(Map<String, SampleFamily> samples)` | +| MAL filter | `Predicate<Map<String, String>>` | `boolean test(Map<String, String> tags)` | +| LAL | `LalExpression` | `void execute(FilterSpec filterSpec, ExecutionContext ctx)` | +| Hierarchy | `BiFunction<Service, Service, Boolean>` | `Boolean apply(Service upper, Service lower)` | + +OAL is the most 
complex -- it generates **three classes per metric** (metrics class with storage annotations, +metrics builder for serialization, and source dispatcher for routing), whereas MAL/LAL/Hierarchy each generate +a single functional class per expression. + +## ANTLR4 Grammars + +Each DSL has its own ANTLR4 lexer and parser grammar. The Maven ANTLR4 plugin generates Java lexer/parser +classes at build time; these are then used at runtime to parse DSL strings. + +| DSL | Grammar Location | +|-----|-----------------| +| OAL | `oap-server/oal-grammar/src/main/antlr4/.../OALLexer.g4`, `OALParser.g4` | +| MAL | `oap-server/analyzer/meter-analyzer/src/main/antlr4/.../MALLexer.g4`, `MALParser.g4` | +| LAL | `oap-server/analyzer/log-analyzer/src/main/antlr4/.../LALLexer.g4`, `LALParser.g4` | +| Hierarchy | `oap-server/analyzer/hierarchy/src/main/antlr4/.../HierarchyRuleLexer.g4`, `HierarchyRuleParser.g4` | + +## Javassist Constraints + +Javassist compiles Java source strings into bytecode but has limitations that shape the code generation: + +- **No anonymous inner classes or lambdas** -- Callback-based APIs require workarounds. + LAL uses private methods called directly from `execute()` instead of Consumer callbacks. + OAL pre-compiles callbacks as separate `CtClass` instances where needed. +- **No generics in method bodies** -- Generated source uses raw types with explicit casts. +- **Class loading anchor** -- Each DSL uses a `PackageHolder` marker class so that + `ctClass.toClass(PackageHolder.class)` loads the generated class into the correct module/package + (required for JDK 9+ module system). + +OAL additionally uses **FreeMarker templates** to generate method bodies for metrics classes, builders, and +dispatchers, since these classes are more complex and benefit from template-driven generation. 
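+As a rough illustration of Phase 2, the following is a minimal, hypothetical sketch — not the actual OAP
+generator; the class and field names are invented — of walking a tiny AST node and emitting a method body
+as a Java source string. Phase 3 would then hand such a string to Javassist (e.g. `CtNewMethod.make`),
+which is why the emitted code avoids generics and lambdas:
+
+```java
+// Hypothetical mini-AST for a MAL-style filter such as: tags.job_name == "vm-monitoring"
+// The real MAL/LAL models are far richer; this only illustrates the AST -> source-string phase.
+public class FilterSourceGenerator {
+    // Immutable AST node: a single equality comparison on a tag value.
+    public static final class TagEquals {
+        final String tagName;
+        final String expected;
+        public TagEquals(String tagName, String expected) {
+            this.tagName = tagName;
+            this.expected = expected;
+        }
+    }
+
+    // Phase 2: walk the AST and emit the body of Predicate.test(Object) as plain source.
+    // Javassist cannot compile generics or lambdas, so the emitted code uses raw types and casts.
+    public static String generateTestMethod(TagEquals node) {
+        return "public boolean test(Object arg) {\n"
+             + "    java.util.Map tags = (java.util.Map) arg;\n"
+             + "    Object v = tags.get(\"" + node.tagName + "\");\n"
+             + "    return \"" + node.expected + "\".equals(v);\n"
+             + "}";
+    }
+
+    public static void main(String[] args) {
+        System.out.println(generateTestMethod(new TagEquals("job_name", "vm-monitoring")));
+    }
+}
+```
+
+In the real pipeline the resulting string is compiled into a loaded class in Phase 3; here it is only
+printed so the intermediate representation is visible.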
+ +## Module Structure + +``` +oap-server/ + oal-grammar/ # OAL: ANTLR4 grammar + oal-rt/ # OAL: compiler + runtime (Javassist + FreeMarker) + analyzer/ + meter-analyzer/ # MAL: grammar + compiler + runtime + log-analyzer/ # LAL: grammar + compiler + runtime + hierarchy/ # Hierarchy: grammar + compiler + runtime + agent-analyzer/ # Calls MAL compiler for meter data +``` + +OAL keeps grammar and runtime in separate modules (`oal-grammar` and `oal-rt`) because `server-core` +depends on the grammar while the runtime implementation depends on `server-core` (avoiding circular +dependency). MAL, LAL, and Hierarchy are each self-contained in a single module. + +## Groovy Replacement (MAL, LAL, Hierarchy) + +Reference: [Discussion #13716](https://github.com/apache/skywalking/discussions/13716) + +MAL, LAL, and Hierarchy previously used **Groovy** as the runtime scripting engine. OAL has always used +ANTLR4 + Javassist. The Groovy-based DSLs were replaced for the following reasons: + +1. **Startup cost** -- 1,250+ `GroovyShell.parse()` calls at OAP boot, each spinning up the full Groovy + compiler pipeline. + +2. **Runtime execution overhead** -- MAL expressions execute on every metrics ingestion cycle. Per-expression + overhead from dynamic Groovy compounds at scale: property resolution through 4+ layers of indirection, + `ExpandoMetaClass` closure allocation for simple arithmetic, and megamorphic call sites that defeat JIT + optimization. + +3. **Late error detection** -- MAL uses dynamic Groovy; typos in metric names or invalid method chains are + only discovered when that specific expression runs with real data. + +4. **Debugging complexity** -- Stack traces include Groovy MOP internals (`CallSite`, `MetaClassImpl`, + `ExpandoMetaClass`), obscuring the actual expression logic. + +5. 
**GraalVM incompatibility** -- `invokedynamic` bootstrapping and `ExpandoMetaClass` are fundamentally + incompatible with ahead-of-time (AOT) compilation, blocking the + [GraalVM native-image distribution](https://github.com/apache/skywalking-graalvm-distro). + +The DSL grammar for users remains **100% unchanged** -- the same expressions written in YAML config files +work exactly as before. Only the internal compilation engine was replaced. + +### Verification: Groovy v1 Checker + +To ensure the new Java compilers produce identical results to the original Groovy implementation, +a **dual-path comparison test suite** is maintained under `test/script-cases/`: + +``` +test/script-cases/ + scripts/ + mal/ # Copies of shipped MAL configs (test-otel-rules, test-meter-analyzer-config, etc.) + lal/ # Copies of shipped LAL scripts (test-lal/) + hierarchy-rule/ # Copy of shipped hierarchy-definition.yml + script-runtime-with-groovy/ + mal-v1-with-groovy/ # MAL v1: original Groovy-based implementation + lal-v1-with-groovy/ # LAL v1: original Groovy-based implementation + hierarchy-v1-with-groovy/ # Hierarchy v1: original Groovy-based implementation + mal-lal-v1-v2-checker/ # Runs every MAL/LAL expression through BOTH v1 and v2, compares results + hierarchy-v1-v2-checker/ # Runs every hierarchy rule through BOTH v1 and v2, compares results +``` + +The checker mechanism: + +1. Loads all test copies of production YAML config files from `test/script-cases/scripts/` +2. For each DSL expression, compiles with **both** v1 (Groovy) and v2 (ANTLR4 + Javassist) +3. Compares the results: + - **MAL**: Two-level comparison for each expression: + 1. **Metadata comparison** -- sample names, aggregation labels, downsampling type, percentile config + 2. **Runtime execution comparison** -- builds mock `SampleFamily` input data from `ExpressionMetadata`, + executes with both v1 and v2, compares output samples (count, labels, values with epsilon). 
+ For `increase()`/`rate()` expressions, the `CounterWindow` is primed with an initial run before + comparing the second run's output. + - **LAL**: Runtime execution comparison -- both v1 and v2 execute with mock LogData, + then compare execution state (service, layer, tags, abort/save flags). + For rules requiring extraLog (e.g., envoy-als), mock proto data is built from `.input.data` files + and the `LALSourceTypeProvider` SPI resolves the proto type per layer. + Test scripts include both copies of production configs (`oap-cases/`) and + dedicated feature-coverage rules (`feature-cases/`). + - **Hierarchy**: Compare `BiFunction` evaluation with test Service pairs + +This ensures 100% behavioral parity. The Groovy v1 modules are **test-only dependencies** -- they are not +included in the OAP distribution. \ No newline at end of file diff --git a/docs/en/changes/changes.md b/docs/en/changes/changes.md index 5752f12b6d2a..47cc56f67b57 100644 --- a/docs/en/changes/changes.md +++ b/docs/en/changes/changes.md @@ -7,9 +7,18 @@ - Precise error location reporting with file, line, and column numbers - Clean separation between parsing and code generation phases - Enhanced testability with models that can be constructed without parsing +* Introduce MAL/LAL/Hierarchy V2 engine — replace Groovy-based DSL runtime with ANTLR4 parser + Javassist bytecode generation: + - Remove Groovy runtime dependency from OAP backend + - Fail-fast compilation at startup — syntax and type errors are caught immediately instead of at first execution + - Thread-safe generated classes with no ThreadLocal or shared mutable state + - Immutable AST models for all three DSLs (MAL, LAL, Hierarchy rules) + - Explicit context passing replaces Groovy binding/closure capture + - v1 (Groovy) and v2 (ANTLR4+Javassist) cross-version checker validates behavioral equivalence across 1,290+ expressions + - JMH benchmarks confirm v2 runtime speedups: MAL execute ~6.8x, LAL compile ~39x / execute ~2.8x, Hierarchy 
execute ~2.6x faster than Groovy v1 * Fix E2E test metrics verify: make it failure if the metric values all null. * Support building, testing, and publishing with Java 25. * Add `CLAUDE.md` as AI assistant guide for the project. +* Upgrade Byte Buddy to 1.18.7 and configure explicit `-javaagent` for Mockito/Byte Buddy in Surefire to avoid JDK 25 dynamic agent loading warnings. * Upgrade Groovy to 5.0.3 in OAP backend. * Bump up nodejs to v24.13.0 for the latest UI(booster-ui) compiling. * Drop Elasticsearch 7.x (EOL) and OpenSearch 1.x from E2E tests, upgrade all ES tests to 8.18.8, and update skywalking-helm to use ECK 8.18.8. @@ -69,6 +78,7 @@ * Replace PowerMock Whitebox with standard Java Reflection in `server-library`, `server-core`, and `server-configuration` to support JDK 25+. * Fix `/debugging/config/dump` may leak sensitive information if there are second level properties in the configuration. + #### OAP Server * KubernetesCoordinator: make self instance return real pod IP address instead of `127.0.0.1`. @@ -136,7 +146,10 @@ * Add the spring-ai components and the GenAI layer. * Bump up netty to 4.2.10.Final. * Bump up log4j to 2.25.3 and jackson to 2.18.5. +* Remove PowerMock dependency. Replace `Whitebox` with `ReflectUtil` (standard Java reflection + `sun.misc.Unsafe` for final fields) across all modules to support JDK 25+. * Support TraceQL and Tempo API for Zipkin trace query. +* Remove `initExp` from MAL configuration. It was an internal Groovy startup validation mechanism, not an end-user feature. The v2 ANTLR4 compiler performs fail-fast validation at startup natively. +* Update hierarchy rule documentation: `auto-matching-rules` in `hierarchy-definition.yml` no longer use Groovy scripts. Rules now use a dedicated expression grammar supporting property access, String methods, if/else, comparisons, and logical operators. All shipped rules are fully compatible. #### UI * Fix the missing icon in new native trace view. 
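The cross-version checker noted in the changelog entries above can be pictured as a dual-path comparison. This is a hedged, heavily simplified sketch — all names are invented, and the real checker under `test/script-cases/` compares full sample metadata, labels, and execution state, not just flat values — of comparing v1 and v2 outputs with an epsilon tolerance:

```java
import java.util.Map;

// Simplified dual-path checker: run the same input through two engine versions
// and require identical sample keys with values equal within an epsilon.
public class DualPathChecker {
    static final double EPSILON = 1e-9;

    public static boolean sameResults(Map<String, Double> v1, Map<String, Double> v2) {
        if (!v1.keySet().equals(v2.keySet())) {
            return false; // differing sample sets are an immediate mismatch
        }
        for (Map.Entry<String, Double> e : v1.entrySet()) {
            if (Math.abs(e.getValue() - v2.get(e.getKey())) > EPSILON) {
                return false; // values must agree within the tolerance
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Double> groovyV1 = Map.of("cpu{node=a}", 0.5, "cpu{node=b}", 0.75);
        Map<String, Double> javassistV2 = Map.of("cpu{node=a}", 0.5, "cpu{node=b}", 0.75);
        System.out.println(sameResults(groovyV1, javassistV2)); // prints "true"
    }
}
```

An epsilon comparison (rather than exact equality) is the usual choice here because the two engines may round floating-point intermediates differently while still being behaviorally equivalent.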
diff --git a/docs/en/concepts-and-designs/lal.md b/docs/en/concepts-and-designs/lal.md index f843871cf818..4dff550d3dee 100644 --- a/docs/en/concepts-and-designs/lal.md +++ b/docs/en/concepts-and-designs/lal.md @@ -31,7 +31,7 @@ are cases where you may want the filter chain to stop earlier when specified con the remaining filter chain from where it's declared, and all the remaining components won't be executed at all. `abort` function serves as a fast-fail mechanism in LAL. -```groovy +``` filter { if (log.service == "TestingService") { // Don't waste resources on TestingServices abort {} // all remaining components won't be executed at all @@ -67,7 +67,7 @@ We can add tags like following: ] ``` And we can use this method to get the value of the tag key `TEST_KEY`. -```groovy +``` filter { if (tag("TEST_KEY") == "TEST_VALUE") { ... @@ -95,7 +95,7 @@ See examples below. #### `json` -```groovy +``` filter { json { abortOnFailure true // this is optional because it's default behaviour @@ -105,7 +105,7 @@ filter { #### `yaml` -```groovy +``` filter { yaml { abortOnFailure true // this is optional because it's default behaviour @@ -123,7 +123,7 @@ For unstructured logs, there are some `text` parsers for use. all the captured groups can be used later in the extractors or sinks. `regexp` returns a `boolean` indicating whether the log matches the pattern or not. -```groovy +``` filter { text { abortOnFailure true // this is optional because it's default behaviour @@ -181,7 +181,7 @@ dropped) and is used to associate with traces / metrics. not dropped) and is used to associate with traces / metrics. The parameter of `timestamp` can be a millisecond: -```groovy +``` filter { // ... parser @@ -191,7 +191,7 @@ filter { } ``` or a datetime string with a specified pattern: -```groovy +``` filter { // ... parser @@ -210,9 +210,7 @@ not dropped) and is used to associate with service. `tag` extracts the tags from the `parsed` result, and set them into the `LogData`. 
The form of this extractor should look something like this: `tag key1: value, key2: value2`. You may use the properties of `parsed` as both keys and values. -```groovy -import javax.swing.text.LayeredHighlighter - +``` filter { // ... parser @@ -242,7 +240,7 @@ log-analyzer: Examples are as follows: -```groovy +``` filter { // ... extractor { @@ -338,7 +336,7 @@ dropped) and is used to associate with TopNDatabaseStatement. An example of LAL to distinguish slow logs: -```groovy +``` filter { json{ } @@ -386,7 +384,7 @@ An example of JSON sent to OAP is as following: ``` Examples are as follows: -```groovy +``` filter { json { } @@ -447,7 +445,7 @@ final sampling result. See examples in [Enforcer](#enforcer). Examples 1, `rateLimit`: -```groovy +``` filter { // ... parser @@ -469,7 +467,7 @@ filter { Examples 2, `possibility`: -```groovy +``` filter { // ... parser @@ -492,7 +490,7 @@ filter { Dropper is a special sink, meaning that all logs are dropped without any exception. This is useful when you want to drop debugging logs. -```groovy +``` filter { // ... parser @@ -510,7 +508,7 @@ filter { Or if you have multiple filters, some of which are for extracting metrics, only one of them has to be persisted. -```groovy +``` filter { // filter A: this is for persistence // ... parser @@ -539,7 +537,7 @@ Enforcer is another special sink that forcibly samples the log. A typical use ca configured a sampler and want to save some logs forcibly, such as to save error logs even if the sampling mechanism has been configured. -```groovy +``` filter { // ... parser diff --git a/docs/en/concepts-and-designs/mal.md b/docs/en/concepts-and-designs/mal.md index 32f4b2319906..a6f0420f5ee6 100644 --- a/docs/en/concepts-and-designs/mal.md +++ b/docs/en/concepts-and-designs/mal.md @@ -294,8 +294,6 @@ Generic placeholders are defined as follows: * `<closure>`: A closure with custom logic. 
```yaml -# initExp is the expression that initializes the current configuration file -initExp: <string> # filter the metrics, only those metrics that satisfy this condition will be passed into the `metricsRules` below. filter: <closure> # example: '{ tags -> tags.job_name == "vm-monitoring" }' # expPrefix is executed before the metrics executes other functions. diff --git a/docs/en/concepts-and-designs/oal.md b/docs/en/concepts-and-designs/oal.md index ce035f225afd..e0becc4dbfe9 100644 --- a/docs/en/concepts-and-designs/oal.md +++ b/docs/en/concepts-and-designs/oal.md @@ -23,7 +23,8 @@ However, the OAL script is a compiled language, and the OAL Runtime generates ja the changes of those scripts in the runtime. If your OAP servers are running in a cluster mode, these script defined metrics should be aligned. -You can set `SW_OAL_ENGINE_DEBUG=Y` at system env to see which classes are generated. +You can set `SW_DYNAMIC_CLASS_ENGINE_DEBUG=Y` at system env to dump generated `.class` files to disk. +See [Dynamic Code Generation and Debugging](../operation/dynamic-code-generation-debugging.md) for details. ## Grammar Scripts should be named `*.oal` diff --git a/docs/en/concepts-and-designs/service-hierarchy-configuration.md b/docs/en/concepts-and-designs/service-hierarchy-configuration.md index 6aa0d40d56b2..7aac0e99fa9e 100644 --- a/docs/en/concepts-and-designs/service-hierarchy-configuration.md +++ b/docs/en/concepts-and-designs/service-hierarchy-configuration.md @@ -69,7 +69,7 @@ layer-levels: ### Auto Matching Rules - The auto matching rules are defined in the `auto-matching-rules` section. 
-- Use Groovy script to define the matching rules, the input parameters are the upper service(u) and the lower service(l) and the return value is a boolean, +- The matching rules are expressions where the input parameters are the upper service(u) and the lower service(l) and the return value is a boolean, which are used to match the relation between the upper service(u) and the lower service(l) on the different layers. - The default matching rules required the service name configured as SkyWalking default and follow the [Showcase](https://github.com/apache/skywalking-showcase). If you customized the service name in any layer, you should customize the related matching rules according your service name rules. diff --git a/docs/en/concepts-and-designs/service-hierarchy.md b/docs/en/concepts-and-designs/service-hierarchy.md index 8cd18bccda73..5f3c5144fcf1 100644 --- a/docs/en/concepts-and-designs/service-hierarchy.md +++ b/docs/en/concepts-and-designs/service-hierarchy.md @@ -45,7 +45,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### GENERAL On K8S_SERVICE - Rule name: `lower-short-name-remove-ns` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')) }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')) }` - Description: GENERAL.service.shortName == K8S_SERVICE.service.shortName without namespace - Matched Example: - GENERAL.service.name: `agent::songs` @@ -53,7 +53,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### GENERAL On APISIX - Rule name: `lower-short-name-remove-ns` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')) }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')) }` - Description: GENERAL.service.shortName == APISIX.service.shortName without namespace - Matched 
Example: - GENERAL.service.name: `agent::frontend` @@ -62,7 +62,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### VIRTUAL_DATABASE On MYSQL - Rule name: `lower-short-name-with-fqdn` -- Groovy script: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` +- Matching expression: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` - Description: VIRTUAL_DATABASE.service.shortName remove port == MYSQL.service.shortName with fqdn suffix - Matched Example: - VIRTUAL_DATABASE.service.name: `mysql.skywalking-showcase.svc.cluster.local:3306` @@ -70,7 +70,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### VIRTUAL_DATABASE On POSTGRESQL - Rule name: `lower-short-name-with-fqdn` -- Groovy script: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` +- Matching expression: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` - Description: VIRTUAL_DATABASE.service.shortName remove port == POSTGRESQL.service.shortName with fqdn suffix - Matched Example: - VIRTUAL_DATABASE.service.name: `psql.skywalking-showcase.svc.cluster.local:5432` @@ -78,7 +78,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### VIRTUAL_DATABASE On CLICKHOUSE - Rule name: `lower-short-name-with-fqdn` -- Groovy script: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` +- Matching expression: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` - Description: VIRTUAL_DATABASE.service.shortName remove port == CLICKHOUSE.service.shortName with fqdn suffix - Matched Example: - VIRTUAL_DATABASE.service.name: 
`clickhouse.skywalking-showcase.svc.cluster.local:8123` @@ -87,7 +87,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### VIRTUAL_MQ On ROCKETMQ - Rule name: `lower-short-name-with-fqdn` -- Groovy script: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` +- Matching expression: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` - Description: VIRTUAL_MQ.service.shortName remove port == ROCKETMQ.service.shortName with fqdn suffix - Matched Example: - VIRTUAL_MQ.service.name: `rocketmq.skywalking-showcase.svc.cluster.local:9876` @@ -95,7 +95,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### VIRTUAL_MQ On RABBITMQ - Rule name: `lower-short-name-with-fqdn` -- Groovy script: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` +- Matching expression: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` - Description: VIRTUAL_MQ.service.shortName remove port == RABBITMQ.service.shortName with fqdn suffix - Matched Example: - VIRTUAL_MQ.service.name: `rabbitmq.skywalking-showcase.svc.cluster.local:5672` @@ -103,7 +103,7 @@ If you want to customize it according to your own needs, please refer to [Servic - #### VIRTUAL_MQ On KAFKA - Rule name: `lower-short-name-with-fqdn` -- Groovy script: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` +- Matching expression: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` - Description: VIRTUAL_MQ.service.shortName remove port == KAFKA.service.shortName with fqdn suffix - Matched Example: - VIRTUAL_MQ.service.name: `kafka.skywalking-showcase.svc.cluster.local:9092` @@ -111,7 +111,7 
@@ If you want to customize it according to your own needs, please refer to [Servic #### VIRTUAL_MQ On PULSAR - Rule name: `lower-short-name-with-fqdn` -- Groovy script: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` +- Matching expression: `{ (u, l) -> u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local') }` - Description: VIRTUAL_MQ.service.shortName remove port == PULSAR.service.shortName with fqdn suffix - Matched Example: - VIRTUAL_MQ.service.name: `pulsar.skywalking-showcase.svc.cluster.local:6650` @@ -119,7 +119,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### MESH On MESH_DP - Rule name: `name` -- Groovy script: `{ (u, l) -> u.name == l.name }` +- Matching expression: `{ (u, l) -> u.name == l.name }` - Description: MESH.service.name == MESH_DP.service.name - Matched Example: - MESH.service.name: `mesh-svr::songs.sample-services` @@ -127,7 +127,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### MESH On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: MESH.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - MESH.service.name: `mesh-svr::songs.sample-services` @@ -135,7 +135,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### MESH_DP On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: MESH_DP.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - MESH_DP.service.name: `mesh-svr::songs.sample-services` @@ -143,7 +143,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### MYSQL On K8S_SERVICE - Rule 
name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: MYSQL.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - MYSQL.service.name: `mysql::mysql.skywalking-showcase` @@ -151,7 +151,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### POSTGRESQL On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: POSTGRESQL.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - POSTGRESQL.service.name: `postgresql::psql.skywalking-showcase` @@ -159,7 +159,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### CLICKHOUSE On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: CLICKHOUSE.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - CLICKHOUSE.service.name: `clickhouse::clickhouse.skywalking-showcase` @@ -167,7 +167,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### NGINX On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: NGINX.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - NGINX.service.name: `nginx::nginx.skywalking-showcase` @@ -175,7 +175,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### APISIX On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: APISIX.service.shortName == K8S_SERVICE.service.shortName - Matched Example: 
- APISIX.service.name: `APISIX::frontend.sample-services` @@ -183,7 +183,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### ROCKETMQ On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: ROCKETMQ.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - ROCKETMQ.service.name: `rocketmq::rocketmq.skywalking-showcase` @@ -191,7 +191,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### RABBITMQ On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: RABBITMQ.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - RABBITMQ.service.name: `rabbitmq::rabbitmq.skywalking-showcase` @@ -199,7 +199,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### KAFKA On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: KAFKA.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - KAFKA.service.name: `kafka::kafka.skywalking-showcase` @@ -207,7 +207,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### PULSAR On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: PULSAR.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - PULSAR.service.name: `pulsar::pulsar.skywalking-showcase` @@ -215,7 +215,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### SO11Y_OAP On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> 
u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: SO11Y_OAP.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - SO11Y_OAP.service.name: `demo-oap.skywalking-showcase` @@ -223,7 +223,7 @@ If you want to customize it according to your own needs, please refer to [Servic #### KONG On K8S_SERVICE - Rule name: `short-name` -- Groovy script: `{ (u, l) -> u.shortName == l.shortName }` +- Matching expression: `{ (u, l) -> u.shortName == l.shortName }` - Description: KONG.service.shortName == K8S_SERVICE.service.shortName - Matched Example: - KONG.service.name: `kong::kong.skywalking-showcase` diff --git a/docs/en/guides/claude-code-skills.md b/docs/en/guides/claude-code-skills.md new file mode 100644 index 000000000000..93c6cb012f91 --- /dev/null +++ b/docs/en/guides/claude-code-skills.md @@ -0,0 +1,29 @@ +# Claude Code Skills for SkyWalking Development + +[Claude Code](https://docs.anthropic.com/en/docs/claude-code) is Anthropic's CLI tool for AI-assisted coding. +This project provides a set of Claude Code skills (slash commands) that automate common development workflows. + +**Note**: These skills are specific to Claude Code. They are defined in the `.claude/skills/` directory +and are not recognized by other AI coding tools. 
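Each skill is defined by a `SKILL.md` file whose YAML frontmatter declares the slash command's name, description, and argument hint, followed by markdown instructions for Claude to execute. For example, the `compile` skill begins with:

```yaml
# .claude/skills/compile/SKILL.md
---
name: compile
description: Build SkyWalking OAP server, run javadoc checks, and verify checkstyle. Use to validate changes before submitting a PR.
argument-hint: "[all|backend|javadoc|checkstyle|module-name]"
---
```

The frontmatter `name` becomes the slash command, and `argument-hint` is shown to the user when invoking it.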
+ +## Available Skills + +| Skill | Command | Description | +|-------|---------|-------------| +| Compile | `/compile [all\|backend\|javadoc\|checkstyle\|module-name]` | Build the OAP server, run javadoc checks, or verify checkstyle | +| Test | `/test [unit\|integration\|slow\|module-name]` | Run unit tests, integration tests, or slow integration tests matching CI | +| License | `/license [check\|fix\|deps]` | Check and fix Apache 2.0 license headers and dependency licenses | +| Pull Request | `/gh-pull-request` | Commit, push, and create a PR with pre-flight checks (compile, checkstyle, license headers) | +| E2E Debug | `/ci-e2e-debug <RUN_ID or URL>` | Download and inspect CI e2e test logs from GitHub Actions artifacts | +| Generate Classes | `/generate-classes <mal\|oal\|lal\|hierarchy\|all>` | Generate bytecode classes from DSL scripts for inspection | +| Run E2E | `/run-e2e [test-case-path]` | Run SkyWalking e2e tests locally | + +## Typical Workflow + +1. Make your code changes. +2. Run `/compile` to verify the build passes. +3. Run `/test` to run relevant tests. +4. Run `/license check` to verify license headers. +5. Run `/gh-pull-request` to commit, push, and open a PR. + +If a CI e2e test fails after pushing, use `/ci-e2e-debug <RUN_ID>` to download and inspect the logs. diff --git a/docs/en/operation/dynamic-code-generation-debugging.md b/docs/en/operation/dynamic-code-generation-debugging.md new file mode 100644 index 000000000000..e4a8ff838e32 --- /dev/null +++ b/docs/en/operation/dynamic-code-generation-debugging.md @@ -0,0 +1,187 @@ +# Dynamic Code Generation and Debugging + +SkyWalking OAP server uses four Domain-Specific Languages (DSLs) to define observability logic: +**OAL** (traces/mesh metrics), **MAL** (meter metrics), **LAL** (log analysis), and **Hierarchy** (service matching rules). +These DSL scripts are compiled into JVM bytecode when the OAP server starts. +The generated classes run in-process — there are no intermediate source files. 
+ +When a runtime error occurs inside these generated classes, the stack trace references class names and +source locations that map back to the original DSL configuration files. +This document explains how to dump the generated bytecode to disk for inspection and how to read the +error messages. + +## DSL Configuration Files + +| DSL | Config Location | What It Generates | +|-----|----------------|-------------------| +| OAL | `config/*.oal` | Metrics, MetricsBuilder, and Dispatcher classes per metric definition | +| MAL | `config/meter-analyzer-config/*.yaml`, `config/otel-rules/**`, `config/envoy-metrics-rules/*.yaml` | One class per metric expression | +| LAL | `config/lal/*.yaml` | One class per log filter rule | +| Hierarchy | `config/hierarchy-definition.yml` | One class per auto-matching rule | + +All paths are relative to the OAP distribution root directory. + +## Dumping Generated Classes + +Set the environment variable `SW_DYNAMIC_CLASS_ENGINE_DEBUG` to any non-empty value before starting the OAP server. +All four DSL compilers check this variable and dump `.class` files to disk when it is set. + +```shell +# Binary distribution +export SW_DYNAMIC_CLASS_ENGINE_DEBUG=Y +bin/oapService.sh +``` + +```shell +# Docker +docker run -e SW_DYNAMIC_CLASS_ENGINE_DEBUG=Y ... 
apache/skywalking-oap-server +``` + +```yaml +# Kubernetes (in container env section) +env: + - name: SW_DYNAMIC_CLASS_ENGINE_DEBUG + value: "Y" +``` + +### Output Directory Structure + +The generated `.class` files are written to sibling directories next to `oap-libs/`: + +**Binary distribution** (`apache-skywalking-apm-bin/`): +``` +apache-skywalking-apm-bin/ +├── config/ ← DSL source scripts (*.oal, *.yaml, *.yml) +├── oap-libs/ ← OAP server jars +├── oal-rt/ ← Generated OAL classes +│ ├── metrics/ ← e.g., ServiceRespTimeMetrics.class +│ ├── metrics/builder/ ← e.g., ServiceRespTimeMetricsBuilder.class +│ └── dispatcher/ ← e.g., ServiceDispatcher.class +├── mal-rt/ ← Generated MAL classes (e.g., meter_vm_cpu_total_percentage.class) +├── lal-rt/ ← Generated LAL classes (e.g., default_default.class) +└── hierarchy-rt/ ← Generated Hierarchy classes (e.g., name.class) +``` + +**Docker** (`/skywalking/`): +``` +/skywalking/ +├── config/ +├── oap-libs/ +├── oal-rt/ +├── mal-rt/ +├── lal-rt/ +└── hierarchy-rt/ +``` + +The OAL output directories are cleaned on each restart. MAL, LAL, and Hierarchy directories are created +on demand if they don't exist. + +### Inspecting Generated Classes + +Use `javap` to decompile a generated `.class` file: + +```shell +javap -v -p oal-rt/metrics/ServiceRespTimeMetrics.class +``` + +The output includes: +- **SourceFile attribute** — shows the DSL source file and the generated class name. +- **LineNumberTable** — maps bytecode offsets to statement numbers, used by the JVM in stack traces. +- **LocalVariableTable** — shows named local variables for readability. + +## Reading Error Stack Traces + +When a runtime error occurs inside a generated class, the JVM prints a stack trace that combines the +`SourceFile` attribute and the `LineNumberTable`. 
The format is: + +``` +at <package>.<ClassName>.<method>(SourceFile:LineNumber) +``` + +The `SourceFile` attribute encodes the original DSL configuration file in parentheses: + +``` +(<dsl_source_file>:<rule_line_or_index>)<GeneratedClassName>.java +``` + +### Example Stack Trace + +``` +java.lang.ArithmeticException: / by zero + at ...metrics.generated.ServiceRespTimeMetrics.id0((core.oal:20)ServiceRespTimeMetrics.java:3) + at ...worker.MetricsStreamProcessor.in(MetricsStreamProcessor.java:...) + ... +``` + +Reading this: +- `(core.oal:20)` — the error originates from OAL file `core.oal`, line 20 +- `ServiceRespTimeMetrics.java` — the generated class for metric `ServiceRespTime` +- `:3` — statement 3 within the generated `id0` method + +### Format Per DSL + +| DSL | SourceFile Example | How to Read | +|-----|-------------------|-------------| +| OAL | `(core.oal:20)ServiceRespTimeMetrics.java` | OAL file `core.oal`, line 20 defines this metric | +| MAL | `(spring-sleuth.yaml:3)cluster_up_rq_incr.java` | YAML file `spring-sleuth.yaml`, rule index 3 (0-based, the 4th `metricsRules` entry) | +| LAL | `(default.yaml)default_default.java` | YAML file `default.yaml`, rule named `default_default` | +| Hierarchy | `(hierarchy-definition.yml)name.java` | Rule `name` in `hierarchy-definition.yml` | + +**Notes:** +- The number after `:` in the MAL source prefix is the 0-based index of the rule within the `metricsRules` list in that YAML file. +- LAL and Hierarchy rules are standalone entries, so the source prefix contains only the YAML file name without a line number. +- When source information is unavailable, the SourceFile falls back to just `ClassName.java` without the parenthesized prefix. + +### Mapping Back to DSL Source + +1. 
**Identify the DSL type** from the package or class suffix: + - `...metrics.generated.*Metrics` or `...Dispatcher` → OAL + - `...meter.analyzer.v2.compiler.rt.MalExpr_*` → MAL + - `...log.analyzer.v2.compiler.rt.LalExpr_*` → LAL + - `...hierarchy.rule.rt.*` → Hierarchy + +2. **Find the source file** from the parenthesized prefix (e.g., `core.oal`, `spring-sleuth.yaml`). + These files are in the `config/` directory. + +3. **Locate the rule** using the line number (OAL) or rule index (MAL) or rule name (LAL/Hierarchy). + +4. **Use the statement number** (after the last `:`) as a rough indicator of which operation within the + generated method failed. Dump the class (see above) and use `javap -v` to see the exact mapping. + +## Common Error Patterns + +### OAL Compilation Failure + +OAL compilation errors are logged at `ERROR` level during OAP startup: + +``` +ERROR o.a.s.o.v.g.OALClassGeneratorV2 - Can't generate method id for ServiceRespTimeMetrics. +``` + +This indicates that the generated Java source for the `id` method failed to compile. +Check the OAL script syntax at the reported metric name. + +### MAL/LAL Runtime Error + +MAL and LAL errors during metric processing are caught and logged per-expression: + +``` +ERROR o.a.s.o.m.a.v.MetricConvert - Analyze Analyzer{...} error +java.lang.NullPointerException + at ...MalExpr_5.run((vm.yaml:2)meter_vm_cpu_total_percentage.java:5) +``` + +This tells you: the error is in `vm.yaml`, rule index 2 (the 3rd `metricsRules` entry), +metric `meter_vm_cpu_total_percentage`, at statement 5 of the generated `run()` method. +The processing continues for other metrics — a single expression failure does not crash the server. + +### Hierarchy Compilation Failure + +Hierarchy rule compilation errors are thrown at startup: + +``` +IllegalStateException: Failed to compile hierarchy rule: lower-short-name-remove-namespace, + expression: { (u, l) -> { if (...) { ... 
} } } +``` + +Check the rule expression syntax in `config/hierarchy-definition.yml` under `auto-matching-rules`. diff --git a/docs/en/setup/backend/backend-meter.md b/docs/en/setup/backend/backend-meter.md index 497bee60cce1..cf4ee971dc38 100644 --- a/docs/en/setup/backend/backend-meter.md +++ b/docs/en/setup/backend/backend-meter.md @@ -81,8 +81,6 @@ parameter is optional. ### Meters configuration ```yaml -# initExp is the expression that initializes the current configuration file -initExp: <string> # filter the metrics, only those metrics that satisfy this condition will be passed into the `metricsRules` below. filter: <closure> # example: '{ tags -> tags.job_name == "vm-monitoring" }' # expPrefix is executed before the metrics executes other functions. diff --git a/docs/en/setup/backend/backend-zabbix.md b/docs/en/setup/backend/backend-zabbix.md index 1ee46a092362..7d84310fa528 100644 --- a/docs/en/setup/backend/backend-zabbix.md +++ b/docs/en/setup/backend/backend-zabbix.md @@ -30,8 +30,6 @@ You can find details on Zabbix agent items from [Zabbix Agent documentation](htt ### Configuration file ```yaml -# initExp is the expression that initializes the current configuration file -initExp: <string> # insert metricPrefix into metric name: <metricPrefix>_<raw_metric_name> metricPrefix: <string> # expPrefix is executed before the metrics executes other functions. diff --git a/docs/en/setup/backend/configuration-vocabulary.md b/docs/en/setup/backend/configuration-vocabulary.md index c1f512f468e1..1654a83148e3 100644 --- a/docs/en/setup/backend/configuration-vocabulary.md +++ b/docs/en/setup/backend/configuration-vocabulary.md @@ -549,7 +549,7 @@ process environment and take effect across all modules. 
| Environment Variable | Value(s) and Explanation | Default | |-----------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------| -| SW_OAL_ENGINE_DEBUG | Set to any non-empty value to dump OAL-generated `.class` files to disk (under the `oal-rt/` directory relative to the OAP working path). Useful for debugging code generation issues. Leave unset in production. | (not set, no files written) | +| SW_DYNAMIC_CLASS_ENGINE_DEBUG | Set to any non-empty value to dump dynamically generated `.class` files to disk for all four DSL compilers (OAL, MAL, LAL, Hierarchy). See [Dynamic Code Generation and Debugging](../../operation/dynamic-code-generation-debugging.md) for output directory details. Leave unset in production. | (not set, no files written) | | SW_VIRTUAL_THREADS_ENABLED | Set to `false` to disable virtual threads on JDK 25+. On JDK 25+, gRPC server handler threads and HTTP blocking task executors are virtual threads by default. Set this variable to `false` to force traditional platform thread pools. Ignored on JDK versions below 25. 
| (not set, virtual threads enabled on JDK 25+) | ## Note diff --git a/docs/menu.yml b/docs/menu.yml index 767d830fb769..4558a252eac7 100644 --- a/docs/menu.yml +++ b/docs/menu.yml @@ -380,6 +380,10 @@ catalog: path: "/en/concepts-and-designs/service-hierarchy-configuration" - name: "Metrics Attributes" path: "/en/concepts-and-designs/metrics-additional-attributes" + - name: "Operation" + catalog: + - name: "Dynamic Code Generation and Debugging" + path: "/en/operation/dynamic-code-generation-debugging" - name: "Security Notice" path: "/en/security/readme" - name: "Academy" @@ -394,6 +398,8 @@ catalog: path: "/en/concepts-and-designs/ebpf-cpu-profiling" - name: "Diagnose Service Mesh Network Performance with eBPF" path: "/en/academy/diagnose-service-mesh-network-performance-with-ebpf" + - name: "DSL Compiler Design" + path: "/en/academy/dsl-compiler-design" - name: "FAQs" path: "/en/faq/readme" - name: "Contributing Guides" @@ -428,6 +434,8 @@ catalog: path: "/en/guides/how-to-bump-up-zipkin" - name: "I18n" path: "/en/guides/i18n" + - name: "Claude Code Skills" + path: "/en/guides/claude-code-skills" - name: "SWIP" path: "/en/swip/readme" - name: "Changelog" diff --git a/oap-server/ai-pipeline/pom.xml b/oap-server/ai-pipeline/pom.xml index ddcf480d7caf..4ec64c5f9933 100644 --- a/oap-server/ai-pipeline/pom.xml +++ b/oap-server/ai-pipeline/pom.xml @@ -73,8 +73,9 @@ <scope>test</scope> </dependency> <dependency> - <groupId>org.powermock</groupId> - <artifactId>powermock-reflect</artifactId> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> <scope>test</scope> </dependency> </dependencies> diff --git a/oap-server/ai-pipeline/src/test/java/BaselineServerTest.java b/oap-server/ai-pipeline/src/test/java/BaselineServerTest.java index 2685a849af79..57021d818a19 100644 --- a/oap-server/ai-pipeline/src/test/java/BaselineServerTest.java +++ b/oap-server/ai-pipeline/src/test/java/BaselineServerTest.java 
@@ -30,7 +30,7 @@ import org.junit.jupiter.api.AfterEach; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.io.IOException; import java.util.Arrays; @@ -63,7 +63,7 @@ public void before() throws IOException { queryService = new BaselineQueryServiceImpl("", 0); org.apache.skywalking.apm.baseline.v3.AlarmBaselineServiceGrpc.AlarmBaselineServiceBlockingStub blockingStub = org.apache.skywalking.apm.baseline.v3.AlarmBaselineServiceGrpc.newBlockingStub(channel); - Whitebox.setInternalState(queryService, "stub", blockingStub); + ReflectUtil.setInternalState(queryService, "stub", blockingStub); } @AfterEach diff --git a/oap-server/analyzer/agent-analyzer/pom.xml b/oap-server/analyzer/agent-analyzer/pom.xml index b4b96cd62eb8..5a281800589b 100644 --- a/oap-server/analyzer/agent-analyzer/pom.xml +++ b/oap-server/analyzer/agent-analyzer/pom.xml @@ -43,5 +43,11 @@ <artifactId>meter-analyzer</artifactId> <version>${project.version}</version> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> \ No newline at end of file diff --git a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/config/MeterConfig.java b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/config/MeterConfig.java index 4bc3bec8c7bb..8e13ab8c2c43 100644 --- a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/config/MeterConfig.java +++ b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/config/MeterConfig.java @@ -20,7 +20,7 @@ import lombok.Data; import lombok.NoArgsConstructor; 
-import org.apache.skywalking.oap.meter.analyzer.MetricRuleConfig; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricRuleConfig; import java.util.List; @@ -32,7 +32,6 @@ public class MeterConfig implements MetricRuleConfig { private String expPrefix; private String filter; private List<Rule> metricsRules; - private String initExp; @Data @NoArgsConstructor diff --git a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessService.java b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessService.java index e3f30c96804d..94d0909e5ba3 100644 --- a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessService.java +++ b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessService.java @@ -18,7 +18,7 @@ package org.apache.skywalking.oap.server.analyzer.provider.meter.process; -import org.apache.skywalking.oap.meter.analyzer.MetricConvert; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; import org.apache.skywalking.oap.server.analyzer.provider.meter.config.MeterConfig; import org.apache.skywalking.oap.server.core.CoreModule; import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; diff --git a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessor.java b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessor.java index d94afe0cdf4f..4635c5084ed5 100644 --- a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessor.java +++ 
b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessor.java @@ -27,9 +27,9 @@ import org.apache.skywalking.apm.network.language.agent.v3.MeterHistogram; import org.apache.skywalking.apm.network.language.agent.v3.MeterSingleValue; import org.apache.skywalking.oap.server.library.util.StringUtil; -import org.apache.skywalking.oap.meter.analyzer.MetricConvert; -import org.apache.skywalking.oap.meter.analyzer.dsl.Sample; -import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder; import org.apache.skywalking.oap.server.library.util.CollectionUtils; import java.util.ArrayList; @@ -53,7 +53,7 @@ public class MeterProcessor { private final MeterProcessService processService; /** - * All of meters has been read. Using it to process groovy script. + * All meters that have been read. Used to process MAL expressions.
*/ private final Map<String, List<SampleBuilder>> meters = new HashMap<>(); diff --git a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/SampleBuilder.java b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/SampleBuilder.java index ed656e24f311..e7afa79b0aff 100644 --- a/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/SampleBuilder.java +++ b/oap-server/analyzer/agent-analyzer/src/main/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/SampleBuilder.java @@ -20,7 +20,7 @@ import com.google.common.collect.ImmutableMap; import lombok.Builder; -import org.apache.skywalking.oap.meter.analyzer.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; /** * Help to build Sample with agent side meter. diff --git a/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessorTest.java b/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessorTest.java index 6898f4e4538f..8d9e94f7399e 100644 --- a/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessorTest.java +++ b/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/meter/process/MeterProcessorTest.java @@ -49,7 +49,7 @@ import org.junit.jupiter.api.extension.ExtendWith; import org.mockito.Mock; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyString; @@ -83,7 +83,7 @@ public void setup() throws StorageException, ModuleStartException { 
when(moduleManager.find(CoreModule.NAME).provider()).thenReturn(mock(ModuleServiceHolder.class)); when(moduleManager.find(CoreModule.NAME).provider().getService(MeterSystem.class)).thenReturn(meterSystem); MetricsStreamProcessor mockProcessor = mock(MetricsStreamProcessor.class); - Whitebox.setInternalState( + ReflectUtil.setInternalState( MetricsStreamProcessor.class, "PROCESSOR", mockProcessor diff --git a/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/TraceSamplingPolicyWatcherTest.java b/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/TraceSamplingPolicyWatcherTest.java index 445aa6e1a278..25e8c45631e7 100644 --- a/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/TraceSamplingPolicyWatcherTest.java +++ b/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/TraceSamplingPolicyWatcherTest.java @@ -33,7 +33,7 @@ import org.junit.jupiter.api.Timeout; import org.junit.jupiter.api.extension.ExtendWith; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.Optional; import java.util.Set; @@ -232,7 +232,7 @@ public void testServiceSampleRateDynamicUpdate() throws InterruptedException { ConfigWatcherRegister register = new ServiceMockConfigWatcherRegister(3); TraceSamplingPolicyWatcher watcher = new TraceSamplingPolicyWatcher(moduleConfig, provider); - Whitebox.setInternalState(provider, "moduleConfig", moduleConfig); + ReflectUtil.setInternalState(provider, "moduleConfig", moduleConfig); provider.getModuleConfig().setTraceSamplingPolicyWatcher(watcher); register.registerConfigChangeWatcher(watcher); register.start(); @@ -369,7 +369,7 @@ private void globalDefaultDurationEquals(TraceSamplingPolicyWatcher watcher, int } private 
SamplingPolicy getSamplingPolicy(String service, TraceSamplingPolicyWatcher watcher) { - AtomicReference<SamplingPolicySettings> samplingPolicySettings = Whitebox.getInternalState( + AtomicReference<SamplingPolicySettings> samplingPolicySettings = ReflectUtil.getInternalState( watcher, "samplingPolicySettings"); return samplingPolicySettings.get().get(service); } diff --git a/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/UninstrumentedGatewaysConfigTest.java b/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/UninstrumentedGatewaysConfigTest.java index 1824fa0b7669..f64467e2ac2d 100644 --- a/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/UninstrumentedGatewaysConfigTest.java +++ b/oap-server/analyzer/agent-analyzer/src/test/java/org/apache/skywalking/oap/server/analyzer/provider/trace/UninstrumentedGatewaysConfigTest.java @@ -24,7 +24,7 @@ import org.apache.skywalking.oap.server.library.module.ServiceNotProvidedException; import org.junit.jupiter.api.Assertions; import org.junit.jupiter.api.Test; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; public class UninstrumentedGatewaysConfigTest { @Test @@ -32,7 +32,7 @@ public void testParseGatewayYAML() throws Exception { final UninstrumentedGatewaysConfig uninstrumentedGatewaysConfig = new UninstrumentedGatewaysConfig(new MockProvider()); UninstrumentedGatewaysConfig.GatewayInfos gatewayInfos - = Whitebox.invokeMethod(uninstrumentedGatewaysConfig, "parseGatewaysFromFile", "gateways.yml"); + = ReflectUtil.invokeMethod(uninstrumentedGatewaysConfig, "parseGatewaysFromFile", "gateways.yml"); Assertions.assertEquals(1, gatewayInfos.getGateways().size()); } diff --git a/oap-server/analyzer/hierarchy/CLAUDE.md b/oap-server/analyzer/hierarchy/CLAUDE.md new file mode 100644 index 
000000000000..76569131cc5b --- /dev/null +++ b/oap-server/analyzer/hierarchy/CLAUDE.md @@ -0,0 +1,140 @@ +# Hierarchy Rule Compiler + +Compiles hierarchy matching rule expressions into `BiFunction<Service, Service, Boolean>` implementation classes at runtime using ANTLR4 parsing and Javassist bytecode generation. + +## Compilation Workflow + +``` +Rule expression string (e.g., "{ (u, l) -> u.name == l.name }") + → HierarchyRuleScriptParser.parse(expression) [ANTLR4 lexer/parser → visitor] + → HierarchyRuleModel (immutable AST) + → HierarchyRuleClassGenerator.compile(ruleName, expression) + 1. classPool.makeClass() — create class implementing BiFunction + 2. generateApplyMethod(model) — emit Java source for apply(Object, Object) + 3. ctClass.toClass(HierarchyRulePackageHolder.class) — load via package anchor + → BiFunction<Service, Service, Boolean> instance +``` + +The generated class implements: +```java +Object apply(Object arg0, Object arg1) + // cast internally to Service and returns Boolean +``` + +No separate consumer/closure classes are needed — hierarchy rules are simple enough to compile into a single method body. 
+ +## File Structure + +``` +oap-server/analyzer/hierarchy/ + src/main/antlr4/.../HierarchyRuleLexer.g4 — ANTLR4 lexer grammar + src/main/antlr4/.../HierarchyRuleParser.g4 — ANTLR4 parser grammar + + src/main/java/.../compiler/ + HierarchyRuleScriptParser.java — ANTLR4 facade: expression → AST + HierarchyRuleModel.java — Immutable AST model classes + HierarchyRuleClassGenerator.java — Javassist code generator + CompiledHierarchyRuleProvider.java — SPI provider: compiles rule expressions + hierarchy/rule/rt/ + HierarchyRulePackageHolder.java — Class loading anchor (empty marker) + + src/main/resources/META-INF/services/ + ...HierarchyDefinitionService$HierarchyRuleProvider — SPI registration + + src/test/java/.../compiler/ + HierarchyRuleScriptParserTest.java — 5 parser tests + HierarchyRuleClassGeneratorTest.java — 4 generator tests +``` + +## Package & Class Naming + +| Component | Package / Name | +|-----------|---------------| +| Parser/Model/Generator | `org.apache.skywalking.oap.server.core.config.v2.compiler` | +| Generated classes | `org.apache.skywalking.oap.server.core.config.v2.compiler.hierarchy.rule.rt.HierarchyRule_<N>` | +| Package holder | `org.apache.skywalking.oap.server.core.config.v2.compiler.hierarchy.rule.rt.HierarchyRulePackageHolder` | +| SPI provider | `org.apache.skywalking.oap.server.core.config.v2.compiler.CompiledHierarchyRuleProvider` | +| Service type | `org.apache.skywalking.oap.server.core.query.type.Service` (in server-core) | + +`<N>` is a global `AtomicInteger` counter. + +## Code Generation Details + +**Field access mapping**: Property access in expressions maps to getter methods: +- `u.name` → `u.getName()` +- `l.shortName` → `l.getShortName()` +- Generic: `x.foo` → `x.getFoo()` + +**Comparison operators**: `==` and `!=` use `java.util.Objects.equals()`. Numeric comparisons (`>`, `<`, `>=`, `<=`) emit direct operators. 
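The `Objects.equals` choice matters because `Service` properties may be null and `==` on strings compares references, not values. A hand-written sketch of the body generated for the `short-name` rule; the class and field names here are illustrative, not part of the generated code:

```java
import java.util.Objects;
import java.util.function.BiFunction;

// Hand-written equivalent of the body generated for the `short-name` rule:
// { (u, l) -> u.shortName == l.shortName } is compiled into
// Objects.equals(u.getShortName(), l.getShortName()).
class ShortNameRule {
    static final BiFunction<String, String, Boolean> MATCH =
        (upperShortName, lowerShortName) -> Objects.equals(upperShortName, lowerShortName);

    public static void main(String[] args) {
        // Value equality, not reference equality: a raw `==` would print false here.
        System.out.println(MATCH.apply("mysql.skywalking-showcase",
                                       new String("mysql.skywalking-showcase"))); // true
        // Null-safe: a service with no shortName does not throw NullPointerException.
        System.out.println(MATCH.apply(null, "mysql")); // false
    }
}
```

By contrast, `"a" == new String("a")` evaluates to `false` in Java, which is why the generator emits `Objects.equals` for `==`/`!=` instead of the raw operator.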
+ +**Method chains**: `l.shortName.substring(0, l.shortName.lastIndexOf("."))` generates chained Java method calls directly. + +## Example + +**Input**: `{ (u, l) -> u.name == l.name }` + +**Generated `apply()` method**: +```java +public Object apply(Object arg0, Object arg1) { + Service u = (Service) arg0; + Service l = (Service) arg1; + return Boolean.valueOf(java.util.Objects.equals(u.getName(), l.getName())); +} +``` + +**Input with block body**: `{ (u, l) -> { if (l.shortName.lastIndexOf(".") > 0) { return u.name == l.shortName.substring(0, l.shortName.lastIndexOf(".")); } return false; } }` + +**Generated `apply()` method**: +```java +public Object apply(Object arg0, Object arg1) { + Service u = (Service) arg0; + Service l = (Service) arg1; + if (l.getShortName().lastIndexOf(".") > 0) { + return Boolean.valueOf( + java.util.Objects.equals( + u.getName(), + l.getShortName().substring(0, l.getShortName().lastIndexOf(".")))); + } + return Boolean.valueOf(false); +} +``` + +## Rule Patterns + +Four rule types are defined in `hierarchy-definition.yml`: + +| Rule Name | Expression Pattern | +|-----------|-------------------| +| `name` | `{ (u, l) -> u.name == l.name }` | +| `short-name` | `{ (u, l) -> u.shortName == l.shortName }` | +| `lower-short-name-remove-namespace` | `{ (u, l) -> { if (l.shortName.lastIndexOf(".") > 0) { return u.name == l.shortName.substring(0, l.shortName.lastIndexOf(".")); } return false; } }` | +| `lower-short-name-with-fqdn` | `{ (u, l) -> u.shortName == l.shortName.concat("." + u.shortName) }` | + +## Hierarchy Input Data Mock Principles + +Hierarchy rules are simpler than MAL/LAL — they take two `Service` objects and return a boolean. The test data is implicit in the test code rather than YAML files. + +### Principles + +1. **Service objects are the input**: Each rule receives `(Service upper, Service lower)` and returns whether they match. 
Test data is built programmatically with `Service.builder().name("...").shortName("...").build()`.
+2. **Four rule patterns**: See "Rule Patterns" above. Tests cover all four with various `name`/`shortName` combinations.
+3. **v1-v2 comparison**: The checker test (`HierarchyComparisonTest` in `mal-lal-v1-v2-checker`) compiles each rule with both v1 (Groovy) and v2 (ANTLR4), runs them on the same `Service` pairs, and asserts identical results.
+4. **No `.data.yaml` files**: Hierarchy rules are purely functional (two inputs → boolean), so mock data is inline in tests.
+
+## Debug Output
+
+When the `SW_DYNAMIC_CLASS_ENGINE_DEBUG` environment variable is set to any non-empty value, generated `.class` files are written to disk for inspection:
+
+```
+{skywalking}/hierarchy-rt/
+  *.class - Generated HierarchyRule .class files
+```
+
+This is the same environment variable used by OAL, and it is useful for debugging code-generation issues. In tests, use `setClassOutputDir(dir)` instead.
+
+## Dependencies
+
+Grammar, compiler, and runtime are merged into this module:
+- ANTLR4 grammar → generates lexer/parser at build time
+- `server-core` — `Service` type
+- `javassist` — bytecode generation
diff --git a/oap-server/analyzer/hierarchy/pom.xml b/oap-server/analyzer/hierarchy/pom.xml
new file mode 100644
index 000000000000..04adce9ed05b
--- /dev/null
+++ b/oap-server/analyzer/hierarchy/pom.xml
@@ -0,0 +1,65 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one or more
+  ~ contributor license agreements. See the NOTICE file distributed with
+  ~ this work for additional information regarding copyright ownership.
+  ~ The ASF licenses this file to You under the Apache License, Version 2.0
+  ~ (the "License"); you may not use this file except in compliance with
+  ~ the License.
You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. + ~ + --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>analyzer</artifactId> + <groupId>org.apache.skywalking</groupId> + <version>${revision}</version> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>hierarchy</artifactId> + + <dependencies> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-core</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>org.antlr</groupId> + <artifactId>antlr4-runtime</artifactId> + </dependency> + <dependency> + <groupId>org.javassist</groupId> + <artifactId>javassist</artifactId> + </dependency> + </dependencies> + + <build> + <plugins> + <plugin> + <groupId>org.antlr</groupId> + <artifactId>antlr4-maven-plugin</artifactId> + <configuration> + <visitor>true</visitor> + </configuration> + <executions> + <execution> + <id>antlr</id> + <goals> + <goal>antlr4</goal> + </goals> + </execution> + </executions> + </plugin> + </plugins> + </build> +</project> diff --git a/oap-server/analyzer/hierarchy/src/main/antlr4/org/apache/skywalking/hierarchy/rt/grammar/HierarchyRuleLexer.g4 b/oap-server/analyzer/hierarchy/src/main/antlr4/org/apache/skywalking/hierarchy/rt/grammar/HierarchyRuleLexer.g4 new file mode 100644 index 000000000000..2cd29899928c --- /dev/null +++ 
b/oap-server/analyzer/hierarchy/src/main/antlr4/org/apache/skywalking/hierarchy/rt/grammar/HierarchyRuleLexer.g4
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+// Hierarchy Rule matching expression lexer
+//
+// Covers expressions like:
+//   { (u, l) -> u.name == l.name }
+//   { (u, l) -> { if(l.shortName.lastIndexOf('.') > 0) return u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')); return false; } }
+lexer grammar HierarchyRuleLexer;
+
+@header {package org.apache.skywalking.hierarchy.rt.grammar;}
+
+// Keywords
+IF: 'if';
+ELSE: 'else';
+RETURN: 'return';
+TRUE: 'true';
+FALSE: 'false';
+
+// Comparison and logical operators
+DEQ: '==';
+NEQ: '!=';
+AND: '&&';
+OR: '||';
+NOT: '!';
+GT: '>';
+LT: '<';
+GTE: '>=';
+LTE: '<=';
+
+// Delimiters
+DOT: '.';
+COMMA: ',';
+SEMI: ';';
+L_PAREN: '(';
+R_PAREN: ')';
+L_BRACE: '{';
+R_BRACE: '}';
+ARROW: '->';
+
+// Arithmetic (for substring index arguments)
+PLUS: '+';
+MINUS: '-';
+
+// Literals
+NUMBER
+    : Digit+
+    ;
+
+STRING
+    : '\'' (~['\\\r\n] | EscapeSequence)* '\''
+    | '"' (~["\\\r\n] | EscapeSequence)* '"'
+    ;
+
+// Comments
+LINE_COMMENT
+    : '//' ~[\r\n]* -> channel(HIDDEN)
+    ;
+
+BLOCK_COMMENT
+    : '/*' .*?
'*/' -> channel(HIDDEN) + ; + +// Whitespace +WS + : [ \t\r\n]+ -> channel(HIDDEN) + ; + +// Identifiers +IDENTIFIER + : Letter LetterOrDigit* + ; + +// Fragments +fragment EscapeSequence + : '\\' [btnfr"'\\] + ; + +fragment Digit + : [0-9] + ; + +fragment Letter + : [a-zA-Z_] + ; + +fragment LetterOrDigit + : Letter + | [0-9] + ; diff --git a/oap-server/analyzer/hierarchy/src/main/antlr4/org/apache/skywalking/hierarchy/rt/grammar/HierarchyRuleParser.g4 b/oap-server/analyzer/hierarchy/src/main/antlr4/org/apache/skywalking/hierarchy/rt/grammar/HierarchyRuleParser.g4 new file mode 100644 index 000000000000..7d30c016c1f9 --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/main/antlr4/org/apache/skywalking/hierarchy/rt/grammar/HierarchyRuleParser.g4 @@ -0,0 +1,135 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+// Hierarchy Rule matching expression parser
+//
+// Parses expressions from hierarchy-definition.yml auto-matching-rules:
+//   name: "{ (u, l) -> u.name == l.name }"
+//   short-name: "{ (u, l) -> u.shortName == l.shortName }"
+//   lower-short-name-remove-ns:
+//     "{ (u, l) -> { if(l.shortName.lastIndexOf('.') > 0) return u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')); return false; } }"
+//   lower-short-name-with-fqdn:
+//     "{ (u, l) -> { if(u.shortName.lastIndexOf(':') > 0) return u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local'); return false; } }"
+parser grammar HierarchyRuleParser;
+
+@header {package org.apache.skywalking.hierarchy.rt.grammar;}
+
+options { tokenVocab=HierarchyRuleLexer; }
+
+// ==================== Top-level ====================
+
+// { (u, l) -> body }
+matchingRule
+    : L_BRACE L_PAREN param COMMA param R_PAREN ARROW ruleBody R_BRACE EOF
+    ;
+
+param
+    : IDENTIFIER
+    ;
+
+ruleBody
+    : simpleExpression    // u.name == l.name
+    | blockBody           // { if(...) ...; return false; }
+    ;
+
+// ==================== Block body ====================
+
+blockBody
+    : L_BRACE statement+ R_BRACE
+    ;
+
+statement
+    : ifStatement
+    | returnStatement
+    ;
+
+ifStatement
+    : IF L_PAREN condition R_PAREN
+      (returnStatement | blockBody)
+      (ELSE IF L_PAREN condition R_PAREN
+        (returnStatement | blockBody)
+      )*
+      (ELSE
+        (returnStatement | blockBody)
+      )?
+    ;
+
+returnStatement
+    : RETURN returnValue SEMI?
+ ; + +returnValue + : ruleExpr DEQ ruleExpr # returnComparison + | ruleExpr NEQ ruleExpr # returnNeqComparison + | ruleExpr # returnExpr + ; + +// ==================== Conditions ==================== + +condition + : condition AND condition # condAnd + | condition OR condition # condOr + | NOT condition # condNot + | L_PAREN condition R_PAREN # condParen + | ruleExpr DEQ ruleExpr # condEq + | ruleExpr NEQ ruleExpr # condNeq + | ruleExpr GT ruleExpr # condGt + | ruleExpr LT ruleExpr # condLt + | ruleExpr GTE ruleExpr # condGte + | ruleExpr LTE ruleExpr # condLte + | ruleExpr # condExpr + ; + +// ==================== Expressions ==================== + +simpleExpression + : ruleExpr DEQ ruleExpr + | ruleExpr NEQ ruleExpr + ; + +ruleExpr + : ruleExpr PLUS ruleExpr # exprAdd + | ruleExpr MINUS ruleExpr # exprSub + | ruleExprPrimary # exprPrimary + ; + +ruleExprPrimary + : methodChain # exprMethodChain + | STRING # exprString + | NUMBER # exprNumber + | TRUE # exprTrue + | FALSE # exprFalse + ; + +// ==================== Method chains ==================== + +// u.name, l.shortName, l.shortName.lastIndexOf('.'), +// u.shortName.substring(0, l.shortName.lastIndexOf(':')) +// l.shortName.concat('.svc.cluster.local') +methodChain + : IDENTIFIER (DOT chainSegment)+ + ; + +chainSegment + : IDENTIFIER L_PAREN argList? 
R_PAREN # chainMethodCall + | IDENTIFIER # chainFieldAccess + ; + +argList + : ruleExpr (COMMA ruleExpr)* + ; diff --git a/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/CompiledHierarchyRuleProvider.java b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/CompiledHierarchyRuleProvider.java new file mode 100644 index 000000000000..f1607d2369da --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/CompiledHierarchyRuleProvider.java @@ -0,0 +1,73 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.core.config.v2.compiler; + +import java.util.HashMap; +import java.util.Map; +import java.util.function.BiFunction; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.server.core.config.HierarchyDefinitionService; +import org.apache.skywalking.oap.server.core.query.type.Service; + +/** + * SPI implementation of {@link HierarchyDefinitionService.HierarchyRuleProvider} + * that compiles hierarchy matching rule expressions using ANTLR4 + Javassist. 
+ * + * <p>Discovered at startup via {@code ServiceLoader} by + * {@link HierarchyDefinitionService}. For each rule expression + * (e.g., {@code "{ (u, l) -> u.name == l.name }"}): + * <ol> + * <li>{@link HierarchyRuleClassGenerator#compile} parses the expression + * with ANTLR4 into an AST, then generates a Java class implementing + * {@code BiFunction<Service, Service, Boolean>} via Javassist.</li> + * <li>The generated class casts both arguments to {@link Service}, + * evaluates the expression body, and returns a {@code Boolean}.</li> + * </ol> + * + * <p>The compiled matchers are returned to {@link HierarchyDefinitionService} + * and used at runtime by + * {@link org.apache.skywalking.oap.server.core.hierarchy.HierarchyService} + * to match service pairs. + */ +@Slf4j +public class CompiledHierarchyRuleProvider implements HierarchyDefinitionService.HierarchyRuleProvider { + + private final HierarchyRuleClassGenerator generator; + + public CompiledHierarchyRuleProvider() { + generator = new HierarchyRuleClassGenerator(); + generator.setYamlSource("hierarchy-definition.yml"); + } + + @Override + public Map<String, BiFunction<Service, Service, Boolean>> buildRules( + final Map<String, String> ruleExpressions) { + final Map<String, BiFunction<Service, Service, Boolean>> rules = new HashMap<>(); + ruleExpressions.forEach((name, expression) -> { + try { + rules.put(name, generator.compile(name, expression)); + log.debug("Compiled hierarchy rule: {}", name); + } catch (Exception e) { + throw new IllegalStateException( + "Failed to compile hierarchy rule: " + name + + ", expression: " + expression, e); + } + }); + return rules; + } +} diff --git a/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleClassGenerator.java b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleClassGenerator.java new file mode 100644 index 000000000000..62e6e37a43db --- 
/dev/null +++ b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleClassGenerator.java @@ -0,0 +1,543 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.core.config.v2.compiler; + +import java.io.DataOutputStream; +import java.io.File; +import java.io.FileOutputStream; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.BiFunction; +import javassist.ClassPool; +import javassist.CtClass; +import javassist.CtNewConstructor; +import javassist.CtNewMethod; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.server.core.WorkPath; +import org.apache.skywalking.oap.server.core.config.v2.compiler.hierarchy.rule.rt.HierarchyRulePackageHolder; +import org.apache.skywalking.oap.server.library.util.StringUtil; +import org.apache.skywalking.oap.server.core.query.type.Service; + +/** + * Generates {@link BiFunction BiFunction<Service, Service, Boolean>} implementation classes + * from {@link HierarchyRuleModel} AST using Javassist bytecode generation. 
+ * + * <p>Rule expressions are parsed from {@code hierarchy-definition.yml} {@code auto-matching-rules} + * section. The grammar ({@code HierarchyRuleParser.g4}) supports: + * <ul> + * <li>Property access on {@link Service}: {@code u.name}, {@code l.shortName} + * — mapped to getter methods ({@code getName()}, {@code getShortName()}) + * via {@link #toGetter(String)}</li> + * <li>String method calls: {@code substring()}, {@code lastIndexOf()}, {@code concat()}</li> + * <li>Comparisons: {@code ==}, {@code !=} (via {@code Objects.equals()}), + * {@code >}, {@code <}, {@code >=}, {@code <=}</li> + * <li>Logical operators: {@code &&}, {@code ||}, {@code !}</li> + * <li>Control flow: {@code if/else}, {@code return}</li> + * <li>Literals: strings, numbers, booleans</li> + * </ul> + * + * <p>Debugging support in generated bytecode: + * <ul> + * <li><b>SourceFile</b>: set via {@link #formatSourceFileName(String)} to + * {@code (hierarchy-definition.yml)ruleName.java} so stack traces show the + * originating YAML file and rule name</li> + * <li><b>LineNumberTable</b>: added via {@link #addLineNumberTable} so debuggers + * can step through generated code</li> + * <li><b>LocalVariableTable</b>: added via {@link #addLocalVariableTable} with + * the original parameter names (e.g., {@code u}, {@code l}) for debugger + * variable inspection</li> + * </ul> + */ +@Slf4j +public final class HierarchyRuleClassGenerator { + + private static final AtomicInteger CLASS_COUNTER = new AtomicInteger(0); + + private static final String PACKAGE_PREFIX = + "org.apache.skywalking.oap.server.core.config.v2.compiler.hierarchy.rule.rt."; + + private static final java.util.Set<String> USED_CLASS_NAMES = + java.util.Collections.synchronizedSet(new java.util.HashSet<>()); + + private final ClassPool classPool; + private File classOutputDir; + private String classNameHint; + private String yamlSource; + + public HierarchyRuleClassGenerator() { + this(ClassPool.getDefault()); + if 
(StringUtil.isNotEmpty(System.getenv("SW_DYNAMIC_CLASS_ENGINE_DEBUG"))) { + classOutputDir = new File(WorkPath.getPath().getParentFile(), "hierarchy-rt"); + } + } + + public HierarchyRuleClassGenerator(final ClassPool classPool) { + this.classPool = classPool; + } + + public void setClassOutputDir(final File dir) { + this.classOutputDir = dir; + } + + public void setClassNameHint(final String hint) { + this.classNameHint = hint; + } + + public void setYamlSource(final String yamlSource) { + this.yamlSource = yamlSource; + } + + private String makeClassName(final String defaultPrefix) { + if (classNameHint != null) { + return dedupClassName(PACKAGE_PREFIX + sanitizeName(classNameHint)); + } + return PACKAGE_PREFIX + defaultPrefix + CLASS_COUNTER.getAndIncrement(); + } + + private String dedupClassName(final String base) { + if (USED_CLASS_NAMES.add(base)) { + return base; + } + for (int i = 2; ; i++) { + final String candidate = base + "_" + i; + if (USED_CLASS_NAMES.add(candidate)) { + return candidate; + } + } + } + + private static String sanitizeName(final String name) { + final StringBuilder sb = new StringBuilder(name.length()); + for (int i = 0; i < name.length(); i++) { + final char c = name.charAt(i); + sb.append(i == 0 + ? (Character.isJavaIdentifierStart(c) ? c : '_') + : (Character.isJavaIdentifierPart(c) ? c : '_')); + } + return sb.length() == 0 ? "Generated" : sb.toString(); + } + + /** + * Builds the {@code SourceFile} attribute value embedded in the generated class bytecode. + * Format: {@code (hierarchy-definition.yml)name.java} so that stack traces show both + * the originating YAML file and the rule name. Falls back to just {@code name.java} + * when {@code yamlSource} is not set (e.g., in unit tests). 
+ */ + private String formatSourceFileName(final String ruleName) { + final String classFile = sanitizeName(ruleName) + ".java"; + if (yamlSource != null) { + return "(" + yamlSource + ")" + classFile; + } + return classFile; + } + + private static void setSourceFile(final CtClass ctClass, final String name) { + try { + final javassist.bytecode.ClassFile cf = ctClass.getClassFile(); + final javassist.bytecode.AttributeInfo sf = cf.getAttribute("SourceFile"); + if (sf != null) { + final javassist.bytecode.ConstPool cp = cf.getConstPool(); + final int idx = cp.addUtf8Info(name); + sf.set(new byte[]{(byte) (idx >> 8), (byte) idx}); + } + } catch (Exception e) { + // best-effort + } + } + + private void addLineNumberTable(final javassist.CtMethod method, + final int firstResultSlot) { + try { + final javassist.bytecode.MethodInfo mi = method.getMethodInfo(); + final javassist.bytecode.CodeAttribute code = mi.getCodeAttribute(); + if (code == null) { + return; + } + + final java.util.ArrayList<int[]> entries = new java.util.ArrayList<>(); + int line = 1; + boolean nextIsNewLine = true; + + final javassist.bytecode.CodeIterator ci = code.iterator(); + while (ci.hasNext()) { + final int pc = ci.next(); + if (nextIsNewLine) { + entries.add(new int[]{pc, line++}); + nextIsNewLine = false; + } + final int op = ci.byteAt(pc) & 0xFF; + int slot = -1; + if (op >= 59 && op <= 78) { + slot = (op - 59) % 4; + } else if (op >= 54 && op <= 58) { + slot = ci.byteAt(pc + 1) & 0xFF; + } + if (slot >= firstResultSlot) { + nextIsNewLine = true; + } + } + + if (entries.isEmpty()) { + return; + } + + final javassist.bytecode.ConstPool cp = mi.getConstPool(); + final byte[] info = new byte[2 + entries.size() * 4]; + info[0] = (byte) (entries.size() >> 8); + info[1] = (byte) entries.size(); + for (int i = 0; i < entries.size(); i++) { + final int off = 2 + i * 4; + info[off] = (byte) (entries.get(i)[0] >> 8); + info[off + 1] = (byte) entries.get(i)[0]; + info[off + 2] = (byte) 
(entries.get(i)[1] >> 8); + info[off + 3] = (byte) entries.get(i)[1]; + } + code.getAttributes().add( + new javassist.bytecode.AttributeInfo(cp, "LineNumberTable", info)); + } catch (Exception e) { + log.warn("Failed to add LineNumberTable: {}", e.getMessage()); + } + } + + private void writeClassFile(final CtClass ctClass) { + if (classOutputDir == null) { + return; + } + if (!classOutputDir.exists()) { + classOutputDir.mkdirs(); + } + final File file = new File(classOutputDir, ctClass.getSimpleName() + ".class"); + try (DataOutputStream out = new DataOutputStream(new FileOutputStream(file))) { + ctClass.toBytecode(out); + } catch (Exception e) { + log.warn("Failed to write class file {}: {}", file, e.getMessage()); + } + } + + private void addLocalVariableTable(final javassist.CtMethod method, + final String className, + final String[][] vars) { + try { + final javassist.bytecode.MethodInfo mi = method.getMethodInfo(); + final javassist.bytecode.CodeAttribute code = mi.getCodeAttribute(); + if (code == null) { + return; + } + final javassist.bytecode.ConstPool cp = mi.getConstPool(); + final int len = code.getCodeLength(); + final javassist.bytecode.LocalVariableAttribute lva = + new javassist.bytecode.LocalVariableAttribute(cp); + lva.addEntry(0, len, + cp.addUtf8Info("this"), + cp.addUtf8Info("L" + className.replace('.', '/') + ";"), 0); + for (int i = 0; i < vars.length; i++) { + lva.addEntry(0, len, + cp.addUtf8Info(vars[i][0]), + cp.addUtf8Info(vars[i][1]), i + 1); + } + code.getAttributes().add(lva); + } catch (Exception e) { + log.warn("Failed to add LocalVariableTable: {}", e.getMessage()); + } + } + + /** + * Compiles a hierarchy rule expression into a {@code BiFunction} class. + * + * <p>Flow: expression → {@link HierarchyRuleScriptParser#parse} (ANTLR4 AST) + * → {@link #generateApplyMethod} (Java source) → Javassist compile → class load. 
+ * Property access (e.g., {@code u.name}) is mapped to getter calls + * (e.g., {@code u.getName()}) via {@link #toGetter(String)}. + * Equality comparisons use {@code java.util.Objects.equals()} for null safety. + * + * <p>If parsing or compilation fails, throws immediately (fail-fast at startup). + * + * @param ruleName the rule name (e.g., "name", "short-name") + * @param expression the rule expression string + * @return a BiFunction that matches two Service objects + */ + @SuppressWarnings("unchecked") + public BiFunction<Service, Service, Boolean> compile( + final String ruleName, final String expression) throws Exception { + final HierarchyRuleModel model = HierarchyRuleScriptParser.parse(expression); + final String saved = classNameHint; + if (classNameHint == null) { + classNameHint = ruleName; + } + final String className; + try { + className = makeClassName("HierarchyRule_"); + } finally { + classNameHint = saved; + } + + final CtClass ctClass = classPool.makeClass(className); + ctClass.addInterface(classPool.get("java.util.function.BiFunction")); + + ctClass.addConstructor(CtNewConstructor.defaultConstructor(ctClass)); + + final String applyBody = generateApplyMethod(model); + + if (log.isDebugEnabled()) { + log.debug("Hierarchy compile [{}] AST: {}", ruleName, model); + log.debug("Hierarchy compile [{}] apply():\n{}", ruleName, applyBody); + } + + final javassist.CtMethod applyMethod = CtNewMethod.make(applyBody, ctClass); + ctClass.addMethod(applyMethod); + addLineNumberTable(applyMethod, 3); + final String svcDesc = "Lorg/apache/skywalking/oap/server/core/query/type/Service;"; + addLocalVariableTable(applyMethod, className, new String[][]{ + {"arg0", "Ljava/lang/Object;"}, + {"arg1", "Ljava/lang/Object;"}, + {model.getUpperParam(), svcDesc}, + {model.getLowerParam(), svcDesc} + }); + + setSourceFile(ctClass, formatSourceFileName(ruleName)); + writeClassFile(ctClass); + final Class<?> clazz = ctClass.toClass(HierarchyRulePackageHolder.class); + 
ctClass.detach(); + return (BiFunction<Service, Service, Boolean>) clazz.getDeclaredConstructor().newInstance(); + } + + private String generateApplyMethod(final HierarchyRuleModel model) { + final StringBuilder sb = new StringBuilder(); + sb.append("public Object apply(Object arg0, Object arg1) {\n"); + sb.append(" org.apache.skywalking.oap.server.core.query.type.Service "); + sb.append(model.getUpperParam()).append(" = (org.apache.skywalking.oap.server.core.query.type.Service) arg0;\n"); + sb.append(" org.apache.skywalking.oap.server.core.query.type.Service "); + sb.append(model.getLowerParam()).append(" = (org.apache.skywalking.oap.server.core.query.type.Service) arg1;\n"); + + generateRuleBody(sb, model.getBody()); + + sb.append("}\n"); + return sb.toString(); + } + + private void generateRuleBody(final StringBuilder sb, + final HierarchyRuleModel.RuleBody body) { + if (body instanceof HierarchyRuleModel.SimpleComparison) { + final HierarchyRuleModel.SimpleComparison cmp = + (HierarchyRuleModel.SimpleComparison) body; + sb.append(" return Boolean.valueOf("); + generateComparison(sb, cmp.getLeft(), cmp.getOp(), cmp.getRight()); + sb.append(");\n"); + } else if (body instanceof HierarchyRuleModel.BlockBody) { + final HierarchyRuleModel.BlockBody block = (HierarchyRuleModel.BlockBody) body; + for (final HierarchyRuleModel.Statement stmt : block.getStatements()) { + generateStatement(sb, stmt); + } + } + } + + private void generateStatement(final StringBuilder sb, + final HierarchyRuleModel.Statement stmt) { + if (stmt instanceof HierarchyRuleModel.IfStatement) { + generateIfStatement(sb, (HierarchyRuleModel.IfStatement) stmt); + } else if (stmt instanceof HierarchyRuleModel.ReturnStatement) { + generateReturnStatement(sb, (HierarchyRuleModel.ReturnStatement) stmt); + } + } + + private void generateIfStatement(final StringBuilder sb, + final HierarchyRuleModel.IfStatement ifStmt) { + sb.append(" if ("); + generateCondition(sb, ifStmt.getCondition()); + sb.append(") 
{\n"); + for (final HierarchyRuleModel.Statement s : ifStmt.getThenBranch()) { + generateStatement(sb, s); + } + sb.append(" }\n"); + if (ifStmt.getElseBranch() != null && !ifStmt.getElseBranch().isEmpty()) { + sb.append(" else {\n"); + for (final HierarchyRuleModel.Statement s : ifStmt.getElseBranch()) { + generateStatement(sb, s); + } + sb.append(" }\n"); + } + } + + private void generateReturnStatement(final StringBuilder sb, + final HierarchyRuleModel.ReturnStatement retStmt) { + final HierarchyRuleModel.Expr expr = retStmt.getValue(); + if (expr instanceof HierarchyRuleModel.SimpleComparison) { + final HierarchyRuleModel.SimpleComparison cmp = + (HierarchyRuleModel.SimpleComparison) expr; + sb.append(" return Boolean.valueOf("); + generateComparison(sb, cmp.getLeft(), cmp.getOp(), cmp.getRight()); + sb.append(");\n"); + } else if (expr instanceof HierarchyRuleModel.BoolLiteralExpr) { + sb.append(" return Boolean.valueOf(") + .append(((HierarchyRuleModel.BoolLiteralExpr) expr).isValue()) + .append(");\n"); + } else { + sb.append(" return Boolean.valueOf("); + generateExpr(sb, expr); + sb.append(" != null);\n"); + } + } + + private void generateComparison(final StringBuilder sb, + final HierarchyRuleModel.Expr left, + final HierarchyRuleModel.CompareOp op, + final HierarchyRuleModel.Expr right) { + switch (op) { + case EQ: + sb.append("java.util.Objects.equals("); + generateExpr(sb, left); + sb.append(", "); + generateExpr(sb, right); + sb.append(")"); + break; + case NEQ: + sb.append("!java.util.Objects.equals("); + generateExpr(sb, left); + sb.append(", "); + generateExpr(sb, right); + sb.append(")"); + break; + case GT: + generateExpr(sb, left); + sb.append(" > "); + generateExpr(sb, right); + break; + case LT: + generateExpr(sb, left); + sb.append(" < "); + generateExpr(sb, right); + break; + case GTE: + generateExpr(sb, left); + sb.append(" >= "); + generateExpr(sb, right); + break; + case LTE: + generateExpr(sb, left); + sb.append(" <= "); + 
generateExpr(sb, right); + break; + default: + throw new IllegalArgumentException("Unsupported comparison op: " + op); + } + } + + private void generateCondition(final StringBuilder sb, + final HierarchyRuleModel.Condition cond) { + if (cond instanceof HierarchyRuleModel.ComparisonCondition) { + final HierarchyRuleModel.ComparisonCondition cc = + (HierarchyRuleModel.ComparisonCondition) cond; + generateComparison(sb, cc.getLeft(), cc.getOp(), cc.getRight()); + } else if (cond instanceof HierarchyRuleModel.LogicalCondition) { + final HierarchyRuleModel.LogicalCondition lc = + (HierarchyRuleModel.LogicalCondition) cond; + sb.append("("); + generateCondition(sb, lc.getLeft()); + sb.append(lc.getOp() == HierarchyRuleModel.LogicalOp.AND ? " && " : " || "); + generateCondition(sb, lc.getRight()); + sb.append(")"); + } else if (cond instanceof HierarchyRuleModel.NotCondition) { + sb.append("!("); + generateCondition(sb, ((HierarchyRuleModel.NotCondition) cond).getInner()); + sb.append(")"); + } else if (cond instanceof HierarchyRuleModel.ExprCondition) { + generateExpr(sb, ((HierarchyRuleModel.ExprCondition) cond).getExpr()); + } + } + + private void generateExpr(final StringBuilder sb, + final HierarchyRuleModel.Expr expr) { + if (expr instanceof HierarchyRuleModel.MethodChainExpr) { + generateMethodChainExpr(sb, (HierarchyRuleModel.MethodChainExpr) expr); + } else if (expr instanceof HierarchyRuleModel.StringLiteralExpr) { + sb.append('"') + .append(escapeJava(((HierarchyRuleModel.StringLiteralExpr) expr).getValue())) + .append('"'); + } else if (expr instanceof HierarchyRuleModel.NumberLiteralExpr) { + sb.append(((HierarchyRuleModel.NumberLiteralExpr) expr).getValue()); + } else if (expr instanceof HierarchyRuleModel.BoolLiteralExpr) { + sb.append(((HierarchyRuleModel.BoolLiteralExpr) expr).isValue()); + } else if (expr instanceof HierarchyRuleModel.BinaryExpr) { + final HierarchyRuleModel.BinaryExpr bin = (HierarchyRuleModel.BinaryExpr) expr; + generateExpr(sb, 
bin.getLeft()); + sb.append(bin.getOp() == HierarchyRuleModel.ArithmeticOp.ADD ? " + " : " - "); + generateExpr(sb, bin.getRight()); + } else if (expr instanceof HierarchyRuleModel.SimpleComparison) { + final HierarchyRuleModel.SimpleComparison cmp = + (HierarchyRuleModel.SimpleComparison) expr; + generateComparison(sb, cmp.getLeft(), cmp.getOp(), cmp.getRight()); + } + } + + private void generateMethodChainExpr(final StringBuilder sb, + final HierarchyRuleModel.MethodChainExpr expr) { + sb.append(expr.getTarget()); + for (final HierarchyRuleModel.ChainSegment seg : expr.getSegments()) { + sb.append('.'); + if (seg instanceof HierarchyRuleModel.FieldAccess) { + final String fieldName = ((HierarchyRuleModel.FieldAccess) seg).getName(); + sb.append(toGetter(fieldName)).append("()"); + } else if (seg instanceof HierarchyRuleModel.MethodCallSegment) { + final HierarchyRuleModel.MethodCallSegment mc = + (HierarchyRuleModel.MethodCallSegment) seg; + sb.append(mc.getName()).append('('); + final List<HierarchyRuleModel.Expr> args = mc.getArguments(); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateExpr(sb, args.get(i)); + } + sb.append(')'); + } + } + } + + /** + * Maps a field name in rule expressions to the corresponding getter method + * on {@link Service}. Known fields ({@code name} → {@code getName}, + * {@code shortName} → {@code getShortName}) are hard-coded for clarity; + * unknown fields fall back to JavaBean convention ({@code foo} → {@code getFoo}). + * If the getter does not exist on {@link Service}, Javassist compilation will + * fail at startup with a clear error. 
+ */ + private static String toGetter(final String fieldName) { + if ("name".equals(fieldName)) { + return "getName"; + } else if ("shortName".equals(fieldName)) { + return "getShortName"; + } + return "get" + Character.toUpperCase(fieldName.charAt(0)) + fieldName.substring(1); + } + + private static String escapeJava(final String s) { + return s.replace("\\", "\\\\") + .replace("\"", "\\\"") + .replace("\n", "\\n") + .replace("\r", "\\r") + .replace("\t", "\\t"); + } + + /** + * Generates the Java source body of the apply method for debugging/testing. + */ + public String generateSource(final String expression) { + final HierarchyRuleModel model = HierarchyRuleScriptParser.parse(expression); + return generateApplyMethod(model); + } +} diff --git a/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleModel.java b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleModel.java new file mode 100644 index 000000000000..b0f587871b60 --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleModel.java @@ -0,0 +1,260 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.core.config.v2.compiler; + +import java.util.Collections; +import java.util.List; +import lombok.Getter; + +/** + * Immutable AST model for hierarchy matching rule expressions. + * Represents parsed expressions like: + * <pre> + * { (u, l) -> u.name == l.name } + * { (u, l) -> { if(l.shortName.lastIndexOf('.') > 0) return ...; return false; } } + * </pre> + */ +@Getter +public final class HierarchyRuleModel { + private final String upperParam; + private final String lowerParam; + private final RuleBody body; + + private HierarchyRuleModel(final String upperParam, final String lowerParam, final RuleBody body) { + this.upperParam = upperParam; + this.lowerParam = lowerParam; + this.body = body; + } + + public static HierarchyRuleModel of(final String upperParam, final String lowerParam, final RuleBody body) { + return new HierarchyRuleModel(upperParam, lowerParam, body); + } + + /** + * Rule body — either a simple comparison or a block with if/return statements. + */ + public interface RuleBody { + } + + /** + * Simple comparison body: {@code u.name == l.name} + */ + @Getter + public static final class SimpleComparison implements RuleBody, Expr { + private final Expr left; + private final CompareOp op; + private final Expr right; + + public SimpleComparison(final Expr left, final CompareOp op, final Expr right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + /** + * Block body with multiple statements: {@code { if(...) 
return ...; return false; }} + */ + @Getter + public static final class BlockBody implements RuleBody { + private final List<Statement> statements; + + public BlockBody(final List<Statement> statements) { + this.statements = Collections.unmodifiableList(statements); + } + } + + // ==================== Statements ==================== + + public interface Statement { + } + + @Getter + public static final class IfStatement implements Statement { + private final Condition condition; + private final List<Statement> thenBranch; + private final List<Statement> elseBranch; + + public IfStatement(final Condition condition, + final List<Statement> thenBranch, + final List<Statement> elseBranch) { + this.condition = condition; + this.thenBranch = Collections.unmodifiableList(thenBranch); + this.elseBranch = elseBranch != null + ? Collections.unmodifiableList(elseBranch) : Collections.emptyList(); + } + } + + @Getter + public static final class ReturnStatement implements Statement { + private final Expr value; + + public ReturnStatement(final Expr value) { + this.value = value; + } + } + + // ==================== Conditions ==================== + + public interface Condition { + } + + @Getter + public static final class ComparisonCondition implements Condition { + private final Expr left; + private final CompareOp op; + private final Expr right; + + public ComparisonCondition(final Expr left, final CompareOp op, final Expr right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + @Getter + public static final class LogicalCondition implements Condition { + private final Condition left; + private final LogicalOp op; + private final Condition right; + + public LogicalCondition(final Condition left, final LogicalOp op, final Condition right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + @Getter + public static final class NotCondition implements Condition { + private final Condition inner; + + public NotCondition(final Condition 
inner) { + this.inner = inner; + } + } + + @Getter + public static final class ExprCondition implements Condition { + private final Expr expr; + + public ExprCondition(final Expr expr) { + this.expr = expr; + } + } + + // ==================== Expressions ==================== + + public interface Expr { + } + + /** + * Method chain: {@code u.name}, {@code l.shortName.lastIndexOf('.')}, + * {@code u.shortName.substring(0, l.shortName.lastIndexOf(':'))} + */ + @Getter + public static final class MethodChainExpr implements Expr { + private final String target; + private final List<ChainSegment> segments; + + public MethodChainExpr(final String target, final List<ChainSegment> segments) { + this.target = target; + this.segments = Collections.unmodifiableList(segments); + } + } + + @Getter + public static final class StringLiteralExpr implements Expr { + private final String value; + + public StringLiteralExpr(final String value) { + this.value = value; + } + } + + @Getter + public static final class NumberLiteralExpr implements Expr { + private final long value; + + public NumberLiteralExpr(final long value) { + this.value = value; + } + } + + @Getter + public static final class BoolLiteralExpr implements Expr { + private final boolean value; + + public BoolLiteralExpr(final boolean value) { + this.value = value; + } + } + + @Getter + public static final class BinaryExpr implements Expr { + private final Expr left; + private final ArithmeticOp op; + private final Expr right; + + public BinaryExpr(final Expr left, final ArithmeticOp op, final Expr right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + // ==================== Chain segments ==================== + + public interface ChainSegment { + String getName(); + } + + @Getter + public static final class FieldAccess implements ChainSegment { + private final String name; + + public FieldAccess(final String name) { + this.name = name; + } + } + + @Getter + public static final class 
MethodCallSegment implements ChainSegment { + private final String name; + private final List<Expr> arguments; + + public MethodCallSegment(final String name, final List<Expr> arguments) { + this.name = name; + this.arguments = Collections.unmodifiableList(arguments); + } + } + + // ==================== Enums ==================== + + public enum CompareOp { + EQ, NEQ, GT, LT, GTE, LTE + } + + public enum LogicalOp { + AND, OR + } + + public enum ArithmeticOp { + ADD, SUB + } +} diff --git a/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleScriptParser.java b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleScriptParser.java new file mode 100644 index 000000000000..b409372c4c8f --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleScriptParser.java @@ -0,0 +1,375 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package org.apache.skywalking.oap.server.core.config.v2.compiler;
+
+import java.util.ArrayList;
+import java.util.List;
+import org.antlr.v4.runtime.BaseErrorListener;
+import org.antlr.v4.runtime.CharStreams;
+import org.antlr.v4.runtime.CommonTokenStream;
+import org.antlr.v4.runtime.RecognitionException;
+import org.antlr.v4.runtime.Recognizer;
+import org.apache.skywalking.hierarchy.rt.grammar.HierarchyRuleLexer;
+import org.apache.skywalking.hierarchy.rt.grammar.HierarchyRuleParser;
+import org.apache.skywalking.hierarchy.rt.grammar.HierarchyRuleParserBaseVisitor;
+
+/**
+ * Facade: parses hierarchy rule expression strings into {@link HierarchyRuleModel}.
+ *
+ * <pre>
+ * HierarchyRuleModel model = HierarchyRuleScriptParser.parse(
+ *     "{ (u, l) -> u.name == l.name }");
+ * </pre>
+ */
+public final class HierarchyRuleScriptParser {
+
+    private HierarchyRuleScriptParser() {
+    }
+
+    public static HierarchyRuleModel parse(final String expression) {
+        final HierarchyRuleLexer lexer = new HierarchyRuleLexer(
+            CharStreams.fromString(expression));
+        final CommonTokenStream tokens = new CommonTokenStream(lexer);
+        final HierarchyRuleParser parser = new HierarchyRuleParser(tokens);
+
+        final List<String> errors = new ArrayList<>();
+        final BaseErrorListener errorListener = new BaseErrorListener() {
+            @Override
+            public void syntaxError(final Recognizer<?, ?> recognizer,
+                                    final Object offendingSymbol,
+                                    final int line,
+                                    final int charPositionInLine,
+                                    final String msg,
+                                    final RecognitionException e) {
+                errors.add(line + ":" + charPositionInLine + " " + msg);
+            }
+        };
+        // Register the listener on the lexer as well as the parser; by default
+        // token recognition errors are only printed to stderr and would not
+        // fail the parse.
+        lexer.removeErrorListeners();
+        lexer.addErrorListener(errorListener);
+        parser.removeErrorListeners();
+        parser.addErrorListener(errorListener);
+
+        final HierarchyRuleParser.MatchingRuleContext tree = parser.matchingRule();
+        if (!errors.isEmpty()) {
+            throw new IllegalArgumentException(
+                "Hierarchy rule parsing failed: " + String.join("; ", errors)
+                    + " in expression: " + expression);
+        }
+
+        return new HierarchyRuleModelVisitor().visit(tree);
+    }
+
+    /**
+     * Visitor that transforms the ANTLR4 parse tree into
{@link HierarchyRuleModel}. + */ + private static final class HierarchyRuleModelVisitor + extends HierarchyRuleParserBaseVisitor<HierarchyRuleModel> { + + @Override + public HierarchyRuleModel visitMatchingRule(final HierarchyRuleParser.MatchingRuleContext ctx) { + final String upperParam = ctx.param(0).getText(); + final String lowerParam = ctx.param(1).getText(); + final HierarchyRuleModel.RuleBody body = convertRuleBody(ctx.ruleBody()); + return HierarchyRuleModel.of(upperParam, lowerParam, body); + } + + private HierarchyRuleModel.RuleBody convertRuleBody( + final HierarchyRuleParser.RuleBodyContext ctx) { + if (ctx.simpleExpression() != null) { + return convertSimpleExpression(ctx.simpleExpression()); + } + return convertBlockBody(ctx.blockBody()); + } + + private HierarchyRuleModel.SimpleComparison convertSimpleExpression( + final HierarchyRuleParser.SimpleExpressionContext ctx) { + final HierarchyRuleModel.Expr left = new ExprVisitor().visitRuleExpr(ctx.ruleExpr(0)); + final HierarchyRuleModel.Expr right = new ExprVisitor().visitRuleExpr(ctx.ruleExpr(1)); + final HierarchyRuleModel.CompareOp op = ctx.DEQ() != null + ? 
HierarchyRuleModel.CompareOp.EQ : HierarchyRuleModel.CompareOp.NEQ; + return new HierarchyRuleModel.SimpleComparison(left, op, right); + } + + private HierarchyRuleModel.BlockBody convertBlockBody( + final HierarchyRuleParser.BlockBodyContext ctx) { + final List<HierarchyRuleModel.Statement> stmts = new ArrayList<>(); + for (final HierarchyRuleParser.StatementContext stmtCtx : ctx.statement()) { + stmts.add(convertStatement(stmtCtx)); + } + return new HierarchyRuleModel.BlockBody(stmts); + } + + private HierarchyRuleModel.Statement convertStatement( + final HierarchyRuleParser.StatementContext ctx) { + if (ctx.ifStatement() != null) { + return convertIfStatement(ctx.ifStatement()); + } + return convertReturnStatement(ctx.returnStatement()); + } + + private HierarchyRuleModel.IfStatement convertIfStatement( + final HierarchyRuleParser.IfStatementContext ctx) { + final HierarchyRuleModel.Condition condition = + new ConditionVisitor().visit(ctx.condition(0)); + + final List<HierarchyRuleModel.Statement> thenBranch = new ArrayList<>(); + if (ctx.returnStatement(0) != null) { + thenBranch.add(convertReturnStatement(ctx.returnStatement(0))); + } else if (ctx.blockBody(0) != null) { + thenBranch.addAll(convertBlockBody(ctx.blockBody(0)).getStatements()); + } + + final List<HierarchyRuleModel.Statement> elseBranch = new ArrayList<>(); + // Handle else-if and else branches + final int condCount = ctx.condition().size(); + final int retCount = ctx.returnStatement().size(); + final int blockCount = ctx.blockBody().size(); + + // If there are more conditions (else if branches) + if (condCount > 1 || retCount > 1 || blockCount > 1) { + // Simplification: flatten else-if into the else branch + // For the current hierarchy rules, we don't have else-if patterns + // so this handles the basic else case + if (retCount > 1) { + elseBranch.add(convertReturnStatement(ctx.returnStatement(retCount - 1))); + } else if (blockCount > 1) { + elseBranch.addAll( + 
convertBlockBody(ctx.blockBody(blockCount - 1)).getStatements()); + } + } + + return new HierarchyRuleModel.IfStatement(condition, thenBranch, elseBranch); + } + + private HierarchyRuleModel.ReturnStatement convertReturnStatement( + final HierarchyRuleParser.ReturnStatementContext ctx) { + final HierarchyRuleParser.ReturnValueContext rv = ctx.returnValue(); + if (rv instanceof HierarchyRuleParser.ReturnComparisonContext) { + final HierarchyRuleParser.ReturnComparisonContext rc = + (HierarchyRuleParser.ReturnComparisonContext) rv; + final ExprVisitor ev = new ExprVisitor(); + final HierarchyRuleModel.SimpleComparison comp = + new HierarchyRuleModel.SimpleComparison( + ev.visitRuleExpr(rc.ruleExpr(0)), + HierarchyRuleModel.CompareOp.EQ, + ev.visitRuleExpr(rc.ruleExpr(1))); + return new HierarchyRuleModel.ReturnStatement(comp); + } + if (rv instanceof HierarchyRuleParser.ReturnNeqComparisonContext) { + final HierarchyRuleParser.ReturnNeqComparisonContext rnc = + (HierarchyRuleParser.ReturnNeqComparisonContext) rv; + final ExprVisitor ev = new ExprVisitor(); + final HierarchyRuleModel.SimpleComparison comp = + new HierarchyRuleModel.SimpleComparison( + ev.visitRuleExpr(rnc.ruleExpr(0)), + HierarchyRuleModel.CompareOp.NEQ, + ev.visitRuleExpr(rnc.ruleExpr(1))); + return new HierarchyRuleModel.ReturnStatement(comp); + } + // returnExpr + final HierarchyRuleParser.ReturnExprContext re = + (HierarchyRuleParser.ReturnExprContext) rv; + final HierarchyRuleModel.Expr value = new ExprVisitor().visitRuleExpr(re.ruleExpr()); + return new HierarchyRuleModel.ReturnStatement(value); + } + } + + /** + * Visitor for condition nodes. 
+ */ + private static final class ConditionVisitor + extends HierarchyRuleParserBaseVisitor<HierarchyRuleModel.Condition> { + + @Override + public HierarchyRuleModel.Condition visitCondAnd( + final HierarchyRuleParser.CondAndContext ctx) { + return new HierarchyRuleModel.LogicalCondition( + visit(ctx.condition(0)), + HierarchyRuleModel.LogicalOp.AND, + visit(ctx.condition(1))); + } + + @Override + public HierarchyRuleModel.Condition visitCondOr( + final HierarchyRuleParser.CondOrContext ctx) { + return new HierarchyRuleModel.LogicalCondition( + visit(ctx.condition(0)), + HierarchyRuleModel.LogicalOp.OR, + visit(ctx.condition(1))); + } + + @Override + public HierarchyRuleModel.Condition visitCondNot( + final HierarchyRuleParser.CondNotContext ctx) { + return new HierarchyRuleModel.NotCondition(visit(ctx.condition())); + } + + @Override + public HierarchyRuleModel.Condition visitCondParen( + final HierarchyRuleParser.CondParenContext ctx) { + return visit(ctx.condition()); + } + + @Override + public HierarchyRuleModel.Condition visitCondEq( + final HierarchyRuleParser.CondEqContext ctx) { + final ExprVisitor ev = new ExprVisitor(); + return new HierarchyRuleModel.ComparisonCondition( + ev.visitRuleExpr(ctx.ruleExpr(0)), + HierarchyRuleModel.CompareOp.EQ, + ev.visitRuleExpr(ctx.ruleExpr(1))); + } + + @Override + public HierarchyRuleModel.Condition visitCondNeq( + final HierarchyRuleParser.CondNeqContext ctx) { + final ExprVisitor ev = new ExprVisitor(); + return new HierarchyRuleModel.ComparisonCondition( + ev.visitRuleExpr(ctx.ruleExpr(0)), + HierarchyRuleModel.CompareOp.NEQ, + ev.visitRuleExpr(ctx.ruleExpr(1))); + } + + @Override + public HierarchyRuleModel.Condition visitCondGt( + final HierarchyRuleParser.CondGtContext ctx) { + final ExprVisitor ev = new ExprVisitor(); + return new HierarchyRuleModel.ComparisonCondition( + ev.visitRuleExpr(ctx.ruleExpr(0)), + HierarchyRuleModel.CompareOp.GT, + ev.visitRuleExpr(ctx.ruleExpr(1))); + } + + @Override + public 
HierarchyRuleModel.Condition visitCondLt( + final HierarchyRuleParser.CondLtContext ctx) { + final ExprVisitor ev = new ExprVisitor(); + return new HierarchyRuleModel.ComparisonCondition( + ev.visitRuleExpr(ctx.ruleExpr(0)), + HierarchyRuleModel.CompareOp.LT, + ev.visitRuleExpr(ctx.ruleExpr(1))); + } + + @Override + public HierarchyRuleModel.Condition visitCondExpr( + final HierarchyRuleParser.CondExprContext ctx) { + final ExprVisitor ev = new ExprVisitor(); + return new HierarchyRuleModel.ExprCondition(ev.visitRuleExpr(ctx.ruleExpr())); + } + } + + /** + * Visitor for expression nodes. + */ + private static final class ExprVisitor + extends HierarchyRuleParserBaseVisitor<HierarchyRuleModel.Expr> { + + public HierarchyRuleModel.Expr visitRuleExpr( + final HierarchyRuleParser.RuleExprContext ctx) { + return visit(ctx); + } + + @Override + public HierarchyRuleModel.Expr visitExprAdd( + final HierarchyRuleParser.ExprAddContext ctx) { + return new HierarchyRuleModel.BinaryExpr( + visit(ctx.ruleExpr(0)), + HierarchyRuleModel.ArithmeticOp.ADD, + visit(ctx.ruleExpr(1))); + } + + @Override + public HierarchyRuleModel.Expr visitExprSub( + final HierarchyRuleParser.ExprSubContext ctx) { + return new HierarchyRuleModel.BinaryExpr( + visit(ctx.ruleExpr(0)), + HierarchyRuleModel.ArithmeticOp.SUB, + visit(ctx.ruleExpr(1))); + } + + @Override + public HierarchyRuleModel.Expr visitExprPrimary( + final HierarchyRuleParser.ExprPrimaryContext ctx) { + return visit(ctx.ruleExprPrimary()); + } + + @Override + public HierarchyRuleModel.Expr visitExprMethodChain( + final HierarchyRuleParser.ExprMethodChainContext ctx) { + return convertMethodChain(ctx.methodChain()); + } + + @Override + public HierarchyRuleModel.Expr visitExprString( + final HierarchyRuleParser.ExprStringContext ctx) { + return new HierarchyRuleModel.StringLiteralExpr(stripQuotes(ctx.STRING().getText())); + } + + @Override + public HierarchyRuleModel.Expr visitExprNumber( + final HierarchyRuleParser.ExprNumberContext 
ctx) { + return new HierarchyRuleModel.NumberLiteralExpr(Long.parseLong(ctx.NUMBER().getText())); + } + + @Override + public HierarchyRuleModel.Expr visitExprTrue( + final HierarchyRuleParser.ExprTrueContext ctx) { + return new HierarchyRuleModel.BoolLiteralExpr(true); + } + + @Override + public HierarchyRuleModel.Expr visitExprFalse( + final HierarchyRuleParser.ExprFalseContext ctx) { + return new HierarchyRuleModel.BoolLiteralExpr(false); + } + + private HierarchyRuleModel.MethodChainExpr convertMethodChain( + final HierarchyRuleParser.MethodChainContext ctx) { + final String target = ctx.IDENTIFIER().getText(); + final List<HierarchyRuleModel.ChainSegment> segments = new ArrayList<>(); + for (final HierarchyRuleParser.ChainSegmentContext seg : ctx.chainSegment()) { + segments.add(convertChainSegment(seg)); + } + return new HierarchyRuleModel.MethodChainExpr(target, segments); + } + + private HierarchyRuleModel.ChainSegment convertChainSegment( + final HierarchyRuleParser.ChainSegmentContext ctx) { + if (ctx instanceof HierarchyRuleParser.ChainMethodCallContext) { + final HierarchyRuleParser.ChainMethodCallContext mc = + (HierarchyRuleParser.ChainMethodCallContext) ctx; + final String name = mc.IDENTIFIER().getText(); + final List<HierarchyRuleModel.Expr> args = new ArrayList<>(); + if (mc.argList() != null) { + for (final HierarchyRuleParser.RuleExprContext argCtx : + mc.argList().ruleExpr()) { + args.add(visit(argCtx)); + } + } + return new HierarchyRuleModel.MethodCallSegment(name, args); + } + final HierarchyRuleParser.ChainFieldAccessContext fa = + (HierarchyRuleParser.ChainFieldAccessContext) ctx; + return new HierarchyRuleModel.FieldAccess(fa.IDENTIFIER().getText()); + } + } + + private static String stripQuotes(final String s) { + if (s.length() >= 2 && (s.charAt(0) == '\'' || s.charAt(0) == '"')) { + return s.substring(1, s.length() - 1); + } + return s; + } +} diff --git 
a/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/hierarchy/rule/rt/HierarchyRulePackageHolder.java b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/hierarchy/rule/rt/HierarchyRulePackageHolder.java new file mode 100644 index 000000000000..6ad8dd96b874 --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/main/java/org/apache/skywalking/oap/server/core/config/v2/compiler/hierarchy/rule/rt/HierarchyRulePackageHolder.java @@ -0,0 +1,26 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.core.config.v2.compiler.hierarchy.rule.rt; + +/** + * Empty marker class used as the class loading anchor for Javassist + * {@code CtClass.toClass(Class)} on JDK 16+. + * Generated hierarchy rule classes are loaded in this package. 
+ */ +public class HierarchyRulePackageHolder { +} diff --git a/oap-server/analyzer/hierarchy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.core.config.HierarchyDefinitionService$HierarchyRuleProvider b/oap-server/analyzer/hierarchy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.core.config.HierarchyDefinitionService$HierarchyRuleProvider new file mode 100644 index 000000000000..66092b335560 --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.core.config.HierarchyDefinitionService$HierarchyRuleProvider @@ -0,0 +1,19 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# + +org.apache.skywalking.oap.server.core.config.v2.compiler.CompiledHierarchyRuleProvider diff --git a/oap-server/analyzer/hierarchy/src/test/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleClassGeneratorTest.java b/oap-server/analyzer/hierarchy/src/test/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleClassGeneratorTest.java new file mode 100644 index 000000000000..72488ef61c85 --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/test/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleClassGeneratorTest.java @@ -0,0 +1,158 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.core.config.v2.compiler; + +import java.util.function.BiFunction; +import javassist.ClassPool; +import org.apache.skywalking.oap.server.core.query.type.Service; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; + +class HierarchyRuleClassGeneratorTest { + + private HierarchyRuleClassGenerator generator; + + @BeforeEach + void setUp() { + generator = new HierarchyRuleClassGenerator(new ClassPool(true)); + } + + @Test + void compileSimpleNameEquality() throws Exception { + final BiFunction<Service, Service, Boolean> fn = generator.compile( + "name", "{ (u, l) -> u.name == l.name }"); + + assertNotNull(fn); + + final Service upper = new Service(); + upper.setName("svc-a"); + final Service lower = new Service(); + lower.setName("svc-a"); + assertTrue(fn.apply(upper, lower)); + + lower.setName("svc-b"); + assertFalse(fn.apply(upper, lower)); + } + + @Test + void compileShortNameEquality() throws Exception { + final BiFunction<Service, Service, Boolean> fn = generator.compile( + "short-name", "{ (u, l) -> u.shortName == l.shortName }"); + + assertNotNull(fn); + + final Service upper = new Service(); + upper.setShortName("svc"); + final Service lower = new Service(); + lower.setShortName("svc"); + assertTrue(fn.apply(upper, lower)); + + lower.setShortName("other"); + assertFalse(fn.apply(upper, lower)); + } + + @Test + void compileLowerShortNameRemoveNs() throws Exception { + final String expr = "{ (u, l) -> {" + + " if (l.shortName.lastIndexOf('.') > 0) {" + + " return u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.'));" + + " }" + + " return false;" + + "} }"; + final BiFunction<Service, Service, Boolean> fn = generator.compile( + 
"lower-short-name-remove-ns", expr);
+
+        assertNotNull(fn);
+
+        final Service upper = new Service();
+        upper.setShortName("svc-a");
+        final Service lower = new Service();
+        lower.setShortName("svc-a.ns1");
+        assertTrue(fn.apply(upper, lower));
+
+        lower.setShortName("svc-b.ns1");
+        assertFalse(fn.apply(upper, lower));
+
+        lower.setShortName("no-dot");
+        assertFalse(fn.apply(upper, lower));
+    }
+
+    @Test
+    void compileLowerShortNameWithFqdn() throws Exception {
+        final String expr = "{ (u, l) -> {" +
+            "  if (u.shortName.lastIndexOf(':') > 0) {" +
+            "    return u.shortName.substring(0, u.shortName.lastIndexOf(':'))" +
+            "      == l.shortName.concat('.svc.cluster.local');" +
+            "  }" +
+            "  return false;" +
+            "} }";
+        final BiFunction<Service, Service, Boolean> fn = generator.compile(
+            "lower-short-name-with-fqdn", expr);
+
+        assertNotNull(fn);
+
+        final Service upper = new Service();
+        upper.setShortName("svc-a.svc.cluster.local:8080");
+        final Service lower = new Service();
+        lower.setShortName("svc-a");
+        assertTrue(fn.apply(upper, lower));
+
+        upper.setShortName("no-port");
+        assertFalse(fn.apply(upper, lower));
+    }
+
+    // ==================== Error handling tests ====================
+
+    @Test
+    void emptyExpressionThrows() {
+        // Example error: Hierarchy rule parsing failed: 1:0 mismatched input '<EOF>'
+        // expecting '{'
+        assertThrows(Exception.class,
+            () -> generator.compile("empty", ""));
+    }
+
+    @Test
+    void missingClosureBracesThrows() {
+        // Example error: Hierarchy rule parsing failed: 1:0 mismatched input 'u'
+        // expecting '{'
+        assertThrows(Exception.class,
+            () -> generator.compile("test", "u.name == l.name"));
+    }
+
+    @Test
+    void missingParametersThrows() {
+        // Example error: Hierarchy rule parsing failed: 1:2 mismatched input '}'
+        // expecting '('
+        assertThrows(Exception.class,
+            () -> generator.compile("test", "{ }"));
+    }
+
+    @Test
+    void invalidFieldAccessThrows() {
+        // Example error: [source error] getNonExistent() not found in Service
+        // (Javassist cannot
find the getter for a non-existent field) + assertThrows(Exception.class, + () -> generator.compile("test", + "{ (u, l) -> u.nonExistent == l.nonExistent }")); + } +} diff --git a/oap-server/analyzer/hierarchy/src/test/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleScriptParserTest.java b/oap-server/analyzer/hierarchy/src/test/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleScriptParserTest.java new file mode 100644 index 000000000000..9fa60481bfee --- /dev/null +++ b/oap-server/analyzer/hierarchy/src/test/java/org/apache/skywalking/oap/server/core/config/v2/compiler/HierarchyRuleScriptParserTest.java @@ -0,0 +1,159 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.core.config.v2.compiler; + +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertInstanceOf; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; + +class HierarchyRuleScriptParserTest { + + @Test + void parseSimpleNameEquality() { + final HierarchyRuleModel model = HierarchyRuleScriptParser.parse( + "{ (u, l) -> u.name == l.name }"); + + assertEquals("u", model.getUpperParam()); + assertEquals("l", model.getLowerParam()); + assertInstanceOf(HierarchyRuleModel.SimpleComparison.class, model.getBody()); + + final HierarchyRuleModel.SimpleComparison cmp = + (HierarchyRuleModel.SimpleComparison) model.getBody(); + assertEquals(HierarchyRuleModel.CompareOp.EQ, cmp.getOp()); + + final HierarchyRuleModel.MethodChainExpr left = + (HierarchyRuleModel.MethodChainExpr) cmp.getLeft(); + assertEquals("u", left.getTarget()); + assertEquals(1, left.getSegments().size()); + assertEquals("name", left.getSegments().get(0).getName()); + + final HierarchyRuleModel.MethodChainExpr right = + (HierarchyRuleModel.MethodChainExpr) cmp.getRight(); + assertEquals("l", right.getTarget()); + assertEquals("name", right.getSegments().get(0).getName()); + } + + @Test + void parseShortNameEquality() { + final HierarchyRuleModel model = HierarchyRuleScriptParser.parse( + "{ (u, l) -> u.shortName == l.shortName }"); + + final HierarchyRuleModel.SimpleComparison cmp = + (HierarchyRuleModel.SimpleComparison) model.getBody(); + final HierarchyRuleModel.MethodChainExpr left = + (HierarchyRuleModel.MethodChainExpr) cmp.getLeft(); + assertEquals("shortName", left.getSegments().get(0).getName()); + } + + @Test + void parseLowerShortNameRemoveNs() { + // lower-short-name-remove-ns rule + final String expr = + "{ (u, l) -> { if(l.shortName.lastIndexOf('.') > 0) " + + "return u.shortName == 
l.shortName.substring(0, l.shortName.lastIndexOf('.')); " + + "return false; } }"; + + final HierarchyRuleModel model = HierarchyRuleScriptParser.parse(expr); + assertEquals("u", model.getUpperParam()); + assertEquals("l", model.getLowerParam()); + assertInstanceOf(HierarchyRuleModel.BlockBody.class, model.getBody()); + + final HierarchyRuleModel.BlockBody block = + (HierarchyRuleModel.BlockBody) model.getBody(); + assertEquals(2, block.getStatements().size()); + + // First statement: if + final HierarchyRuleModel.IfStatement ifStmt = + (HierarchyRuleModel.IfStatement) block.getStatements().get(0); + assertInstanceOf( + HierarchyRuleModel.ComparisonCondition.class, ifStmt.getCondition()); + final HierarchyRuleModel.ComparisonCondition cond = + (HierarchyRuleModel.ComparisonCondition) ifStmt.getCondition(); + assertEquals(HierarchyRuleModel.CompareOp.GT, cond.getOp()); + + // Condition left: l.shortName.lastIndexOf('.') + final HierarchyRuleModel.MethodChainExpr condLeft = + (HierarchyRuleModel.MethodChainExpr) cond.getLeft(); + assertEquals("l", condLeft.getTarget()); + assertEquals(2, condLeft.getSegments().size()); + assertEquals("shortName", condLeft.getSegments().get(0).getName()); + assertInstanceOf( + HierarchyRuleModel.MethodCallSegment.class, condLeft.getSegments().get(1)); + final HierarchyRuleModel.MethodCallSegment lastIndexOf = + (HierarchyRuleModel.MethodCallSegment) condLeft.getSegments().get(1); + assertEquals("lastIndexOf", lastIndexOf.getName()); + assertEquals(1, lastIndexOf.getArguments().size()); + assertInstanceOf( + HierarchyRuleModel.StringLiteralExpr.class, lastIndexOf.getArguments().get(0)); + assertEquals(".", + ((HierarchyRuleModel.StringLiteralExpr) lastIndexOf.getArguments().get(0)).getValue()); + + // Then branch: return u.shortName == l.shortName.substring(0, ...) 
+ assertEquals(1, ifStmt.getThenBranch().size()); + assertInstanceOf( + HierarchyRuleModel.ReturnStatement.class, ifStmt.getThenBranch().get(0)); + + // Second statement: return false + final HierarchyRuleModel.ReturnStatement retFalse = + (HierarchyRuleModel.ReturnStatement) block.getStatements().get(1); + assertInstanceOf(HierarchyRuleModel.BoolLiteralExpr.class, retFalse.getValue()); + final HierarchyRuleModel.BoolLiteralExpr falseExpr = + (HierarchyRuleModel.BoolLiteralExpr) retFalse.getValue(); + assertTrue(!falseExpr.isValue()); + } + + @Test + void parseLowerShortNameWithFqdn() { + // lower-short-name-with-fqdn rule + final String expr = + "{ (u, l) -> { if(u.shortName.lastIndexOf(':') > 0) " + + "return u.shortName.substring(0, u.shortName.lastIndexOf(':')) " + + "== l.shortName.concat('.svc.cluster.local'); " + + "return false; } }"; + + final HierarchyRuleModel model = HierarchyRuleScriptParser.parse(expr); + assertInstanceOf(HierarchyRuleModel.BlockBody.class, model.getBody()); + + final HierarchyRuleModel.BlockBody block = + (HierarchyRuleModel.BlockBody) model.getBody(); + assertEquals(2, block.getStatements().size()); + + // Verify the if condition checks u.shortName.lastIndexOf(':') > 0 + final HierarchyRuleModel.IfStatement ifStmt = + (HierarchyRuleModel.IfStatement) block.getStatements().get(0); + final HierarchyRuleModel.ComparisonCondition cond = + (HierarchyRuleModel.ComparisonCondition) ifStmt.getCondition(); + assertEquals(HierarchyRuleModel.CompareOp.GT, cond.getOp()); + + // Then branch has a return statement with == comparison + final HierarchyRuleModel.ReturnStatement retStmt = + (HierarchyRuleModel.ReturnStatement) ifStmt.getThenBranch().get(0); + // The return value should be a comparison (u.shortName.substring(...) 
== l.shortName.concat(...)) + // But since our grammar wraps returns as expressions, check the structure + assertInstanceOf(HierarchyRuleModel.Expr.class, retStmt.getValue()); + } + + @Test + void parseSyntaxErrorThrows() { + assertThrows(IllegalArgumentException.class, + () -> HierarchyRuleScriptParser.parse("{ invalid }")); + } +} diff --git a/oap-server/analyzer/log-analyzer/CLAUDE.md b/oap-server/analyzer/log-analyzer/CLAUDE.md new file mode 100644 index 000000000000..881a40ac7aa4 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/CLAUDE.md @@ -0,0 +1,263 @@ +# LAL Compiler + +Compiles LAL (Log Analysis Language) scripts into `LalExpression` implementation classes at runtime using ANTLR4 parsing and Javassist bytecode generation. + +## Compilation Workflow + +``` +LAL DSL string + → LALScriptParser.parse(dsl) [ANTLR4 lexer/parser → listener] + → LALScriptModel (immutable AST) + → LALClassGenerator.compileFromModel(model) + 1. detectParserType(model) — compile-time data source analysis (JSON/YAML/TEXT/NONE) + 2. generateExecuteMethod() — emit execute() + private methods (_extractor, _sink) + 3. classPool.makeClass() — single class implementing LalExpression + 4. addLocalVariableTable() — named LVT entries for all methods + 5. 
ctClass.toClass() — load into JVM + → LalExpression instance +``` + +The generated class implements: +```java +void execute(FilterSpec filterSpec, ExecutionContext ctx) +``` + +## File Structure + +``` +oap-server/analyzer/log-analyzer/ + src/main/antlr4/.../LALLexer.g4 — ANTLR4 lexer grammar + src/main/antlr4/.../LALParser.g4 — ANTLR4 parser grammar + + src/main/java/.../compiler/ + LALScriptParser.java — ANTLR4 facade: DSL string → AST + LALScriptModel.java — Immutable AST model classes + LALClassGenerator.java — Public API, execute method codegen, class scaffolding + LALBlockCodegen.java — Extractor/sink/condition/value-access codegen + LALCodegenHelper.java — Static utility methods and shared constants + rt/ + LalExpressionPackageHolder.java — Class loading anchor (empty marker) + LalRuntimeHelper.java — Instance-based helper called by generated code + + src/main/java/.../dsl/ + LalExpression.java — Functional interface: execute(FilterSpec, ExecutionContext) + ExecutionContext.java — Per-log execution state (log, parsed, flags) + DSL.java — Wraps compiled expression + FilterSpec + spec/filter/FilterSpec.java — Top-level filter spec (all methods take ctx explicitly) + spec/extractor/ExtractorSpec.java — Extractor field setters (all methods take ctx explicitly) + spec/sink/SinkSpec.java — Sink spec (save/drop/sample) + spec/sink/SamplerSpec.java — Rate-limit sampler + + src/test/java/.../compiler/ + LALScriptParserTest.java — 20 parser tests + LALClassGeneratorTest.java — 37 generator tests + LALExpressionExecutionTest.java — 27 data-driven execution tests (from YAML + .input.data) +``` + +## Package & Class Naming + +All v2 classes live under `org.apache.skywalking.oap.log.analyzer.v2.*` to avoid FQCN conflicts with the v1 (Groovy) classes. 
+ +| Component | Package / Name | +|-----------|---------------| +| Parser/Model/Generator | `org.apache.skywalking.oap.log.analyzer.v2.compiler` | +| Generated classes | `org.apache.skywalking.oap.log.analyzer.v2.compiler.rt.LalExpr_<N>` | +| Package holder | `org.apache.skywalking.oap.log.analyzer.v2.compiler.rt.LalExpressionPackageHolder` | +| Runtime helper | `org.apache.skywalking.oap.log.analyzer.v2.compiler.rt.LalRuntimeHelper` | +| Functional interface | `org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression` | + +`<N>` is a global `AtomicInteger` counter. + +## Single Class with Private Methods + +The generator produces a single class per LAL script. Extractor and sink blocks become private methods called directly from `execute()` — no Consumer classes, no callback indirection. + +Method naming: `_extractor`, `_extractor_2`, `_extractor_3` (no `_0` suffix for single methods). + +Sub-blocks (slowSql, sampledTrace, metrics, sampler, rateLimit) are inlined within their parent method. + +## Explicit Context Passing (No ThreadLocal) + +All spec methods take `ExecutionContext ctx` as an explicit parameter — there is no `BINDING` ThreadLocal or `bind()` method. The `execute()` method receives `ctx` directly and passes it through: + +- `execute(FilterSpec filterSpec, ExecutionContext ctx)` — entry point +- `filterSpec.json(ctx)`, `filterSpec.text(ctx)`, `filterSpec.sink(ctx)` — parser/sink calls +- `_e.service(h.ctx(), ...)`, `_e.tag(h.ctx(), ...)` — extractor calls via `h.ctx()` +- `_f.sampler().rateLimit(h.ctx(), ...)` — sink calls via `h.ctx()` + +The generated `execute()` method guards `_extractor()` and `_sink()` calls with `if (!ctx.shouldAbort())`, matching v1 Groovy behavior where `extractor {}` and `sink {}` closures check the abort flag before running their body. `finalizeSink(ctx)` also checks the flag. Individual spec methods inside each block additionally check `ctx.shouldAbort()` as a defense-in-depth measure. 
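
The abort-guard behavior described above can be illustrated with a minimal, self-contained sketch. `Ctx`, `AbortGuardSketch`, and the `parserFails` flag are simplified placeholders invented for this illustration — the real types are `ExecutionContext` and `FilterSpec` in the `dsl` package, and the real generated code calls `filterSpec.json(ctx)` and the private `_extractor`/`_sink` methods:

```java
public class AbortGuardSketch {
    // Stand-in for ExecutionContext: only the abort flag matters here.
    static final class Ctx {
        private boolean abort;
        boolean shouldAbort() { return abort; }
        void abort() { abort = true; }
    }

    // Shape of a generated execute(): run the parser first, then enter the
    // extractor/sink private methods only if the parser (e.g. a json{} block
    // with abortOnFailure) did not abort the context.
    static String execute(Ctx ctx, boolean parserFails) {
        StringBuilder calls = new StringBuilder("parse;");
        if (parserFails) {
            ctx.abort(); // simulates the parser setting the abort flag on failure
        }
        if (!ctx.shouldAbort()) {
            calls.append("extractor;"); // _extractor(...) is skipped after abort
        }
        if (!ctx.shouldAbort()) {
            calls.append("sink;"); // _sink(...) is skipped after abort
        }
        return calls.toString();
    }

    public static void main(String[] args) {
        System.out.println(execute(new Ctx(), false)); // parse;extractor;sink;
        System.out.println(execute(new Ctx(), true));  // parse;
    }
}
```

The same flag is what the per-statement `ctx.shouldAbort()` checks inside each spec method re-test, so a mid-extractor `abort{}` also short-circuits the remaining statements.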
+ +## LocalVariableTable (LVT) + +All generated methods include a `LocalVariableTable` attribute for debugger/decompiler readability. Without LVT, tools show `var0`, `var1`, `var2`, `var3` instead of named variables. + +| Method | Slot 0 | Slot 1 | Slot 2 | Slot 3 | +|--------|--------|--------|--------|--------| +| `execute()` | `this` | `filterSpec` | `ctx` | `h` | +| `_extractor()` | `this` | `_e` | `h` | — | +| `_sink()` | `this` | `_f` | `h` | — | + +LVT entries are added via `PrivateMethod` inner class which carries both source code and variable descriptors. + +## Compile-Time Data Source Analysis + +The generator detects the parser type from the AST at compile time and generates typed value access: + +| Parser Type | LAL Example | Generated Code | +|---|---|---| +| JSON/YAML | `parsed.service` | `h.mapVal("service")` | +| JSON/YAML nested | `parsed.a.b` | `h.mapVal("a", "b")` | +| TEXT (regexp) | `parsed.level` | `h.group("level")` | +| NONE + extraLogType | `parsed.response.code` | `((ExtraLogType) h.ctx().extraLog()).getResponse().getCode()` | +| NONE + no extraLogType | `parsed.service` | `h.ctx().log().getService()` (LogData.Builder fallback) | +| log fields | `log.service` | `h.ctx().log().getService()` | +| log trace | `log.traceContext.traceId` | `h.ctx().log().getTraceContext().getTraceId()` | +| tags | `tag("KEY")` | `h.tagValue("KEY")` | + +### extraLogType and LALSourceTypeProvider SPI + +For LAL rules with no DSL parser (`json{}`/`yaml{}`/`text{}`), the compiler needs a type to generate direct getter calls on `parsed.*` fields. Per-rule resolution order: + +1. **DSL parser** (`json{}`, `yaml{}`, `text{}`) — parser wins, extraLogType is ignored +2. **Explicit `extraLogType`** in YAML rule config — FQCN string, resolved via `Class.forName()` +3. **`LALSourceTypeProvider` SPI** — default extraLogType for a layer, discovered via `ServiceLoader` +4. 
**`LogData.Builder` fallback** — if none of the above, `parsed.*` generates getter chains on `LogData.Builder` with compile-time reflection validation. Fields not found on `LogData.Builder` cause `IllegalArgumentException` at boot. + +The SPI interface is in `org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider`. Receiver plugins implement it and register in `META-INF/services/`. Example: `EnvoyHTTPLALSourceTypeProvider` registers `HTTPAccessLogEntry` for `Layer.MESH`. + +A single YAML file can have rules with different input types (e.g., `envoy-als.yaml` has a proto-based rule and a `json{}` rule, both in layer MESH). Resolution is per-rule, not per-file. + +## Example + +**Input**: `filter { json {} extractor { service parsed.service as String } sink {} }` + +One class is generated: + +```java +public class LalExpr_0 implements LalExpression { + public void execute(FilterSpec filterSpec, ExecutionContext ctx) { + LalRuntimeHelper h = new LalRuntimeHelper(ctx); + filterSpec.json(ctx); + if (!ctx.shouldAbort()) { + _extractor(filterSpec.extractor(), h); + } + filterSpec.sink(ctx); + } + private void _extractor(ExtractorSpec _e, LalRuntimeHelper h) { + _e.service(h.ctx(), h.toStr(h.mapVal("service"))); + } +} +``` + +## Runtime Helper (LalRuntimeHelper) + +Instance-based helper created at the start of `execute()`, holds the `ExecutionContext`. 
+ +**Data source methods:** +- `mapVal(key)`, `mapVal(k1, k2)`, `mapVal(k1, k2, k3)` — JSON/YAML map access +- `group(name)` — text regexp named group +- `tagValue(key)` — log tag lookup +- `ctx()` — access to ExecutionContext (for `h.ctx().log()` proto getters) + +**Type conversion:** `toStr()`, `toLong()`, `toInt()`, `toBool()` + +**Boolean evaluation:** `isTrue()`, `isNotEmpty()` + +**Safe navigation:** `toString()`, `trim()` + +## JSON/YAML LogData Field Population + +When `json{}` or `yaml{}` parses the log body, `FilterSpec` also adds LogData proto fields +(`service`, `serviceInstance`, `endpoint`, `layer`, `timestamp`) to the parsed map via +`putIfAbsent`. Body-parsed values take priority; proto fields serve as fallback. This matches +v1 Groovy `Binding.Parsed.getAt(key)` behavior where `parsed.service` falls back to +`LogData.getService()` when the JSON body doesn't contain a `service` key. + +## Null-Safe String Conversion + +Generated code calls `h.toStr()` instead of `String.valueOf()` for casting parsed values to String. +This preserves Java `null` for missing fields (matching Groovy's `null as String` → `null` behavior), +whereas `String.valueOf(null)` would produce the string `"null"`. + +## Data-Driven Execution Tests + +`LALExpressionExecutionTest` loads LAL rules from YAML and mock input from `.input.data` files: + +``` +test/script-cases/scripts/lal/test-lal/ + oap-cases/ — copies of shipped LAL configs (each with .input.data) + feature-cases/ + execution-basic.yaml — 17 LAL feature-coverage rules + execution-basic.input.data — mock input + expected output per rule +``` + +Each `.input.data` entry specifies `body-type`, `body`, optional `tags`, and `expect` assertions +(service, instance, endpoint, layer, tags, abort, save, timestamp, sampledTrace fields). + +## LAL Input Data Mock Principles + +LAL test data lives in `.input.data` files alongside rule YAML files under `test/script-cases/scripts/lal/`. 
Each entry describes one log to process and the expected output. + +### Input Entry Structure + +```yaml +rule-name: + - service: test-svc # LogData.service + instance: test-inst # LogData.serviceInstance (optional) + body-type: json|yaml|text|none # How to parse the body + body: '{"key": "value"}' # Log body string + trace-id: trace-001 # Trace context (optional) + timestamp: 1609459200000 # LogData.timestamp (optional) + tags: # LogData tags (optional) + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + extra-log: # For proto-typed rules (e.g., envoy-als) + proto-class: io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry + proto-json: '{"response":{"responseCode":500}}' + expect: # Expected output assertions + save: true # SinkSpec.save() called + abort: false # Not aborted + service: expected-svc # Extracted service name + layer: MESH # Extracted layer + tag.status.code: "500" # Extracted tag value + sampledTrace.traceId: trace-001 # SampledTrace field +``` + +### Principles + +1. **`body-type` determines parsing**: `json` → `json{}` block, `text` → `text{}` block, `none` → proto extraLog or raw LogData access. +2. **`extra-log` for proto types**: When rules access `parsed.*` on protobuf types (e.g., `HTTPAccessLogEntry`), provide `proto-class` and `proto-json`. The test harness parses via `JsonFormat`. +3. **`expect` section is mandatory**: Every entry must have `expect` with at least `save` and `abort`. +4. **Enum fields as strings**: Fields like `sampledTrace.reason` and `sampledTrace.detectPoint` are enums. Expected values use enum names (e.g., `slow`, `client`) compared via `.name()`. +5. **Tag assertions**: `tag.KEY` in expect asserts extracted tag values (e.g., `tag.status.code: "500"`). +6. **SampledTrace assertions**: `sampledTrace.FIELD` asserts fields on `NativeSampledTrace` (traceId, processId, destProcessId, componentId, etc.). +7. **v1 is the truth**: Both v1 (Groovy) and v2 (ANTLR4) must produce identical results. 
If v1 produces different output than expected, the expected data has a bug. + +### Directory Structure + +``` +test/script-cases/scripts/lal/test-lal/ + oap-cases/ — copies of shipped LAL configs + default.yaml / default.input.data + envoy-als.yaml / envoy-als.input.data + ... + feature-cases/ + execution-basic.yaml / execution-basic.input.data — LAL feature tests +``` + +## Debug Output + +When `SW_DYNAMIC_CLASS_ENGINE_DEBUG=true` environment variable is set, generated `.class` files are written to disk for inspection: + +``` +{skywalking}/lal-rt/ + *.class - Generated LalExpression .class files +``` + +This is the same env variable used by OAL. Useful for debugging code generation issues or comparing V1 vs V2 output. In tests, use `setClassOutputDir(dir)` instead. + +## Dependencies + +All within this module (grammar, compiler, and runtime are merged): +- ANTLR4 grammar → generates lexer/parser at build time +- `LalExpression`, `ExecutionContext`, `FilterSpec`, all Spec classes — in `dsl` package of this module +- `javassist` — bytecode generation diff --git a/oap-server/analyzer/log-analyzer/pom.xml b/oap-server/analyzer/log-analyzer/pom.xml index 576539cb8594..180a8435dfed 100644 --- a/oap-server/analyzer/log-analyzer/pom.xml +++ b/oap-server/analyzer/log-analyzer/pom.xml @@ -7,13 +7,14 @@ ~ (the "License"); you may not use this file except in compliance with ~ the License. You may obtain a copy of the License at ~ - ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, software ~ distributed under the License is distributed on an "AS IS" BASIS, ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ~ See the License for the specific language governing permissions and ~ limitations under the License. 
+ ~ --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> @@ -25,7 +26,6 @@ <modelVersion>4.0.0</modelVersion> <artifactId>log-analyzer</artifactId> - <packaging>jar</packaging> <dependencies> <dependency> @@ -43,13 +43,40 @@ <artifactId>agent-analyzer</artifactId> <version>${project.version}</version> </dependency> - <dependency> - <groupId>org.apache.groovy</groupId> - <artifactId>groovy</artifactId> - </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> </dependency> + <dependency> + <groupId>org.antlr</groupId> + <artifactId>antlr4-runtime</artifactId> + </dependency> + <dependency> + <groupId>org.javassist</groupId> + <artifactId>javassist</artifactId> + </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>receiver-proto</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> + + <build> + <plugins> + <plugin> + <groupId>org.antlr</groupId> + <artifactId>antlr4-maven-plugin</artifactId> + <executions> + <execution> + <id>antlr</id> + <goals> + <goal>antlr4</goal> + </goals> + </execution> + </executions> + </plugin> + </plugins> + </build> </project> diff --git a/oap-server/analyzer/log-analyzer/src/main/antlr4/org/apache/skywalking/lal/rt/grammar/LALLexer.g4 b/oap-server/analyzer/log-analyzer/src/main/antlr4/org/apache/skywalking/lal/rt/grammar/LALLexer.g4 new file mode 100644 index 000000000000..4df6f7113ab1 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/antlr4/org/apache/skywalking/lal/rt/grammar/LALLexer.g4 @@ -0,0 +1,175 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +// Log Analysis Language lexer +lexer grammar LALLexer; + +@Header {package org.apache.skywalking.lal.rt.grammar;} + +// Keywords - block structure +FILTER: 'filter'; +TEXT: 'text'; +JSON: 'json'; +YAML: 'yaml'; +EXTRACTOR: 'extractor'; +SINK: 'sink'; +ABORT: 'abort'; + +// Keywords - extractor statements +SERVICE: 'service'; +INSTANCE: 'instance'; +ENDPOINT: 'endpoint'; +LAYER: 'layer'; +TRACE_ID: 'traceId'; +SEGMENT_ID: 'segmentId'; +SPAN_ID: 'spanId'; +TIMESTAMP: 'timestamp'; +TAG: 'tag'; +METRICS: 'metrics'; +SLOW_SQL: 'slowSql'; +SAMPLED_TRACE: 'sampledTrace'; +REGEXP: 'regexp'; +ABORT_ON_FAILURE: 'abortOnFailure'; +NAME: 'name'; +VALUE: 'value'; +LABELS: 'labels'; +ID: 'id'; +STATEMENT: 'statement'; +LATENCY: 'latency'; +URI: 'uri'; +REASON: 'reason'; +PROCESS_ID: 'processId'; +DEST_PROCESS_ID: 'destProcessId'; +DETECT_POINT: 'detectPoint'; +COMPONENT_ID: 'componentId'; +REPORT_SERVICE: 'reportService'; + +// Keywords - sink statements +SAMPLER: 'sampler'; +RATE_LIMIT: 'rateLimit'; +RPM: 'rpm'; +ENFORCER: 'enforcer'; +DROPPER: 'dropper'; + +// Keywords - control flow +IF: 'if'; +ELSE: 'else'; + +// Keywords - type cast +AS: 'as'; +STRING_TYPE: 'String'; +LONG_TYPE: 'Long'; +INTEGER_TYPE: 'Integer'; +BOOLEAN_TYPE: 'Boolean'; + +// Keywords - built-in references +LOG: 'log'; +PARSED: 'parsed'; + +// Keywords - utility class references +PROCESS_REGISTRY: 'ProcessRegistry'; + 
+// Comparison and logical operators +DEQ: '=='; +NEQ: '!='; +AND: '&&'; +OR: '||'; +NOT: '!'; +GT: '>'; +LT: '<'; +GTE: '>='; +LTE: '<='; + +// Delimiters +DOT: '.'; +COMMA: ','; +COLON: ':'; +SEMI: ';'; +L_PAREN: '('; +R_PAREN: ')'; +L_BRACE: '{'; +R_BRACE: '}'; +L_BRACKET: '['; +R_BRACKET: ']'; +QUESTION: '?'; +ASSIGN: '='; + +// Arithmetic +PLUS: '+'; +MINUS: '-'; +STAR: '*'; +SLASH: '/'; + +// Literals +TRUE: 'true'; +FALSE: 'false'; +NULL: 'null'; + +NUMBER + : Digit+ ('.' Digit+)? + ; + +// String literal: single or double quoted +STRING + : '\'' (~['\\\r\n] | EscapeSequence)* '\'' + | '"' (~["\\\r\n] | EscapeSequence)* '"' + ; + +// Groovy-style slashy string for regex patterns: $/pattern/$ +SLASHY_STRING + : '$/' .*? '/$' + ; + +// Comments +LINE_COMMENT + : '//' ~[\r\n]* -> channel(HIDDEN) + ; + +BLOCK_COMMENT + : '/*' .*? '*/' -> channel(HIDDEN) + ; + +// Whitespace +WS + : [ \t\r\n]+ -> channel(HIDDEN) + ; + +// Identifiers +IDENTIFIER + : Letter LetterOrDigit* + ; + +// Fragments +fragment EscapeSequence + : '\\' [btnfr"'\\] + | '\\' ([0-3]? [0-7])? [0-7] + | '\\' . // catch-all for regex escapes like \d, \w, \s + ; + +fragment Digit + : [0-9] + ; + +fragment Letter + : [a-zA-Z_] + ; + +fragment LetterOrDigit + : Letter + | [0-9] + ; diff --git a/oap-server/analyzer/log-analyzer/src/main/antlr4/org/apache/skywalking/lal/rt/grammar/LALParser.g4 b/oap-server/analyzer/log-analyzer/src/main/antlr4/org/apache/skywalking/lal/rt/grammar/LALParser.g4 new file mode 100644 index 000000000000..b11802a61673 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/antlr4/org/apache/skywalking/lal/rt/grammar/LALParser.g4 @@ -0,0 +1,476 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +// Log Analysis Language parser +// +// Covers LAL DSL patterns: +// filter { parser {} extractor {} sink {} } +// if (tag("LOG_KIND") == "NGINX_ACCESS_LOG") { ... } +// text { regexp $/pattern/$ } +// json { abortOnFailure true } +// extractor { service parsed.service as String; tag 'key': value } +// metrics { name "metric_name"; value 1; labels key: val } +// slowSql { id parsed.id as String; statement parsed.statement as String; latency parsed.query_time as Long } +// sampledTrace { latency parsed.latency as Long; uri parsed.uri as String; ... 
} +// sink { sampler { rateLimit("id") { rpm 6000 } } } +parser grammar LALParser; + +@Header {package org.apache.skywalking.lal.rt.grammar;} + +options { tokenVocab=LALLexer; } + +// ==================== Top-level ==================== + +root + : filterBlock EOF + ; + +// ==================== Filter block ==================== + +filterBlock + : FILTER L_BRACE filterContent R_BRACE + ; + +filterContent + : filterStatement* + ; + +filterStatement + : parserBlock + | extractorBlock + | sinkBlock + | ifStatement + | abortBlock + ; + +// ==================== Parser blocks ==================== + +parserBlock + : textBlock + | jsonBlock + | yamlBlock + ; + +textBlock + : TEXT L_BRACE textContent R_BRACE + ; + +textContent + : (regexpStatement | abortOnFailureStatement)* + ; + +regexpStatement + : REGEXP regexpPattern + ; + +regexpPattern + : SLASHY_STRING + | STRING + ; + +jsonBlock + : JSON L_BRACE jsonContent R_BRACE + ; + +jsonContent + : abortOnFailureStatement? + ; + +yamlBlock + : YAML L_BRACE yamlContent R_BRACE + ; + +yamlContent + : abortOnFailureStatement? + ; + +abortOnFailureStatement + : ABORT_ON_FAILURE boolValue + ; + +abortBlock + : ABORT L_BRACE R_BRACE + ; + +// ==================== Extractor block ==================== + +extractorBlock + : EXTRACTOR L_BRACE extractorContent R_BRACE + ; + +extractorContent + : extractorStatement* + ; + +extractorStatement + : serviceStatement + | instanceStatement + | endpointStatement + | layerStatement + | traceIdStatement + | segmentIdStatement + | spanIdStatement + | timestampStatement + | tagStatement + | metricsBlock + | slowSqlBlock + | sampledTraceBlock + | ifStatement + ; + +serviceStatement + : SERVICE valueAccess typeCast? + ; + +instanceStatement + : INSTANCE valueAccess typeCast? + ; + +endpointStatement + : ENDPOINT valueAccess typeCast? + ; + +layerStatement + : LAYER valueAccess typeCast? + ; + +traceIdStatement + : TRACE_ID valueAccess typeCast? 
+ ; + +segmentIdStatement + : SEGMENT_ID valueAccess typeCast? + ; + +spanIdStatement + : SPAN_ID valueAccess typeCast? + ; + +timestampStatement + : TIMESTAMP valueAccess typeCast? (COMMA STRING)? + ; + +tagStatement + : TAG tagMap + | TAG STRING COLON valueAccess typeCast? + ; + +tagMap + : anyIdentifier COLON valueAccess typeCast? (COMMA anyIdentifier COLON valueAccess typeCast?)* + ; + +// ==================== Metrics block ==================== + +metricsBlock + : METRICS L_BRACE metricsContent R_BRACE + ; + +metricsContent + : metricsStatement* + ; + +metricsStatement + : metricsNameStatement + | metricsTimestampStatement + | metricsLabelsStatement + | metricsValueStatement + ; + +metricsNameStatement + : NAME valueAccess typeCast? + ; + +metricsTimestampStatement + : TIMESTAMP valueAccess typeCast? + ; + +metricsLabelsStatement + : LABELS labelMap + ; + +labelMap + : labelEntry (COMMA labelEntry)* + ; + +labelEntry + : anyIdentifier COLON valueAccess typeCast? + ; + +metricsValueStatement + : VALUE valueAccess typeCast? + ; + +// ==================== Slow SQL block ==================== + +slowSqlBlock + : SLOW_SQL L_BRACE slowSqlContent R_BRACE + ; + +slowSqlContent + : slowSqlStatement* + ; + +slowSqlStatement + : slowSqlIdStatement + | slowSqlStatementStatement + | slowSqlLatencyStatement + ; + +slowSqlIdStatement + : ID valueAccess typeCast? + ; + +slowSqlStatementStatement + : STATEMENT valueAccess typeCast? + ; + +slowSqlLatencyStatement + : LATENCY valueAccess typeCast? 
+ ; + +// ==================== Sampled trace block ==================== + +sampledTraceBlock + : SAMPLED_TRACE L_BRACE sampledTraceContent R_BRACE + ; + +sampledTraceContent + : sampledTraceStatement* + ; + +sampledTraceStatement + : sampledTraceLatencyStatement + | sampledTraceUriStatement + | sampledTraceReasonStatement + | sampledTraceProcessIdStatement + | sampledTraceDestProcessIdStatement + | sampledTraceDetectPointStatement + | sampledTraceComponentIdStatement + | reportServiceStatement + | ifStatement + ; + +sampledTraceLatencyStatement + : LATENCY valueAccess typeCast? + ; + +sampledTraceUriStatement + : URI valueAccess typeCast? + ; + +sampledTraceReasonStatement + : REASON valueAccess typeCast? + ; + +sampledTraceProcessIdStatement + : PROCESS_ID valueAccess typeCast? + ; + +sampledTraceDestProcessIdStatement + : DEST_PROCESS_ID valueAccess typeCast? + ; + +sampledTraceDetectPointStatement + : DETECT_POINT valueAccess typeCast? + ; + +sampledTraceComponentIdStatement + : COMPONENT_ID valueAccess typeCast? + ; + +reportServiceStatement + : REPORT_SERVICE valueAccess typeCast? + ; + +// ==================== Sink block ==================== + +sinkBlock + : SINK L_BRACE sinkContent R_BRACE + ; + +sinkContent + : sinkStatement* + ; + +sinkStatement + : samplerBlock + | enforcerStatement + | dropperStatement + | ifStatement + ; + +samplerBlock + : SAMPLER L_BRACE samplerContent R_BRACE + ; + +samplerContent + : (rateLimitBlock | ifStatement)* + ; + +rateLimitBlock + : RATE_LIMIT L_PAREN rateLimitId R_PAREN L_BRACE rateLimitContent R_BRACE + ; + +rateLimitId + : STRING + ; + +rateLimitContent + : RPM NUMBER + ; + +enforcerStatement + : ENFORCER L_BRACE R_BRACE + ; + +dropperStatement + : DROPPER L_BRACE R_BRACE + ; + +// ==================== Control flow ==================== + +ifStatement + : IF L_PAREN condition R_PAREN L_BRACE + ifBody + R_BRACE + (ELSE IF L_PAREN condition R_PAREN L_BRACE + ifBody + R_BRACE)* + (ELSE L_BRACE + ifBody + R_BRACE)? 
+ ; + +ifBody + : filterStatement* + | extractorStatement* + | sinkStatement* + | sampledTraceStatement* + | samplerContent + ; + +// ==================== Conditions ==================== + +condition + : condition AND condition # condAnd + | condition OR condition # condOr + | NOT condition # condNot + | conditionExpr DEQ conditionExpr # condEq + | conditionExpr NEQ conditionExpr # condNeq + | conditionExpr GT conditionExpr # condGt + | conditionExpr LT conditionExpr # condLt + | conditionExpr GTE conditionExpr # condGte + | conditionExpr LTE conditionExpr # condLte + | conditionExpr # condSingle + ; + +conditionExpr + : valueAccess typeCast? # condValueAccess + | L_PAREN condition R_PAREN # condParenGroup + | STRING # condString + | NUMBER # condNumber + | boolValue # condBool + | NULL # condNull + | functionInvocation # condFunctionCall + ; + +// ==================== Value access ==================== + +// Accessing parsed values, log fields, and method calls: +// parsed.level, parsed?.response?.responseCode?.value +// log.service, log.timestamp, log.serviceInstance +// tag("LOG_KIND") +// ProcessRegistry.generateVirtualLocalProcess(...) + +valueAccess + : valueAccessTerm (PLUS valueAccessTerm)* + ; + +valueAccessTerm + : valueAccessPrimary (valueAccessSegment)* + ; + +valueAccessPrimary + : PARSED # valueParsed + | LOG # valueLog + | PROCESS_REGISTRY # valueProcessRegistry + | IDENTIFIER # valueIdentifier + | STRING # valueString + | NUMBER # valueNumber + | boolValue # valueBool + | NULL # valueNull + | functionInvocation # valueFunctionCall + | L_PAREN valueAccess typeCast? R_PAREN # valueParen + ; + +valueAccessSegment + : DOT anyIdentifier # segmentField + | QUESTION DOT anyIdentifier # segmentSafeField + | DOT functionInvocation # segmentMethod + | QUESTION DOT functionInvocation # segmentSafeMethod + | L_BRACKET NUMBER R_BRACKET # segmentIndex + ; + +functionInvocation + : functionName L_PAREN functionArgList? 
R_PAREN + ; + +functionName + : IDENTIFIER + | TAG + ; + +functionArgList + : functionArg (COMMA functionArg)* + ; + +functionArg + : valueAccess typeCast? + | STRING + | NUMBER + | boolValue + | NULL + ; + +// ==================== Type cast ==================== + +typeCast + : AS (STRING_TYPE | LONG_TYPE | INTEGER_TYPE | BOOLEAN_TYPE) + ; + +// ==================== Common ==================== + +// Allows keywords to be used as identifiers in contexts like field names, +// labels, and value access segments (e.g. parsed.service, parsed.layer). +anyIdentifier + : IDENTIFIER + | SERVICE | INSTANCE | ENDPOINT | LAYER + | TRACE_ID | SEGMENT_ID | SPAN_ID | TIMESTAMP + | TAG | METRICS | SLOW_SQL | SAMPLED_TRACE + | REGEXP | ABORT_ON_FAILURE + | NAME | VALUE | LABELS + | ID | STATEMENT | LATENCY + | URI | REASON | PROCESS_ID | DEST_PROCESS_ID + | DETECT_POINT | COMPONENT_ID | REPORT_SERVICE + | SAMPLER | RATE_LIMIT | RPM | ENFORCER | DROPPER + | TEXT | JSON | YAML | FILTER | EXTRACTOR | SINK | ABORT + ; + +boolValue + : TRUE | FALSE + ; diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALBlockCodegen.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALBlockCodegen.java new file mode 100644 index 000000000000..e0b0d546054d --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALBlockCodegen.java @@ -0,0 +1,1157 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import org.apache.skywalking.apm.network.logging.v3.LogData; + +/** + * Static code-generation methods for LAL extractor, sink, condition, and + * value-access blocks. Extracted from {@link LALClassGenerator} for + * readability; all methods are stateless and take a + * {@link LALClassGenerator.GenCtx} parameter for shared state. + */ +final class LALBlockCodegen { + + private static final String FILTER_SPEC = + "org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec"; + private static final String EXTRACTOR_SPEC = + "org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.extractor.ExtractorSpec"; + private static final String SAMPLE_BUILDER = + EXTRACTOR_SPEC + "$SampleBuilder"; + private static final String H = + "org.apache.skywalking.oap.log.analyzer.v2.compiler.rt.LalRuntimeHelper"; + private static final String PROCESS_REGISTRY = + "org.apache.skywalking.oap.meter.analyzer.v2.dsl.registry.ProcessRegistry"; + + private LALBlockCodegen() { + // utility class + } + + // ==================== Extractor method generation ==================== + + static void generateExtractorMethod(final StringBuilder sb, + final LALScriptModel.ExtractorBlock block, + final LALClassGenerator.GenCtx genCtx) { + final String methodName = genCtx.nextMethodName("extractor"); + final Object[] savedState = genCtx.saveProtoVarState(); + genCtx.resetProtoVars(); + + // Generate body first to collect proto var declarations + 
final StringBuilder bodyContent = new StringBuilder(); + final List<LALScriptModel.FilterStatement> extractorStmts = new ArrayList<>(); + for (final LALScriptModel.ExtractorStatement es : block.getStatements()) { + extractorStmts.add((LALScriptModel.FilterStatement) es); + } + generateExtractorBody(bodyContent, extractorStmts, genCtx); + + // Assemble method with declarations before body + final StringBuilder body = new StringBuilder(); + body.append("private void ").append(methodName).append("(") + .append(EXTRACTOR_SPEC).append(" _e, ").append(H).append(" h) {\n"); + + final List<String[]> lvtVars = new ArrayList<>(); + lvtVars.add(new String[]{"_e", "L" + EXTRACTOR_SPEC.replace('.', '/') + ";"}); + lvtVars.add(new String[]{"h", "L" + H.replace('.', '/') + ";"}); + + if (genCtx.usedProtoAccess) { + if (genCtx.extraLogType != null) { + final String elTypeName = genCtx.extraLogType.getName(); + body.append(" ").append(elTypeName).append(" _p = (") + .append(elTypeName).append(") h.ctx().extraLog();\n"); + lvtVars.add(new String[]{"_p", + "L" + elTypeName.replace('.', '/') + ";"}); + } + body.append(genCtx.protoVarDecls); + lvtVars.addAll(genCtx.protoLvtVars); + } + + body.append(bodyContent); + body.append("}\n"); + genCtx.privateMethods.add(new LALClassGenerator.PrivateMethod( + body.toString(), lvtVars.toArray(new String[0][]))); + + genCtx.restoreProtoVarState(savedState); + + sb.append(" if (!ctx.shouldAbort()) {\n"); + sb.append(" ").append(methodName).append("(filterSpec.extractor(), h);\n"); + sb.append(" }\n"); + } + + static void generateExtractorBody( + final StringBuilder sb, + final List<? 
extends LALScriptModel.FilterStatement> stmts, + final LALClassGenerator.GenCtx genCtx) { + for (final LALScriptModel.FilterStatement stmt : stmts) { + if (stmt instanceof LALScriptModel.FieldAssignment) { + final LALScriptModel.FieldAssignment field = + (LALScriptModel.FieldAssignment) stmt; + sb.append(" _e.").append(field.getFieldType().name().toLowerCase()) + .append("(h.ctx(), "); + generateCastedValueAccess(sb, field.getValue(), + field.getCastType(), genCtx); + if (field.getFormatPattern() != null) { + sb.append(", \"") + .append(LALCodegenHelper.escapeJava(field.getFormatPattern())) + .append("\""); + } + sb.append(");\n"); + } else if (stmt instanceof LALScriptModel.TagAssignment) { + generateTagAssignment(sb, (LALScriptModel.TagAssignment) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.IfBlock) { + generateIfBlockInExtractor(sb, (LALScriptModel.IfBlock) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.MetricsBlock) { + generateMetricsInline(sb, (LALScriptModel.MetricsBlock) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.SlowSqlBlock) { + generateSlowSqlInline(sb, (LALScriptModel.SlowSqlBlock) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.SampledTraceBlock) { + generateSampledTraceInline(sb, + (LALScriptModel.SampledTraceBlock) stmt, genCtx); + } + } + } + + static void generateIfBlockInExtractor( + final StringBuilder sb, + final LALScriptModel.IfBlock ifBlock, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" if ("); + generateCondition(sb, ifBlock.getCondition(), genCtx); + sb.append(") {\n"); + generateExtractorBody(sb, ifBlock.getThenBranch(), genCtx); + sb.append(" }\n"); + if (!ifBlock.getElseBranch().isEmpty()) { + sb.append(" else {\n"); + generateExtractorBody(sb, ifBlock.getElseBranch(), genCtx); + sb.append(" }\n"); + } + } + + // ==================== Metrics inline ==================== + + static void generateMetricsInline( + final StringBuilder sb, + final LALScriptModel.MetricsBlock 
block, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" { ").append(SAMPLE_BUILDER).append(" _b = _e.prepareMetrics(h.ctx());\n"); + sb.append(" if (_b != null) {\n"); + if (block.getName() != null) { + sb.append(" _b.name(\"") + .append(LALCodegenHelper.escapeJava(block.getName())).append("\");\n"); + } + if (block.getTimestampValue() != null) { + sb.append(" _b.timestamp("); + generateCastedValueAccess(sb, block.getTimestampValue(), + block.getTimestampCast(), genCtx); + sb.append(");\n"); + } + if (!block.getLabels().isEmpty()) { + sb.append(" { java.util.Map _labels = new java.util.LinkedHashMap();\n"); + for (final Map.Entry<String, LALScriptModel.TagValue> entry + : block.getLabels().entrySet()) { + sb.append(" _labels.put(\"") + .append(LALCodegenHelper.escapeJava(entry.getKey())).append("\", "); + generateCastedValueAccess(sb, entry.getValue().getValue(), + entry.getValue().getCastType(), genCtx); + sb.append(");\n"); + } + sb.append(" _b.labels(_labels); }\n"); + } + if (block.getValue() != null) { + sb.append(" _b.value("); + if ("Long".equals(block.getValueCast())) { + sb.append("(double) h.toLong("); + generateValueAccess(sb, block.getValue(), genCtx); + sb.append(")"); + } else if ("Integer".equals(block.getValueCast())) { + sb.append("(double) h.toInt("); + generateValueAccess(sb, block.getValue(), genCtx); + sb.append(")"); + } else { + if (block.getValue().isNumberLiteral()) { + sb.append("(double) ").append(block.getValue().getSegments().get(0)); + } else { + sb.append("((Number) "); + generateValueAccess(sb, block.getValue(), genCtx); + sb.append(").doubleValue()"); + } + } + sb.append(");\n"); + } + sb.append(" _e.submitMetrics(h.ctx(), _b);\n"); + sb.append(" } }\n"); + } + + // ==================== SlowSql inline ==================== + + static void generateSlowSqlInline( + final StringBuilder sb, + final LALScriptModel.SlowSqlBlock block, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" _e.prepareSlowSql(h.ctx());\n"); + if 
(block.getId() != null) { + sb.append(" _e.slowSqlSpec().id(h.ctx(), "); + generateCastedValueAccess(sb, block.getId(), block.getIdCast(), genCtx); + sb.append(");\n"); + } + if (block.getStatement() != null) { + sb.append(" _e.slowSqlSpec().statement(h.ctx(), "); + generateCastedValueAccess(sb, block.getStatement(), + block.getStatementCast(), genCtx); + sb.append(");\n"); + } + if (block.getLatency() != null) { + sb.append(" _e.slowSqlSpec().latency(h.ctx(), Long.valueOf(h.toLong("); + generateValueAccess(sb, block.getLatency(), genCtx); + sb.append(")));\n"); + } + sb.append(" _e.submitSlowSql(h.ctx());\n"); + } + + // ==================== SampledTrace inline ==================== + + static void generateSampledTraceInline( + final StringBuilder sb, + final LALScriptModel.SampledTraceBlock block, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" _e.prepareSampledTrace(h.ctx());\n"); + generateSampledTraceBody(sb, block.getStatements(), genCtx); + sb.append(" _e.submitSampledTrace(h.ctx());\n"); + } + + static void generateSampledTraceBody( + final StringBuilder sb, + final List<LALScriptModel.SampledTraceStatement> stmts, + final LALClassGenerator.GenCtx genCtx) { + for (final LALScriptModel.SampledTraceStatement stmt : stmts) { + if (stmt instanceof LALScriptModel.SampledTraceField) { + generateSampledTraceField(sb, (LALScriptModel.SampledTraceField) stmt, + genCtx); + } else if (stmt instanceof LALScriptModel.IfBlock) { + generateSampledTraceIfBlock(sb, (LALScriptModel.IfBlock) stmt, genCtx); + } + } + } + + static void generateSampledTraceField( + final StringBuilder sb, + final LALScriptModel.SampledTraceField field, + final LALClassGenerator.GenCtx genCtx) { + switch (field.getFieldType()) { + case LATENCY: + sb.append(" _e.sampledTraceSpec().latency(h.ctx(), Long.valueOf(h.toLong("); + generateValueAccess(sb, field.getValue(), genCtx); + sb.append(")));\n"); + return; + case COMPONENT_ID: + sb.append(" _e.sampledTraceSpec().componentId(h.ctx(), 
h.toInt("); + generateValueAccess(sb, field.getValue(), genCtx); + sb.append("));\n"); + return; + case URI: + sb.append(" _e.sampledTraceSpec().uri(h.ctx(), "); + break; + case REASON: + sb.append(" _e.sampledTraceSpec().reason(h.ctx(), "); + break; + case PROCESS_ID: + sb.append(" _e.sampledTraceSpec().processId(h.ctx(), "); + break; + case DEST_PROCESS_ID: + sb.append(" _e.sampledTraceSpec().destProcessId(h.ctx(), "); + break; + case DETECT_POINT: + sb.append(" _e.sampledTraceSpec().detectPoint(h.ctx(), "); + break; + case REPORT_SERVICE: + sb.append(" _e.sampledTraceSpec().") + .append(field.getFieldType().name().toLowerCase()) + .append("(h.ctx(), "); + break; + default: + return; + } + generateCastedValueAccess(sb, field.getValue(), field.getCastType(), genCtx); + sb.append(");\n"); + } + + static void generateSampledTraceIfBlock( + final StringBuilder sb, + final LALScriptModel.IfBlock ifBlock, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" if ("); + generateCondition(sb, ifBlock.getCondition(), genCtx); + sb.append(") {\n"); + generateSampledTraceBodyFromFilterStmts(sb, ifBlock.getThenBranch(), genCtx); + sb.append(" }\n"); + if (!ifBlock.getElseBranch().isEmpty()) { + sb.append(" else {\n"); + generateSampledTraceBodyFromFilterStmts(sb, ifBlock.getElseBranch(), genCtx); + sb.append(" }\n"); + } + } + + static void generateSampledTraceBodyFromFilterStmts( + final StringBuilder sb, + final List<? 
extends LALScriptModel.FilterStatement> stmts, + final LALClassGenerator.GenCtx genCtx) { + for (final LALScriptModel.FilterStatement stmt : stmts) { + if (stmt instanceof LALScriptModel.SampledTraceField) { + generateSampledTraceField(sb, + (LALScriptModel.SampledTraceField) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.FieldAssignment) { + generateSampledTraceFieldFromAssignment(sb, + (LALScriptModel.FieldAssignment) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.IfBlock) { + generateSampledTraceIfBlock(sb, (LALScriptModel.IfBlock) stmt, genCtx); + } + } + } + + static void generateSampledTraceFieldFromAssignment( + final StringBuilder sb, + final LALScriptModel.FieldAssignment fa, + final LALClassGenerator.GenCtx genCtx) { + switch (fa.getFieldType()) { + case TIMESTAMP: + sb.append(" _e.sampledTraceSpec().latency(h.ctx(), Long.valueOf(h.toLong("); + generateValueAccess(sb, fa.getValue(), genCtx); + sb.append(")));\n"); + break; + default: + sb.append(" _e.sampledTraceSpec().") + .append(fa.getFieldType().name().toLowerCase()) + .append("(h.ctx(), "); + generateCastedValueAccess(sb, fa.getValue(), fa.getCastType(), genCtx); + sb.append(");\n"); + break; + } + } + + // ==================== Tag assignment ==================== + + static void generateTagAssignment(final StringBuilder sb, + final LALScriptModel.TagAssignment tag, + final LALClassGenerator.GenCtx genCtx) { + for (final Map.Entry<String, LALScriptModel.TagValue> entry + : tag.getTags().entrySet()) { + sb.append(" _e.tag(h.ctx(), \"") + .append(LALCodegenHelper.escapeJava(entry.getKey())).append("\", "); + generateStringValueAccess(sb, entry.getValue().getValue(), + entry.getValue().getCastType(), genCtx); + sb.append(");\n"); + } + } + + // ==================== Sink method generation ==================== + + static void generateSinkMethod(final StringBuilder sb, + final LALScriptModel.SinkBlock sink, + final LALClassGenerator.GenCtx genCtx) { + final String methodName = 
genCtx.nextMethodName("sink"); + final Object[] savedState = genCtx.saveProtoVarState(); + genCtx.resetProtoVars(); + + // Generate body first to collect proto var declarations + final StringBuilder bodyContent = new StringBuilder(); + final List<LALScriptModel.FilterStatement> sinkStmts = new ArrayList<>(); + for (final LALScriptModel.SinkStatement ss : sink.getStatements()) { + sinkStmts.add((LALScriptModel.FilterStatement) ss); + } + generateSinkBody(bodyContent, sinkStmts, genCtx); + + // Assemble method with declarations before body + final StringBuilder body = new StringBuilder(); + body.append("private void ").append(methodName).append("(") + .append(FILTER_SPEC).append(" _f, ").append(H).append(" h) {\n"); + + final List<String[]> lvtVars = new ArrayList<>(); + lvtVars.add(new String[]{"_f", "L" + FILTER_SPEC.replace('.', '/') + ";"}); + lvtVars.add(new String[]{"h", "L" + H.replace('.', '/') + ";"}); + + if (genCtx.usedProtoAccess) { + if (genCtx.extraLogType != null) { + final String elTypeName = genCtx.extraLogType.getName(); + body.append(" ").append(elTypeName).append(" _p = (") + .append(elTypeName).append(") h.ctx().extraLog();\n"); + lvtVars.add(new String[]{"_p", + "L" + elTypeName.replace('.', '/') + ";"}); + } + body.append(genCtx.protoVarDecls); + lvtVars.addAll(genCtx.protoLvtVars); + } + + body.append(bodyContent); + body.append("}\n"); + genCtx.privateMethods.add(new LALClassGenerator.PrivateMethod( + body.toString(), lvtVars.toArray(new String[0][]))); + + genCtx.restoreProtoVarState(savedState); + + sb.append(" if (!ctx.shouldAbort()) {\n"); + sb.append(" ").append(methodName).append("(filterSpec, h);\n"); + sb.append(" }\n"); + sb.append(" filterSpec.finalizeSink(ctx);\n"); + } + + static void generateSinkBody( + final StringBuilder sb, + final List<? 
extends LALScriptModel.FilterStatement> stmts, + final LALClassGenerator.GenCtx genCtx) { + for (final LALScriptModel.FilterStatement stmt : stmts) { + if (stmt instanceof LALScriptModel.EnforcerStatement) { + sb.append(" _f.enforcer(h.ctx());\n"); + } else if (stmt instanceof LALScriptModel.DropperStatement) { + sb.append(" _f.dropper(h.ctx());\n"); + } else if (stmt instanceof LALScriptModel.SamplerBlock) { + generateSamplerInline(sb, (LALScriptModel.SamplerBlock) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.IfBlock) { + generateIfBlockInSink(sb, (LALScriptModel.IfBlock) stmt, genCtx); + } + } + } + + static void generateIfBlockInSink( + final StringBuilder sb, + final LALScriptModel.IfBlock ifBlock, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" if ("); + generateCondition(sb, ifBlock.getCondition(), genCtx); + sb.append(") {\n"); + generateSinkBody(sb, ifBlock.getThenBranch(), genCtx); + sb.append(" }\n"); + if (!ifBlock.getElseBranch().isEmpty()) { + sb.append(" else {\n"); + generateSinkBody(sb, ifBlock.getElseBranch(), genCtx); + sb.append(" }\n"); + } + } + + // ==================== Sampler/RateLimit inline ==================== + + static void generateSamplerInline( + final StringBuilder sb, + final LALScriptModel.SamplerBlock block, + final LALClassGenerator.GenCtx genCtx) { + generateSamplerContents(sb, block.getContents(), genCtx); + } + + static void generateSamplerContents( + final StringBuilder sb, + final List<LALScriptModel.SamplerContent> contents, + final LALClassGenerator.GenCtx genCtx) { + for (final LALScriptModel.SamplerContent content : contents) { + if (content instanceof LALScriptModel.RateLimitBlock) { + generateRateLimitInline(sb, (LALScriptModel.RateLimitBlock) content, + genCtx); + } else if (content instanceof LALScriptModel.IfBlock) { + generateSamplerIfBlock(sb, (LALScriptModel.IfBlock) content, genCtx); + } + } + } + + static void generateSamplerIfBlock( + final StringBuilder sb, + final 
LALScriptModel.IfBlock ifBlock, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" if ("); + generateCondition(sb, ifBlock.getCondition(), genCtx); + sb.append(") {\n"); + generateSamplerContentsFromFilterStmts(sb, ifBlock.getThenBranch(), genCtx); + sb.append(" }\n"); + if (!ifBlock.getElseBranch().isEmpty()) { + sb.append(" else {\n"); + generateSamplerContentsFromFilterStmts(sb, ifBlock.getElseBranch(), genCtx); + sb.append(" }\n"); + } + } + + static void generateSamplerContentsFromFilterStmts( + final StringBuilder sb, + final List<? extends LALScriptModel.FilterStatement> stmts, + final LALClassGenerator.GenCtx genCtx) { + for (final LALScriptModel.FilterStatement stmt : stmts) { + if (stmt instanceof LALScriptModel.SamplerBlock) { + generateSamplerContents(sb, + ((LALScriptModel.SamplerBlock) stmt).getContents(), genCtx); + } else if (stmt instanceof LALScriptModel.IfBlock) { + generateSamplerIfBlock(sb, (LALScriptModel.IfBlock) stmt, genCtx); + } + } + } + + static void generateRateLimitInline( + final StringBuilder sb, + final LALScriptModel.RateLimitBlock block, + final LALClassGenerator.GenCtx genCtx) { + sb.append(" _f.sampler().rateLimit(h.ctx(), "); + if (block.isIdInterpolated()) { + sb.append("\"\""); + for (final LALScriptModel.InterpolationPart part : block.getIdParts()) { + sb.append(" + "); + if (part.isLiteral()) { + sb.append("\"").append(LALCodegenHelper.escapeJava(part.getLiteral())) + .append("\""); + } else { + sb.append("String.valueOf("); + generateValueAccess(sb, part.getExpression(), genCtx); + sb.append(")"); + } + } + } else { + sb.append("\"").append(LALCodegenHelper.escapeJava(block.getId())).append("\""); + } + sb.append(", ").append(block.getRpm()).append(");\n"); + } + + // ==================== Conditions ==================== + + static void generateCondition(final StringBuilder sb, + final LALScriptModel.Condition cond, + final LALClassGenerator.GenCtx genCtx) { + if (cond instanceof LALScriptModel.ComparisonCondition) { 
+ final LALScriptModel.ComparisonCondition cc = + (LALScriptModel.ComparisonCondition) cond; + switch (cc.getOp()) { + case EQ: + sb.append("java.util.Objects.equals("); + generateValueAccessObj(sb, cc.getLeft(), cc.getLeftCast(), genCtx); + sb.append(", "); + generateConditionValue(sb, cc.getRight(), genCtx); + sb.append(")"); + break; + case NEQ: + sb.append("!java.util.Objects.equals("); + generateValueAccessObj(sb, cc.getLeft(), cc.getLeftCast(), genCtx); + sb.append(", "); + generateConditionValue(sb, cc.getRight(), genCtx); + sb.append(")"); + break; + case GT: + generateNumericComparison(sb, cc, " > ", genCtx); + break; + case LT: + generateNumericComparison(sb, cc, " < ", genCtx); + break; + case GTE: + generateNumericComparison(sb, cc, " >= ", genCtx); + break; + case LTE: + generateNumericComparison(sb, cc, " <= ", genCtx); + break; + default: + break; + } + } else if (cond instanceof LALScriptModel.LogicalCondition) { + final LALScriptModel.LogicalCondition lc = + (LALScriptModel.LogicalCondition) cond; + sb.append("("); + generateCondition(sb, lc.getLeft(), genCtx); + sb.append(lc.getOp() == LALScriptModel.LogicalOp.AND + ? " && " : " || "); + generateCondition(sb, lc.getRight(), genCtx); + sb.append(")"); + } else if (cond instanceof LALScriptModel.NotCondition) { + sb.append("!("); + generateCondition(sb, + ((LALScriptModel.NotCondition) cond).getInner(), genCtx); + sb.append(")"); + } else if (cond instanceof LALScriptModel.ExprCondition) { + final String ct = ((LALScriptModel.ExprCondition) cond).getCastType(); + final String method = "Boolean".equals(ct) || "boolean".equals(ct) + ? 
".isTrue(" : ".isNotEmpty("; + sb.append("h").append(method); + generateValueAccessObj(sb, + ((LALScriptModel.ExprCondition) cond).getExpr(), + ct, genCtx); + sb.append(")"); + } + } + + static void generateNumericComparison( + final StringBuilder sb, + final LALScriptModel.ComparisonCondition cc, + final String op, + final LALClassGenerator.GenCtx genCtx) { + // Generate left side into buffer to inspect resolved type + final StringBuilder leftBuf = new StringBuilder(); + generateValueAccessObj(leftBuf, cc.getLeft(), null, genCtx); + + final boolean primitiveNumeric = genCtx.lastResolvedType != null + && (genCtx.lastResolvedType == int.class + || genCtx.lastResolvedType == long.class); + + if (primitiveNumeric && genCtx.lastRawChain != null) { + // Direct primitive comparison — no boxing, no h.toLong() + if (genCtx.lastNullChecks != null) { + sb.append("(").append(genCtx.lastNullChecks).append(" ? false : ") + .append(genCtx.lastRawChain).append(op); + generateConditionValueNumeric(sb, cc.getRight(), genCtx); + sb.append(")"); + } else { + sb.append(genCtx.lastRawChain).append(op); + generateConditionValueNumeric(sb, cc.getRight(), genCtx); + } + } else { + // Fallback: h.toLong() conversion + sb.append("h.toLong(").append(leftBuf).append(")").append(op); + generateConditionValueNumeric(sb, cc.getRight(), genCtx); + } + } + + static void generateConditionValue(final StringBuilder sb, + final LALScriptModel.ConditionValue cv, + final LALClassGenerator.GenCtx genCtx) { + if (cv instanceof LALScriptModel.StringConditionValue) { + sb.append('"') + .append(LALCodegenHelper.escapeJava( + ((LALScriptModel.StringConditionValue) cv).getValue())) + .append('"'); + } else if (cv instanceof LALScriptModel.NumberConditionValue) { + final double val = + ((LALScriptModel.NumberConditionValue) cv).getValue(); + sb.append("Long.valueOf(").append((long) val).append("L)"); + } else if (cv instanceof LALScriptModel.BoolConditionValue) { + sb.append("Boolean.valueOf(") + 
.append(((LALScriptModel.BoolConditionValue) cv).isValue()) + .append(")"); + } else if (cv instanceof LALScriptModel.NullConditionValue) { + sb.append("null"); + } else if (cv instanceof LALScriptModel.ValueAccessConditionValue) { + generateValueAccessObj(sb, + ((LALScriptModel.ValueAccessConditionValue) cv).getValue(), + null, genCtx); + } + } + + static void generateConditionValueNumeric( + final StringBuilder sb, + final LALScriptModel.ConditionValue cv, + final LALClassGenerator.GenCtx genCtx) { + if (cv instanceof LALScriptModel.NumberConditionValue) { + sb.append((long) ((LALScriptModel.NumberConditionValue) cv) + .getValue()).append("L"); + } else if (cv instanceof LALScriptModel.ValueAccessConditionValue) { + sb.append("h.toLong("); + generateValueAccessObj(sb, + ((LALScriptModel.ValueAccessConditionValue) cv).getValue(), + null, genCtx); + sb.append(")"); + } else { + sb.append("0L"); + } + } + + // ==================== Value access ==================== + + static void generateCastedValueAccess(final StringBuilder sb, + final LALScriptModel.ValueAccess value, + final String castType, + final LALClassGenerator.GenCtx genCtx) { + if ("String".equals(castType)) { + sb.append("h.toStr("); + generateValueAccess(sb, value, genCtx); + sb.append(")"); + } else if ("Long".equals(castType)) { + sb.append("h.toLong("); + generateValueAccess(sb, value, genCtx); + sb.append(")"); + } else if ("Integer".equals(castType)) { + sb.append("h.toInt("); + generateValueAccess(sb, value, genCtx); + sb.append(")"); + } else if ("Boolean".equals(castType)) { + sb.append("h.toBool("); + generateValueAccess(sb, value, genCtx); + sb.append(")"); + } else { + generateValueAccess(sb, value, genCtx); + } + } + + static void generateStringValueAccess(final StringBuilder sb, + final LALScriptModel.ValueAccess value, + final String castType, + final LALClassGenerator.GenCtx genCtx) { + if (castType == null || "String".equals(castType)) { + sb.append("h.toStr("); + generateValueAccess(sb, 
value, genCtx); + sb.append(")"); + } else if ("Long".equals(castType)) { + sb.append("String.valueOf(h.toLong("); + generateValueAccess(sb, value, genCtx); + sb.append("))"); + } else if ("Integer".equals(castType)) { + sb.append("String.valueOf(h.toInt("); + generateValueAccess(sb, value, genCtx); + sb.append("))"); + } else if ("Boolean".equals(castType)) { + sb.append("String.valueOf(h.toBool("); + generateValueAccess(sb, value, genCtx); + sb.append("))"); + } else { + sb.append("h.toStr("); + generateValueAccess(sb, value, genCtx); + sb.append(")"); + } + } + + static void generateValueAccessObj(final StringBuilder sb, + final LALScriptModel.ValueAccess value, + final String castType, + final LALClassGenerator.GenCtx genCtx) { + if ("String".equals(castType)) { + sb.append("h.toStr("); + generateValueAccess(sb, value, genCtx); + sb.append(")"); + } else { + generateValueAccess(sb, value, genCtx); + } + } + + static void generateValueAccess(final StringBuilder sb, + final LALScriptModel.ValueAccess value, + final LALClassGenerator.GenCtx genCtx) { + genCtx.clearExtraLogResult(); + + // Handle string concatenation (term1 + term2 + ...) + if (!value.getConcatParts().isEmpty()) { + sb.append("(\"\" + "); + for (int i = 0; i < value.getConcatParts().size(); i++) { + if (i > 0) { + sb.append(" + "); + } + generateValueAccess(sb, value.getConcatParts().get(i), genCtx); + } + sb.append(")"); + return; + } + + // Handle parenthesized expression: (innerExpr as Type).chain... 
+ if (value.getParenInner() != null) { + generateParenAccess(sb, value, genCtx); + return; + } + + // Handle function call primaries (e.g., tag("LOG_KIND")) + if (value.getFunctionCallName() != null) { + if ("tag".equals(value.getFunctionCallName()) + && !value.getFunctionCallArgs().isEmpty()) { + sb.append("h.tagValue(\""); + final String key = value.getFunctionCallArgs().get(0) + .getValue().getSegments().get(0); + sb.append(LALCodegenHelper.escapeJava(key)).append("\")"); + } else { + sb.append("null"); + } + return; + } + + // Handle string/number literals + if (value.isStringLiteral() && value.getChain().isEmpty()) { + sb.append("\"").append(LALCodegenHelper.escapeJava(value.getSegments().get(0))) + .append("\""); + return; + } + if (value.isNumberLiteral() && value.getChain().isEmpty()) { + final String num = value.getSegments().get(0); + if (num.contains(".")) { + sb.append("Double.valueOf(").append(num).append(")"); + } else { + sb.append("Integer.valueOf(").append(num).append(")"); + } + return; + } + + // Handle ProcessRegistry static calls + if (value.isProcessRegistryRef()) { + generateProcessRegistryCall(sb, value, genCtx); + return; + } + + final List<LALScriptModel.ValueAccessSegment> chain = value.getChain(); + + // Handle log.X.Y direct proto getter chains + if (value.isLogRef()) { + generateLogAccess(sb, chain); + return; + } + + // Handle parsed.X.Y with compile-time type analysis + if (value.isParsedRef()) { + generateParsedAccess(sb, chain, genCtx); + return; + } + + // Fallback for unknown primary + if (chain.isEmpty()) { + sb.append("null"); + return; + } + // Treat as parsed ref + generateParsedAccess(sb, chain, genCtx); + } + + // ==================== Parenthesized expression ==================== + + static void generateParenAccess(final StringBuilder sb, + final LALScriptModel.ValueAccess value, + final LALClassGenerator.GenCtx genCtx) { + // Generate the inner expression with cast + final String castType = value.getParenCast(); + final 
StringBuilder inner = new StringBuilder(); + if (castType != null) { + generateCastedValueAccess(inner, value.getParenInner(), castType, genCtx); + } else { + generateValueAccess(inner, value.getParenInner(), genCtx); + } + + // Apply chain segments (methods, fields, index access) + String current = inner.toString(); + for (final LALScriptModel.ValueAccessSegment seg : value.getChain()) { + if (seg instanceof LALScriptModel.MethodSegment) { + current = appendMethodSegment(current, + (LALScriptModel.MethodSegment) seg); + } else if (seg instanceof LALScriptModel.IndexSegment) { + current = current + "[" + + ((LALScriptModel.IndexSegment) seg).getIndex() + "]"; + } else if (seg instanceof LALScriptModel.FieldSegment) { + final LALScriptModel.FieldSegment fs = + (LALScriptModel.FieldSegment) seg; + if (fs.isSafeNav()) { + current = "(" + current + " == null ? null : " + + current + "." + fs.getName() + ")"; + } else { + current = current + "." + fs.getName(); + } + } + } + sb.append(current); + } + + // ==================== Log access (direct proto getters) ==================== + + static void generateLogAccess(final StringBuilder sb, + final List<LALScriptModel.ValueAccessSegment> chain) { + if (chain.isEmpty()) { + sb.append("h.ctx().log()"); + return; + } + + String current = "h.ctx().log()"; + boolean needsBoxing = false; + String boxType = null; + + for (int i = 0; i < chain.size(); i++) { + final LALScriptModel.ValueAccessSegment seg = chain.get(i); + if (seg instanceof LALScriptModel.FieldSegment) { + final String name = ((LALScriptModel.FieldSegment) seg).getName(); + if (i == 0 && LALCodegenHelper.LOG_GETTERS.containsKey(name)) { + if ("traceContext".equals(name)) { + current = current + ".getTraceContext()"; + } else { + current = current + "." 
+ + LALCodegenHelper.LOG_GETTERS.get(name) + "()"; + if (LALCodegenHelper.LONG_FIELDS.contains(name)) { + needsBoxing = true; + boxType = "Long"; + } + } + } else if (i == 1 && current.endsWith(".getTraceContext()") + && LALCodegenHelper.TRACE_CONTEXT_GETTERS.containsKey(name)) { + current = current + "." + + LALCodegenHelper.TRACE_CONTEXT_GETTERS.get(name) + "()"; + if (LALCodegenHelper.INT_FIELDS.contains(name)) { + needsBoxing = true; + boxType = "Integer"; + } + } else { + throw new IllegalArgumentException( + "Unknown log field: log." + name + + ". Supported fields: " + + LALCodegenHelper.LOG_GETTERS.keySet() + + ", traceContext." + + LALCodegenHelper.TRACE_CONTEXT_GETTERS.keySet()); + } + } else if (seg instanceof LALScriptModel.MethodSegment) { + current = appendMethodSegment(current, + (LALScriptModel.MethodSegment) seg); + } + } + + if (needsBoxing) { + sb.append(boxType).append(".valueOf(").append(current).append(")"); + } else { + sb.append(current); + } + } + + // ==================== Parsed access (compile-time typed) ==================== + + static void generateParsedAccess( + final StringBuilder sb, + final List<LALScriptModel.ValueAccessSegment> chain, + final LALClassGenerator.GenCtx genCtx) { + if (chain.isEmpty()) { + sb.append("h.ctx().parsed()"); + return; + } + + // Collect leading field segments (stop at method/index) + final List<LALScriptModel.FieldSegment> fieldSegments = new ArrayList<>(); + int methodStart = -1; + for (int i = 0; i < chain.size(); i++) { + final LALScriptModel.ValueAccessSegment seg = chain.get(i); + if (seg instanceof LALScriptModel.FieldSegment) { + fieldSegments.add((LALScriptModel.FieldSegment) seg); + } else { + methodStart = i; + break; + } + } + + final List<String> fieldKeys = new ArrayList<>(); + for (final LALScriptModel.FieldSegment fs : fieldSegments) { + fieldKeys.add(fs.getName()); + } + + String current; + switch (genCtx.parserType) { + case JSON: + case YAML: + current = 
LALCodegenHelper.generateMapValCall(fieldKeys); + break; + case TEXT: + if (!fieldKeys.isEmpty()) { + current = "h.group(\"" + + LALCodegenHelper.escapeJava(fieldKeys.get(0)) + "\")"; + } else { + current = "h.ctx().parsed()"; + } + break; + case NONE: + if (genCtx.extraLogType != null) { + current = generateExtraLogAccess(fieldSegments, genCtx.extraLogType, + "_p", true, genCtx); + } else { + // No parser and no extraLogType — fall back to LogData proto + current = generateExtraLogAccess(fieldSegments, LogData.Builder.class, + "h.ctx().log()", false, genCtx); + } + break; + default: + current = "null"; + break; + } + + // Apply remaining method/index segments + if (methodStart >= 0) { + for (int i = methodStart; i < chain.size(); i++) { + final LALScriptModel.ValueAccessSegment seg = chain.get(i); + if (seg instanceof LALScriptModel.MethodSegment) { + current = appendMethodSegment(current, + (LALScriptModel.MethodSegment) seg); + } else if (seg instanceof LALScriptModel.IndexSegment) { + current = current + "[" + + ((LALScriptModel.IndexSegment) seg).getIndex() + "]"; + } else if (seg instanceof LALScriptModel.FieldSegment) { + current = current + "." 
+ + ((LALScriptModel.FieldSegment) seg).getName(); + } + } + } + + sb.append(current); + } + + static String generateExtraLogAccess( + final List<LALScriptModel.FieldSegment> fieldSegments, + final Class<?> rootType, + final String rootExpr, + final boolean rootCanBeNull, + final LALClassGenerator.GenCtx genCtx) { + genCtx.usedProtoAccess = true; + + if (fieldSegments.isEmpty()) { + return rootExpr; + } + + final String typeName = rootType.getName(); + final StringBuilder chainKey = new StringBuilder(); + String prevVar = rootExpr; + Class<?> currentType = rootType; + boolean prevCanBeNull = rootCanBeNull; + + for (int i = 0; i < fieldSegments.size(); i++) { + final LALScriptModel.FieldSegment seg = fieldSegments.get(i); + final String field = seg.getName(); + final String getterName = "get" + Character.toUpperCase(field.charAt(0)) + + field.substring(1); + + final java.lang.reflect.Method getter; + try { + getter = currentType.getMethod(getterName); + } catch (NoSuchMethodException e) { + throw new IllegalArgumentException( + "Cannot resolve getter " + currentType.getSimpleName() + + "." + getterName + "() for type " + + typeName + ". Check the field path in the LAL rule."); + } + final Class<?> returnType = getter.getReturnType(); + + if (chainKey.length() > 0) { + chainKey.append("."); + } + chainKey.append(field); + final String key = chainKey.toString(); + final boolean isLast = i == fieldSegments.size() - 1; + + // Primitive final segment: return inline expression, no variable + if (isLast && returnType.isPrimitive()) { + final String rawAccess = prevVar + "." + getterName + "()"; + genCtx.lastResolvedType = returnType; + genCtx.lastRawChain = rawAccess; + final String boxName = LALCodegenHelper.boxTypeName(returnType); + if (seg.isSafeNav() && prevCanBeNull) { + genCtx.lastNullChecks = prevVar + " == null"; + return "(" + prevVar + " == null ? 
null : " + + boxName + ".valueOf(" + rawAccess + "))"; + } else { + genCtx.lastNullChecks = null; + return boxName + ".valueOf(" + rawAccess + ")"; + } + } + + // Reuse existing variable (dedup) + final String existingVar = genCtx.protoVars.get(key); + if (existingVar != null) { + prevVar = existingVar; + currentType = returnType; + prevCanBeNull = true; + continue; + } + + // Create new local variable declaration + final String newVar = "_t" + genCtx.protoVarCounter++; + final String returnTypeName = returnType.getName(); + if (seg.isSafeNav() && prevCanBeNull) { + genCtx.protoVarDecls.append(" ").append(returnTypeName) + .append(" ").append(newVar).append(" = ") + .append(prevVar).append(" == null ? null : ") + .append(prevVar).append(".").append(getterName).append("();\n"); + prevCanBeNull = true; + } else { + genCtx.protoVarDecls.append(" ").append(returnTypeName) + .append(" ").append(newVar).append(" = ") + .append(prevVar).append(".").append(getterName).append("();\n"); + prevCanBeNull = !returnType.isPrimitive(); + } + genCtx.protoVars.put(key, newVar); + genCtx.protoLvtVars.add(new String[]{ + newVar, "L" + returnTypeName.replace('.', '/') + ";" + }); + + prevVar = newVar; + currentType = returnType; + } + + // Non-primitive final result — null checks are in declarations + genCtx.lastResolvedType = currentType; + genCtx.lastRawChain = prevVar; + genCtx.lastNullChecks = null; + return prevVar; + } + + // ==================== ProcessRegistry ==================== + + static void generateProcessRegistryCall( + final StringBuilder sb, + final LALScriptModel.ValueAccess value, + final LALClassGenerator.GenCtx genCtx) { + final List<LALScriptModel.ValueAccessSegment> chain = value.getChain(); + if (chain.isEmpty()) { + sb.append("null"); + return; + } + final LALScriptModel.ValueAccessSegment seg = chain.get(0); + if (seg instanceof LALScriptModel.MethodSegment) { + final LALScriptModel.MethodSegment ms = + (LALScriptModel.MethodSegment) seg; + 
sb.append(PROCESS_REGISTRY).append(".") + .append(ms.getName()).append("("); + final List<LALScriptModel.FunctionArg> args = ms.getArguments(); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateCastedValueAccess(sb, + args.get(i).getValue(), args.get(i).getCastType(), genCtx); + } + sb.append(")"); + } else { + sb.append("null"); + } + } + + // ==================== Utility methods ==================== + + static String appendMethodSegment(final String current, + final LALScriptModel.MethodSegment ms) { + if (ms.isSafeNav()) { + final String mn = ms.getName(); + if ("toString".equals(mn)) { + return "h.toString(" + current + ")"; + } else if ("trim".equals(mn)) { + return "h.trim(" + current + ")"; + } else { + throw new IllegalArgumentException( + "Unsupported safe-nav method: ?." + mn + "()"); + } + } else { + if (ms.getArguments().isEmpty()) { + return current + "." + ms.getName() + "()"; + } else { + return current + "." + ms.getName() + "(" + + generateMethodArgs(ms.getArguments()) + ")"; + } + } + } + + static String generateMethodArgs( + final List<LALScriptModel.FunctionArg> args) { + final StringBuilder sb = new StringBuilder(); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + final LALScriptModel.FunctionArg arg = args.get(i); + final LALScriptModel.ValueAccess va = arg.getValue(); + if (va.isStringLiteral()) { + sb.append("\"").append(LALCodegenHelper.escapeJava( + va.getSegments().get(0))).append("\""); + } else if (va.isNumberLiteral()) { + sb.append(va.getSegments().get(0)); + } else { + sb.append("null"); + } + } + return sb.toString(); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALClassGenerator.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALClassGenerator.java new file mode 100644 index 000000000000..bc1c054cfad9 --- /dev/null +++ 
b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALClassGenerator.java @@ -0,0 +1,553 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import java.io.DataOutputStream; +import java.io.File; +import java.io.FileOutputStream; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicInteger; +import javassist.ClassPool; +import javassist.CtClass; +import javassist.CtNewConstructor; +import javassist.CtNewMethod; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.rt.LalExpressionPackageHolder; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression; +import org.apache.skywalking.oap.server.core.WorkPath; +import org.apache.skywalking.oap.server.library.util.StringUtil; + +/** + * Generates {@link LalExpression} implementation classes from + * {@link LALScriptModel} AST using Javassist bytecode generation. + * + * <p>Generates a single class with {@code execute()} and private helper + * methods — no consumer classes or callback indirection. 
+ * + * <p>Block-level code generation (extractor, sink, condition, value access) + * is delegated to {@link LALBlockCodegen}. Static utility constants and + * methods live in {@link LALCodegenHelper}. + */ +@Slf4j +public final class LALClassGenerator { + + private static final AtomicInteger CLASS_COUNTER = new AtomicInteger(0); + + private static final String PACKAGE_PREFIX = + "org.apache.skywalking.oap.log.analyzer.v2.compiler.rt."; + + private static final String FILTER_SPEC = + "org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec"; + private static final String EXEC_CTX = + "org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext"; + private static final String H = + "org.apache.skywalking.oap.log.analyzer.v2.compiler.rt.LalRuntimeHelper"; + + private static final java.util.Set<String> USED_CLASS_NAMES = + java.util.Collections.synchronizedSet(new java.util.HashSet<>()); + + private final ClassPool classPool; + private File classOutputDir; + private String classNameHint; + private Class<?> extraLogType; + private String yamlSource; + + // ==================== Parser type detection ==================== + + enum ParserType { JSON, YAML, TEXT, NONE } + + static class PrivateMethod { + final String source; + final String[][] lvtVars; + + PrivateMethod(final String source, final String[][] lvtVars) { + this.source = source; + this.lvtVars = lvtVars; + } + } + + static class GenCtx { + final ParserType parserType; + final Class<?> extraLogType; + final List<PrivateMethod> privateMethods = new ArrayList<>(); + final Map<String, Integer> methodCounts = new HashMap<>(); + + // Set by generateExtraLogAccess for primitive optimization in callers. + // Reset to null by generateValueAccess at the start of each value access. + Class<?> lastResolvedType; + String lastNullChecks; + String lastRawChain; + + // Per-method proto field variable caching (NONE + extraLogType only). 
+ // Maps chain key ("response", "response.responseCode") to variable name ("_t0", "_t1"). + // Enables dedup: the same chain accessed multiple times reuses the same variable. + final Map<String, String> protoVars = new HashMap<>(); + final List<String[]> protoLvtVars = new ArrayList<>(); + final StringBuilder protoVarDecls = new StringBuilder(); + int protoVarCounter; + boolean usedProtoAccess; + + GenCtx(final ParserType parserType, final Class<?> extraLogType) { + this.parserType = parserType; + this.extraLogType = extraLogType; + } + + String nextMethodName(final String prefix) { + final int count = methodCounts.merge(prefix, 1, Integer::sum); + return count == 1 ? "_" + prefix : "_" + prefix + "_" + count; + } + + void clearExtraLogResult() { + lastResolvedType = null; + lastNullChecks = null; + lastRawChain = null; + } + + void resetProtoVars() { + protoVars.clear(); + protoLvtVars.clear(); + protoVarDecls.setLength(0); + protoVarCounter = 0; + usedProtoAccess = false; + } + + Object[] saveProtoVarState() { + return new Object[]{ + new HashMap<>(protoVars), + new ArrayList<>(protoLvtVars), + protoVarDecls.toString(), + protoVarCounter, + usedProtoAccess + }; + } + + @SuppressWarnings("unchecked") + void restoreProtoVarState(final Object[] state) { + protoVars.clear(); + protoVars.putAll((Map<String, String>) state[0]); + protoLvtVars.clear(); + protoLvtVars.addAll((List<String[]>) state[1]); + protoVarDecls.setLength(0); + protoVarDecls.append((String) state[2]); + protoVarCounter = (Integer) state[3]; + usedProtoAccess = (Boolean) state[4]; + } + } + + public LALClassGenerator() { + this(ClassPool.getDefault()); + if (StringUtil.isNotEmpty(System.getenv("SW_DYNAMIC_CLASS_ENGINE_DEBUG"))) { + classOutputDir = new File(WorkPath.getPath().getParentFile(), "lal-rt"); + } + } + + public LALClassGenerator(final ClassPool classPool) { + this.classPool = classPool; + } + + public void setClassOutputDir(final File dir) { + this.classOutputDir = dir; + } + + public 
void setClassNameHint(final String hint) { + this.classNameHint = hint; + } + + public void setExtraLogType(final Class<?> extraLogType) { + this.extraLogType = extraLogType; + } + + public void setYamlSource(final String yamlSource) { + this.yamlSource = yamlSource; + } + + private String makeClassName(final String defaultPrefix) { + if (classNameHint != null) { + return dedupClassName( + PACKAGE_PREFIX + LALCodegenHelper.sanitizeName(classNameHint)); + } + return PACKAGE_PREFIX + defaultPrefix + CLASS_COUNTER.getAndIncrement(); + } + + private String dedupClassName(final String base) { + if (USED_CLASS_NAMES.add(base)) { + return base; + } + for (int i = 2; ; i++) { + final String candidate = base + "_" + i; + if (USED_CLASS_NAMES.add(candidate)) { + return candidate; + } + } + } + + private void writeClassFile(final CtClass ctClass) { + if (classOutputDir == null) { + return; + } + if (!classOutputDir.exists()) { + classOutputDir.mkdirs(); + } + final File file = new File(classOutputDir, ctClass.getSimpleName() + ".class"); + try (DataOutputStream out = new DataOutputStream(new FileOutputStream(file))) { + ctClass.toBytecode(out); + } catch (Exception e) { + log.warn("Failed to write class file {}: {}", file, e.getMessage()); + } + } + + /** + * Adds a {@code LineNumberTable} attribute by scanning bytecode for + * store instructions to local variable slots ≥ {@code firstResultSlot}. 
+ */ + private void addLineNumberTable(final javassist.CtMethod method, + final int firstResultSlot) { + try { + final javassist.bytecode.MethodInfo mi = method.getMethodInfo(); + final javassist.bytecode.CodeAttribute code = mi.getCodeAttribute(); + if (code == null) { + return; + } + + final List<int[]> entries = new ArrayList<>(); + int line = 1; + boolean nextIsNewLine = true; + + final javassist.bytecode.CodeIterator ci = code.iterator(); + while (ci.hasNext()) { + final int pc = ci.next(); + if (nextIsNewLine) { + entries.add(new int[]{pc, line++}); + nextIsNewLine = false; + } + final int op = ci.byteAt(pc) & 0xFF; + int slot = -1; + if (op >= 59 && op <= 78) { + slot = (op - 59) % 4; + } else if (op >= 54 && op <= 58) { + slot = ci.byteAt(pc + 1) & 0xFF; + } + if (slot >= firstResultSlot) { + nextIsNewLine = true; + } + } + + if (entries.isEmpty()) { + return; + } + + final javassist.bytecode.ConstPool cp = mi.getConstPool(); + final byte[] info = new byte[2 + entries.size() * 4]; + info[0] = (byte) (entries.size() >> 8); + info[1] = (byte) entries.size(); + for (int i = 0; i < entries.size(); i++) { + final int off = 2 + i * 4; + info[off] = (byte) (entries.get(i)[0] >> 8); + info[off + 1] = (byte) entries.get(i)[0]; + info[off + 2] = (byte) (entries.get(i)[1] >> 8); + info[off + 3] = (byte) entries.get(i)[1]; + } + code.getAttributes().add( + new javassist.bytecode.AttributeInfo(cp, "LineNumberTable", info)); + } catch (Exception e) { + log.warn("Failed to add LineNumberTable: {}", e.getMessage()); + } + } + + private static void setSourceFile(final CtClass ctClass, final String name) { + try { + final javassist.bytecode.ClassFile cf = ctClass.getClassFile(); + final javassist.bytecode.AttributeInfo sf = cf.getAttribute("SourceFile"); + if (sf != null) { + final javassist.bytecode.ConstPool cp = cf.getConstPool(); + final int idx = cp.addUtf8Info(name); + sf.set(new byte[]{(byte) (idx >> 8), (byte) idx}); + } + } catch (Exception e) { + // best-effort + } 
+ } + + /** + * Builds the SourceFile name for a generated class. When YAML source info + * is available, produces {@code "(default)ruleName.java"}; + * otherwise falls back to {@code "ruleName.java"}. + */ + private String formatSourceFileName(final String ruleName) { + final String classFile = ruleName + ".java"; + if (yamlSource != null) { + return "(" + yamlSource + ")" + classFile; + } + return classFile; + } + + private void addLocalVariableTable(final javassist.CtMethod method, + final String className, + final String[][] vars) { + try { + final javassist.bytecode.MethodInfo mi = method.getMethodInfo(); + final javassist.bytecode.CodeAttribute code = mi.getCodeAttribute(); + if (code == null) { + return; + } + final javassist.bytecode.ConstPool cp = mi.getConstPool(); + final int len = code.getCodeLength(); + final javassist.bytecode.LocalVariableAttribute lva = + new javassist.bytecode.LocalVariableAttribute(cp); + lva.addEntry(0, len, + cp.addUtf8Info("this"), + cp.addUtf8Info("L" + className.replace('.', '/') + ";"), 0); + for (int i = 0; i < vars.length; i++) { + lva.addEntry(0, len, + cp.addUtf8Info(vars[i][0]), + cp.addUtf8Info(vars[i][1]), i + 1); + } + code.getAttributes().add(lva); + } catch (Exception e) { + log.warn("Failed to add LocalVariableTable: {}", e.getMessage()); + } + } + + private static ParserType detectParserType( + final List<? 
extends LALScriptModel.FilterStatement> stmts) { + for (final LALScriptModel.FilterStatement stmt : stmts) { + if (stmt instanceof LALScriptModel.JsonParser) { + return ParserType.JSON; + } + if (stmt instanceof LALScriptModel.YamlParser) { + return ParserType.YAML; + } + if (stmt instanceof LALScriptModel.TextParser) { + return ParserType.TEXT; + } + if (stmt instanceof LALScriptModel.IfBlock) { + final LALScriptModel.IfBlock ifBlock = (LALScriptModel.IfBlock) stmt; + ParserType t = detectParserType(ifBlock.getThenBranch()); + if (t != ParserType.NONE) { + return t; + } + t = detectParserType(ifBlock.getElseBranch()); + if (t != ParserType.NONE) { + return t; + } + } + } + return ParserType.NONE; + } + + // ==================== Compilation ==================== + + /** + * Compiles a LAL DSL script into a LalExpression implementation. + */ + public LalExpression compile(final String dsl) throws Exception { + final LALScriptModel model = LALScriptParser.parse(dsl); + return compileFromModel(model); + } + + /** + * Compiles from a pre-parsed model. Generates a single class with + * execute() and private helper methods. 
+ */ + public LalExpression compileFromModel(final LALScriptModel model) throws Exception { + final String className = makeClassName("LalExpr_"); + final ParserType parserType = detectParserType(model.getStatements()); + final GenCtx genCtx = new GenCtx(parserType, this.extraLogType); + + if (parserType == ParserType.NONE && this.extraLogType != null) { + log.info("LAL rule has no parser — using extraLogType {} for " + + "direct getter calls.", this.extraLogType.getName()); + } + + final String executeBody = generateExecuteMethod(model, genCtx); + + if (log.isDebugEnabled()) { + log.debug("LAL compile AST: {}", model); + log.debug("LAL compile execute():\n{}", executeBody); + for (final PrivateMethod pm : genCtx.privateMethods) { + log.debug("LAL compile private method:\n{}", pm.source); + } + } + + final CtClass ctClass = classPool.makeClass(className); + ctClass.addInterface(classPool.get( + "org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression")); + ctClass.addConstructor(CtNewConstructor.defaultConstructor(ctClass)); + + // Add private methods BEFORE execute so Javassist can resolve calls + for (final PrivateMethod pm : genCtx.privateMethods) { + final javassist.CtMethod ctMethod = CtNewMethod.make(pm.source, ctClass); + ctClass.addMethod(ctMethod); + addLocalVariableTable(ctMethod, className, pm.lvtVars); + addLineNumberTable(ctMethod, pm.lvtVars.length + 1); // after this + params + } + + final javassist.CtMethod execMethod = CtNewMethod.make(executeBody, ctClass); + ctClass.addMethod(execMethod); + + // Build LVT for execute(): params + h + optional _p and proto vars + final List<String[]> execLvt = new ArrayList<>(); + execLvt.add(new String[]{"filterSpec", "L" + FILTER_SPEC.replace('.', '/') + ";"}); + execLvt.add(new String[]{"ctx", "L" + EXEC_CTX.replace('.', '/') + ";"}); + execLvt.add(new String[]{"h", "L" + H.replace('.', '/') + ";"}); + if (genCtx.usedProtoAccess) { + if (genCtx.extraLogType != null) { + execLvt.add(new String[]{"_p", + "L" + 
genCtx.extraLogType.getName().replace('.', '/') + ";"}); + } + execLvt.addAll(genCtx.protoLvtVars); + } + addLocalVariableTable(execMethod, className, + execLvt.toArray(new String[0][])); + addLineNumberTable(execMethod, 3); // slot 0=this, 1=filterSpec, 2=ctx + + setSourceFile(ctClass, formatSourceFileName( + classNameHint != null ? classNameHint : className)); + + writeClassFile(ctClass); + + final Class<?> clazz = ctClass.toClass(LalExpressionPackageHolder.class); + ctClass.detach(); + return (LalExpression) clazz.getDeclaredConstructor().newInstance(); + } + + private static boolean hasParsedAccess( + final List<? extends LALScriptModel.FilterStatement> stmts) { + for (final LALScriptModel.FilterStatement stmt : stmts) { + if (stmt instanceof LALScriptModel.ExtractorBlock) { + return true; + } + if (stmt instanceof LALScriptModel.IfBlock) { + final LALScriptModel.IfBlock ifBlock = (LALScriptModel.IfBlock) stmt; + if (hasParsedAccess(ifBlock.getThenBranch()) + || hasParsedAccess(ifBlock.getElseBranch())) { + return true; + } + } + } + return false; + } + + // ==================== Execute method generation ==================== + + private String generateExecuteMethod(final LALScriptModel model, + final GenCtx genCtx) { + genCtx.resetProtoVars(); + + // Generate body first so proto var declarations are collected + final StringBuilder bodyContent = new StringBuilder(); + for (final LALScriptModel.FilterStatement stmt : model.getStatements()) { + generateFilterStatement(bodyContent, stmt, genCtx); + } + + final StringBuilder sb = new StringBuilder(); + sb.append("public void execute(").append(FILTER_SPEC) + .append(" filterSpec, ").append(EXEC_CTX).append(" ctx) {\n"); + sb.append(" ").append(H).append(" h = new ").append(H).append("(ctx);\n"); + + // Insert _p + proto var declarations if any proto field access was used + if (genCtx.usedProtoAccess) { + if (genCtx.extraLogType != null) { + final String elTypeName = genCtx.extraLogType.getName(); + sb.append(" 
").append(elTypeName).append(" _p = (") + .append(elTypeName).append(") h.ctx().extraLog();\n"); + } + sb.append(genCtx.protoVarDecls); + } + + sb.append(bodyContent); + sb.append("}\n"); + return sb.toString(); + } + + private void generateFilterStatement(final StringBuilder sb, + final LALScriptModel.FilterStatement stmt, + final GenCtx genCtx) { + if (stmt instanceof LALScriptModel.TextParser) { + final LALScriptModel.TextParser tp = (LALScriptModel.TextParser) stmt; + if (tp.getRegexpPattern() != null) { + sb.append(" filterSpec.textWithRegexp(ctx, \"") + .append(LALCodegenHelper.escapeJava(tp.getRegexpPattern())) + .append("\");\n"); + } else { + sb.append(" filterSpec.text(ctx);\n"); + } + } else if (stmt instanceof LALScriptModel.JsonParser) { + sb.append(" filterSpec.json(ctx);\n"); + } else if (stmt instanceof LALScriptModel.YamlParser) { + sb.append(" filterSpec.yaml(ctx);\n"); + } else if (stmt instanceof LALScriptModel.AbortStatement) { + sb.append(" filterSpec.abort(ctx);\n"); + } else if (stmt instanceof LALScriptModel.ExtractorBlock) { + LALBlockCodegen.generateExtractorMethod( + sb, (LALScriptModel.ExtractorBlock) stmt, genCtx); + } else if (stmt instanceof LALScriptModel.SinkBlock) { + final LALScriptModel.SinkBlock sink = (LALScriptModel.SinkBlock) stmt; + if (sink.getStatements().isEmpty()) { + sb.append(" filterSpec.sink(ctx);\n"); + } else { + LALBlockCodegen.generateSinkMethod(sb, sink, genCtx); + } + } else if (stmt instanceof LALScriptModel.IfBlock) { + generateTopLevelIfBlock(sb, (LALScriptModel.IfBlock) stmt, genCtx); + } + } + + private void generateTopLevelIfBlock(final StringBuilder sb, + final LALScriptModel.IfBlock ifBlock, + final GenCtx genCtx) { + sb.append(" if ("); + LALBlockCodegen.generateCondition(sb, ifBlock.getCondition(), genCtx); + sb.append(") {\n"); + for (final LALScriptModel.FilterStatement s : ifBlock.getThenBranch()) { + generateFilterStatement(sb, s, genCtx); + } + sb.append(" }\n"); + if 
(!ifBlock.getElseBranch().isEmpty()) { + sb.append(" else {\n"); + for (final LALScriptModel.FilterStatement s : ifBlock.getElseBranch()) { + generateFilterStatement(sb, s, genCtx); + } + sb.append(" }\n"); + } + } + + // ==================== Source generation (for testing) ==================== + + /** + * Generates the Java source of execute() + private methods for + * debugging/testing. + */ + public String generateSource(final String dsl) { + final LALScriptModel model = LALScriptParser.parse(dsl); + final GenCtx genCtx = new GenCtx( + detectParserType(model.getStatements()), this.extraLogType); + final String execute = generateExecuteMethod(model, genCtx); + if (genCtx.privateMethods.isEmpty()) { + return execute; + } + final StringBuilder all = new StringBuilder(execute); + for (final PrivateMethod m : genCtx.privateMethods) { + all.append("\n").append(m.source); + } + return all.toString(); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALCodegenHelper.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALCodegenHelper.java new file mode 100644 index 000000000000..0947ee263c30 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALCodegenHelper.java @@ -0,0 +1,107 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +/** + * Static utility constants and methods extracted from {@link LALClassGenerator} + * for reuse by {@link LALBlockCodegen}. + */ +final class LALCodegenHelper { + + static final Map<String, String> LOG_GETTERS = new HashMap<>(); + static final Map<String, String> TRACE_CONTEXT_GETTERS = new HashMap<>(); + static final Set<String> LONG_FIELDS = new HashSet<>(); + static final Set<String> INT_FIELDS = new HashSet<>(); + + static { + LOG_GETTERS.put("service", "getService"); + LOG_GETTERS.put("serviceInstance", "getServiceInstance"); + LOG_GETTERS.put("endpoint", "getEndpoint"); + LOG_GETTERS.put("timestamp", "getTimestamp"); + LOG_GETTERS.put("body", "getBody"); + LOG_GETTERS.put("traceContext", "getTraceContext"); + LOG_GETTERS.put("tags", "getTags"); + LOG_GETTERS.put("layer", "getLayer"); + + TRACE_CONTEXT_GETTERS.put("traceId", "getTraceId"); + TRACE_CONTEXT_GETTERS.put("traceSegmentId", "getTraceSegmentId"); + TRACE_CONTEXT_GETTERS.put("spanId", "getSpanId"); + + LONG_FIELDS.add("timestamp"); + INT_FIELDS.add("spanId"); + } + + private LALCodegenHelper() { + // utility class + } + + static String escapeJava(final String s) { + return s.replace("\\", "\\\\") + .replace("\"", "\\\"") + .replace("\n", "\\n") + .replace("\r", "\\r") + .replace("\t", "\\t"); + } + + static String boxTypeName(final Class<?> primitiveType) { + if (primitiveType == int.class) 
{ + return "Integer"; + } else if (primitiveType == long.class) { + return "Long"; + } else if (primitiveType == boolean.class) { + return "Boolean"; + } else if (primitiveType == double.class) { + return "Double"; + } else if (primitiveType == float.class) { + return "Float"; + } + return null; + } + + static String sanitizeName(final String name) { + final StringBuilder sb = new StringBuilder(name.length()); + for (int i = 0; i < name.length(); i++) { + final char c = name.charAt(i); + sb.append(i == 0 + ? (Character.isJavaIdentifierStart(c) ? c : '_') + : (Character.isJavaIdentifierPart(c) ? c : '_')); + } + return sb.length() == 0 ? "Generated" : sb.toString(); + } + + static String generateMapValCall(final List<String> keys) { + if (keys.isEmpty()) { + return "h.ctx().parsed()"; + } + final StringBuilder call = new StringBuilder("h.mapVal("); + for (int i = 0; i < keys.size(); i++) { + if (i > 0) { + call.append(", "); + } + call.append("\"").append(escapeJava(keys.get(i))).append("\""); + } + call.append(")"); + return call.toString(); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptModel.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptModel.java new file mode 100644 index 000000000000..762e9cbd0b04 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptModel.java @@ -0,0 +1,563 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import java.util.Collections; +import java.util.List; +import java.util.Map; +import lombok.Getter; + +/** + * Immutable AST model for LAL (Log Analysis Language) scripts. + * + * <p>Represents parsed scripts like: + * <pre> + * filter { + * json {} + * extractor { service parsed.service as String } + * sink { sampler { rateLimit("id") { rpm 6000 } } } + * } + * </pre> + */ +public final class LALScriptModel { + + @Getter + private final List<FilterStatement> statements; + + public LALScriptModel(final List<FilterStatement> statements) { + this.statements = Collections.unmodifiableList(statements); + } + + // ==================== Filter statements ==================== + + public interface FilterStatement { + } + + // ==================== Parser blocks ==================== + + @Getter + public static final class TextParser implements FilterStatement { + private final String regexpPattern; + private final boolean abortOnFailure; + + public TextParser(final String regexpPattern, final boolean abortOnFailure) { + this.regexpPattern = regexpPattern; + this.abortOnFailure = abortOnFailure; + } + } + + @Getter + public static final class JsonParser implements FilterStatement { + private final boolean abortOnFailure; + + public JsonParser(final boolean abortOnFailure) { + this.abortOnFailure = abortOnFailure; + } + } + + @Getter + public static final class YamlParser implements FilterStatement { + private final boolean abortOnFailure; + + public YamlParser(final boolean abortOnFailure) { + 
this.abortOnFailure = abortOnFailure; + } + } + + public static final class AbortStatement implements FilterStatement { + } + + // ==================== Extractor block ==================== + + @Getter + public static final class ExtractorBlock implements FilterStatement { + private final List<ExtractorStatement> statements; + + public ExtractorBlock(final List<ExtractorStatement> statements) { + this.statements = Collections.unmodifiableList(statements); + } + } + + public interface ExtractorStatement { + } + + @Getter + public static final class FieldAssignment implements ExtractorStatement, FilterStatement { + private final FieldType fieldType; + private final ValueAccess value; + private final String castType; + private final String formatPattern; + + public FieldAssignment(final FieldType fieldType, + final ValueAccess value, + final String castType, + final String formatPattern) { + this.fieldType = fieldType; + this.value = value; + this.castType = castType; + this.formatPattern = formatPattern; + } + } + + public enum FieldType { + SERVICE, INSTANCE, ENDPOINT, LAYER, + TRACE_ID, SEGMENT_ID, SPAN_ID, TIMESTAMP + } + + @Getter + public static final class TagAssignment implements ExtractorStatement, FilterStatement { + private final Map<String, TagValue> tags; + + public TagAssignment(final Map<String, TagValue> tags) { + this.tags = Collections.unmodifiableMap(tags); + } + } + + @Getter + public static final class TagValue { + private final ValueAccess value; + private final String castType; + + public TagValue(final ValueAccess value, final String castType) { + this.value = value; + this.castType = castType; + } + } + + @Getter + public static final class MetricsBlock implements ExtractorStatement, FilterStatement { + private final String name; + private final ValueAccess timestampValue; + private final String timestampCast; + private final Map<String, TagValue> labels; + private final ValueAccess value; + private final String valueCast; + + public 
MetricsBlock(final String name, + final ValueAccess timestampValue, + final String timestampCast, + final Map<String, TagValue> labels, + final ValueAccess value, + final String valueCast) { + this.name = name; + this.timestampValue = timestampValue; + this.timestampCast = timestampCast; + this.labels = labels != null ? Collections.unmodifiableMap(labels) : Collections.emptyMap(); + this.value = value; + this.valueCast = valueCast; + } + } + + @Getter + public static final class SlowSqlBlock implements ExtractorStatement, FilterStatement { + private final ValueAccess id; + private final String idCast; + private final ValueAccess statement; + private final String statementCast; + private final ValueAccess latency; + private final String latencyCast; + + public SlowSqlBlock(final ValueAccess id, final String idCast, + final ValueAccess statement, final String statementCast, + final ValueAccess latency, final String latencyCast) { + this.id = id; + this.idCast = idCast; + this.statement = statement; + this.statementCast = statementCast; + this.latency = latency; + this.latencyCast = latencyCast; + } + } + + @Getter + public static final class SampledTraceBlock implements ExtractorStatement, FilterStatement { + private final List<SampledTraceStatement> statements; + + public SampledTraceBlock(final List<SampledTraceStatement> statements) { + this.statements = Collections.unmodifiableList(statements); + } + } + + public interface SampledTraceStatement { + } + + @Getter + public static final class SampledTraceField implements SampledTraceStatement, FilterStatement { + private final SampledTraceFieldType fieldType; + private final ValueAccess value; + private final String castType; + + public SampledTraceField(final SampledTraceFieldType fieldType, + final ValueAccess value, + final String castType) { + this.fieldType = fieldType; + this.value = value; + this.castType = castType; + } + } + + public enum SampledTraceFieldType { + LATENCY, URI, REASON, PROCESS_ID, 
DEST_PROCESS_ID, + DETECT_POINT, COMPONENT_ID, REPORT_SERVICE + } + + // ==================== Sink block ==================== + + @Getter + public static final class SinkBlock implements FilterStatement { + private final List<SinkStatement> statements; + + public SinkBlock(final List<SinkStatement> statements) { + this.statements = Collections.unmodifiableList(statements); + } + } + + public interface SinkStatement { + } + + @Getter + public static final class SamplerBlock implements SinkStatement, FilterStatement { + private final List<SamplerContent> contents; + + public SamplerBlock(final List<SamplerContent> contents) { + this.contents = Collections.unmodifiableList(contents); + } + } + + public interface SamplerContent { + } + + @Getter + public static final class RateLimitBlock implements SamplerContent { + private final String id; + private final List<InterpolationPart> idParts; + private final long rpm; + + public RateLimitBlock(final String id, + final List<InterpolationPart> idParts, + final long rpm) { + this.id = id; + this.idParts = idParts != null + ? 
Collections.unmodifiableList(idParts) : Collections.emptyList(); + this.rpm = rpm; + } + + public boolean isIdInterpolated() { + return !idParts.isEmpty(); + } + } + + @Getter + public static final class InterpolationPart { + private final String literal; + private final ValueAccess expression; + + private InterpolationPart(final String literal, final ValueAccess expression) { + this.literal = literal; + this.expression = expression; + } + + public static InterpolationPart ofLiteral(final String text) { + return new InterpolationPart(text, null); + } + + public static InterpolationPart ofExpression(final ValueAccess expr) { + return new InterpolationPart(null, expr); + } + + public boolean isLiteral() { + return literal != null; + } + } + + public static final class EnforcerStatement implements SinkStatement, FilterStatement { + } + + public static final class DropperStatement implements SinkStatement, FilterStatement { + } + + // ==================== Control flow ==================== + + @Getter + public static final class IfBlock implements FilterStatement, ExtractorStatement, + SinkStatement, SampledTraceStatement, SamplerContent { + private final Condition condition; + private final List<FilterStatement> thenBranch; + private final List<FilterStatement> elseBranch; + + public IfBlock(final Condition condition, + final List<FilterStatement> thenBranch, + final List<FilterStatement> elseBranch) { + this.condition = condition; + this.thenBranch = Collections.unmodifiableList(thenBranch); + this.elseBranch = elseBranch != null + ? 
Collections.unmodifiableList(elseBranch) : Collections.emptyList(); + } + } + + // ==================== Conditions ==================== + + public interface Condition { + } + + @Getter + public static final class ComparisonCondition implements Condition { + private final ValueAccess left; + private final String leftCast; + private final CompareOp op; + private final ConditionValue right; + + public ComparisonCondition(final ValueAccess left, + final String leftCast, + final CompareOp op, + final ConditionValue right) { + this.left = left; + this.leftCast = leftCast; + this.op = op; + this.right = right; + } + } + + @Getter + public static final class LogicalCondition implements Condition { + private final Condition left; + private final LogicalOp op; + private final Condition right; + + public LogicalCondition(final Condition left, final LogicalOp op, final Condition right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + @Getter + public static final class NotCondition implements Condition { + private final Condition inner; + + public NotCondition(final Condition inner) { + this.inner = inner; + } + } + + @Getter + public static final class ExprCondition implements Condition { + private final ValueAccess expr; + private final String castType; + + public ExprCondition(final ValueAccess expr, final String castType) { + this.expr = expr; + this.castType = castType; + } + } + + // ==================== Value access ==================== + + @Getter + public static final class ValueAccess { + private final List<String> segments; + private final boolean parsedRef; + private final boolean logRef; + private final boolean processRegistryRef; + private final boolean stringLiteral; + private final boolean numberLiteral; + private final List<ValueAccessSegment> chain; + private final String functionCallName; + private final List<FunctionArg> functionCallArgs; + private final List<ValueAccess> concatParts; + private final ValueAccess parenInner; + private 
final String parenCast; + + public ValueAccess(final List<String> segments, + final boolean parsedRef, + final boolean logRef, + final List<ValueAccessSegment> chain) { + this(segments, parsedRef, logRef, false, false, false, + chain, null, Collections.emptyList(), + Collections.emptyList(), null, null); + } + + public ValueAccess(final List<String> segments, + final boolean parsedRef, + final boolean logRef, + final boolean processRegistryRef, + final boolean stringLiteral, + final boolean numberLiteral, + final List<ValueAccessSegment> chain, + final String functionCallName, + final List<FunctionArg> functionCallArgs) { + this(segments, parsedRef, logRef, processRegistryRef, + stringLiteral, numberLiteral, chain, + functionCallName, functionCallArgs, + Collections.emptyList(), null, null); + } + + public ValueAccess(final List<String> segments, + final boolean parsedRef, + final boolean logRef, + final boolean processRegistryRef, + final boolean stringLiteral, + final boolean numberLiteral, + final List<ValueAccessSegment> chain, + final String functionCallName, + final List<FunctionArg> functionCallArgs, + final List<ValueAccess> concatParts, + final ValueAccess parenInner, + final String parenCast) { + this.segments = Collections.unmodifiableList(segments); + this.parsedRef = parsedRef; + this.logRef = logRef; + this.processRegistryRef = processRegistryRef; + this.stringLiteral = stringLiteral; + this.numberLiteral = numberLiteral; + this.chain = chain != null + ? Collections.unmodifiableList(chain) : Collections.emptyList(); + this.functionCallName = functionCallName; + this.functionCallArgs = functionCallArgs != null + ? Collections.unmodifiableList(functionCallArgs) : Collections.emptyList(); + this.concatParts = concatParts != null + ? 
Collections.unmodifiableList(concatParts) : Collections.emptyList(); + this.parenInner = parenInner; + this.parenCast = parenCast; + } + + public String toPathString() { + return String.join(".", segments); + } + } + + @Getter + public static final class FunctionArg { + private final ValueAccess value; + private final String castType; + + public FunctionArg(final ValueAccess value, final String castType) { + this.value = value; + this.castType = castType; + } + } + + public interface ValueAccessSegment { + } + + @Getter + public static final class FieldSegment implements ValueAccessSegment { + private final String name; + private final boolean safeNav; + + public FieldSegment(final String name, final boolean safeNav) { + this.name = name; + this.safeNav = safeNav; + } + } + + @Getter + public static final class MethodSegment implements ValueAccessSegment { + private final String name; + private final List<FunctionArg> arguments; + private final boolean safeNav; + + public MethodSegment(final String name, final List<FunctionArg> arguments, + final boolean safeNav) { + this.name = name; + this.arguments = arguments != null + ? 
Collections.unmodifiableList(arguments) : Collections.emptyList(); + this.safeNav = safeNav; + } + } + + @Getter + public static final class IndexSegment implements ValueAccessSegment { + private final int index; + + public IndexSegment(final int index) { + this.index = index; + } + } + + // ==================== Condition values ==================== + + public interface ConditionValue { + } + + @Getter + public static final class StringConditionValue implements ConditionValue { + private final String value; + + public StringConditionValue(final String value) { + this.value = value; + } + } + + @Getter + public static final class NumberConditionValue implements ConditionValue { + private final double value; + + public NumberConditionValue(final double value) { + this.value = value; + } + } + + @Getter + public static final class BoolConditionValue implements ConditionValue { + private final boolean value; + + public BoolConditionValue(final boolean value) { + this.value = value; + } + } + + public static final class NullConditionValue implements ConditionValue { + } + + @Getter + public static final class ValueAccessConditionValue implements ConditionValue { + private final ValueAccess value; + private final String castType; + + public ValueAccessConditionValue(final ValueAccess value, final String castType) { + this.value = value; + this.castType = castType; + } + } + + @Getter + public static final class FunctionCallConditionValue implements ConditionValue { + private final String functionName; + private final List<String> arguments; + + public FunctionCallConditionValue(final String functionName, final List<String> arguments) { + this.functionName = functionName; + this.arguments = Collections.unmodifiableList(arguments); + } + } + + // ==================== Enums ==================== + + public enum CompareOp { + EQ, NEQ, GT, LT, GTE, LTE + } + + public enum LogicalOp { + AND, OR + } + + private LALScriptModel() { + this.statements = Collections.emptyList(); + } 
+} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptParser.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptParser.java new file mode 100644 index 000000000000..55644c7fe5c1 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptParser.java @@ -0,0 +1,947 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import org.antlr.v4.runtime.BaseErrorListener; +import org.antlr.v4.runtime.CharStreams; +import org.antlr.v4.runtime.CommonTokenStream; +import org.antlr.v4.runtime.RecognitionException; +import org.antlr.v4.runtime.Recognizer; +import org.apache.skywalking.lal.rt.grammar.LALLexer; +import org.apache.skywalking.lal.rt.grammar.LALParser; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.AbortStatement; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.InterpolationPart; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.CompareOp; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.ComparisonCondition; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.Condition; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.DropperStatement; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.EnforcerStatement; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.ExprCondition; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.ExtractorBlock; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.ExtractorStatement; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.FieldAssignment; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.FieldSegment; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.FieldType; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.FilterStatement; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.IfBlock; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.JsonParser; +import 
org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.LogicalCondition; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.LogicalOp; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.MetricsBlock; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.NotCondition; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.NullConditionValue; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.NumberConditionValue; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.RateLimitBlock; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.SamplerBlock; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.SamplerContent; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.SinkBlock; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.SinkStatement; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.SlowSqlBlock; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.StringConditionValue; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.TagAssignment; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.TagValue; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.TextParser; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.ValueAccess; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.ValueAccessConditionValue; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.ValueAccessSegment; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.IndexSegment; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALScriptModel.YamlParser; + +/** + * Facade: parses LAL DSL script strings into {@link LALScriptModel}. 
+ * + * <pre> + * LALScriptModel model = LALScriptParser.parse( + * "filter { json {} extractor { service parsed.service as String } sink {} }"); + * </pre> + */ +public final class LALScriptParser { + + private LALScriptParser() { + } + + public static LALScriptModel parse(final String dsl) { + final LALLexer lexer = new LALLexer(CharStreams.fromString(dsl)); + final CommonTokenStream tokens = new CommonTokenStream(lexer); + final LALParser parser = new LALParser(tokens); + + final List<String> errors = new ArrayList<>(); + parser.removeErrorListeners(); + parser.addErrorListener(new BaseErrorListener() { + @Override + public void syntaxError(final Recognizer<?, ?> recognizer, + final Object offendingSymbol, + final int line, + final int charPositionInLine, + final String msg, + final RecognitionException e) { + errors.add(line + ":" + charPositionInLine + " " + msg); + } + }); + + final LALParser.RootContext tree = parser.root(); + if (!errors.isEmpty()) { + throw new IllegalArgumentException( + "LAL script parsing failed: " + String.join("; ", errors) + + " in script: " + truncate(dsl, 200)); + } + + final List<FilterStatement> stmts = visitFilterContent( + tree.filterBlock().filterContent()); + return new LALScriptModel(stmts); + } + + // ==================== Filter content ==================== + + private static List<FilterStatement> visitFilterContent( + final LALParser.FilterContentContext ctx) { + final List<FilterStatement> stmts = new ArrayList<>(); + for (final LALParser.FilterStatementContext fsc : ctx.filterStatement()) { + stmts.add(visitFilterStatement(fsc)); + } + return stmts; + } + + private static FilterStatement visitFilterStatement( + final LALParser.FilterStatementContext ctx) { + if (ctx.parserBlock() != null) { + return visitParserBlock(ctx.parserBlock()); + } + if (ctx.extractorBlock() != null) { + return visitExtractorBlock(ctx.extractorBlock()); + } + if (ctx.sinkBlock() != null) { + return visitSinkBlock(ctx.sinkBlock()); + } + if 
(ctx.abortBlock() != null) { + return new AbortStatement(); + } + // ifStatement + return visitIfStatement(ctx.ifStatement()); + } + + // ==================== Parser blocks ==================== + + private static FilterStatement visitParserBlock(final LALParser.ParserBlockContext ctx) { + if (ctx.textBlock() != null) { + String pattern = null; + boolean abortOnFail = false; + if (ctx.textBlock().textContent() != null) { + for (final LALParser.RegexpStatementContext regCtx : + ctx.textBlock().textContent().regexpStatement()) { + pattern = stripQuotes(regCtx.regexpPattern().getText()); + } + for (final LALParser.AbortOnFailureStatementContext abfCtx : + ctx.textBlock().textContent().abortOnFailureStatement()) { + abortOnFail = "true".equals(abfCtx.boolValue().getText()); + } + } + return new TextParser(pattern, abortOnFail); + } + if (ctx.jsonBlock() != null) { + boolean abortOnFail = false; + if (ctx.jsonBlock().jsonContent() != null + && ctx.jsonBlock().jsonContent().abortOnFailureStatement() != null) { + abortOnFail = "true".equals( + ctx.jsonBlock().jsonContent().abortOnFailureStatement() + .boolValue().getText()); + } + return new JsonParser(abortOnFail); + } + // yaml + boolean abortOnFail = false; + if (ctx.yamlBlock().yamlContent() != null + && ctx.yamlBlock().yamlContent().abortOnFailureStatement() != null) { + abortOnFail = "true".equals( + ctx.yamlBlock().yamlContent().abortOnFailureStatement() + .boolValue().getText()); + } + return new YamlParser(abortOnFail); + } + + // ==================== Extractor block ==================== + + private static ExtractorBlock visitExtractorBlock( + final LALParser.ExtractorBlockContext ctx) { + final List<ExtractorStatement> stmts = new ArrayList<>(); + for (final LALParser.ExtractorStatementContext esc : ctx.extractorContent().extractorStatement()) { + stmts.add(visitExtractorStatement(esc)); + } + return new ExtractorBlock(stmts); + } + + private static ExtractorStatement visitExtractorStatement( + final 
LALParser.ExtractorStatementContext ctx) { + if (ctx.serviceStatement() != null) { + return visitFieldAssignment(FieldType.SERVICE, ctx.serviceStatement().valueAccess(), + ctx.serviceStatement().typeCast()); + } + if (ctx.instanceStatement() != null) { + return visitFieldAssignment(FieldType.INSTANCE, ctx.instanceStatement().valueAccess(), + ctx.instanceStatement().typeCast()); + } + if (ctx.endpointStatement() != null) { + return visitFieldAssignment(FieldType.ENDPOINT, ctx.endpointStatement().valueAccess(), + ctx.endpointStatement().typeCast()); + } + if (ctx.layerStatement() != null) { + return visitFieldAssignment(FieldType.LAYER, ctx.layerStatement().valueAccess(), + ctx.layerStatement().typeCast()); + } + if (ctx.traceIdStatement() != null) { + return visitFieldAssignment(FieldType.TRACE_ID, ctx.traceIdStatement().valueAccess(), + ctx.traceIdStatement().typeCast()); + } + if (ctx.timestampStatement() != null) { + final ValueAccess va = visitValueAccess(ctx.timestampStatement().valueAccess()); + final String cast = ctx.timestampStatement().typeCast() != null + ? 
extractCastType(ctx.timestampStatement().typeCast()) : null; + String format = null; + if (ctx.timestampStatement().STRING() != null) { + format = stripQuotes(ctx.timestampStatement().STRING().getText()); + } + return new FieldAssignment(FieldType.TIMESTAMP, va, cast, format); + } + if (ctx.tagStatement() != null) { + return visitTagStatement(ctx.tagStatement()); + } + if (ctx.metricsBlock() != null) { + return visitMetricsBlock(ctx.metricsBlock()); + } + if (ctx.slowSqlBlock() != null) { + return visitSlowSqlBlock(ctx.slowSqlBlock()); + } + if (ctx.sampledTraceBlock() != null) { + return visitSampledTraceBlock(ctx.sampledTraceBlock()); + } + // ifStatement + return (ExtractorStatement) visitIfStatement(ctx.ifStatement()); + } + + private static FieldAssignment visitFieldAssignment( + final FieldType type, + final LALParser.ValueAccessContext vaCtx, + final LALParser.TypeCastContext tcCtx) { + final ValueAccess va = visitValueAccess(vaCtx); + final String cast = tcCtx != null ? extractCastType(tcCtx) : null; + return new FieldAssignment(type, va, cast, null); + } + + // ==================== Tag statement ==================== + + private static TagAssignment visitTagStatement(final LALParser.TagStatementContext ctx) { + final Map<String, TagValue> tags = new LinkedHashMap<>(); + if (ctx.tagMap() != null) { + for (int i = 0; i < ctx.tagMap().anyIdentifier().size(); i++) { + final String key = ctx.tagMap().anyIdentifier(i).getText(); + final ValueAccess va = visitValueAccess(ctx.tagMap().valueAccess(i)); + final String cast = ctx.tagMap().typeCast(i) != null + ? extractCastType(ctx.tagMap().typeCast(i)) : null; + tags.put(key, new TagValue(va, cast)); + } + } else if (ctx.STRING() != null) { + final String key = stripQuotes(ctx.STRING().getText()); + final ValueAccess va = visitValueAccess(ctx.valueAccess()); + final String cast = ctx.typeCast() != null ? 
extractCastType(ctx.typeCast()) : null; + tags.put(key, new TagValue(va, cast)); + } + return new TagAssignment(tags); + } + + // ==================== Metrics block ==================== + + private static MetricsBlock visitMetricsBlock(final LALParser.MetricsBlockContext ctx) { + String name = null; + ValueAccess timestampValue = null; + String timestampCast = null; + final Map<String, TagValue> labels = new LinkedHashMap<>(); + ValueAccess value = null; + String valueCast = null; + + for (final LALParser.MetricsStatementContext msc : ctx.metricsContent().metricsStatement()) { + if (msc.metricsNameStatement() != null) { + name = resolveValueAsString(msc.metricsNameStatement().valueAccess()); + } + if (msc.metricsTimestampStatement() != null) { + timestampValue = visitValueAccess(msc.metricsTimestampStatement().valueAccess()); + timestampCast = msc.metricsTimestampStatement().typeCast() != null + ? extractCastType(msc.metricsTimestampStatement().typeCast()) : null; + } + if (msc.metricsLabelsStatement() != null) { + for (final LALParser.LabelEntryContext lec : + msc.metricsLabelsStatement().labelMap().labelEntry()) { + final String key = lec.anyIdentifier().getText(); + final ValueAccess va = visitValueAccess(lec.valueAccess()); + final String cast = lec.typeCast() != null + ? extractCastType(lec.typeCast()) : null; + labels.put(key, new TagValue(va, cast)); + } + } + if (msc.metricsValueStatement() != null) { + value = visitValueAccess(msc.metricsValueStatement().valueAccess()); + valueCast = msc.metricsValueStatement().typeCast() != null + ? 
extractCastType(msc.metricsValueStatement().typeCast()) : null; + } + } + + return new MetricsBlock(name, timestampValue, timestampCast, labels, value, valueCast); + } + + // ==================== Slow SQL block ==================== + + private static SlowSqlBlock visitSlowSqlBlock(final LALParser.SlowSqlBlockContext ctx) { + ValueAccess id = null; + String idCast = null; + ValueAccess statement = null; + String statementCast = null; + ValueAccess latency = null; + String latencyCast = null; + + for (final LALParser.SlowSqlStatementContext ssc : + ctx.slowSqlContent().slowSqlStatement()) { + if (ssc.slowSqlIdStatement() != null) { + id = visitValueAccess(ssc.slowSqlIdStatement().valueAccess()); + idCast = ssc.slowSqlIdStatement().typeCast() != null + ? extractCastType(ssc.slowSqlIdStatement().typeCast()) : null; + } + if (ssc.slowSqlStatementStatement() != null) { + statement = visitValueAccess(ssc.slowSqlStatementStatement().valueAccess()); + statementCast = ssc.slowSqlStatementStatement().typeCast() != null + ? extractCastType(ssc.slowSqlStatementStatement().typeCast()) : null; + } + if (ssc.slowSqlLatencyStatement() != null) { + latency = visitValueAccess(ssc.slowSqlLatencyStatement().valueAccess()); + latencyCast = ssc.slowSqlLatencyStatement().typeCast() != null + ? 
extractCastType(ssc.slowSqlLatencyStatement().typeCast()) : null; + } + } + + return new SlowSqlBlock(id, idCast, statement, statementCast, latency, latencyCast); + } + + // ==================== Sampled trace block ==================== + + private static LALScriptModel.SampledTraceBlock visitSampledTraceBlock( + final LALParser.SampledTraceBlockContext ctx) { + final List<LALScriptModel.SampledTraceStatement> stmts = new ArrayList<>(); + for (final LALParser.SampledTraceStatementContext stc : + ctx.sampledTraceContent().sampledTraceStatement()) { + if (stc.ifStatement() != null) { + stmts.add((LALScriptModel.SampledTraceStatement) visitIfStatement( + stc.ifStatement())); + } else { + stmts.add(visitSampledTraceField(stc)); + } + } + return new LALScriptModel.SampledTraceBlock(stmts); + } + + private static LALScriptModel.SampledTraceField visitSampledTraceField( + final LALParser.SampledTraceStatementContext ctx) { + if (ctx.sampledTraceLatencyStatement() != null) { + return makeSampledField(LALScriptModel.SampledTraceFieldType.LATENCY, + ctx.sampledTraceLatencyStatement().valueAccess(), + ctx.sampledTraceLatencyStatement().typeCast()); + } + if (ctx.sampledTraceUriStatement() != null) { + return makeSampledField(LALScriptModel.SampledTraceFieldType.URI, + ctx.sampledTraceUriStatement().valueAccess(), + ctx.sampledTraceUriStatement().typeCast()); + } + if (ctx.sampledTraceReasonStatement() != null) { + return makeSampledField(LALScriptModel.SampledTraceFieldType.REASON, + ctx.sampledTraceReasonStatement().valueAccess(), + ctx.sampledTraceReasonStatement().typeCast()); + } + if (ctx.sampledTraceProcessIdStatement() != null) { + return makeSampledField(LALScriptModel.SampledTraceFieldType.PROCESS_ID, + ctx.sampledTraceProcessIdStatement().valueAccess(), + ctx.sampledTraceProcessIdStatement().typeCast()); + } + if (ctx.sampledTraceDestProcessIdStatement() != null) { + return makeSampledField(LALScriptModel.SampledTraceFieldType.DEST_PROCESS_ID, + 
ctx.sampledTraceDestProcessIdStatement().valueAccess(), + ctx.sampledTraceDestProcessIdStatement().typeCast()); + } + if (ctx.sampledTraceDetectPointStatement() != null) { + return makeSampledField(LALScriptModel.SampledTraceFieldType.DETECT_POINT, + ctx.sampledTraceDetectPointStatement().valueAccess(), + ctx.sampledTraceDetectPointStatement().typeCast()); + } + if (ctx.sampledTraceComponentIdStatement() != null) { + return makeSampledField(LALScriptModel.SampledTraceFieldType.COMPONENT_ID, + ctx.sampledTraceComponentIdStatement().valueAccess(), + ctx.sampledTraceComponentIdStatement().typeCast()); + } + // reportService + return makeSampledField(LALScriptModel.SampledTraceFieldType.REPORT_SERVICE, + ctx.reportServiceStatement().valueAccess(), + ctx.reportServiceStatement().typeCast()); + } + + private static LALScriptModel.SampledTraceField makeSampledField( + final LALScriptModel.SampledTraceFieldType type, + final LALParser.ValueAccessContext vaCtx, + final LALParser.TypeCastContext tcCtx) { + return new LALScriptModel.SampledTraceField( + type, visitValueAccess(vaCtx), + tcCtx != null ? 
extractCastType(tcCtx) : null); + } + + // ==================== Sink block ==================== + + private static SinkBlock visitSinkBlock(final LALParser.SinkBlockContext ctx) { + final List<SinkStatement> stmts = new ArrayList<>(); + for (final LALParser.SinkStatementContext ssc : ctx.sinkContent().sinkStatement()) { + if (ssc.samplerBlock() != null) { + stmts.add(visitSamplerBlock(ssc.samplerBlock())); + } else if (ssc.enforcerStatement() != null) { + stmts.add(new EnforcerStatement()); + } else if (ssc.dropperStatement() != null) { + stmts.add(new DropperStatement()); + } else { + // IfBlock implements SinkStatement, so no cast is needed here + stmts.add(visitIfStatement(ssc.ifStatement())); + } + } + return new SinkBlock(stmts); + } + + private static SamplerBlock visitSamplerBlock(final LALParser.SamplerBlockContext ctx) { + final List<SamplerContent> contents = new ArrayList<>(); + for (final LALParser.RateLimitBlockContext rlc : ctx.samplerContent().rateLimitBlock()) { + final String id = stripQuotes(rlc.rateLimitId().getText()); + final long rpm = Long.parseLong(rlc.rateLimitContent().NUMBER().getText()); + final List<InterpolationPart> idParts = parseInterpolation(id); + contents.add(new RateLimitBlock(id, idParts, rpm)); + } + for (final LALParser.IfStatementContext isc : ctx.samplerContent().ifStatement()) { + contents.add(visitIfStatement(isc)); + } + return new SamplerBlock(contents); + } + + // ==================== If statement ==================== + + private static IfBlock visitIfStatement(final LALParser.IfStatementContext ctx) { + final int condCount = ctx.condition().size(); + final int bodyCount = ctx.ifBody().size(); + // A trailing else block (one without a condition) is present when + // there are more bodies than conditions. + final boolean hasElse = bodyCount > condCount; + + // Build the chain from the last else-if backwards.
+ // For: if(A){b0} else if(B){b1} else if(C){b2} else{b3} + // condCount=3, bodyCount=4, hasElse=true + // Result: IfBlock(A, b0, IfBlock(B, b1, IfBlock(C, b2, b3))) + + // Start from the innermost else-if (last condition) + List<FilterStatement> trailingElse = hasElse + ? visitIfBody(ctx.ifBody(bodyCount - 1)) : null; + + // Build from the last condition backwards to index 1 + IfBlock nested = null; + for (int i = condCount - 1; i >= 1; i--) { + final Condition cond = visitCondition(ctx.condition(i)); + final List<FilterStatement> body = visitIfBody(ctx.ifBody(i)); + final List<FilterStatement> elsePart; + if (nested != null) { + elsePart = List.of(nested); + } else { + elsePart = trailingElse; + } + nested = new IfBlock(cond, body, elsePart); + } + + // Build the outermost if block (index 0) + final Condition topCond = visitCondition(ctx.condition(0)); + final List<FilterStatement> topBody = visitIfBody(ctx.ifBody(0)); + final List<FilterStatement> topElse; + if (nested != null) { + topElse = List.of(nested); + } else { + topElse = trailingElse; + } + + return new IfBlock(topCond, topBody, topElse); + } + + private static List<FilterStatement> visitIfBody(final LALParser.IfBodyContext ctx) { + final List<FilterStatement> stmts = new ArrayList<>(); + for (final LALParser.FilterStatementContext fsc : ctx.filterStatement()) { + stmts.add(visitFilterStatement(fsc)); + } + for (final LALParser.ExtractorStatementContext esc : ctx.extractorStatement()) { + stmts.add((FilterStatement) visitExtractorStatement(esc)); + } + for (final LALParser.SinkStatementContext ssc : ctx.sinkStatement()) { + if (ssc.samplerBlock() != null) { + stmts.add((FilterStatement) visitSamplerBlock(ssc.samplerBlock())); + } else if (ssc.enforcerStatement() != null) { + stmts.add((FilterStatement) new EnforcerStatement()); + } else if (ssc.dropperStatement() != null) { + stmts.add((FilterStatement) new DropperStatement()); + } + } + for (final LALParser.SampledTraceStatementContext stc : + 
ctx.sampledTraceStatement()) { + if (stc.ifStatement() != null) { + stmts.add((FilterStatement) visitIfStatement(stc.ifStatement())); + } else { + stmts.add((FilterStatement) visitSampledTraceField(stc)); + } + } + // Handle samplerContent alternative (rateLimit blocks inside if within sampler) + final LALParser.SamplerContentContext sc = ctx.samplerContent(); + if (sc != null) { + final List<SamplerContent> samplerItems = new ArrayList<>(); + for (final LALParser.RateLimitBlockContext rlc : sc.rateLimitBlock()) { + final String id = stripQuotes(rlc.rateLimitId().getText()); + final long rpm = Long.parseLong( + rlc.rateLimitContent().NUMBER().getText()); + final List<InterpolationPart> idParts = parseInterpolation(id); + samplerItems.add(new RateLimitBlock(id, idParts, rpm)); + } + for (final LALParser.IfStatementContext isc : sc.ifStatement()) { + samplerItems.add((SamplerContent) visitIfStatement(isc)); + } + if (!samplerItems.isEmpty()) { + stmts.add((FilterStatement) new SamplerBlock(samplerItems)); + } + } + return stmts; + } + + // ==================== Conditions ==================== + + private static Condition visitCondition(final LALParser.ConditionContext ctx) { + if (ctx instanceof LALParser.CondAndContext) { + final LALParser.CondAndContext and = (LALParser.CondAndContext) ctx; + return new LogicalCondition( + visitCondition(and.condition(0)), + LogicalOp.AND, + visitCondition(and.condition(1))); + } + if (ctx instanceof LALParser.CondOrContext) { + final LALParser.CondOrContext or = (LALParser.CondOrContext) ctx; + return new LogicalCondition( + visitCondition(or.condition(0)), + LogicalOp.OR, + visitCondition(or.condition(1))); + } + if (ctx instanceof LALParser.CondNotContext) { + return new NotCondition( + visitCondition(((LALParser.CondNotContext) ctx).condition())); + } + if (ctx instanceof LALParser.CondEqContext) { + final LALParser.CondEqContext eq = (LALParser.CondEqContext) ctx; + return makeComparison(eq.conditionExpr(0), CompareOp.EQ, 
eq.conditionExpr(1)); + } + if (ctx instanceof LALParser.CondNeqContext) { + final LALParser.CondNeqContext neq = (LALParser.CondNeqContext) ctx; + return makeComparison(neq.conditionExpr(0), CompareOp.NEQ, neq.conditionExpr(1)); + } + if (ctx instanceof LALParser.CondGtContext) { + final LALParser.CondGtContext gt = (LALParser.CondGtContext) ctx; + return makeComparison(gt.conditionExpr(0), CompareOp.GT, gt.conditionExpr(1)); + } + if (ctx instanceof LALParser.CondLtContext) { + final LALParser.CondLtContext lt = (LALParser.CondLtContext) ctx; + return makeComparison(lt.conditionExpr(0), CompareOp.LT, lt.conditionExpr(1)); + } + if (ctx instanceof LALParser.CondGteContext) { + final LALParser.CondGteContext gte = (LALParser.CondGteContext) ctx; + return makeComparison(gte.conditionExpr(0), CompareOp.GTE, gte.conditionExpr(1)); + } + if (ctx instanceof LALParser.CondLteContext) { + final LALParser.CondLteContext lte = (LALParser.CondLteContext) ctx; + return makeComparison(lte.conditionExpr(0), CompareOp.LTE, lte.conditionExpr(1)); + } + // condSingle + final LALParser.CondSingleContext single = (LALParser.CondSingleContext) ctx; + return visitConditionExprAsCondition(single.conditionExpr()); + } + + private static Condition makeComparison( + final LALParser.ConditionExprContext leftCtx, + final CompareOp op, + final LALParser.ConditionExprContext rightCtx) { + if (leftCtx instanceof LALParser.CondValueAccessContext) { + final LALParser.CondValueAccessContext lva = + (LALParser.CondValueAccessContext) leftCtx; + final ValueAccess left = visitValueAccess(lva.valueAccess()); + final String leftCast = lva.typeCast() != null + ? 
extractCastType(lva.typeCast()) : null; + return new ComparisonCondition(left, leftCast, op, + visitConditionExprAsValue(rightCtx)); + } + if (leftCtx instanceof LALParser.CondFunctionCallContext) { + final LALParser.FunctionInvocationContext fi = + ((LALParser.CondFunctionCallContext) leftCtx).functionInvocation(); + final String funcName = fi.functionName().getText(); + final List<LALScriptModel.FunctionArg> funcArgs = visitFunctionArgs(fi); + final ValueAccess left = new ValueAccess( + List.of(fi.getText()), false, false, false, false, false, + List.of(), funcName, funcArgs); + return new ComparisonCondition(left, null, op, + visitConditionExprAsValue(rightCtx)); + } + // For other forms, wrap as expression condition + return new ExprCondition( + new ValueAccess(List.of(leftCtx.getText()), false, false, List.of()), null); + } + + private static LALScriptModel.ConditionValue visitConditionExprAsValue( + final LALParser.ConditionExprContext ctx) { + if (ctx instanceof LALParser.CondStringContext) { + return new StringConditionValue( + stripQuotes(((LALParser.CondStringContext) ctx).STRING().getText())); + } + if (ctx instanceof LALParser.CondNumberContext) { + return new NumberConditionValue( + Double.parseDouble(((LALParser.CondNumberContext) ctx).NUMBER().getText())); + } + if (ctx instanceof LALParser.CondNullContext) { + return new NullConditionValue(); + } + if (ctx instanceof LALParser.CondValueAccessContext) { + final LALParser.CondValueAccessContext va = + (LALParser.CondValueAccessContext) ctx; + // ANTLR grammar routes NUMBER/NULL/STRING/bool through condValueAccess + // (since valueAccessPrimary includes them and condValueAccess has priority). + // Detect standalone literals and create proper ConditionValue types. 
+ final LALParser.ValueAccessContext vaCtx = va.valueAccess(); + if (va.typeCast() == null && vaCtx.valueAccessTerm().size() == 1 + && vaCtx.valueAccessTerm(0).valueAccessSegment().isEmpty()) { + final LALParser.ValueAccessPrimaryContext primary = + vaCtx.valueAccessTerm(0).valueAccessPrimary(); + if (primary instanceof LALParser.ValueNumberContext) { + return new NumberConditionValue(Double.parseDouble( + ((LALParser.ValueNumberContext) primary).NUMBER().getText())); + } + if (primary instanceof LALParser.ValueNullContext) { + return new NullConditionValue(); + } + } + final String cast = va.typeCast() != null ? extractCastType(va.typeCast()) : null; + return new ValueAccessConditionValue(visitValueAccess(vaCtx), cast); + } + if (ctx instanceof LALParser.CondParenGroupContext) { + // (condition) used as a value — e.g. in: if ((x == y)) { ... } + // Wrap as a ValueAccess containing the paren expression text + return new ValueAccessConditionValue( + new ValueAccess(List.of(ctx.getText()), false, false, List.of()), null); + } + // condBool, condFunctionCall + return new StringConditionValue(ctx.getText()); + } + + private static Condition visitConditionExprAsCondition( + final LALParser.ConditionExprContext ctx) { + if (ctx instanceof LALParser.CondValueAccessContext) { + final LALParser.CondValueAccessContext va = + (LALParser.CondValueAccessContext) ctx; + final String cast = va.typeCast() != null ? 
extractCastType(va.typeCast()) : null; + return new ExprCondition(visitValueAccess(va.valueAccess()), cast); + } + if (ctx instanceof LALParser.CondFunctionCallContext) { + final LALParser.FunctionInvocationContext fi = + ((LALParser.CondFunctionCallContext) ctx).functionInvocation(); + final String funcName = fi.functionName().getText(); + final List<LALScriptModel.FunctionArg> funcArgs = visitFunctionArgs(fi); + final ValueAccess va = new ValueAccess( + List.of(fi.getText()), false, false, false, false, false, + List.of(), funcName, funcArgs); + return new ExprCondition(va, null); + } + if (ctx instanceof LALParser.CondParenGroupContext) { + return visitCondition( + ((LALParser.CondParenGroupContext) ctx).condition()); + } + return new ExprCondition( + new ValueAccess(List.of(ctx.getText()), false, false, List.of()), null); + } + + // ==================== Value access ==================== + + private static ValueAccess visitValueAccess(final LALParser.ValueAccessContext ctx) { + final List<LALParser.ValueAccessTermContext> terms = ctx.valueAccessTerm(); + if (terms.size() == 1) { + return visitValueAccessTerm(terms.get(0)); + } + // Multiple terms joined by PLUS — string concatenation + final List<ValueAccess> parts = new ArrayList<>(); + for (final LALParser.ValueAccessTermContext term : terms) { + parts.add(visitValueAccessTerm(term)); + } + return new ValueAccess( + List.of("concat"), false, false, false, false, false, + List.of(), null, null, + parts, null, null); + } + + private static ValueAccess visitValueAccessTerm( + final LALParser.ValueAccessTermContext ctx) { + final List<String> segments = new ArrayList<>(); + boolean parsedRef = false; + boolean logRef = false; + boolean processRegistryRef = false; + boolean stringLiteral = false; + boolean numberLiteral = false; + String functionCallName = null; + List<LALScriptModel.FunctionArg> functionCallArgs = null; + ValueAccess parenInner = null; + String parenCast = null; + + final 
LALParser.ValueAccessPrimaryContext primary = ctx.valueAccessPrimary(); + if (primary instanceof LALParser.ValueParsedContext) { + parsedRef = true; + segments.add("parsed"); + } else if (primary instanceof LALParser.ValueLogContext) { + logRef = true; + segments.add("log"); + } else if (primary instanceof LALParser.ValueProcessRegistryContext) { + processRegistryRef = true; + segments.add("ProcessRegistry"); + } else if (primary instanceof LALParser.ValueIdentifierContext) { + segments.add(((LALParser.ValueIdentifierContext) primary).IDENTIFIER().getText()); + } else if (primary instanceof LALParser.ValueStringContext) { + stringLiteral = true; + segments.add(stripQuotes( + ((LALParser.ValueStringContext) primary).STRING().getText())); + } else if (primary instanceof LALParser.ValueNumberContext) { + numberLiteral = true; + segments.add(((LALParser.ValueNumberContext) primary).NUMBER().getText()); + } else if (primary instanceof LALParser.ValueFunctionCallContext) { + final LALParser.FunctionInvocationContext fi = + ((LALParser.ValueFunctionCallContext) primary).functionInvocation(); + functionCallName = fi.functionName().getText(); + functionCallArgs = visitFunctionArgs(fi); + segments.add(fi.getText()); + } else if (primary instanceof LALParser.ValueParenContext) { + final LALParser.ValueParenContext parenCtx = + (LALParser.ValueParenContext) primary; + parenInner = visitValueAccess(parenCtx.valueAccess()); + parenCast = parenCtx.typeCast() != null + ? 
extractCastType(parenCtx.typeCast()) : null; + segments.add("paren"); + } else { + segments.add(primary.getText()); + } + + final List<ValueAccessSegment> chain = new ArrayList<>(); + for (final LALParser.ValueAccessSegmentContext seg : ctx.valueAccessSegment()) { + if (seg instanceof LALParser.SegmentFieldContext) { + final String name = + ((LALParser.SegmentFieldContext) seg).anyIdentifier().getText(); + segments.add(name); + chain.add(new FieldSegment(name, false)); + } else if (seg instanceof LALParser.SegmentSafeFieldContext) { + final String name = + ((LALParser.SegmentSafeFieldContext) seg).anyIdentifier().getText(); + segments.add(name); + chain.add(new FieldSegment(name, true)); + } else if (seg instanceof LALParser.SegmentMethodContext) { + final LALParser.FunctionInvocationContext fi = + ((LALParser.SegmentMethodContext) seg).functionInvocation(); + segments.add(fi.functionName().getText() + "()"); + chain.add(new LALScriptModel.MethodSegment( + fi.functionName().getText(), visitFunctionArgs(fi), false)); + } else if (seg instanceof LALParser.SegmentSafeMethodContext) { + final LALParser.FunctionInvocationContext fi = + ((LALParser.SegmentSafeMethodContext) seg).functionInvocation(); + segments.add(fi.functionName().getText() + "()"); + chain.add(new LALScriptModel.MethodSegment( + fi.functionName().getText(), visitFunctionArgs(fi), true)); + } else if (seg instanceof LALParser.SegmentIndexContext) { + final int index = Integer.parseInt( + ((LALParser.SegmentIndexContext) seg).NUMBER().getText()); + segments.add("[" + index + "]"); + chain.add(new IndexSegment(index)); + } + } + + return new ValueAccess(segments, parsedRef, logRef, + processRegistryRef, stringLiteral, numberLiteral, + chain, functionCallName, functionCallArgs, + List.of(), parenInner, parenCast); + } + + private static List<LALScriptModel.FunctionArg> visitFunctionArgs( + final LALParser.FunctionInvocationContext fi) { + if (fi.functionArgList() == null) { + return List.of(); + } + final 
List<LALScriptModel.FunctionArg> args = new ArrayList<>(); + for (final LALParser.FunctionArgContext fac : fi.functionArgList().functionArg()) { + if (fac.valueAccess() != null) { + final ValueAccess va = visitValueAccess(fac.valueAccess()); + final String cast = fac.typeCast() != null + ? extractCastType(fac.typeCast()) : null; + args.add(new LALScriptModel.FunctionArg(va, cast)); + } else if (fac.STRING() != null) { + final String val = stripQuotes(fac.STRING().getText()); + final ValueAccess va = new ValueAccess( + List.of(val), false, false, true, true, false, + List.of(), null, null); + args.add(new LALScriptModel.FunctionArg(va, null)); + } else if (fac.NUMBER() != null) { + final ValueAccess va = new ValueAccess( + List.of(fac.NUMBER().getText()), false, false, + false, false, true, List.of(), null, null); + args.add(new LALScriptModel.FunctionArg(va, null)); + } else if (fac.boolValue() != null) { + final ValueAccess va = new ValueAccess( + List.of(fac.boolValue().getText()), false, false, + false, false, false, List.of(), null, null); + args.add(new LALScriptModel.FunctionArg(va, null)); + } else { + // NULL + final ValueAccess va = new ValueAccess( + List.of("null"), false, false, List.of()); + args.add(new LALScriptModel.FunctionArg(va, null)); + } + } + return args; + } + + private static String resolveValueAsString(final LALParser.ValueAccessContext ctx) { + final LALParser.ValueAccessPrimaryContext primary = + ctx.valueAccessTerm(0).valueAccessPrimary(); + if (primary instanceof LALParser.ValueStringContext) { + return stripQuotes(((LALParser.ValueStringContext) primary).STRING().getText()); + } + return primary.getText(); + } + + // ==================== Utilities ==================== + + private static String extractCastType(final LALParser.TypeCastContext ctx) { + if (ctx.STRING_TYPE() != null) { + return "String"; + } + if (ctx.LONG_TYPE() != null) { + return "Long"; + } + if (ctx.INTEGER_TYPE() != null) { + return "Integer"; + } + if 
(ctx.BOOLEAN_TYPE() != null) { + return "Boolean"; + } + return null; + } + + static String stripQuotes(final String s) { + if (s == null || s.length() < 2) { + return s; + } + final char first = s.charAt(0); + if ((first == '\'' || first == '"') && s.charAt(s.length() - 1) == first) { + return s.substring(1, s.length() - 1); + } + // Handle slashy strings: $/ ... /$ + if (s.startsWith("$/") && s.endsWith("/$")) { + return s.substring(2, s.length() - 2); + } + return s; + } + + private static String truncate(final String s, final int maxLen) { + if (s.length() <= maxLen) { + return s; + } + return s.substring(0, maxLen) + "..."; + } + + // ==================== GString interpolation ==================== + + /** + * Parses Groovy-style GString interpolation in a string. + * E.g. {@code "${log.service}:${parsed?.field}"} produces + * [expr(log.service), literal(":"), expr(parsed?.field)]. + * + * @return list of parts, or {@code null} if no interpolation found + */ + static List<InterpolationPart> parseInterpolation(final String s) { + if (!s.contains("${")) { + return null; + } + final List<InterpolationPart> parts = new ArrayList<>(); + int pos = 0; + while (pos < s.length()) { + final int start = s.indexOf("${", pos); + if (start < 0) { + // Remaining literal text + if (pos < s.length()) { + parts.add(InterpolationPart.ofLiteral(s.substring(pos))); + } + break; + } + // Literal text before ${ + if (start > pos) { + parts.add(InterpolationPart.ofLiteral(s.substring(pos, start))); + } + // Find matching closing brace, respecting nesting + int depth = 1; + int i = start + 2; + while (i < s.length() && depth > 0) { + final char c = s.charAt(i); + if (c == '{') { + depth++; + } else if (c == '}') { + depth--; + } + i++; + } + if (depth != 0) { + throw new IllegalArgumentException( + "Unclosed interpolation in: " + s); + } + final String expr = s.substring(start + 2, i - 1); + // Parse the expression as a valueAccess through ANTLR + 
parts.add(InterpolationPart.ofExpression(parseValueAccessExpr(expr))); + pos = i; + } + return parts; + } + + /** + * Parses a standalone valueAccess expression string by wrapping it in + * a minimal LAL script and extracting the parsed ValueAccess. + */ + private static ValueAccess parseValueAccessExpr(final String expr) { + // Wrap in: filter { if (EXPR) { sink {} } } + // The expression becomes a condition, parsed as ExprCondition + // whose ValueAccess is what we want. + final String wrapper = "filter { if (" + expr + ") { sink {} } }"; + final LALScriptModel model = parse(wrapper); + final IfBlock ifBlock = (IfBlock) model.getStatements().get(0); + final LALScriptModel.Condition cond = ifBlock.getCondition(); + if (cond instanceof ExprCondition) { + return ((ExprCondition) cond).getExpr(); + } + if (cond instanceof ComparisonCondition) { + return ((ComparisonCondition) cond).getLeft(); + } + throw new IllegalArgumentException( + "Cannot parse interpolation expression: " + expr); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/rt/LalExpressionPackageHolder.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/rt/LalExpressionPackageHolder.java new file mode 100644 index 000000000000..ef74d9cab178 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/rt/LalExpressionPackageHolder.java @@ -0,0 +1,26 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler.rt; + +/** + * Empty marker class used as the class loading anchor for Javassist + * {@code CtClass.toClass(Class)} on JDK 16+. + * Generated LAL expression classes are loaded in this package. + */ +public class LalExpressionPackageHolder { +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/rt/LalRuntimeHelper.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/rt/LalRuntimeHelper.java new file mode 100644 index 000000000000..f9e22118e3af --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/rt/LalRuntimeHelper.java @@ -0,0 +1,316 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler.rt; + +import java.util.List; +import java.util.Map; +import org.apache.skywalking.apm.network.common.v3.KeyStringValuePair; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; + +/** + * Runtime helper for compiled LAL expressions. + * + * <p>Created once per {@code execute()} call, holds the {@link ExecutionContext} + * and provides data-source-specific access and type conversion methods. + * + * <h2>Data Source Methods</h2> + * + * <p><b>1. JSON/YAML map data</b> — used when the LAL script has a {@code json {}} or + * {@code yaml {}} parser. The parsed body is stored in {@code ctx.parsed().getMap()}. + * <pre> + * // LAL: parsed.service as String + * // Generated: h.toStr(h.mapVal("service")) + * + * // LAL: parsed.client_process.address as String (nested map) + * // Generated: h.toStr(h.mapVal("client_process", "address")) + * </pre> + * + * <p><b>2. Text regexp matcher data</b> — used when the LAL script has a + * {@code text { regexp '...' }} parser. The parsed body is stored as a + * {@link java.util.regex.Matcher} in {@code ctx.parsed().getMatcher()}. + * <pre> + * // LAL: parsed.level as String + * // Generated: h.toStr(h.group("level")) + * </pre> + * + * <p><b>3. Tag data</b> — accesses log tags (protobuf {@code KeyStringValuePair} list). + * Available regardless of parser type. + * <pre> + * // LAL: tag("LOG_KIND") + * // Generated: h.tagValue("LOG_KIND") + * </pre> + * + * <p><b>4. Log proto data</b> — direct access to {@code LogData.Builder} fields. + * Not accessed through this helper; the compiler generates direct getter chains + * like {@code h.ctx().log().getService()}. + * + * <p><b>5. ExtraLog proto data</b> — direct access to typed protobuf extraLog. 
+ * Not accessed through this helper; the compiler generates typed cast + getter + * chains like {@code ((HTTPAccessLogEntry) h.ctx().extraLog()).getResponse()}. + * + * <h2>Type Conversion Methods</h2> + * + * <p>Convert parsed values (typically {@code Object}) to typed values for + * spec method calls. + * <pre> + * // LAL: parsed.service as String → h.toStr(h.mapVal("service")) + * // LAL: parsed.latency as Long → Long.valueOf(h.toLong(h.mapVal("latency"))) + * // LAL: parsed.ssl as Boolean → Boolean.valueOf(h.toBool(h.mapVal("ssl"))) + * // LAL: parsed.code as Integer → Integer.valueOf(h.toInt(h.mapVal("code"))) + * </pre> + * + * <h2>Safe Navigation Methods</h2> + * <pre> + * // LAL: parsed?.x?.toString() → h.toString(h.mapVal("x")) + * // LAL: parsed?.x?.trim() → h.trim(h.mapVal("x")) + * </pre> + * + * <h2>Boolean Evaluation Methods</h2> + * <pre> + * // LAL if-condition: if (parsed.flag) → h.isTrue(h.mapVal("flag")) + * // LAL if-condition: if (parsed.name) → h.isNotEmpty(h.mapVal("name")) + * </pre> + */ +public final class LalRuntimeHelper { + + private final ExecutionContext ctx; + + public LalRuntimeHelper(final ExecutionContext ctx) { + this.ctx = ctx; + } + + public ExecutionContext ctx() { + return ctx; + } + + // ==================== Data source: JSON/YAML map ==================== + // Used when LAL has json{} or yaml{} parser. + // Returns raw Object from the parsed Map<String,Object>. + + /** + * Single-key map access. + * <pre> + * // LAL: parsed.service → h.mapVal("service") + * </pre> + */ + public Object mapVal(final String key) { + return ctx.parsed().getMap().get(key); + } + + /** + * Two-level nested map access. + * <pre> + * // LAL: parsed.a.b → h.mapVal("a", "b") + * </pre> + */ + public Object mapVal(final String k1, final String k2) { + return mapGet(mapVal(k1), k2); + } + + /** + * Three-level nested map access. 
+ * <pre> + * // LAL: parsed.a.b.c → h.mapVal("a", "b", "c") + * </pre> + */ + public Object mapVal(final String k1, final String k2, final String k3) { + return mapGet(mapVal(k1, k2), k3); + } + + private static Object mapGet(final Object obj, final String key) { + if (obj == null) { + return null; + } + if (obj instanceof Map) { + return ((Map) obj).get(key); + } + return null; + } + + // ==================== Data source: Text regexp matcher ==================== + // Used when LAL has text { regexp '...' } parser. + // Returns String from named matcher group. + + /** + * Named matcher group access. + * <pre> + * // LAL: parsed.level → h.group("level") + * </pre> + */ + public String group(final String name) { + return ctx.parsed().getMatcher().group(name); + } + + // ==================== Data source: Log tags ==================== + // Available for all LAL scripts. + + /** + * Log tag lookup by key name. + * <pre> + * // LAL: tag("LOG_KIND") → h.tagValue("LOG_KIND") + * </pre> + */ + public String tagValue(final String key) { + final List dl = ctx.log().getTags().getDataList(); + for (int i = 0; i < dl.size(); i++) { + final KeyStringValuePair kv = (KeyStringValuePair) dl.get(i); + if (key.equals(kv.getKey())) { + return kv.getValue(); + } + } + return ""; + } + + // ==================== Type conversion ==================== + + /** + * {@code as String} cast — null-safe, returns null for null input. + * <pre> + * // LAL: parsed.service as String → h.toStr(h.mapVal("service")) + * </pre> + */ + public String toStr(final Object obj) { + return obj == null ? null : String.valueOf(obj); + } + + /** + * {@code as Long} cast — Number or String to long. 
+ * <pre> + * // LAL: parsed.latency as Long → Long.valueOf(h.toLong(h.mapVal("latency"))) + * </pre> + */ + public long toLong(final Object obj) { + if (obj instanceof Number) { + return ((Number) obj).longValue(); + } + if (obj instanceof String) { + return Long.parseLong((String) obj); + } + return 0L; + } + + /** + * {@code as Integer} cast — Number or String to int. + * <pre> + * // LAL: parsed.code as Integer → Integer.valueOf(h.toInt(h.mapVal("code"))) + * </pre> + */ + public int toInt(final Object obj) { + if (obj instanceof Number) { + return ((Number) obj).intValue(); + } + if (obj instanceof String) { + return Integer.parseInt((String) obj); + } + return 0; + } + + /** + * {@code as Boolean} cast — Boolean, String, or non-null to boolean. + * <pre> + * // LAL: parsed.ssl as Boolean → Boolean.valueOf(h.toBool(h.mapVal("ssl"))) + * </pre> + */ + public boolean toBool(final Object obj) { + if (obj instanceof Boolean) { + return ((Boolean) obj).booleanValue(); + } + if (obj instanceof String) { + return Boolean.parseBoolean((String) obj); + } + return obj != null; + } + + // ==================== Boolean evaluation ==================== + + /** + * Boolean truthiness for if-conditions: null is false, Boolean delegates, + * String parses, anything else is true. + * <pre> + * // LAL: if (parsed.flag) → h.isTrue(h.mapVal("flag")) + * </pre> + */ + public boolean isTrue(final Object obj) { + if (obj == null) { + return false; + } + if (obj instanceof Boolean) { + return ((Boolean) obj).booleanValue(); + } + if (obj instanceof String) { + return Boolean.parseBoolean((String) obj); + } + return true; + } + + /** + * String non-emptiness for if-conditions: null is false, otherwise checks + * that toString() is non-empty. 
+ * <pre> + * // LAL: if (parsed.name) → h.isNotEmpty(h.mapVal("name")) + * </pre> + */ + public boolean isNotEmpty(final Object obj) { + if (obj == null) { + return false; + } + if (obj instanceof String) { + return !((String) obj).isEmpty(); + } + return !obj.toString().isEmpty(); + } + + /** + * Primitive boolean overload — needed when chained methods (e.g. + * {@code .endsWith()}) return primitive {@code boolean} which Javassist + * cannot auto-box to match {@code isNotEmpty(Object)}. + */ + public boolean isNotEmpty(final boolean value) { + return value; + } + + /** + * Primitive boolean overload for {@link #isTrue(Object)}. + */ + public boolean isTrue(final boolean value) { + return value; + } + + // ==================== Safe navigation ==================== + + /** + * Null-safe {@code ?.toString()}: returns null when input is null. + * <pre> + * // LAL: parsed?.x?.toString() → h.toString(h.mapVal("x")) + * </pre> + */ + public String toString(final Object obj) { + return obj == null ? null : obj.toString(); + } + + /** + * Null-safe {@code ?.trim()}: returns null when input is null. + * <pre> + * // LAL: parsed?.x?.trim() → h.trim(h.mapVal("x")) + * </pre> + */ + public String trim(final Object obj) { + return obj == null ? null : obj.toString().trim(); + } + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/DSL.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/DSL.java new file mode 100644 index 000000000000..24e6936dfac4 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/DSL.java @@ -0,0 +1,90 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl; + +import lombok.AccessLevel; +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALClassGenerator; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.module.ModuleStartException; + +/** + * DSL compiles a LAL (Log Analysis Language) expression string into a + * {@link LalExpression} object and wraps it with runtime state management. + * + * <p>One DSL instance is created per LAL rule entry defined in a {@code .yaml} + * config file under {@code lal/}. Instances are compiled once at startup and + * reused for every incoming log. This class is immutable and thread-safe — + * per-log state is passed as a parameter to {@link #evaluate(ExecutionContext)}. 
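+ *
+ * <p>Typical lifecycle, as an illustrative sketch (the rule name and the
+ * surrounding variables are assumptions for the example, not taken from a
+ * real config):
+ * <pre>
+ * // compiled once at module startup, per LAL rule entry
+ * DSL dsl = DSL.of(moduleManager, config, dslText, null, "my-rule");
+ * // then reused, thread-safely, for every incoming log
+ * dsl.evaluate(executionContext);
+ * </pre>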
+ */ +@Slf4j +@RequiredArgsConstructor(access = AccessLevel.PRIVATE) +public class DSL { + private final String ruleName; + private final LalExpression expression; + private final FilterSpec filterSpec; + + public static DSL of(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig config, + final String dsl) throws ModuleStartException { + return of(moduleManager, config, dsl, null, "unknown", null); + } + + public static DSL of(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig config, + final String dsl, + final Class<?> extraLogType, + final String ruleName) throws ModuleStartException { + return of(moduleManager, config, dsl, extraLogType, ruleName, null); + } + + public static DSL of(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig config, + final String dsl, + final Class<?> extraLogType, + final String ruleName, + final String yamlSource) throws ModuleStartException { + try { + final LALClassGenerator generator = new LALClassGenerator(); + generator.setExtraLogType(extraLogType); + generator.setClassNameHint(ruleName); + generator.setYamlSource(yamlSource); + final LalExpression expression = generator.compile(dsl); + final FilterSpec filterSpec = new FilterSpec(moduleManager, config); + return new DSL(ruleName, expression, filterSpec); + } catch (Exception e) { + throw new ModuleStartException( + "Failed to compile LAL expression: " + dsl, e); + } + } + + public void evaluate(final ExecutionContext ctx) { + if (log.isDebugEnabled()) { + final LogData.Builder logData = ctx.log(); + log.debug("[LAL] rule={}, class={}, service={}, instance={}, endpoint={}, bodyType={}", + ruleName, expression.getClass().getName(), + logData.getService(), logData.getServiceInstance(), + logData.getEndpoint(), logData.getBody().getContentCase()); + } + expression.execute(filterSpec, ctx); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/ExecutionContext.java 
b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/ExecutionContext.java new file mode 100644 index 000000000000..449a13cc80f7 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/ExecutionContext.java @@ -0,0 +1,182 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl; + +import com.google.protobuf.Message; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.concurrent.atomic.AtomicReference; +import java.util.regex.Matcher; +import lombok.Getter; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.server.analyzer.provider.trace.parser.listener.DatabaseSlowStatementBuilder; +import org.apache.skywalking.oap.server.analyzer.provider.trace.parser.listener.SampledTraceBuilder; +import org.apache.skywalking.oap.server.core.source.Log; + +/** + * Mutable property storage for a single LAL script execution cycle. + * + * <p>A new ExecutionContext is created for each incoming log. 
It carries all + * per-log state through the compiled LAL pipeline: + * <ul> + * <li>{@code log} — the incoming {@code LogData.Builder}</li> + * <li>{@code parsed} — structured data extracted by json/text/yaml parsers</li> + * <li>{@code save}/{@code abort} — control flags set by extractor/sink logic</li> + * <li>{@code metrics_container} — optional list for LAL-extracted metrics (log-MAL)</li> + * <li>{@code log_container} — optional container for the built {@code Log} source object</li> + * </ul> + */ +public class ExecutionContext { + public static final String KEY_LOG = "log"; + public static final String KEY_PARSED = "parsed"; + public static final String KEY_SAVE = "save"; + public static final String KEY_ABORT = "abort"; + public static final String KEY_METRICS_CONTAINER = "metrics_container"; + public static final String KEY_LOG_CONTAINER = "log_container"; + public static final String KEY_DATABASE_SLOW_STATEMENT = "database_slow_statement"; + public static final String KEY_SAMPLED_TRACE = "sampled_trace"; + + private final Map<String, Object> properties = new HashMap<>(); + + public ExecutionContext() { + setProperty(KEY_PARSED, new Parsed()); + } + + public void setProperty(final String name, final Object value) { + properties.put(name, value); + } + + public Object getProperty(final String name) { + return properties.get(name); + } + + public ExecutionContext log(final LogData.Builder log) { + setProperty(KEY_LOG, log); + setProperty(KEY_SAVE, true); + setProperty(KEY_ABORT, false); + setProperty(KEY_METRICS_CONTAINER, null); + setProperty(KEY_LOG_CONTAINER, null); + return this; + } + + public ExecutionContext log(final LogData log) { + return log(log.toBuilder()); + } + + public LogData.Builder log() { + return (LogData.Builder) getProperty(KEY_LOG); + } + + public ExecutionContext extraLog(final Message extraLog) { + parsed().extraLog = extraLog; + return this; + } + + public Message extraLog() { + return parsed().getExtraLog(); + } + + public 
ExecutionContext parsed(final Matcher parsed) { + parsed().matcher = parsed; + return this; + } + + public ExecutionContext parsed(final Map<String, Object> parsed) { + parsed().map = parsed; + return this; + } + + public Parsed parsed() { + return (Parsed) getProperty(KEY_PARSED); + } + + public DatabaseSlowStatementBuilder databaseSlowStatement() { + return (DatabaseSlowStatementBuilder) getProperty(KEY_DATABASE_SLOW_STATEMENT); + } + + public ExecutionContext databaseSlowStatement(final DatabaseSlowStatementBuilder databaseSlowStatementBuilder) { + setProperty(KEY_DATABASE_SLOW_STATEMENT, databaseSlowStatementBuilder); + return this; + } + + public SampledTraceBuilder sampledTraceBuilder() { + return (SampledTraceBuilder) getProperty(KEY_SAMPLED_TRACE); + } + + public ExecutionContext sampledTrace(final SampledTraceBuilder sampledTraceBuilder) { + setProperty(KEY_SAMPLED_TRACE, sampledTraceBuilder); + return this; + } + + public ExecutionContext save() { + setProperty(KEY_SAVE, true); + return this; + } + + public ExecutionContext drop() { + setProperty(KEY_SAVE, false); + return this; + } + + public boolean shouldSave() { + return (boolean) getProperty(KEY_SAVE); + } + + public ExecutionContext abort() { + setProperty(KEY_ABORT, true); + return this; + } + + public boolean shouldAbort() { + return (boolean) getProperty(KEY_ABORT); + } + + public ExecutionContext metricsContainer(final List<SampleFamily> container) { + setProperty(KEY_METRICS_CONTAINER, container); + return this; + } + + @SuppressWarnings("unchecked") + public Optional<List<SampleFamily>> metricsContainer() { + return Optional.ofNullable((List<SampleFamily>) getProperty(KEY_METRICS_CONTAINER)); + } + + public ExecutionContext logContainer(final AtomicReference<Log> container) { + setProperty(KEY_LOG_CONTAINER, container); + return this; + } + + @SuppressWarnings("unchecked") + public Optional<AtomicReference<Log>> logContainer() { + return Optional.ofNullable((AtomicReference<Log>) 
getProperty(KEY_LOG_CONTAINER)); + } + + public static class Parsed { + @Getter + private Matcher matcher; + + @Getter + private Map<String, Object> map; + + @Getter + private Message extraLog; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/LalExpression.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/LalExpression.java new file mode 100644 index 000000000000..1d77c5622ac8 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/LalExpression.java @@ -0,0 +1,35 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl; + +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec; + +/** + * Functional interface implemented by each compiled LAL class. + * + * <p>Generated at startup by + * {@link org.apache.skywalking.oap.log.analyzer.v2.compiler.LALClassGenerator} + * via ANTLR4 parsing and Javassist bytecode generation. + * The generated {@code execute} method calls {@link FilterSpec} methods + * (json/text/yaml, extractor, sink) in the order defined by the LAL script. 
+ */ +@FunctionalInterface +public interface LalExpression { + void execute(FilterSpec filterSpec, ExecutionContext ctx); +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/AbstractSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/AbstractSpec.java new file mode 100644 index 000000000000..cf466b072d73 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/AbstractSpec.java @@ -0,0 +1,35 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec; + +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.experimental.Accessors; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +@Getter +@RequiredArgsConstructor +@Accessors(fluent = true) +public abstract class AbstractSpec { + private final ModuleManager moduleManager; + + private final LogAnalyzerModuleConfig moduleConfig; + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/ExtractorSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/ExtractorSpec.java new file mode 100644 index 000000000000..c4156acf1041 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/ExtractorSpec.java @@ -0,0 +1,338 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.extractor; + +import com.google.common.collect.ImmutableMap; +import java.text.ParseException; +import java.text.SimpleDateFormat; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Optional; +import java.util.stream.Collectors; +import lombok.experimental.Delegate; +import org.apache.commons.lang3.StringUtils; +import org.apache.skywalking.apm.network.common.v3.KeyStringValuePair; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.apm.network.logging.v3.TraceContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.AbstractSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.extractor.sampledtrace.SampledTraceSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.extractor.slowsql.SlowSqlSpec; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder; +import org.apache.skywalking.oap.server.analyzer.provider.trace.parser.listener.DatabaseSlowStatementBuilder; +import org.apache.skywalking.oap.server.analyzer.provider.trace.parser.listener.SampledTraceBuilder; +import org.apache.skywalking.oap.server.core.CoreModule; +import org.apache.skywalking.oap.server.core.analysis.DownSampling; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.TimeBucket; +import 
org.apache.skywalking.oap.server.core.analysis.record.Record; +import org.apache.skywalking.oap.server.core.analysis.worker.RecordStreamProcessor; +import org.apache.skywalking.oap.server.core.config.NamingControl; +import org.apache.skywalking.oap.server.core.source.ISource; +import org.apache.skywalking.oap.server.core.source.ServiceMeta; +import org.apache.skywalking.oap.server.core.source.SourceReceiver; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.module.ModuleStartException; +import org.apache.skywalking.oap.server.library.util.StringUtil; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static java.util.Objects.nonNull; +import static org.apache.skywalking.oap.server.library.util.StringUtil.isNotBlank; + +public class ExtractorSpec extends AbstractSpec { + private static final Logger LOGGER = LoggerFactory.getLogger(ExtractorSpec.class); + + private final List<MetricConvert> metricConverts; + + private final SlowSqlSpec slowSql; + private final SampledTraceSpec sampledTrace; + + private final NamingControl namingControl; + + private final SourceReceiver sourceReceiver; + + public ExtractorSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) throws ModuleStartException { + super(moduleManager, moduleConfig); + + LogAnalyzerModuleProvider provider = (LogAnalyzerModuleProvider) moduleManager + .find(LogAnalyzerModule.NAME).provider(); + + metricConverts = provider.getMetricConverts(); + + slowSql = new SlowSqlSpec(moduleManager(), moduleConfig()); + sampledTrace = new SampledTraceSpec(moduleManager(), moduleConfig()); + + namingControl = moduleManager.find(CoreModule.NAME) + .provider() + .getService(NamingControl.class); + + sourceReceiver = moduleManager.find(CoreModule.NAME).provider().getService(SourceReceiver.class); + } + + public void service(final ExecutionContext ctx, final String service) { + if (ctx.shouldAbort()) { + 
return; + } + if (nonNull(service)) { + ctx.log().setService(service); + } + } + + public void instance(final ExecutionContext ctx, final String instance) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(instance)) { + ctx.log().setServiceInstance(instance); + } + } + + public void endpoint(final ExecutionContext ctx, final String endpoint) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(endpoint)) { + ctx.log().setEndpoint(endpoint); + } + } + + public void tag(final ExecutionContext ctx, final String key, final String value) { + if (ctx.shouldAbort()) { + return; + } + if (isNotBlank(key) && isNotBlank(value)) { + ctx.log().setTags( + ctx.log().getTags().toBuilder() + .addData(KeyStringValuePair.newBuilder() + .setKey(key).setValue(value).build()) + ); + } + } + + public void traceId(final ExecutionContext ctx, final String traceId) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(traceId)) { + final LogData.Builder logData = ctx.log(); + final TraceContext.Builder traceContext = logData.getTraceContext().toBuilder(); + traceContext.setTraceId(traceId); + logData.setTraceContext(traceContext); + } + } + + public void segmentId(final ExecutionContext ctx, final String segmentId) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(segmentId)) { + final LogData.Builder logData = ctx.log(); + final TraceContext.Builder traceContext = logData.getTraceContext().toBuilder(); + traceContext.setTraceSegmentId(segmentId); + logData.setTraceContext(traceContext); + } + } + + public void spanId(final ExecutionContext ctx, final String spanId) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(spanId)) { + final LogData.Builder logData = ctx.log(); + final TraceContext.Builder traceContext = logData.getTraceContext().toBuilder(); + traceContext.setSpanId(Integer.parseInt(spanId)); + logData.setTraceContext(traceContext); + } + } + + public void timestamp(final ExecutionContext ctx, final String timestamp) { + timestamp(ctx, timestamp, 
null); + } + + public void timestamp(final ExecutionContext ctx, final String timestamp, + final String formatPattern) { + if (ctx.shouldAbort()) { + return; + } + if (StringUtil.isEmpty(timestamp)) { + return; + } + + if (StringUtil.isEmpty(formatPattern)) { + if (StringUtils.isNumeric(timestamp)) { + ctx.log().setTimestamp(Long.parseLong(timestamp)); + } + } else { + SimpleDateFormat format = new SimpleDateFormat(formatPattern); + try { + ctx.log().setTimestamp(format.parse(timestamp).getTime()); + } catch (ParseException e) { + // Unparsable timestamp: keep the log's original timestamp untouched. + } + } + } + + public void layer(final ExecutionContext ctx, final String layer) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(layer)) { + ctx.log().setLayer(layer); + } + } + + public SampleBuilder prepareMetrics(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return null; + } + return new SampleBuilder(); + } + + public void submitMetrics(final ExecutionContext ctx, final SampleBuilder builder) { + if (ctx.shouldAbort() || builder == null) { + return; + } + final Sample sample = builder.build(); + final SampleFamily sampleFamily = SampleFamilyBuilder.newBuilder(sample).build(); + + final Optional<List<SampleFamily>> possibleMetricsContainer = ctx.metricsContainer(); + + if (possibleMetricsContainer.isPresent()) { + possibleMetricsContainer.get().add(sampleFamily); + } else { + metricConverts.forEach(it -> it.toMeter( + ImmutableMap.<String, SampleFamily>builder() + .put(sample.getName(), sampleFamily) + .build() + )); + } + } + + public SampledTraceSpec sampledTraceSpec() { + return sampledTrace; + } + + public SlowSqlSpec slowSqlSpec() { + return slowSql; + } + + public void prepareSampledTrace(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + final LogData.Builder log = ctx.log(); + final SampledTraceBuilder builder = new SampledTraceBuilder(namingControl); + builder.setLayer(log.getLayer()); + builder.setTimestamp(log.getTimestamp()); + builder.setServiceName(log.getService());
+ builder.setServiceInstanceName(log.getServiceInstance()); + builder.setTraceId(log.getTraceContext().getTraceId()); + ctx.sampledTrace(builder); + } + + public void submitSampledTrace(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + final SampledTraceBuilder builder = ctx.sampledTraceBuilder(); + if (builder == null) { + return; + } + builder.validate(); + final Record record = builder.toRecord(); + final ISource entity = builder.toEntity(); + RecordStreamProcessor.getInstance().in(record); + sourceReceiver.receive(entity); + } + + public void prepareSlowSql(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + final LogData.Builder log = ctx.log(); + if (log.getLayer() == null + || log.getService() == null + || log.getTimestamp() < 1) { + LOGGER.warn("Slow SQL extraction skipped: layer, service, or timestamp is not set in the LAL script."); + return; + } + final DatabaseSlowStatementBuilder builder = new DatabaseSlowStatementBuilder(namingControl); + builder.setLayer(Layer.nameOf(log.getLayer())); + builder.setServiceName(log.getService()); + ctx.databaseSlowStatement(builder); + } + + public void submitSlowSql(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + final DatabaseSlowStatementBuilder builder = ctx.databaseSlowStatement(); + if (builder == null) { + return; + } + if (builder.getId() == null + || builder.getLatency() < 1 + || builder.getStatement() == null) { + LOGGER.warn("Slow SQL extraction skipped: id, latency, or statement is not set in the LAL script."); + return; + } + final LogData.Builder log = ctx.log(); + final long timeBucketForDB = TimeBucket.getTimeBucket(log.getTimestamp(), DownSampling.Second); + builder.setTimeBucket(timeBucketForDB); + builder.setTimestamp(log.getTimestamp()); + builder.prepare(); + sourceReceiver.receive(builder.toDatabaseSlowStatement()); + + final ServiceMeta serviceMeta = new ServiceMeta(); + serviceMeta.setName(builder.getServiceName()); + serviceMeta.setLayer(builder.getLayer()); + final long
timeBucket = TimeBucket.getTimeBucket(log.getTimestamp(), DownSampling.Minute); + serviceMeta.setTimeBucket(timeBucket); + sourceReceiver.receive(serviceMeta); + } + + public static class SampleBuilder { + @Delegate + private final Sample.SampleBuilder sampleBuilder = Sample.builder(); + + @SuppressWarnings("unused") + public Sample.SampleBuilder labels(final Map<String, ?> labels) { + final Map<String, String> filtered = + labels.entrySet() + .stream() + .filter(it -> isNotBlank(it.getKey()) && nonNull(it.getValue())) + .collect( + Collectors.toMap( + Map.Entry::getKey, + it -> Objects.toString(it.getValue()) + ) + ); + return sampleBuilder.labels(ImmutableMap.copyOf(filtered)); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/sampledtrace/SampledTraceSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/sampledtrace/SampledTraceSpec.java new file mode 100644 index 000000000000..43a2b7af884e --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/sampledtrace/SampledTraceSpec.java @@ -0,0 +1,100 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.extractor.sampledtrace; + +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.AbstractSpec; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.analyzer.provider.trace.parser.listener.SampledTraceBuilder; +import org.apache.skywalking.oap.server.core.source.DetectPoint; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +import static java.util.Objects.nonNull; + +public class SampledTraceSpec extends AbstractSpec { + public SampledTraceSpec(ModuleManager moduleManager, LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + } + + public void latency(final ExecutionContext ctx, final Long latency) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(latency)) { + ctx.sampledTraceBuilder().setLatency(latency); + } + } + + public void uri(final ExecutionContext ctx, final String uri) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(uri)) { + ctx.sampledTraceBuilder().setUri(uri); + } + } + + public void reason(final ExecutionContext ctx, final String reason) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(reason)) { + ctx.sampledTraceBuilder().setReason( + SampledTraceBuilder.Reason.valueOf(reason.toUpperCase())); + } + } + + public void processId(final ExecutionContext ctx, final String id) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(id)) { + ctx.sampledTraceBuilder().setProcessId(id); + } + } + + public void destProcessId(final ExecutionContext ctx, final String id) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(id)) { + ctx.sampledTraceBuilder().setDestProcessId(id); + } + } + + public void detectPoint(final ExecutionContext ctx, final String 
detectPoint) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(detectPoint)) { + final DetectPoint point = DetectPoint.valueOf(detectPoint.toUpperCase()); + ctx.sampledTraceBuilder().setDetectPoint(point); + } + } + + public void componentId(final ExecutionContext ctx, final int id) { + if (ctx.shouldAbort()) { + return; + } + if (id > 0) { + ctx.sampledTraceBuilder().setComponentId(id); + } + } + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/slowsql/SlowSqlSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/slowsql/SlowSqlSpec.java new file mode 100644 index 000000000000..1dee2e5eccf5 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/extractor/slowsql/SlowSqlSpec.java @@ -0,0 +1,63 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.extractor.slowsql; + +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.AbstractSpec; + +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +import static java.util.Objects.nonNull; + +public class SlowSqlSpec extends AbstractSpec { + + public SlowSqlSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + } + + public void latency(final ExecutionContext ctx, final Long latency) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(latency)) { + ctx.databaseSlowStatement().setLatency(latency); + } + } + + public void statement(final ExecutionContext ctx, final String statement) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(statement)) { + ctx.databaseSlowStatement().setStatement(statement); + } + } + + public void id(final ExecutionContext ctx, final String id) { + if (ctx.shouldAbort()) { + return; + } + if (nonNull(id)) { + ctx.databaseSlowStatement().setId(id); + } + } + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/filter/FilterSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/filter/FilterSpec.java new file mode 100644 index 000000000000..f63f99492b18 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/filter/FilterSpec.java @@ -0,0 +1,256 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter; + +import com.fasterxml.jackson.core.type.TypeReference; +import com.google.protobuf.Message; +import com.google.protobuf.TextFormat; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.concurrent.atomic.AtomicReference; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.AbstractSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.extractor.ExtractorSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.parser.JsonParserSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.parser.TextParserSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.parser.YamlParserSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink.SamplerSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink.SinkSpec; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogSinkListenerFactory; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.RecordSinkListener; +import 
org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.TrafficSinkListener; +import org.apache.skywalking.oap.server.core.source.Log; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.module.ModuleStartException; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * The top-level runtime API that compiled LAL expressions invoke. + * + * <p>A compiled {@link org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression} + * calls methods on this class in the order defined by the LAL script. + * All methods receive an explicit {@link ExecutionContext} parameter — no ThreadLocal state. + */ +public class FilterSpec extends AbstractSpec { + private static final Logger LOGGER = LoggerFactory.getLogger(FilterSpec.class); + + private final List<LogSinkListenerFactory> sinkListenerFactories; + + private final TextParserSpec textParser; + + private final JsonParserSpec jsonParser; + + private final YamlParserSpec yamlParser; + + private final ExtractorSpec extractor; + + private final SinkSpec sink; + + private final TypeReference<Map<String, Object>> parsedType; + + public FilterSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) throws ModuleStartException { + super(moduleManager, moduleConfig); + + parsedType = new TypeReference<Map<String, Object>>() { + }; + + sinkListenerFactories = Arrays.asList( + new RecordSinkListener.Factory(moduleManager(), moduleConfig()), + new TrafficSinkListener.Factory(moduleManager(), moduleConfig()) + ); + + textParser = new TextParserSpec(moduleManager(), moduleConfig()); + jsonParser = new JsonParserSpec(moduleManager(), moduleConfig()); + yamlParser = new YamlParserSpec(moduleManager(), moduleConfig()); + + extractor = new ExtractorSpec(moduleManager(), moduleConfig()); + + sink = new SinkSpec(moduleManager(), moduleConfig()); + } + + /** + * LAL {@code text {}} — no-op body parser, body is available as raw 
text. + * Parsed data is not populated; use {@code log.body} to access raw content. + */ + public void text(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + } + + /** + * LAL {@code text { regexp '...' }} — applies a named-group regexp to the + * log body text. Matched groups are stored in {@code ctx.parsed().getMatcher()} + * and accessed via {@code parsed.groupName} in the LAL script. + */ + public void textWithRegexp(final ExecutionContext ctx, final String regexp) { + if (ctx.shouldAbort()) { + return; + } + textParser.regexp(ctx, regexp); + } + + /** + * LAL {@code json {}} — parses {@code LogData.body.json.json} into a + * {@code Map<String, Object>} and stores it in {@code ctx.parsed()}. + * LogData proto fields (service, serviceInstance, endpoint, layer, timestamp) + * are also added to the map via {@code putIfAbsent}, so body values take + * priority while proto fields serve as fallback — matching v1 Groovy + * {@code Binding.Parsed.getAt(key)} behavior. + */ + public void json(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + final LogData.Builder logData = ctx.log(); + try { + final Map<String, Object> parsed = jsonParser.create().readValue( + logData.getBody().getJson().getJson(), parsedType + ); + addLogDataFields(parsed, logData); + ctx.parsed(parsed); + } catch (final Exception e) { + if (jsonParser.abortOnFailure()) { + ctx.abort(); + } + } + } + + /** + * LAL {@code yaml {}} — parses {@code LogData.body.yaml.yaml} into a + * {@code Map<String, Object>} and stores it in {@code ctx.parsed()}. + * LogData proto fields are added the same way as {@link #json(ExecutionContext)}. 
+ */ + public void yaml(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + final LogData.Builder logData = ctx.log(); + try { + final Map<String, Object> parsed = yamlParser.create().load( + logData.getBody().getYaml().getYaml() + ); + addLogDataFields(parsed, logData); + ctx.parsed(parsed); + } catch (final Exception e) { + if (yamlParser.abortOnFailure()) { + ctx.abort(); + } + } + } + + /** + * LAL {@code sink {}} — persists the log via sink listeners if the log + * was not dropped or aborted. + */ + public void sink(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + doSink(ctx); + } + + private void doSink(final ExecutionContext ctx) { + final LogData.Builder logData = ctx.log(); + final Message extraLog = ctx.extraLog(); + + if (!ctx.shouldSave()) { + if (LOGGER.isDebugEnabled()) { + LOGGER.debug("Log is dropped: {}", TextFormat.shortDebugString(logData)); + } + return; + } + + final Optional<AtomicReference<Log>> container = ctx.logContainer(); + if (container.isPresent()) { + sinkListenerFactories.stream() + .map(LogSinkListenerFactory::create) + .filter(it -> it instanceof RecordSinkListener) + .map(it -> it.parse(logData, extraLog)) + .map(it -> (RecordSinkListener) it) + .map(RecordSinkListener::getLog) + .findFirst() + .ifPresent(log -> container.get().set(log)); + } else { + sinkListenerFactories.stream() + .map(LogSinkListenerFactory::create) + .forEach(it -> it.parse(logData, extraLog).build()); + } + } + + // ==================== Direct-access APIs for flattened generated code ==================== + + public ExtractorSpec extractor() { + return extractor; + } + + public SamplerSpec sampler() { + return sink.sampler(); + } + + public void abort(final ExecutionContext ctx) { + ctx.abort(); + } + + public void enforcer(final ExecutionContext ctx) { + sink.enforcer(ctx); + } + + public void dropper(final ExecutionContext ctx) { + sink.dropper(ctx); + } + + public void finalizeSink(final ExecutionContext ctx) { 
+ if (ctx.shouldAbort()) { + return; + } + doSink(ctx); + } + + /** + * Add LogData proto fields to the parsed map so that {@code parsed.service}, + * {@code parsed.serviceInstance}, etc. resolve correctly — matching v1 Groovy + * {@code Binding.Parsed.getAt(key)} fallback behavior. + * Uses {@code putIfAbsent} so body-parsed values take priority. + */ + private static void addLogDataFields(final Map<String, Object> parsed, + final LogData.Builder logData) { + putIfNotEmpty(parsed, "service", logData.getService()); + putIfNotEmpty(parsed, "serviceInstance", logData.getServiceInstance()); + putIfNotEmpty(parsed, "endpoint", logData.getEndpoint()); + putIfNotEmpty(parsed, "layer", logData.getLayer()); + final long ts = logData.getTimestamp(); + if (ts > 0) { + parsed.putIfAbsent("timestamp", ts); + } + } + + private static void putIfNotEmpty(final Map<String, Object> parsed, + final String key, final String value) { + if (value != null && !value.isEmpty()) { + parsed.putIfAbsent(key, value); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/AbstractParserSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/AbstractParserSpec.java new file mode 100644 index 000000000000..d27e4d42dc52 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/AbstractParserSpec.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.parser;
+
+import lombok.experimental.Accessors;
+import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.AbstractSpec;
+import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig;
+import org.apache.skywalking.oap.server.library.module.ModuleManager;
+
+@Accessors
+public class AbstractParserSpec extends AbstractSpec {
+    /**
+     * Whether the filter chain should abort when parsing a log fails.
+     *
+     * <p>Failing to parse a log means either the parser throws an exception or the log does not match the
+     * desired pattern.
+ */ + private boolean abortOnFailure = true; + + public AbstractParserSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + } + + @SuppressWarnings("unused") // used in user LAL scripts + public void abortOnFailure(final boolean abortOnFailure) { + this.abortOnFailure = abortOnFailure; + } + + public boolean abortOnFailure() { + return this.abortOnFailure; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/JsonParserSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/JsonParserSpec.java new file mode 100644 index 000000000000..47795f992dea --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/JsonParserSpec.java @@ -0,0 +1,40 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
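The `abortOnFailure` contract above can be sketched in isolation (again with a hypothetical `MiniContext` standing in for `ExecutionContext`): a failed parse aborts the whole chain unless the flag is turned off in the LAL script.

```java
// Sketch of the abortOnFailure contract; MiniContext is an illustrative stand-in,
// not the real ExecutionContext from this patch.
public class AbortOnFailureSketch {
    static class MiniContext {
        private boolean aborted;
        void abort() { aborted = true; }
        boolean shouldAbort() { return aborted; }
    }

    public static void parse(MiniContext ctx, boolean matched, boolean abortOnFailure) {
        if (ctx.shouldAbort()) {
            return;                  // already aborted: do nothing
        }
        if (!matched && abortOnFailure) {
            ctx.abort();             // mirrors the failure branch in the parser specs
        }
    }
}
```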
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.parser; + +import com.fasterxml.jackson.databind.ObjectMapper; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +public class JsonParserSpec extends AbstractParserSpec { + private final ObjectMapper mapper; + + public JsonParserSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + + // We just create a mapper instance in advance for now (for the sake of performance), + // when we want to provide some extra options, we'll move this into method "create" then. + mapper = new ObjectMapper(); + } + + public ObjectMapper create() { + return mapper; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/TextParserSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/TextParserSpec.java new file mode 100644 index 000000000000..8d57a0c2ad02 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/TextParserSpec.java @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.parser; + +import java.util.regex.Matcher; +import java.util.regex.Pattern; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +public class TextParserSpec extends AbstractParserSpec { + public TextParserSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + } + + public void regexp(final ExecutionContext ctx, final String regexp) { + regexp(ctx, Pattern.compile(regexp)); + } + + public void regexp(final ExecutionContext ctx, final Pattern pattern) { + if (ctx.shouldAbort()) { + return; + } + final LogData.Builder log = ctx.log(); + final Matcher matcher = pattern.matcher(log.getBody().getText().getText()); + final boolean matched = matcher.find(); + if (matched) { + ctx.parsed(matcher); + } else if (abortOnFailure()) { + ctx.abort(); + } + } + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/YamlParserSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/YamlParserSpec.java new file mode 100644 index 000000000000..5f4b26ae0d68 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/parser/YamlParserSpec.java @@ -0,0 +1,46 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
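The named-group extraction that `TextParserSpec.regexp` performs can be shown with a plain `java.util.regex` sketch; a LAL script would then read the group as `parsed.level`. The regexp, group name, and `RegexpParseSketch` class below are illustrative, not part of the patch.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Named-group extraction sketch: returns the captured group on a match,
// or null where TextParserSpec would abort the chain instead.
public class RegexpParseSketch {
    public static String group(String body, String regexp, String groupName) {
        Matcher matcher = Pattern.compile(regexp).matcher(body);
        return matcher.find() ? matcher.group(groupName) : null;
    }
}
```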
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.parser; + +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.yaml.snakeyaml.DumperOptions; +import org.yaml.snakeyaml.LoaderOptions; +import org.yaml.snakeyaml.Yaml; +import org.yaml.snakeyaml.constructor.SafeConstructor; +import org.yaml.snakeyaml.representer.Representer; + +public class YamlParserSpec extends AbstractParserSpec { + private final LoaderOptions loaderOptions; + + public YamlParserSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + + loaderOptions = new LoaderOptions(); + } + + public Yaml create() { + final var dumperOptions = new DumperOptions(); + return new Yaml( + new SafeConstructor(loaderOptions), + new Representer(dumperOptions), + dumperOptions, loaderOptions); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/SamplerSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/SamplerSpec.java new file mode 100644 index 000000000000..b8079c977b3f --- /dev/null +++ 
b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/SamplerSpec.java @@ -0,0 +1,68 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink; + +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.AbstractSpec; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink.sampler.RateLimitingSampler; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink.sampler.Sampler; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +public class SamplerSpec extends AbstractSpec { + private final Map<String, Sampler> rateLimitSamplersByString; + private final Map<Integer, Sampler> possibilitySamplers; + private final RateLimitingSampler.ResetHandler rlsResetHandler; + + public SamplerSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + + rateLimitSamplersByString = new 
ConcurrentHashMap<>(); + possibilitySamplers = new ConcurrentHashMap<>(); + rlsResetHandler = new RateLimitingSampler.ResetHandler(); + } + + public void rateLimit(final ExecutionContext ctx, final String id, final int rpm) { + if (ctx.shouldAbort()) { + return; + } + + final Sampler sampler = rateLimitSamplersByString.computeIfAbsent( + id, $ -> new RateLimitingSampler(rlsResetHandler).start()); + + ((RateLimitingSampler) sampler).rpm(rpm); + + sampleWith(ctx, sampler); + } + + private void sampleWith(final ExecutionContext ctx, final Sampler sampler) { + if (ctx.shouldAbort()) { + return; + } + if (sampler.sample()) { + ctx.save(); + } else { + ctx.drop(); + } + } + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/SinkSpec.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/SinkSpec.java new file mode 100644 index 000000000000..46ac06139505 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/SinkSpec.java @@ -0,0 +1,55 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
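The counting semantics behind `rateLimit` above reduce to a small sketch: the first `rpm` calls in a window pass and later calls are dropped until the counter resets. Here the once-per-minute `reset()` that `ResetHandler` schedules is invoked directly; `RateLimitSketch` is an illustrative class, not part of the patch.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Counting semantics of RateLimitingSampler without the scheduled reset thread:
// the first `rpm` samples in a window pass; reset() clears the window counter.
public class RateLimitSketch {
    private final AtomicInteger factor = new AtomicInteger();
    private volatile int rpm;

    public RateLimitSketch(int rpm) { this.rpm = rpm; }

    public boolean sample() { return factor.getAndIncrement() < rpm; }

    public void reset() { factor.set(0); }
}
```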
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink; + +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.AbstractSpec; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +public class SinkSpec extends AbstractSpec { + + private final SamplerSpec sampler; + + public SinkSpec(final ModuleManager moduleManager, + final LogAnalyzerModuleConfig moduleConfig) { + super(moduleManager, moduleConfig); + + sampler = new SamplerSpec(moduleManager(), moduleConfig()); + } + + public SamplerSpec sampler() { + return sampler; + } + + public void enforcer(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + ctx.save(); + } + + public void dropper(final ExecutionContext ctx) { + if (ctx.shouldAbort()) { + return; + } + ctx.drop(); + } + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/PossibilitySampler.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/PossibilitySampler.java new file mode 100644 index 000000000000..d5d3c73be5fa --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/PossibilitySampler.java @@ -0,0 +1,54 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink.sampler;
+
+import java.util.concurrent.ThreadLocalRandom;
+import lombok.EqualsAndHashCode;
+import lombok.Getter;
+import lombok.RequiredArgsConstructor;
+import lombok.experimental.Accessors;
+
+@RequiredArgsConstructor
+@Accessors(fluent = true)
+@EqualsAndHashCode(of = {"percentage"})
+public class PossibilitySampler implements Sampler {
+    @Getter
+    private final int percentage;
+
+    @Override
+    public PossibilitySampler start() {
+        return this;
+    }
+
+    @Override
+    public void close() {
+    }
+
+    @Override
+    public boolean sample() {
+        // ThreadLocalRandom must be obtained per call: sampler instances can be
+        // shared across threads, so caching the instance in a field is unsafe.
+        return ThreadLocalRandom.current().nextInt(100) < percentage;
+    }
+
+    @Override
+    public PossibilitySampler reset() {
+        return this;
+    }
+}
diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/RateLimitingSampler.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/RateLimitingSampler.java
new file mode 100644
index 000000000000..888ef1cfd1b2
--- /dev/null
+++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/RateLimitingSampler.java
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
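`PossibilitySampler`'s decision is a one-liner over `ThreadLocalRandom`; note that `ThreadLocalRandom.current()` should be fetched on each call, as its javadoc recommends, because sampler instances can be shared across threads. A minimal sketch (`PossibilitySketch` is illustrative, not part of the patch):

```java
import java.util.concurrent.ThreadLocalRandom;

// Percentage-based sampling sketch: nextInt(100) yields 0..99, so a percentage of
// 100 always passes and 0 never does. current() is fetched per call on purpose.
public class PossibilitySketch {
    public static boolean sample(int percentage) {
        return ThreadLocalRandom.current().nextInt(100) < percentage;
    }
}
```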
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink.sampler; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.Executors; +import java.util.concurrent.ScheduledFuture; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import lombok.EqualsAndHashCode; +import lombok.Getter; +import lombok.Setter; +import lombok.experimental.Accessors; +import lombok.extern.slf4j.Slf4j; + +@Accessors(fluent = true) +@EqualsAndHashCode(of = {"rpm"}) +public class RateLimitingSampler implements Sampler { + @Getter + @Setter + private volatile int rpm; + + private final AtomicInteger factor = new AtomicInteger(); + + private final ResetHandler resetHandler; + + public RateLimitingSampler(final ResetHandler resetHandler) { + this.resetHandler = resetHandler; + } + + @Override + public RateLimitingSampler start() { + resetHandler.start(this); + return this; + } + + @Override + public void close() { + resetHandler.close(this); + } + + @Override + public boolean sample() { + return factor.getAndIncrement() < rpm; + } + + @Override + public RateLimitingSampler reset() { + factor.set(0); + return this; + } + + @Slf4j + public static class ResetHandler { + private final List<Sampler> samplers = new ArrayList<>(); + + private volatile ScheduledFuture<?> future; + + private volatile boolean started = false; + + private 
synchronized void start(final Sampler sampler) { + samplers.add(sampler); + + if (!started) { + future = Executors.newSingleThreadScheduledExecutor() + .scheduleAtFixedRate(this::reset, 1, 1, TimeUnit.MINUTES); + started = true; + } + } + + private synchronized void close(final Sampler sampler) { + samplers.remove(sampler); + + if (samplers.isEmpty() && future != null) { + future.cancel(true); + started = false; + } + } + + private synchronized void reset() { + samplers.forEach(sampler -> { + try { + sampler.reset(); + } catch (final Exception e) { + log.error("Failed to reset sampler {}.", sampler, e); + } + }); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/Sampler.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/Sampler.java new file mode 100644 index 000000000000..3ef82fce65a6 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/spec/sink/sampler/Sampler.java @@ -0,0 +1,42 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
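The `ResetHandler` registry pattern above can be exercised deterministically by replacing the scheduler with a direct `tick()` call; every registered sampler is reset, and one failure cannot starve the rest. Class and method names here are illustrative, not part of the patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Registry sketch of ResetHandler: tick() stands in for the once-per-minute
// scheduled task, and a throwing callback does not stop later ones.
public class ResetHandlerSketch {
    private final List<Runnable> resets = new ArrayList<>();

    public synchronized void register(Runnable reset) { resets.add(reset); }

    public synchronized void tick() {
        for (Runnable reset : resets) {
            try {
                reset.run();
            } catch (Exception e) {
                // the real handler logs the failure and continues with the next sampler
            }
        }
    }
}
```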
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.sink.sampler; + +public interface Sampler extends AutoCloseable { + Sampler NOOP = new Sampler() { + @Override + public boolean sample() { + return false; + } + + @Override + public void close() { + } + }; + + boolean sample(); + + default Sampler start() { + return this; + } + + default Sampler reset() { + return this; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/module/LogAnalyzerModule.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/module/LogAnalyzerModule.java new file mode 100644 index 000000000000..39faa0826e0f --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/module/LogAnalyzerModule.java @@ -0,0 +1,36 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.module; + +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.server.library.module.ModuleDefine; + +public class LogAnalyzerModule extends ModuleDefine { + public static final String NAME = "log-analyzer"; + + public LogAnalyzerModule() { + super(NAME); + } + + @Override + public Class[] services() { + return new Class[] { + ILogAnalyzerService.class + }; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LALConfig.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LALConfig.java new file mode 100644 index 000000000000..27b35c525a5e --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LALConfig.java @@ -0,0 +1,38 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider; + +import lombok.Data; + +@Data +public class LALConfig { + private String name; + + private String dsl; + + private String layer; + + private String extraLogType; + + /** + * Source YAML file name (without extension), set during loading by + * {@link LALConfigs}. Used for informative stack traces in generated code. + */ + private transient String sourceName; +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LALConfigs.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LALConfigs.java new file mode 100644 index 000000000000..9b3ec8ec49a3 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LALConfigs.java @@ -0,0 +1,83 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider; + +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileReader; +import java.io.IOException; +import java.io.Reader; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Objects; +import java.util.stream.Collectors; +import lombok.Data; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.server.library.module.ModuleStartException; +import org.apache.skywalking.oap.server.library.util.ResourceUtils; +import org.yaml.snakeyaml.Yaml; + +import static com.google.common.base.Preconditions.checkArgument; +import static com.google.common.io.Files.getNameWithoutExtension; +import static org.apache.skywalking.oap.server.library.util.StringUtil.isNotBlank; +import static org.apache.skywalking.oap.server.library.util.CollectionUtils.isEmpty; + +@Data +@Slf4j +public class LALConfigs { + private List<LALConfig> rules; + + public static List<LALConfigs> load(final String path, final List<String> files) throws Exception { + if (isEmpty(files)) { + return Collections.emptyList(); + } + + checkArgument(isNotBlank(path), "path cannot be blank"); + + try { + final File[] rules = ResourceUtils.getPathFiles(path); + + return Arrays.stream(rules) + .filter(File::isFile) + .filter(it -> { + //noinspection UnstableApiUsage + return files.contains(getNameWithoutExtension(it.getName())); + }) + .map(f -> { + try (final Reader r = new FileReader(f)) { + final LALConfigs configs = + new Yaml().<LALConfigs>loadAs(r, LALConfigs.class); + if (configs != null && configs.getRules() != null) { + final String src = f.getName(); + configs.getRules().forEach(c -> c.setSourceName(src)); + } + return configs; + } catch (IOException e) { + log.debug("Failed to read file {}", f, e); + } + return null; + }) + .filter(Objects::nonNull) + .collect(Collectors.toList()); + } catch (FileNotFoundException e) { + throw new ModuleStartException("Failed 
to load LAL config rules", e); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LogAnalyzerModuleConfig.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LogAnalyzerModuleConfig.java new file mode 100644 index 000000000000..da62297804da --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LogAnalyzerModuleConfig.java @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider; + +import com.google.common.base.Splitter; +import com.google.common.base.Strings; + +import java.io.IOException; +import java.util.List; +import lombok.EqualsAndHashCode; +import lombok.Getter; +import lombok.Setter; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rule; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rules; +import org.apache.skywalking.oap.server.library.module.ModuleConfig; +import org.apache.skywalking.oap.server.library.module.ModuleStartException; + +import static java.util.Objects.nonNull; + +@EqualsAndHashCode(callSuper = false) +public class LogAnalyzerModuleConfig extends ModuleConfig { + @Getter + @Setter + private String lalPath = "lal"; + + @Getter + @Setter + private String malPath = "log-mal-rules"; + + @Getter + @Setter + private String lalFiles = "default.yaml"; + + @Getter + @Setter + private String malFiles; + + private List<Rule> meterConfigs; + + public List<String> lalFiles() { + return Splitter.on(",").omitEmptyStrings().trimResults().splitToList(Strings.nullToEmpty(getLalFiles())); + } + + public List<Rule> malConfigs() throws ModuleStartException { + if (nonNull(meterConfigs)) { + return meterConfigs; + } + final List<String> files = Splitter.on(",") + .omitEmptyStrings() + .splitToList(Strings.nullToEmpty(getMalFiles())); + try { + meterConfigs = Rules.loadRules(getMalPath(), files); + } catch (IOException e) { + throw new ModuleStartException("Failed to load MAL rules", e); + } + + return meterConfigs; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LogAnalyzerModuleProvider.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LogAnalyzerModuleProvider.java new file mode 100644 index 000000000000..f2612cfff80e --- /dev/null +++ 
b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/LogAnalyzerModuleProvider.java @@ -0,0 +1,102 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider; + +import java.util.List; +import java.util.stream.Collectors; +import lombok.Getter; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.LogAnalyzerServiceImpl; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogFilterListener; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; +import org.apache.skywalking.oap.server.configuration.api.ConfigurationModule; +import org.apache.skywalking.oap.server.core.CoreModule; +import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; +import org.apache.skywalking.oap.server.library.module.ModuleDefine; +import org.apache.skywalking.oap.server.library.module.ModuleProvider; +import org.apache.skywalking.oap.server.library.module.ModuleStartException; +import 
org.apache.skywalking.oap.server.library.module.ServiceNotProvidedException; + +public class LogAnalyzerModuleProvider extends ModuleProvider { + @Getter + private LogAnalyzerModuleConfig moduleConfig; + + @Getter + private List<MetricConvert> metricConverts; + + private LogAnalyzerServiceImpl logAnalyzerService; + + @Override + public String name() { + return "default"; + } + + @Override + public Class<? extends ModuleDefine> module() { + return LogAnalyzerModule.class; + } + + @Override + public ConfigCreator newConfigCreator() { + return new ConfigCreator<LogAnalyzerModuleConfig>() { + @Override + public Class type() { + return LogAnalyzerModuleConfig.class; + } + + @Override + public void onInitialized(final LogAnalyzerModuleConfig initialized) { + moduleConfig = initialized; + } + }; + } + + @Override + public void prepare() throws ServiceNotProvidedException, ModuleStartException { + logAnalyzerService = new LogAnalyzerServiceImpl(getManager(), moduleConfig); + this.registerServiceImplementation(ILogAnalyzerService.class, logAnalyzerService); + } + + @Override + public void start() throws ServiceNotProvidedException, ModuleStartException { + MeterSystem meterSystem = getManager().find(CoreModule.NAME).provider().getService(MeterSystem.class); + metricConverts = moduleConfig.malConfigs() + .stream() + .map(it -> new MetricConvert(it, meterSystem)) + .collect(Collectors.toList()); + try { + logAnalyzerService.addListenerFactory(new LogFilterListener.Factory(getManager(), moduleConfig)); + } catch (final Exception e) { + throw new ModuleStartException("Failed to create LAL listener.", e); + } + } + + @Override + public void notifyAfterCompleted() throws ServiceNotProvidedException { + + } + + @Override + public String[] requiredModules() { + return new String[] { + CoreModule.NAME, + ConfigurationModule.NAME + }; + } +} diff --git 
a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/ILogAnalysisListenerManager.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/ILogAnalysisListenerManager.java new file mode 100644 index 000000000000..421ed6edbf0f --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/ILogAnalysisListenerManager.java @@ -0,0 +1,33 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log; + +import java.util.List; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogAnalysisListenerFactory; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogSinkListenerFactory; + +public interface ILogAnalysisListenerManager { + + void addListenerFactory(LogAnalysisListenerFactory factory); + + List<LogAnalysisListenerFactory> getLogAnalysisListenerFactories(); + + void addSinkListenerFactory(LogSinkListenerFactory factory); + + List<LogSinkListenerFactory> getSinkListenerFactory(); +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/ILogAnalyzerService.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/ILogAnalyzerService.java new file mode 100644 index 000000000000..8498f73aa6ed --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/ILogAnalyzerService.java @@ -0,0 +1,35 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log; + +import com.google.protobuf.Message; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.server.library.module.Service; + +/** + * Analyze the collected log data. + */ +public interface ILogAnalyzerService extends Service { + + void doAnalysis(LogData.Builder log, Message extraLog); + + default void doAnalysis(LogData logData, Message extraLog) { + doAnalysis(logData.toBuilder(), extraLog); + } + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/LogAnalyzer.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/LogAnalyzer.java new file mode 100644 index 000000000000..6ae2c312ec19 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/LogAnalyzer.java @@ -0,0 +1,107 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log; + +import com.google.protobuf.Message; +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +import lombok.RequiredArgsConstructor; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.server.core.UnexpectedException; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.library.util.StringUtil; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogAnalysisListener; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +/** + * Entry point for log analysis. Created per-request by the log receiver. + * + * <p>Runtime execution ({@link #doAnalysis}): + * <ol> + * <li>Validates the incoming log (service name must be non-empty, layer must be valid).</li> + * <li>Calls {@code createAnalysisListeners(layer)} — asks all registered + * {@link org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogAnalysisListenerFactory} + * instances to create listeners for the log's layer. 
For LAL, this is + * {@link org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogFilterListener.Factory}, + * which returns a listener wrapping all compiled {@link org.apache.skywalking.oap.log.analyzer.v2.dsl.DSL} + * instances for that layer.</li> + * <li>{@code notifyAnalysisListener(builder, extraLog)} — calls + * {@link org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogAnalysisListener#parse} + * on each listener, which binds the log data to the compiled LAL scripts.</li> + * <li>{@code notifyAnalysisListenerToBuild()} — calls + * {@link org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogAnalysisListener#build} + * on each listener, which evaluates the compiled LAL scripts (extractors, sinks).</li> + * </ol> + */ +@Slf4j +@RequiredArgsConstructor +public class LogAnalyzer { + private final ModuleManager moduleManager; + private final LogAnalyzerModuleConfig moduleConfig; + private final ILogAnalysisListenerManager factoryManager; + + private final List<LogAnalysisListener> listeners = new ArrayList<>(); + + public void doAnalysis(LogData.Builder builder, Message extraLog) { + if (StringUtil.isEmpty(builder.getService())) { + // If the service name is empty, the log will be ignored. 
+ log.debug("The log is ignored because the service name is empty"); + return; + } + Layer layer; + if ("".equals(builder.getLayer())) { + layer = Layer.GENERAL; + } else { + try { + layer = Layer.nameOf(builder.getLayer()); + } catch (UnexpectedException e) { + log.warn("The Layer {} is not found, abandoning the log.", builder.getLayer()); + return; + } + } + + createAnalysisListeners(layer); + if (builder.getTimestamp() == 0) { + // If no timestamp is set, the OAP server uses the received time as the log's timestamp + builder.setTimestamp(System.currentTimeMillis()); + } + + notifyAnalysisListener(builder, extraLog); + notifyAnalysisListenerToBuild(); + } + + private void notifyAnalysisListener(LogData.Builder builder, final Message extraLog) { + listeners.forEach(listener -> listener.parse(builder, extraLog)); + } + + private void notifyAnalysisListenerToBuild() { + listeners.forEach(LogAnalysisListener::build); + } + + private void createAnalysisListeners(Layer layer) { + factoryManager.getLogAnalysisListenerFactories() + .stream() + .map(factory -> factory.create(layer)) + .filter(Objects::nonNull) + .forEach(listeners::add); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/LogAnalyzerServiceImpl.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/LogAnalyzerServiceImpl.java new file mode 100644 index 000000000000..8ab5dc0db7a5 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/LogAnalyzerServiceImpl.java @@ -0,0 +1,62 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log; + +import com.google.protobuf.Message; +import java.util.ArrayList; +import java.util.List; +import lombok.RequiredArgsConstructor; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogAnalysisListenerFactory; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener.LogSinkListenerFactory; +import org.apache.skywalking.oap.server.library.module.ModuleManager; + +@RequiredArgsConstructor +public class LogAnalyzerServiceImpl implements ILogAnalyzerService, ILogAnalysisListenerManager { + private final ModuleManager moduleManager; + private final LogAnalyzerModuleConfig moduleConfig; + private final List<LogAnalysisListenerFactory> analysisListenerFactories = new ArrayList<>(); + private final List<LogSinkListenerFactory> sinkListenerFactories = new ArrayList<>(); + + @Override + public void doAnalysis(final LogData.Builder log, Message extraLog) { + LogAnalyzer analyzer = new LogAnalyzer(moduleManager, moduleConfig, this); + analyzer.doAnalysis(log, extraLog); + } + + @Override + public void addListenerFactory(final LogAnalysisListenerFactory factory) { + analysisListenerFactories.add(factory); + } + + @Override + public 
List<LogAnalysisListenerFactory> getLogAnalysisListenerFactories() { + return analysisListenerFactories; + } + + @Override + public void addSinkListenerFactory(LogSinkListenerFactory factory) { + sinkListenerFactories.add(factory); + } + + @Override + public List<LogSinkListenerFactory> getSinkListenerFactory() { + return sinkListenerFactories; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/analyzer/LogAnalyzerFactory.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/analyzer/LogAnalyzerFactory.java new file mode 100644 index 000000000000..55dfd8ba2b69 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/analyzer/LogAnalyzerFactory.java @@ -0,0 +1,22 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.analyzer; + +public class LogAnalyzerFactory { + +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogAnalysisListener.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogAnalysisListener.java new file mode 100644 index 000000000000..9e7fd833b5b7 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogAnalysisListener.java @@ -0,0 +1,37 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener; + +import com.google.protobuf.Message; +import org.apache.skywalking.apm.network.logging.v3.LogData; + +/** + * LogAnalysisListener represents the callback when OAP does the log data analysis. + */ +public interface LogAnalysisListener { + /** + * The last step of the analysis process. Typically, the implementations execute corresponding DSL. + */ + void build(); + + /** + * Parse the raw data from the probe. + * @return {@code this} for chaining. 
+ */ + LogAnalysisListener parse(LogData.Builder logData, final Message extraLog); +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogAnalysisListenerFactory.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogAnalysisListenerFactory.java new file mode 100644 index 000000000000..d2d9f03964a7 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogAnalysisListenerFactory.java @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener; + +import org.apache.skywalking.oap.server.core.analysis.Layer; + +/** + * LogAnalysisListenerFactory implementation creates the listener instance when required. + * Every LogAnalysisListener could have its own creation factory. 
+ */ +public interface LogAnalysisListenerFactory { + + LogAnalysisListener create(Layer layer); +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogFilterListener.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogFilterListener.java new file mode 100644 index 000000000000..81c997e5f625 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogFilterListener.java @@ -0,0 +1,155 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener; + +import com.google.protobuf.Message; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.Map; +import java.util.ServiceLoader; +import java.util.stream.Collectors; +import lombok.extern.slf4j.Slf4j; + +import java.util.HashMap; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.DSL; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LALConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LALConfigs; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider; + +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.module.ModuleStartException; + +/** + * Runtime listener that executes compiled LAL rules against incoming log data. + * + * <p>Each instance wraps a collection of {@link DSL} objects — one per LAL rule + * defined for a specific {@link Layer}. Created per-log by {@link Factory#create(Layer)}. 
+ * + * <p>Two-phase execution (called by {@link org.apache.skywalking.oap.log.analyzer.v2.provider.log.LogAnalyzer}): + * <ol> + * <li>{@link #parse} — creates a fresh {@link ExecutionContext} with the current log data + * and binds it to every DSL instance (sets the ThreadLocal in each Spec).</li> + * <li>{@link #build} — calls {@link DSL#evaluate(ExecutionContext)} on every DSL instance, + * which invokes the compiled {@link org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression} + * to run the filter/extractor/sink pipeline.</li> + * </ol> + * + * <p>The inner {@link Factory} is created once at startup by + * {@link org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider#start()}. + * It loads all {@code .yaml} LAL config files, compiles each rule's DSL string + * into a {@link DSL} instance via {@link DSL#of}, + * and organizes them by {@link Layer}.
+ */ +@Slf4j +public class LogFilterListener implements LogAnalysisListener { + private final List<DSL> dsls; + private List<ExecutionContext> contexts; + + LogFilterListener(final Collection<DSL> dsls) { + this.dsls = new ArrayList<>(dsls); + } + + @Override + public void build() { + for (int i = 0; i < dsls.size(); i++) { + try { + dsls.get(i).evaluate(contexts.get(i)); + } catch (final Exception e) { + log.warn("Failed to evaluate dsl: {}", dsls.get(i), e); + } + } + } + + @Override + public LogAnalysisListener parse(final LogData.Builder logData, + final Message extraLog) { + final LogData log = logData.build(); + contexts = new ArrayList<>(dsls.size()); + for (int i = 0; i < dsls.size(); i++) { + contexts.add(new ExecutionContext().log(log).extraLog(extraLog)); + } + return this; + } + + public static class Factory implements LogAnalysisListenerFactory { + private final Map<Layer, Map<String, DSL>> dsls; + + public Factory(final ModuleManager moduleManager, final LogAnalyzerModuleConfig config) throws Exception { + dsls = new HashMap<>(); + + // Scan SPI providers for default extraLogType per layer + final Map<Layer, Class<?>> spiTypes = new HashMap<>(); + for (final LALSourceTypeProvider p : ServiceLoader.load(LALSourceTypeProvider.class)) { + spiTypes.put(p.layer(), p.extraLogType()); + log.info("LALSourceTypeProvider: layer={} -> {}", + p.layer().name(), p.extraLogType().getName()); + } + + final List<LALConfig> configList = LALConfigs.load(config.getLalPath(), config.lalFiles()) + .stream() + .flatMap(it -> it.getRules().stream()) + .collect(Collectors.toList()); + for (final LALConfig c : configList) { + final Layer layer = Layer.nameOf(c.getLayer()); + + // Per-rule resolution: explicit YAML > SPI > null + Class<?> resolvedType = resolveExtraLogType(c, spiTypes.get(layer)); + + final Map<String, DSL> layerDsls = this.dsls.computeIfAbsent(layer, k -> new HashMap<>()); + if (layerDsls.put(c.getName(), DSL.of(moduleManager, config, c.getDsl(), resolvedType, 
c.getName(), c.getSourceName())) != null) { + throw new ModuleStartException("Layer " + layer.name() + " already has a rule named " + c.getName() + "."); + } + } + } + + private static Class<?> resolveExtraLogType(final LALConfig config, + final Class<?> spiType) throws ModuleStartException { + final String yamlType = config.getExtraLogType(); + if (yamlType != null && !yamlType.isEmpty()) { + try { + return Class.forName(yamlType); + } catch (ClassNotFoundException e) { + throw new ModuleStartException( + "LAL rule '" + config.getName() + "' declares extraLogType '" + + yamlType + "' but the class was not found.", e); + } + } + return spiType; + } + + @Override + public LogAnalysisListener create(Layer layer) { + if (layer == null) { + return null; + } + final Map<String, DSL> dsl = dsls.get(layer); + if (dsl == null) { + return null; + } + return new LogFilterListener(dsl.values()); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogSinkListener.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogSinkListener.java new file mode 100644 index 000000000000..69830edd1053 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogSinkListener.java @@ -0,0 +1,35 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener; + +import com.google.protobuf.Message; +import org.apache.skywalking.apm.network.logging.v3.LogData; + +public interface LogSinkListener { + /** + * The last step of the sink process. Typically, implementations forward the results to the source + * receiver. + */ + void build(); + + /** + * Parse the raw data from the probe. + * @return {@code this} for chaining. + */ + LogSinkListener parse(final LogData.Builder logData, final Message extraLog); +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogSinkListenerFactory.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogSinkListenerFactory.java new file mode 100644 index 000000000000..a19f51845dba --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/LogSinkListenerFactory.java @@ -0,0 +1,26 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License.
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener; + +/** + * A LogSinkListenerFactory creates listener instances on demand. + * Every LogSinkListener implementation can provide its own factory. + */ +public interface LogSinkListenerFactory { + LogSinkListener create(); +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/RecordSinkListener.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/RecordSinkListener.java new file mode 100644 index 000000000000..cce46468fd6f --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/RecordSinkListener.java @@ -0,0 +1,178 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener; + +import com.google.protobuf.Message; +import java.util.Arrays; +import java.util.Collection; +import java.util.HashSet; +import java.util.List; +import java.util.UUID; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.SneakyThrows; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.apm.network.logging.v3.LogDataBody; +import org.apache.skywalking.apm.network.logging.v3.TraceContext; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.core.Const; +import org.apache.skywalking.oap.server.core.CoreModule; +import org.apache.skywalking.oap.server.core.analysis.IDManager; +import org.apache.skywalking.oap.server.core.analysis.TimeBucket; +import org.apache.skywalking.oap.server.core.analysis.manual.searchtag.Tag; +import org.apache.skywalking.oap.server.core.analysis.manual.searchtag.TagType; +import org.apache.skywalking.oap.server.core.config.ConfigService; +import org.apache.skywalking.oap.server.core.config.NamingControl; +import org.apache.skywalking.oap.server.core.query.type.ContentType; +import org.apache.skywalking.oap.server.core.source.Log; +import org.apache.skywalking.oap.server.core.source.SourceReceiver; +import org.apache.skywalking.oap.server.core.source.TagAutocomplete; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.util.StringUtil; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.skywalking.oap.server.library.util.ProtoBufJsonUtils.toJSON; + +/** + * RecordSinkListener forwards the log data to the persistence layer, together with the conditions required for querying.
+ */ +@RequiredArgsConstructor +public class RecordSinkListener implements LogSinkListener { + private static final Logger LOGGER = LoggerFactory.getLogger(RecordSinkListener.class); + private final SourceReceiver sourceReceiver; + private final NamingControl namingControl; + private final List<String> searchableTagKeys; + @Getter + private final Log log = new Log(); + + @Override + public void build() { + sourceReceiver.receive(log); + addAutocompleteTags(); + } + + @Override + @SneakyThrows + public LogSinkListener parse(final LogData.Builder logData, + final Message extraLog) { + LogDataBody body = logData.getBody(); + log.setUniqueId(UUID.randomUUID().toString().replace("-", "")); + // timestamp + log.setTimestamp(logData.getTimestamp()); + log.setTimeBucket(TimeBucket.getRecordTimeBucket(logData.getTimestamp())); + + // service + String serviceName = namingControl.formatServiceName(logData.getService()); + String serviceId = IDManager.ServiceID.buildId(serviceName, true); + log.setServiceId(serviceId); + // service instance + if (StringUtil.isNotEmpty(logData.getServiceInstance())) { + log.setServiceInstanceId(IDManager.ServiceInstanceID.buildId( + serviceId, + namingControl.formatInstanceName(logData.getServiceInstance()) + )); + } + // endpoint + if (StringUtil.isNotEmpty(logData.getEndpoint())) { + String endpointName = namingControl.formatEndpointName(serviceName, logData.getEndpoint()); + log.setEndpointId(IDManager.EndpointID.buildId(serviceId, endpointName)); + } + // trace + TraceContext traceContext = logData.getTraceContext(); + if (StringUtil.isNotEmpty(traceContext.getTraceId())) { + log.setTraceId(traceContext.getTraceId()); + } + if (StringUtil.isNotEmpty(traceContext.getTraceSegmentId())) { + log.setTraceSegmentId(traceContext.getTraceSegmentId()); + log.setSpanId(traceContext.getSpanId()); + } + // content + if (body.hasText()) { + log.setContentType(ContentType.TEXT); + log.setContent(body.getText().getText()); + } else if (body.hasYaml()) { + 
log.setContentType(ContentType.YAML); + log.setContent(body.getYaml().getYaml()); + } else if (body.hasJson()) { + log.setContentType(ContentType.JSON); + log.setContent(body.getJson().getJson()); + } else if (extraLog != null) { + log.setContentType(ContentType.JSON); + log.setContent(toJSON(extraLog)); + } + if (logData.getTags().getDataCount() > 0) { + log.setTagsRawData(logData.getTags().toByteArray()); + } + log.getTags().addAll(appendSearchableTags(logData)); + return this; + } + + private Collection<Tag> appendSearchableTags(LogData.Builder logData) { + HashSet<Tag> logTags = new HashSet<>(); + logData.getTags().getDataList().forEach(tag -> { + if (searchableTagKeys.contains(tag.getKey())) { + final Tag logTag = new Tag(tag.getKey(), tag.getValue()); + if (tag.getValue().length() > Tag.TAG_LENGTH || logTag.toString().length() > Tag.TAG_LENGTH) { + if (LOGGER.isDebugEnabled()) { + LOGGER.debug("Log tag : {} length > : {}, dropped", logTag, Tag.TAG_LENGTH); + } + return; + } + logTags.add(logTag); + } + }); + return logTags; + } + + private void addAutocompleteTags() { + log.getTags().forEach(tag -> { + TagAutocomplete tagAutocomplete = new TagAutocomplete(); + tagAutocomplete.setTagKey(tag.getKey()); + tagAutocomplete.setTagValue(tag.getValue()); + tagAutocomplete.setTagType(TagType.LOG); + tagAutocomplete.setTimeBucket(TimeBucket.getMinuteTimeBucket(log.getTimestamp())); + sourceReceiver.receive(tagAutocomplete); + }); + } + + public static class Factory implements LogSinkListenerFactory { + private final SourceReceiver sourceReceiver; + private final NamingControl namingControl; + private final List<String> searchableTagKeys; + + public Factory(ModuleManager moduleManager, LogAnalyzerModuleConfig moduleConfig) { + this.sourceReceiver = moduleManager.find(CoreModule.NAME) + .provider() + .getService(SourceReceiver.class); + this.namingControl = moduleManager.find(CoreModule.NAME) + .provider() + .getService(NamingControl.class); + ConfigService configService 
= moduleManager.find(CoreModule.NAME) + .provider() + .getService(ConfigService.class); + this.searchableTagKeys = Arrays.asList(configService.getSearchableLogsTags().split(Const.COMMA)); + } + + @Override + public RecordSinkListener create() { + return new RecordSinkListener(sourceReceiver, namingControl, searchableTagKeys); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/TrafficSinkListener.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/TrafficSinkListener.java new file mode 100644 index 000000000000..49d426d60176 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/provider/log/listener/TrafficSinkListener.java @@ -0,0 +1,118 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.provider.log.listener; + +import com.google.protobuf.Message; +import lombok.RequiredArgsConstructor; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.server.core.CoreModule; +import org.apache.skywalking.oap.server.core.analysis.DownSampling; +import org.apache.skywalking.oap.server.core.analysis.IDManager; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.TimeBucket; +import org.apache.skywalking.oap.server.core.config.NamingControl; +import org.apache.skywalking.oap.server.core.source.EndpointMeta; +import org.apache.skywalking.oap.server.core.source.ServiceInstanceUpdate; +import org.apache.skywalking.oap.server.core.source.ServiceMeta; +import org.apache.skywalking.oap.server.core.source.SourceReceiver; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.util.StringUtil; + +import static java.util.Objects.nonNull; + +/** + * Generates service, service instance, and endpoint traffic from log data.
+ */ +@RequiredArgsConstructor +public class TrafficSinkListener implements LogSinkListener { + private final SourceReceiver sourceReceiver; + private final NamingControl namingControl; + + private ServiceMeta serviceMeta; + private ServiceInstanceUpdate instanceMeta; + private EndpointMeta endpointMeta; + + @Override + public void build() { + if (nonNull(serviceMeta)) { + sourceReceiver.receive(serviceMeta); + } + if (nonNull(instanceMeta)) { + sourceReceiver.receive(instanceMeta); + } + if (nonNull(endpointMeta)) { + sourceReceiver.receive(endpointMeta); + } + } + + @Override + public LogSinkListener parse(final LogData.Builder logData, + final Message extraLog) { + Layer layer; + if (StringUtil.isNotEmpty(logData.getLayer())) { + layer = Layer.valueOf(logData.getLayer()); + } else { + layer = Layer.GENERAL; + } + final long timeBucket = TimeBucket.getTimeBucket(System.currentTimeMillis(), DownSampling.Minute); + // to service traffic + String serviceName = namingControl.formatServiceName(logData.getService()); + String serviceId = IDManager.ServiceID.buildId(serviceName, layer.isNormal()); + serviceMeta = new ServiceMeta(); + serviceMeta.setName(serviceName); + serviceMeta.setLayer(layer); + serviceMeta.setTimeBucket(timeBucket); + // to service instance traffic + if (StringUtil.isNotEmpty(logData.getServiceInstance())) { + instanceMeta = new ServiceInstanceUpdate(); + instanceMeta.setServiceId(serviceId); + instanceMeta.setName(namingControl.formatInstanceName(logData.getServiceInstance())); + instanceMeta.setTimeBucket(timeBucket); + + } + // to endpoint traffic + if (StringUtil.isNotEmpty(logData.getEndpoint())) { + endpointMeta = new EndpointMeta(); + endpointMeta.setServiceName(serviceName); + endpointMeta.setServiceNormal(layer.isNormal()); // keep the endpoint's service id consistent with serviceId above + endpointMeta.setEndpoint(namingControl.formatEndpointName(serviceName, logData.getEndpoint())); + endpointMeta.setTimeBucket(timeBucket); + } + return this; + } + + public static class
Factory implements LogSinkListenerFactory { + private final SourceReceiver sourceReceiver; + private final NamingControl namingControl; + + public Factory(ModuleManager moduleManager, LogAnalyzerModuleConfig moduleConfig) { + this.sourceReceiver = moduleManager.find(CoreModule.NAME) + .provider() + .getService(SourceReceiver.class); + this.namingControl = moduleManager.find(CoreModule.NAME) + .provider() + .getService(NamingControl.class); + } + + @Override + public LogSinkListener create() { + return new TrafficSinkListener(sourceReceiver, namingControl); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/spi/LALSourceTypeProvider.java b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/spi/LALSourceTypeProvider.java new file mode 100644 index 000000000000..bc5a3e672657 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/v2/spi/LALSourceTypeProvider.java @@ -0,0 +1,53 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.spi; + +import org.apache.skywalking.oap.server.core.analysis.Layer; + +/** + * SPI for receiver plugins to declare the Java type of the {@code extraLog} + * they pass to LAL via {@code ILogAnalyzerService.doAnalysis(LogData, Message)}. + * + * <p>The LAL compiler uses this at compile time to generate optimized direct + * getter calls instead of runtime reflection. Implementations are discovered + * via {@link java.util.ServiceLoader} and matched by {@link Layer}. + * + * <p>Per-rule type resolution order: + * <ol> + * <li>DSL parser ({@code json{}}, {@code yaml{}}, {@code text{}}) — parser wins</li> + * <li>Explicit {@code extraLogType} declared in the YAML rule config</li> + * <li>This SPI — acts as the default {@code extraLogType} for a layer</li> + * <li>Compile error if none of the above and the rule accesses {@code parsed.*}</li> + * </ol> + * + * <p>Receiver plugins register implementations in + * {@code META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider}. + */ +public interface LALSourceTypeProvider { + /** + * The layer this provider supplies type information for. + */ + Layer layer(); + + /** + * The Java type passed as {@code extraLog} by the receiver plugin for + * this layer. The compiler resolves getter chains on this type at + * compile time. 
+ */ + Class<?> extraLogType(); +} diff --git a/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleDefine b/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleDefine index 54d5a91d08b4..ac8e56dc7f81 100644 --- a/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleDefine +++ b/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleDefine @@ -16,4 +16,4 @@ # # -org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule \ No newline at end of file +org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule \ No newline at end of file diff --git a/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleProvider b/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleProvider index 8f00b261f68d..752d6b44bba5 100644 --- a/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleProvider +++ b/oap-server/analyzer/log-analyzer/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleProvider @@ -15,4 +15,4 @@ # limitations under the License. 
# -org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider \ No newline at end of file +org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider \ No newline at end of file diff --git a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALClassGeneratorTest.java b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALClassGeneratorTest.java new file mode 100644 index 000000000000..b3bf2bdbefc1 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALClassGeneratorTest.java @@ -0,0 +1,672 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import javassist.ClassPool; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; + +class LALClassGeneratorTest { + + private LALClassGenerator generator; + + @BeforeEach + void setUp() { + generator = new LALClassGenerator(new ClassPool(true)); + } + + @Test + void compileMinimalFilter() throws Exception { + final LalExpression expr = generator.compile( + "filter { sink {} }"); + assertNotNull(expr); + } + + @Test + void compileJsonParserFilter() throws Exception { + final LalExpression expr = generator.compile( + "filter { json {} sink {} }"); + assertNotNull(expr); + } + + @Test + void compileJsonWithExtractor() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " service parsed.service as String\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileTextWithRegexp() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " text {\n" + + " regexp '(?<timestamp>\\\\d+) (?<level>\\\\w+) (?<msg>.*)'\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileSinkWithEnforcer() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " sink {\n" + + " enforcer {}\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + @Test + void generateSourceReturnsJavaCode() { + final String source = generator.generateSource( + "filter { json {} sink {} }"); + assertNotNull(source); + 
org.junit.jupiter.api.Assertions.assertTrue( + source.contains("filterSpec.json(ctx)")); + org.junit.jupiter.api.Assertions.assertTrue( + source.contains("filterSpec.sink(ctx)")); + } + + // ==================== Error handling tests ==================== + + @Test + void emptyScriptThrows() { + // Demo error: LAL script parsing failed: 1:0 mismatched input '<EOF>' + // expecting 'filter' + assertThrows(Exception.class, () -> generator.compile("")); + } + + @Test + void missingFilterKeywordThrows() { + // Demo error: LAL script parsing failed: 1:0 extraneous input 'json' + // expecting 'filter' + assertThrows(Exception.class, () -> generator.compile("json {}")); + } + + @Test + void unclosedBraceThrows() { + // Demo error: LAL script parsing failed: 1:15 mismatched input '<EOF>' + // expecting '}' + assertThrows(Exception.class, + () -> generator.compile("filter { json {")); + } + + @Test + void invalidStatementInFilterThrows() { + // Demo error: LAL script parsing failed: 1:9 extraneous input 'invalid' + // expecting {'text', 'json', 'yaml', 'extractor', 'sink', 'abort', 'if', '}'} + assertThrows(Exception.class, + () -> generator.compile("filter { invalid {} }")); + } + + // ==================== tag() function in conditions ==================== + + @Test + void compileTagFunctionInCondition() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " if (tag(\"LOG_KIND\") == \"SLOW_SQL\") {\n" + + " sink {}\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + @Test + void generateSourceTagFunctionEmitsTagValue() { + final String source = generator.generateSource( + "filter {\n" + + " if (tag(\"LOG_KIND\") == \"SLOW_SQL\") {\n" + + " sink {}\n" + + " }\n" + + "}"); + // Should use tagValue helper, not emit null + assertTrue(source.contains("h.tagValue(\"LOG_KIND\")"), + "Expected tagValue call but got: " + source); + assertTrue(source.contains("SLOW_SQL")); + } + + @Test + void compileTagFunctionNestedInExtractor() 
throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " if (tag(\"LOG_KIND\") == \"NET_PROFILING_SAMPLED_TRACE\") {\n" + + " service parsed.service as String\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + // ==================== Safe navigation ==================== + + @Test + void compileSafeNavigationFieldAccess() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " service parsed?.response?.service as String\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileSafeNavigationMethodCalls() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " service parsed?.flags?.toString()?.trim() as String\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void generateSourceSafeNavMethodEmitsSpecificHelper() { + final String source = generator.generateSource( + "filter {\n" + + " json {}\n" + + " if (parsed?.flags?.toString()) {\n" + + " sink {}\n" + + " }\n" + + "}"); + // Safe method calls should emit specific helpers, not generic safeCall + assertTrue(source.contains("h.toString("), + "Expected toString helper for safe nav method but got: " + source); + assertTrue(source.contains("h.isNotEmpty("), + "Expected isNotEmpty for ExprCondition but got: " + source); + } + + // ==================== ProcessRegistry static calls ==================== + + @Test + void compileProcessRegistryCall() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " service ProcessRegistry.generateVirtualLocalProcess(" + + "parsed.service as String, parsed.serviceInstance as String" + + ") as String\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void 
compileProcessRegistryWithThreeArgs() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " service ProcessRegistry.generateVirtualRemoteProcess(" + + "parsed.service as String, parsed.serviceInstance as String, " + + "parsed.address as String) as String\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + // ==================== Metrics block ==================== + + @Test + void compileMetricsBlock() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " metrics {\n" + + " timestamp log.timestamp as Long\n" + + " labels level: parsed.level, service: log.service\n" + + " name \"nginx_error_log_count\"\n" + + " value 1\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + // ==================== SlowSql block ==================== + + @Test + void compileSlowSqlBlock() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " slowSql {\n" + + " id parsed.id as String\n" + + " statement parsed.statement as String\n" + + " latency parsed.query_time as Long\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + // ==================== SampledTrace block ==================== + + @Test + void compileSampledTraceBlock() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " sampledTrace {\n" + + " latency parsed.latency as Long\n" + + " uri parsed.uri as String\n" + + " reason parsed.reason as String\n" + + " detectPoint parsed.detect_point as String\n" + + " componentId 49\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileSampledTraceWithIfBlocks() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + 
+ " sampledTrace {\n" + + " latency parsed.latency as Long\n" + + " if (parsed.client_process.process_id as String != \"\") {\n" + + " processId parsed.client_process.process_id as String\n" + + " } else {\n" + + " processId parsed.fallback as String\n" + + " }\n" + + " detectPoint parsed.detect_point as String\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + // ==================== Sampler / rateLimit ==================== + + @Test + void compileSamplerWithRateLimit() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " sink {\n" + + " sampler {\n" + + " rateLimit('service:error') {\n" + + " rpm 6000\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileSamplerWithInterpolatedId() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " sink {\n" + + " sampler {\n" + + " rateLimit(\"${log.service}:${parsed.code}\") {\n" + + " rpm 6000\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + @Test + void parseInterpolatedIdParts() { + // Verify the parser correctly splits interpolated strings + final java.util.List<LALScriptModel.InterpolationPart> parts = + LALScriptParser.parseInterpolation( + "${log.service}:${parsed.code}"); + assertNotNull(parts); + // expr, literal ":", expr + assertEquals(3, parts.size()); + assertFalse(parts.get(0).isLiteral()); + assertTrue(parts.get(0).getExpression().isLogRef()); + assertTrue(parts.get(1).isLiteral()); + assertEquals(":", parts.get(1).getLiteral()); + assertFalse(parts.get(2).isLiteral()); + assertTrue(parts.get(2).getExpression().isParsedRef()); + } + + @Test + void compileSamplerWithSafeNavInterpolatedId() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " sink {\n" + + " sampler {\n" + + " 
rateLimit(\"${log.service}:${parsed?.commonProperties?.responseFlags?.toString()}\") {\n" + + " rpm 6000\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileSamplerWithIfAndRateLimit() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " sink {\n" + + " sampler {\n" + + " if (parsed?.error) {\n" + + " rateLimit('svc:err') {\n" + + " rpm 6000\n" + + " }\n" + + " } else {\n" + + " rateLimit('svc:ok') {\n" + + " rpm 3000\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + // ==================== If blocks in extractor/sink ==================== + + @Test + void compileIfInsideExtractor() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " if (parsed?.status) {\n" + + " tag 'http.status_code': parsed.status\n" + + " }\n" + + " tag 'response.flag': parsed.flags\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileIfInsideExtractorWithTagCondition() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " if (tag(\"LOG_KIND\") == \"NET_PROFILING\") {\n" + + " service parsed.service as String\n" + + " layer parsed.layer as String\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + // ==================== Complex production-like rules ==================== + + @Test + void compileNginxAccessLogRule() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " if (tag(\"LOG_KIND\") == \"NGINX_ACCESS_LOG\") {\n" + + " text {\n" + + " regexp '.+\"(?<request>.+)\"(?<status>\\\\d{3}).+'\n" + + " }\n" + + " extractor {\n" + + " if (parsed.status) {\n" + + " tag 'http.status_code': parsed.status\n" + + " }\n" + + " }\n" + + " sink {}\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + @Test + void 
compileSlowSqlProductionRule() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " if (tag(\"LOG_KIND\") == \"SLOW_SQL\") {\n" + + " layer parsed.layer as String\n" + + " service parsed.service as String\n" + + " timestamp parsed.time as String\n" + + " slowSql {\n" + + " id parsed.id as String\n" + + " statement parsed.statement as String\n" + + " latency parsed.query_time as Long\n" + + " }\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileEnvoyAlsAbortRuleFailsWithoutExtraLogType() { + // envoy-als pattern has no parser (json/yaml/text) — falls back to LogData + // but LogData.Builder doesn't have getResponse(), so compile fails + assertThrows(IllegalArgumentException.class, () -> generator.compile( + "filter {\n" + + " if (parsed?.response?.responseCode?.value as Integer < 400" + + " && !parsed?.commonProperties?.responseFlags?.toString()?.trim()) {\n" + + " abort {}\n" + + " }\n" + + " extractor {\n" + + " if (parsed?.response?.responseCode) {\n" + + " tag 'status.code': parsed?.response?.responseCode?.value\n" + + " }\n" + + " tag 'response.flag': parsed?.commonProperties?.responseFlags\n" + + " }\n" + + " sink {}\n" + + "}")); + } + + @Test + void compileNoParserFallsBackToLogDataProto() throws Exception { + // No parser (json/yaml/text) and no extraLogType — should use LogData.Builder + final String dsl = + "filter {\n" + + " extractor {\n" + + " service parsed.service as String\n" + + " instance parsed.serviceInstance as String\n" + + " }\n" + + " sink {}\n" + + "}"; + final String source = generator.generateSource(dsl); + // Should generate getter chains on h.ctx().log() + assertTrue(source.contains("h.ctx().log().getService()"), + "Expected h.ctx().log().getService() but got: " + source); + assertTrue(source.contains("h.ctx().log().getServiceInstance()"), + "Expected h.ctx().log().getServiceInstance() but got: " + source); + // 
No _p variable (LogData doesn't need it) + assertFalse(source.contains("_p"), + "Should NOT have _p variable for LogData fallback but got: " + source); + // Verify it compiles + final LalExpression expr = generator.compile(dsl); + assertNotNull(expr); + } + + @Test + void compileExtraLogTypeGeneratesDirectGetterCalls() throws Exception { + generator.setExtraLogType( + io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry.class); + final String dsl = + "filter {\n" + + " if (parsed?.response?.responseCode?.value as Integer < 400" + + " && !parsed?.commonProperties?.responseFlags?.toString()?.trim()) {\n" + + " abort {}\n" + + " }\n" + + " extractor {\n" + + " if (parsed?.response?.responseCode) {\n" + + " tag 'status.code': parsed?.response?.responseCode?.value\n" + + " }\n" + + " tag 'response.flag': parsed?.commonProperties?.responseFlags\n" + + " }\n" + + " sink {}\n" + + "}"; + final String source = generator.generateSource(dsl); + final String fqcn = + "io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry"; + // Proto field access uses _p local variable, not inline cast + assertTrue(source.contains( + fqcn + " _p = (" + fqcn + ") h.ctx().extraLog()"), + "Expected _p local variable for extraLog cast but got: " + source); + // Intermediate fields cached in _tN local variables + assertTrue(source.contains("_p.getResponse()"), + "Expected _p.getResponse() via cached variable but got: " + source); + assertTrue(source.contains("_p.getCommonProperties()"), + "Expected _p.getCommonProperties() via cached variable but got: " + source); + assertFalse(source.contains("getAt"), + "Should NOT contain getAt calls but got: " + source); + // Safe navigation: null checks with == null on local variables + assertTrue(source.contains("_p == null ? null :"), + "Expected null checks for ?. 
safe navigation but got: " + source); + // Dedup: _tN variables declared once and reused + assertTrue(source.contains("_t0") && source.contains("_t1"), + "Expected _tN local variables for chain dedup but got: " + source); + // Numeric comparison: direct primitive via _tN variable, no h.toLong() + assertTrue(source.contains(".getValue() < 400"), + "Expected direct primitive comparison without boxing but got: " + source); + assertFalse(source.contains("h.toLong"), + "Should NOT use h.toLong for primitive int comparison but got: " + source); + // Single-tag: uses tag(ctx, String, String), not singletonMap + assertTrue(source.contains("_e.tag(h.ctx(), \"status.code\""), + "Expected tag(ctx, String, String) overload but got: " + source); + assertFalse(source.contains("singletonMap"), + "Should NOT use singletonMap for single tags but got: " + source); + + // Verify it compiles + final LalExpression expr = generator.compile(dsl); + assertNotNull(expr); + } + + // ==================== Else-if chain ==================== + + @Test + void compileElseIfChain() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " if (parsed.a) {\n" + + " sink {}\n" + + " } else if (parsed.b) {\n" + + " sink {}\n" + + " } else if (parsed.c) {\n" + + " sink {}\n" + + " } else {\n" + + " sink {}\n" + + " }\n" + + "}"); + assertNotNull(expr); + } + + @Test + void compileElseIfInSampledTrace() throws Exception { + final LalExpression expr = generator.compile( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " sampledTrace {\n" + + " latency parsed.latency as Long\n" + + " if (parsed.client_process.process_id as String != \"\") {\n" + + " processId parsed.client_process.process_id as String\n" + + " } else if (parsed.client_process.local as Boolean) {\n" + + " processId ProcessRegistry.generateVirtualLocalProcess(" + + "parsed.service as String, parsed.serviceInstance as String) as String\n" + + " } else {\n" + + " processId 
ProcessRegistry.generateVirtualRemoteProcess(" + + "parsed.service as String, parsed.serviceInstance as String, " + + "parsed.client_process.address as String) as String\n" + + " }\n" + + " detectPoint parsed.detect_point as String\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + assertNotNull(expr); + } + + @Test + void generateSourceElseIfEmitsNestedBranches() { + final String source = generator.generateSource( + "filter {\n" + + " json {}\n" + + " if (parsed.a) {\n" + + " sink {}\n" + + " } else if (parsed.b) {\n" + + " sink {}\n" + + " } else {\n" + + " sink {}\n" + + " }\n" + + "}"); + // The else-if should produce a nested if inside else + assertTrue(source.contains("else"), + "Expected else branch but got: " + source); + // Both condition branches should appear + int ifCount = 0; + for (int i = 0; i < source.length() - 2; i++) { + if (source.substring(i, i + 3).equals("if ")) { + ifCount++; + } + } + assertTrue(ifCount >= 2, + "Expected at least 2 if-conditions for else-if chain but got " + + ifCount + " in: " + source); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALExpressionExecutionTest.java b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALExpressionExecutionTest.java new file mode 100644 index 000000000000..fb4453843f59 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALExpressionExecutionTest.java @@ -0,0 +1,585 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import java.io.File; +import java.lang.reflect.Field; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.ServiceLoader; +import java.util.stream.Collectors; + +import com.google.protobuf.Message; +import com.google.protobuf.util.JsonFormat; +import org.apache.skywalking.apm.network.common.v3.KeyStringValuePair; +import org.apache.skywalking.apm.network.logging.v3.JSONLog; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.apm.network.logging.v3.LogDataBody; +import org.apache.skywalking.apm.network.logging.v3.LogTags; +import org.apache.skywalking.apm.network.logging.v3.TextLog; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec; +import org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider; +import org.apache.skywalking.oap.server.analyzer.provider.trace.parser.listener.SampledTraceBuilder; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider; +import 
org.apache.skywalking.oap.server.core.CoreModule; +import org.apache.skywalking.oap.server.core.config.NamingControl; +import org.apache.skywalking.oap.server.core.config.group.EndpointNameGrouping; +import org.apache.skywalking.oap.server.core.config.ConfigService; +import org.apache.skywalking.oap.server.core.source.SourceReceiver; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.module.ModuleProviderHolder; +import org.apache.skywalking.oap.server.library.module.ModuleServiceHolder; +import org.apache.skywalking.library.kubernetes.ObjectID; +import org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry; +import org.apache.skywalking.oap.server.core.analysis.worker.MetricsStreamProcessor; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.DynamicTest; +import org.junit.jupiter.api.TestFactory; +import org.mockito.MockedStatic; +import org.mockito.Mockito; +import org.yaml.snakeyaml.Yaml; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.Mockito.doNothing; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +/** + * Data-driven runtime execution tests for compiled LAL expressions. + * + * <p>Loads LAL rules from {@code .yaml} files and mock input from + * corresponding {@code .input.data} files in the {@code test-lal/} + * directory tree. For each rule that has a matching input entry, + * compiles the DSL via {@link LALClassGenerator}, executes it against + * a real {@link FilterSpec} + {@link ExecutionContext}, and asserts on the + * expected state defined in the {@code expect} block. 
+ */ +class LALExpressionExecutionTest { + + private static MockedStatic<K8sInfoRegistry> K8S_MOCK; + private static MockedStatic<MetricsStreamProcessor> MSP_MOCK; + + @BeforeAll + static void setupMocks() { + // Mock K8sInfoRegistry for ProcessRegistry.generateVirtualRemoteProcess() + final K8sInfoRegistry mockK8s = mock(K8sInfoRegistry.class); + when(mockK8s.findPodByIP(anyString())).thenReturn(ObjectID.EMPTY); + when(mockK8s.findServiceByIP(anyString())).thenReturn(ObjectID.EMPTY); + K8S_MOCK = Mockito.mockStatic(K8sInfoRegistry.class); + K8S_MOCK.when(K8sInfoRegistry::getInstance).thenReturn(mockK8s); + + // Mock MetricsStreamProcessor for ProcessRegistry.generateVirtualProcess() + final MetricsStreamProcessor mockMsp = mock(MetricsStreamProcessor.class); + doNothing().when(mockMsp).in(any()); + MSP_MOCK = Mockito.mockStatic(MetricsStreamProcessor.class); + MSP_MOCK.when(MetricsStreamProcessor::getInstance).thenReturn(mockMsp); + } + + @AfterAll + static void teardownMocks() { + if (K8S_MOCK != null) { + K8S_MOCK.close(); + } + if (MSP_MOCK != null) { + MSP_MOCK.close(); + } + } + + @TestFactory + Collection<DynamicTest> lalExecutionTests() throws Exception { + final List<DynamicTest> tests = new ArrayList<>(); + final FilterSpec filterSpec = buildFilterSpec(); + final LALClassGenerator generator = new LALClassGenerator(); + final Yaml yaml = new Yaml(); + + final Path testLalDir = findTestLalDir(); + if (testLalDir == null) { + return tests; + } + + // Scan subdirectories (oap-cases/, feature-cases/) + final File[] subdirs = testLalDir.toFile().listFiles(File::isDirectory); + if (subdirs == null) { + return tests; + } + + for (final File subdir : subdirs) { + final File[] files = subdir.listFiles(); + if (files == null) { + continue; + } + for (final File yamlFile : files) { + if (!yamlFile.getName().endsWith(".yaml") + && !yamlFile.getName().endsWith(".yml")) { + continue; + } + + // Look for matching .input.data file + final String baseName = 
yamlFile.getName() + .replaceAll("\\.(yaml|yml)$", ""); + final File inputDataFile = new File(subdir, + baseName + ".input.data"); + if (!inputDataFile.exists()) { + continue; + } + + // Parse the YAML rules + final String yamlContent = + Files.readString(yamlFile.toPath()); + final Map<String, Object> config = yaml.load(yamlContent); + if (config == null || !config.containsKey("rules")) { + continue; + } + @SuppressWarnings("unchecked") + final List<Map<String, String>> rules = + (List<Map<String, String>>) config.get("rules"); + if (rules == null) { + continue; + } + + // Parse the input data + final String inputContent = + Files.readString(inputDataFile.toPath()); + @SuppressWarnings("unchecked") + final Map<String, Object> inputData = + yaml.load(inputContent); + if (inputData == null) { + continue; + } + + final String category = subdir.getName(); + for (final Map<String, String> rule : rules) { + final String ruleName = rule.get("name"); + final String dsl = rule.get("dsl"); + final String ruleLayer = rule.get("layer"); + final String extraLogType = rule.get("extraLogType"); + if (ruleName == null || dsl == null) { + continue; + } + final Object ruleInput = inputData.get(ruleName); + if (ruleInput == null) { + continue; + } + + if (ruleInput instanceof List) { + @SuppressWarnings("unchecked") + final List<Map<String, Object>> inputs = + (List<Map<String, Object>>) ruleInput; + for (int i = 0; i < inputs.size(); i++) { + final Map<String, Object> input = inputs.get(i); + final int idx = i; + tests.add(DynamicTest.dynamicTest( + category + "/" + baseName + " | " + + ruleName + " [" + idx + "]", + () -> executeAndAssert( + generator, filterSpec, + ruleName + " [" + idx + "]", + dsl, ruleLayer, extraLogType, input) + )); + } + } else { + @SuppressWarnings("unchecked") + final Map<String, Object> input = + (Map<String, Object>) ruleInput; + tests.add(DynamicTest.dynamicTest( + category + "/" + baseName + " | " + ruleName, + () -> executeAndAssert( + generator, 
filterSpec, ruleName, + dsl, ruleLayer, extraLogType, input) + )); + } + } + } + } + return tests; + } + + private void executeAndAssert( + final LALClassGenerator generator, + final FilterSpec filterSpec, + final String ruleName, + final String dsl, + final String ruleLayer, + final String extraLogType, + final Map<String, Object> input) throws Exception { + if (extraLogType != null) { + generator.setExtraLogType(Class.forName(extraLogType)); + } else if (ruleLayer != null) { + // Resolve via LALSourceTypeProvider SPI + generator.setExtraLogType(spiExtraLogTypes().get(ruleLayer)); + } else { + generator.setExtraLogType(null); + } + final LalExpression expr = generator.compile(dsl); + final LogData.Builder logData = buildLogData(input); + if (ruleLayer != null) { + logData.setLayer(ruleLayer); + } + final ExecutionContext ctx = new ExecutionContext(); + ctx.log(logData); + + // Set proto extraLog if specified + final Message extraLog = buildExtraLog(input); + if (extraLog != null) { + ctx.extraLog(extraLog); + } + + expr.execute(filterSpec, ctx); + + // Assert expected values + @SuppressWarnings("unchecked") + final Map<String, Object> expect = + (Map<String, Object>) input.get("expect"); + if (expect == null) { + return; + } + + for (final Map.Entry<String, Object> entry : expect.entrySet()) { + final String key = entry.getKey(); + final String expected = String.valueOf(entry.getValue()); + + switch (key) { + case "service": + assertEquals(expected, ctx.log().getService(), + ruleName + ": service mismatch"); + break; + case "instance": + assertEquals(expected, + ctx.log().getServiceInstance(), + ruleName + ": serviceInstance mismatch"); + break; + case "endpoint": + assertEquals(expected, ctx.log().getEndpoint(), + ruleName + ": endpoint mismatch"); + break; + case "layer": + assertEquals(expected, ctx.log().getLayer(), + ruleName + ": layer mismatch"); + break; + case "save": + assertEquals(Boolean.parseBoolean(expected), + ctx.shouldSave(), + ruleName + ": 
shouldSave mismatch"); + break; + case "abort": + assertEquals(Boolean.parseBoolean(expected), + ctx.shouldAbort(), + ruleName + ": shouldAbort mismatch"); + break; + case "timestamp": + assertEquals(Long.parseLong(expected), + ctx.log().getTimestamp(), + ruleName + ": timestamp mismatch"); + break; + default: + if (key.startsWith("sampledTrace.")) { + assertSampledTrace( + ruleName, key, expected, ctx); + } else if (key.startsWith("tag.")) { + final String tagKey = key.substring(4); + final List<KeyStringValuePair> tags = + ctx.log().getTags().getDataList(); + assertTrue(tags.stream().anyMatch( + t -> tagKey.equals(t.getKey()) + && expected.equals(t.getValue())), + ruleName + ": expected tag " + + tagKey + "=" + expected + + ", got: " + tags.stream() + .map(t -> t.getKey() + "=" + t.getValue()) + .collect(Collectors.joining(", "))); + } + break; + } + } + } + + // ==================== SampledTrace assertions ==================== + + private static void assertSampledTrace( + final String ruleName, + final String key, + final String expected, + final ExecutionContext ctx) { + final SampledTraceBuilder builder = + ctx.sampledTraceBuilder(); + assertTrue(builder != null, + ruleName + ": sampledTraceBuilder is null" + + " but expected " + key + "=" + expected); + + final String field = key.substring("sampledTrace.".length()); + switch (field) { + case "latency": + assertEquals(Long.parseLong(expected), + builder.getLatency(), + ruleName + ": sampledTrace.latency mismatch"); + break; + case "uri": + assertEquals(expected, builder.getUri(), + ruleName + ": sampledTrace.uri mismatch"); + break; + case "reason": + assertEquals(expected, + builder.getReason().name(), + ruleName + ": sampledTrace.reason mismatch"); + break; + case "processId": + assertEquals(expected, builder.getProcessId(), + ruleName + ": sampledTrace.processId mismatch"); + break; + case "destProcessId": + assertEquals(expected, builder.getDestProcessId(), + ruleName + ": sampledTrace.destProcessId 
mismatch"); + break; + case "detectPoint": + assertEquals(expected, + builder.getDetectPoint().name(), + ruleName + ": sampledTrace.detectPoint mismatch"); + break; + case "componentId": + assertEquals(Integer.parseInt(expected), + builder.getComponentId(), + ruleName + + ": sampledTrace.componentId mismatch"); + break; + case "traceId": + assertEquals(expected, builder.getTraceId(), + ruleName + ": sampledTrace.traceId mismatch"); + break; + case "serviceName": + assertEquals(expected, builder.getServiceName(), + ruleName + + ": sampledTrace.serviceName mismatch"); + break; + case "serviceInstanceName": + assertEquals(expected, + builder.getServiceInstanceName(), + ruleName + + ": sampledTrace.serviceInstanceName" + + " mismatch"); + break; + case "timestamp": + assertEquals(Long.parseLong(expected), + builder.getTimestamp(), + ruleName + + ": sampledTrace.timestamp mismatch"); + break; + default: + throw new IllegalArgumentException( + ruleName + ": unknown sampledTrace field: " + + field); + } + } + + // ==================== LogData builder ==================== + + @SuppressWarnings("unchecked") + private static LogData.Builder buildLogData( + final Map<String, Object> input) { + final LogData.Builder builder = LogData.newBuilder(); + + final String service = (String) input.get("service"); + if (service != null) { + builder.setService(service); + } + + final String instance = (String) input.get("instance"); + if (instance != null) { + builder.setServiceInstance(instance); + } + + final String traceId = (String) input.get("trace-id"); + if (traceId != null) { + builder.setTraceContext( + org.apache.skywalking.apm.network.logging.v3.TraceContext + .newBuilder().setTraceId(traceId)); + } + + final Object tsObj = input.get("timestamp"); + if (tsObj != null) { + builder.setTimestamp(Long.parseLong(String.valueOf(tsObj))); + } + + final String bodyType = (String) input.get("body-type"); + final String body = (String) input.get("body"); + + if ("json".equals(bodyType) 
&& body != null) { + builder.setBody(LogDataBody.newBuilder() + .setJson(JSONLog.newBuilder().setJson(body))); + } else if ("text".equals(bodyType) && body != null) { + builder.setBody(LogDataBody.newBuilder() + .setText(TextLog.newBuilder().setText(body))); + } + + final Map<String, String> tags = + (Map<String, String>) input.get("tags"); + if (tags != null && !tags.isEmpty()) { + final LogTags.Builder tagsBuilder = LogTags.newBuilder(); + for (final Map.Entry<String, String> tag : tags.entrySet()) { + tagsBuilder.addData(KeyStringValuePair.newBuilder() + .setKey(tag.getKey()) + .setValue(tag.getValue())); + } + builder.setTags(tagsBuilder); + } + + return builder; + } + + // ==================== Proto extraLog builder ==================== + + @SuppressWarnings("unchecked") + private static Message buildExtraLog( + final Map<String, Object> input) throws Exception { + final Map<String, String> extraLog = + (Map<String, String>) input.get("extra-log"); + if (extraLog == null) { + return null; + } + + final String protoClass = extraLog.get("proto-class"); + final String protoJson = extraLog.get("proto-json"); + if (protoClass == null || protoJson == null) { + return null; + } + + final Class<?> clazz = Class.forName(protoClass); + final Message.Builder builder = (Message.Builder) + clazz.getMethod("newBuilder").invoke(null); + JsonFormat.parser() + .ignoringUnknownFields() + .merge(protoJson, builder); + return builder.build(); + } + + // ==================== SPI lookup ==================== + + private Map<String, Class<?>> spiTypes; + + private Map<String, Class<?>> spiExtraLogTypes() { + if (spiTypes == null) { + spiTypes = new HashMap<>(); + for (final LALSourceTypeProvider p : + ServiceLoader.load(LALSourceTypeProvider.class)) { + spiTypes.put(p.layer().name(), p.extraLogType()); + } + } + return spiTypes; + } + + // ==================== FilterSpec setup ==================== + + private FilterSpec buildFilterSpec() throws Exception { + final ModuleManager 
manager = mock(ModuleManager.class); + setInternalField(manager, "isInPrepareStage", false); + + when(manager.find(anyString())) + .thenReturn(mock(ModuleProviderHolder.class)); + + final ModuleProviderHolder logHolder = + mock(ModuleProviderHolder.class); + final LogAnalyzerModuleProvider logProvider = + mock(LogAnalyzerModuleProvider.class); + when(logProvider.getMetricConverts()) + .thenReturn(Collections.emptyList()); + when(logHolder.provider()).thenReturn(logProvider); + when(manager.find(LogAnalyzerModule.NAME)).thenReturn(logHolder); + + final ModuleProviderHolder coreHolder = + mock(ModuleProviderHolder.class); + final ModuleServiceHolder coreServices = + mock(ModuleServiceHolder.class); + when(coreHolder.provider()).thenReturn(coreServices); + when(manager.find(CoreModule.NAME)).thenReturn(coreHolder); + + when(coreServices.getService(SourceReceiver.class)) + .thenReturn(mock(SourceReceiver.class)); + when(coreServices.getService(NamingControl.class)) + .thenReturn(new NamingControl( + 200, 200, 200, new EndpointNameGrouping())); + final ConfigService configService = mock(ConfigService.class); + when(configService.getSearchableLogsTags()).thenReturn(""); + when(coreServices.getService(ConfigService.class)) + .thenReturn(configService); + + final FilterSpec filterSpec = + new FilterSpec(manager, new LogAnalyzerModuleConfig()); + setInternalField(filterSpec, "sinkListenerFactories", + Collections.emptyList()); + + return filterSpec; + } + + // ==================== Directory resolution ==================== + + private Path findTestLalDir() { + final String[] candidates = { + // From repo root (e.g., running with -pl from top level) + "test/script-cases/scripts/lal/test-lal", + // From oap-server/analyzer/log-analyzer/ module directory + "../../../test/script-cases/scripts/lal/test-lal", + // From script-runtime-with-groovy checker location + "../../scripts/lal/test-lal" + }; + for (final String candidate : candidates) { + final Path path = 
Path.of(candidate); + if (Files.isDirectory(path)) { + return path; + } + } + return null; + } + + // ==================== Reflection helpers ==================== + + private static void setInternalField(final Object target, + final String fieldName, + final Object value) { + try { + Field field = null; + Class<?> clazz = target.getClass(); + while (clazz != null && field == null) { + try { + field = clazz.getDeclaredField(fieldName); + } catch (NoSuchFieldException e) { + clazz = clazz.getSuperclass(); + } + } + if (field != null) { + field.setAccessible(true); + field.set(target, value); + } + } catch (Exception e) { + throw new RuntimeException( + "Failed to set field " + fieldName, e); + } + } +} diff --git a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptParserTest.java b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptParserTest.java new file mode 100644 index 000000000000..24d59d63d381 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/LALScriptParserTest.java @@ -0,0 +1,529 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertInstanceOf; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; + +class LALScriptParserTest { + + @Test + void parseMinimalFilter() { + final LALScriptModel model = LALScriptParser.parse("filter { sink {} }"); + assertNotNull(model); + assertEquals(1, model.getStatements().size()); + assertInstanceOf(LALScriptModel.SinkBlock.class, model.getStatements().get(0)); + } + + @Test + void parseJsonParserWithExtractorAndSink() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " service parsed.service as String\n" + + " layer parsed.layer as String\n" + + " }\n" + + " sink {}\n" + + "}"); + + assertEquals(3, model.getStatements().size()); + assertInstanceOf(LALScriptModel.JsonParser.class, model.getStatements().get(0)); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(1); + assertEquals(2, extractor.getStatements().size()); + + final LALScriptModel.FieldAssignment serviceField = + (LALScriptModel.FieldAssignment) extractor.getStatements().get(0); + assertEquals(LALScriptModel.FieldType.SERVICE, serviceField.getFieldType()); + assertTrue(serviceField.getValue().isParsedRef()); + assertEquals("String", serviceField.getCastType()); + } + + @Test + void parseTextParserWithRegexp() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " text {\n" + + " regexp '.+\"(?<request>.+)\"(?<status>\\d{3}).+'\n" + + " }\n" + + " sink {}\n" + + "}"); + + assertEquals(2, model.getStatements().size()); + final 
LALScriptModel.TextParser textParser = + (LALScriptModel.TextParser) model.getStatements().get(0); + assertNotNull(textParser.getRegexpPattern()); + } + + @Test + void parseSlowSql() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " layer parsed.layer as String\n" + + " service parsed.service as String\n" + + " timestamp parsed.time as String\n" + + " slowSql {\n" + + " id parsed.id as String\n" + + " statement parsed.statement as String\n" + + " latency parsed.query_time as Long\n" + + " }\n" + + " }\n" + + "}"); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(1); + + // Find the slowSql block + LALScriptModel.SlowSqlBlock slowSql = null; + for (final LALScriptModel.ExtractorStatement stmt : extractor.getStatements()) { + if (stmt instanceof LALScriptModel.SlowSqlBlock) { + slowSql = (LALScriptModel.SlowSqlBlock) stmt; + } + } + assertNotNull(slowSql); + assertNotNull(slowSql.getId()); + assertEquals("String", slowSql.getIdCast()); + assertNotNull(slowSql.getStatement()); + assertEquals("String", slowSql.getStatementCast()); + assertNotNull(slowSql.getLatency()); + assertEquals("Long", slowSql.getLatencyCast()); + } + + @Test + void parseMetricsBlock() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " extractor {\n" + + " metrics {\n" + + " timestamp log.timestamp as Long\n" + + " labels level: parsed.level, service: log.service\n" + + " name \"nginx_error_log_count\"\n" + + " value 1\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(0); + final LALScriptModel.MetricsBlock metrics = + (LALScriptModel.MetricsBlock) extractor.getStatements().get(0); + + assertEquals("nginx_error_log_count", metrics.getName()); + assertEquals(2, metrics.getLabels().size()); + 
assertTrue(metrics.getLabels().containsKey("level")); + assertTrue(metrics.getLabels().containsKey("service")); + assertNotNull(metrics.getValue()); + } + + @Test + void parseSinkWithSampler() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " sink {\n" + + " sampler {\n" + + " rateLimit('service:error') {\n" + + " rpm 6000\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + + final LALScriptModel.SinkBlock sink = + (LALScriptModel.SinkBlock) model.getStatements().get(0); + assertEquals(1, sink.getStatements().size()); + final LALScriptModel.SamplerBlock sampler = + (LALScriptModel.SamplerBlock) sink.getStatements().get(0); + assertEquals(1, sampler.getContents().size()); + final LALScriptModel.RateLimitBlock rateLimit = + (LALScriptModel.RateLimitBlock) sampler.getContents().get(0); + assertEquals("service:error", rateLimit.getId()); + assertEquals(6000, rateLimit.getRpm()); + } + + @Test + void parseInterpolatedRateLimitId() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " sink {\n" + + " sampler {\n" + + " rateLimit(\"${log.service}:${parsed.code}\") {\n" + + " rpm 3000\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + + final LALScriptModel.SinkBlock sink = + (LALScriptModel.SinkBlock) model.getStatements().get(0); + final LALScriptModel.SamplerBlock sampler = + (LALScriptModel.SamplerBlock) sink.getStatements().get(0); + final LALScriptModel.RateLimitBlock rl = + (LALScriptModel.RateLimitBlock) sampler.getContents().get(0); + + assertTrue(rl.isIdInterpolated()); + assertEquals(3, rl.getIdParts().size()); + + // Part 0: expression ${log.service} + assertFalse(rl.getIdParts().get(0).isLiteral()); + assertTrue(rl.getIdParts().get(0).getExpression().isLogRef()); + + // Part 1: literal ":" + assertTrue(rl.getIdParts().get(1).isLiteral()); + assertEquals(":", rl.getIdParts().get(1).getLiteral()); + + // Part 2: expression ${parsed.code} + assertFalse(rl.getIdParts().get(2).isLiteral()); + 
assertTrue(rl.getIdParts().get(2).getExpression().isParsedRef()); + } + + @Test + void parsePlainRateLimitIdNotInterpolated() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " sink {\n" + + " sampler {\n" + + " rateLimit('service:error') {\n" + + " rpm 6000\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + + final LALScriptModel.SinkBlock sink = + (LALScriptModel.SinkBlock) model.getStatements().get(0); + final LALScriptModel.SamplerBlock sampler = + (LALScriptModel.SamplerBlock) sink.getStatements().get(0); + final LALScriptModel.RateLimitBlock rl = + (LALScriptModel.RateLimitBlock) sampler.getContents().get(0); + + assertFalse(rl.isIdInterpolated()); + } + + @Test + void parseIfCondition() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " if (parsed.status) {\n" + + " extractor {\n" + + " layer parsed.layer as String\n" + + " }\n" + + " sink {}\n" + + " }\n" + + "}"); + + assertEquals(1, model.getStatements().size()); + final LALScriptModel.IfBlock ifBlock = + (LALScriptModel.IfBlock) model.getStatements().get(0); + assertNotNull(ifBlock.getCondition()); + assertEquals(2, ifBlock.getThenBranch().size()); + } + + @Test + void parseElseIfChain() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " if (parsed.a) {\n" + + " sink {}\n" + + " } else if (parsed.b) {\n" + + " sink {}\n" + + " } else if (parsed.c) {\n" + + " sink {}\n" + + " } else {\n" + + " sink {}\n" + + " }\n" + + "}"); + + assertEquals(1, model.getStatements().size()); + final LALScriptModel.IfBlock top = + (LALScriptModel.IfBlock) model.getStatements().get(0); + assertNotNull(top.getCondition()); + assertEquals(1, top.getThenBranch().size()); + + // else branch contains a nested IfBlock for "else if (parsed.b)" + assertEquals(1, top.getElseBranch().size()); + final LALScriptModel.IfBlock elseIf1 = + (LALScriptModel.IfBlock) top.getElseBranch().get(0); + assertNotNull(elseIf1.getCondition()); + assertEquals(1, 
elseIf1.getThenBranch().size()); + + // nested else branch contains another IfBlock for "else if (parsed.c)" + assertEquals(1, elseIf1.getElseBranch().size()); + final LALScriptModel.IfBlock elseIf2 = + (LALScriptModel.IfBlock) elseIf1.getElseBranch().get(0); + assertNotNull(elseIf2.getCondition()); + assertEquals(1, elseIf2.getThenBranch().size()); + + // innermost else branch is the final else body + assertEquals(1, elseIf2.getElseBranch().size()); + assertInstanceOf(LALScriptModel.SinkBlock.class, elseIf2.getElseBranch().get(0)); + } + + @Test + void parseElseIfWithoutFinalElse() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " if (parsed.a) {\n" + + " sink {}\n" + + " } else if (parsed.b) {\n" + + " sink {}\n" + + " }\n" + + "}"); + + final LALScriptModel.IfBlock top = + (LALScriptModel.IfBlock) model.getStatements().get(0); + assertEquals(1, top.getElseBranch().size()); + final LALScriptModel.IfBlock elseIf = + (LALScriptModel.IfBlock) top.getElseBranch().get(0); + assertNotNull(elseIf.getCondition()); + assertTrue(elseIf.getElseBranch().isEmpty()); + } + + @Test + void parseSyntaxErrorThrows() { + assertThrows(IllegalArgumentException.class, + () -> LALScriptParser.parse("filter {")); + } + + // ==================== Function call parsing ==================== + + @Test + void parseTagFunctionCallInCondition() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " if (tag(\"LOG_KIND\") == \"SLOW_SQL\") {\n" + + " sink {}\n" + + " }\n" + + "}"); + + final LALScriptModel.IfBlock ifBlock = + (LALScriptModel.IfBlock) model.getStatements().get(0); + final LALScriptModel.ComparisonCondition cond = + (LALScriptModel.ComparisonCondition) ifBlock.getCondition(); + + // Left side should be a function call + final LALScriptModel.ValueAccess left = cond.getLeft(); + assertEquals("tag", left.getFunctionCallName()); + assertEquals(1, left.getFunctionCallArgs().size()); + assertEquals("LOG_KIND", + 
left.getFunctionCallArgs().get(0).getValue().getSegments().get(0)); + + // Right side should be a string value (parsed as ValueAccess with stringLiteral flag) + assertInstanceOf(LALScriptModel.ValueAccessConditionValue.class, cond.getRight()); + final LALScriptModel.ValueAccessConditionValue rightVal = + (LALScriptModel.ValueAccessConditionValue) cond.getRight(); + assertTrue(rightVal.getValue().isStringLiteral()); + assertEquals("SLOW_SQL", rightVal.getValue().getSegments().get(0)); + } + + @Test + void parseTagFunctionCallAsSingleCondition() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " if (tag(\"LOG_KIND\")) {\n" + + " sink {}\n" + + " }\n" + + "}"); + + final LALScriptModel.IfBlock ifBlock = + (LALScriptModel.IfBlock) model.getStatements().get(0); + final LALScriptModel.ExprCondition cond = + (LALScriptModel.ExprCondition) ifBlock.getCondition(); + assertEquals("tag", cond.getExpr().getFunctionCallName()); + assertEquals(1, cond.getExpr().getFunctionCallArgs().size()); + } + + // ==================== Safe navigation parsing ==================== + + @Test + void parseSafeNavigationFields() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " extractor {\n" + + " service parsed?.response?.service as String\n" + + " }\n" + + " sink {}\n" + + "}"); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(0); + final LALScriptModel.FieldAssignment field = + (LALScriptModel.FieldAssignment) extractor.getStatements().get(0); + + assertTrue(field.getValue().isParsedRef()); + assertEquals(2, field.getValue().getChain().size()); + assertTrue(((LALScriptModel.FieldSegment) field.getValue().getChain().get(0)) + .isSafeNav()); + assertTrue(((LALScriptModel.FieldSegment) field.getValue().getChain().get(1)) + .isSafeNav()); + } + + @Test + void parseSafeNavigationMethods() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " extractor {\n" + + 
" service parsed?.flags?.toString()?.trim() as String\n" + + " }\n" + + " sink {}\n" + + "}"); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(0); + final LALScriptModel.FieldAssignment field = + (LALScriptModel.FieldAssignment) extractor.getStatements().get(0); + + assertEquals(3, field.getValue().getChain().size()); + // flags is a safe field + assertInstanceOf(LALScriptModel.FieldSegment.class, + field.getValue().getChain().get(0)); + assertTrue(((LALScriptModel.FieldSegment) field.getValue().getChain().get(0)) + .isSafeNav()); + // toString() is a safe method + assertInstanceOf(LALScriptModel.MethodSegment.class, + field.getValue().getChain().get(1)); + assertTrue(((LALScriptModel.MethodSegment) field.getValue().getChain().get(1)) + .isSafeNav()); + assertEquals("toString", + ((LALScriptModel.MethodSegment) field.getValue().getChain().get(1)).getName()); + // trim() is a safe method + assertTrue(((LALScriptModel.MethodSegment) field.getValue().getChain().get(2)) + .isSafeNav()); + assertEquals("trim", + ((LALScriptModel.MethodSegment) field.getValue().getChain().get(2)).getName()); + } + + // ==================== Method argument parsing ==================== + + @Test + void parseMethodWithArguments() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " service ProcessRegistry.generateVirtualLocalProcess(" + + "parsed.service as String, parsed.instance as String) as String\n" + + " }\n" + + " sink {}\n" + + "}"); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(1); + final LALScriptModel.FieldAssignment field = + (LALScriptModel.FieldAssignment) extractor.getStatements().get(0); + + assertTrue(field.getValue().isProcessRegistryRef()); + assertEquals(1, field.getValue().getChain().size()); + + final LALScriptModel.MethodSegment method = + (LALScriptModel.MethodSegment) 
field.getValue().getChain().get(0); + assertEquals("generateVirtualLocalProcess", method.getName()); + assertEquals(2, method.getArguments().size()); + assertTrue(method.getArguments().get(0).getValue().isParsedRef()); + assertEquals("String", method.getArguments().get(0).getCastType()); + assertTrue(method.getArguments().get(1).getValue().isParsedRef()); + assertEquals("String", method.getArguments().get(1).getCastType()); + } + + // ==================== Sampled trace parsing ==================== + + @Test + void parseSampledTrace() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " sampledTrace {\n" + + " latency parsed.latency as Long\n" + + " uri parsed.uri as String\n" + + " reason parsed.reason as String\n" + + " detectPoint parsed.detect_point as String\n" + + " componentId 49\n" + + " }\n" + + " }\n" + + " sink {}\n" + + "}"); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(1); + final LALScriptModel.SampledTraceBlock st = + (LALScriptModel.SampledTraceBlock) extractor.getStatements().get(0); + assertEquals(5, st.getStatements().size()); + } + + // ==================== If in extractor/sink parsing ==================== + + @Test + void parseIfInsideExtractor() { + final LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " json {}\n" + + " extractor {\n" + + " if (parsed.status) {\n" + + " tag 'http.status_code': parsed.status\n" + + " }\n" + + " tag 'response.flag': parsed.flags\n" + + " }\n" + + " sink {}\n" + + "}"); + + final LALScriptModel.ExtractorBlock extractor = + (LALScriptModel.ExtractorBlock) model.getStatements().get(1); + assertEquals(2, extractor.getStatements().size()); + assertInstanceOf(LALScriptModel.IfBlock.class, extractor.getStatements().get(0)); + assertInstanceOf(LALScriptModel.TagAssignment.class, extractor.getStatements().get(1)); + } + + @Test + void parseIfInsideSink() { + final 
LALScriptModel model = LALScriptParser.parse( + "filter {\n" + + " sink {\n" + + " sampler {\n" + + " if (parsed.error) {\n" + + " rateLimit('svc:err') {\n" + + " rpm 6000\n" + + " }\n" + + " } else {\n" + + " rateLimit('svc:ok') {\n" + + " rpm 3000\n" + + " }\n" + + " }\n" + + " }\n" + + " }\n" + + "}"); + + final LALScriptModel.SinkBlock sink = + (LALScriptModel.SinkBlock) model.getStatements().get(0); + final LALScriptModel.SamplerBlock sampler = + (LALScriptModel.SamplerBlock) sink.getStatements().get(0); + // The sampler has one if-block as content + assertEquals(1, sampler.getContents().size()); + assertInstanceOf(LALScriptModel.IfBlock.class, sampler.getContents().get(0)); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/TestMeshLALSourceTypeProvider.java b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/TestMeshLALSourceTypeProvider.java new file mode 100644 index 000000000000..e1e5df4cfbfe --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/compiler/TestMeshLALSourceTypeProvider.java @@ -0,0 +1,34 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.log.analyzer.v2.compiler; + +import io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry; +import org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider; +import org.apache.skywalking.oap.server.core.analysis.Layer; + +public class TestMeshLALSourceTypeProvider implements LALSourceTypeProvider { + @Override + public Layer layer() { + return Layer.MESH; + } + + @Override + public Class<?> extraLogType() { + return HTTPAccessLogEntry.class; + } +} diff --git a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/DSLV2Test.java b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/DSLV2Test.java new file mode 100644 index 000000000000..461bbaf8a6b7 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/v2/dsl/DSLV2Test.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.log.analyzer.v2.dsl; + +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALClassGenerator; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertThrows; + +class DSLV2Test { + + @Test + void compileSimpleFilterExpression() throws Exception { + final LALClassGenerator generator = new LALClassGenerator(); + final LalExpression expr = generator.compile("filter { json {} sink {} }"); + assertNotNull(expr); + } + + @Test + void compileFilterWithExtractor() throws Exception { + final LALClassGenerator generator = new LALClassGenerator(); + final LalExpression expr = generator.compile( + "filter { json {} extractor { service parsed.service as String } sink {} }"); + assertNotNull(expr); + } + + @Test + void compileThrowsOnInvalidExpression() { + final LALClassGenerator generator = new LALClassGenerator(); + assertThrows(Exception.class, + () -> generator.compile("??? invalid !!!")); + } +} diff --git a/oap-server/analyzer/log-analyzer/src/test/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider b/oap-server/analyzer/log-analyzer/src/test/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider new file mode 100644 index 000000000000..bab38444ef10 --- /dev/null +++ b/oap-server/analyzer/log-analyzer/src/test/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider @@ -0,0 +1,19 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# + +org.apache.skywalking.oap.log.analyzer.v2.compiler.TestMeshLALSourceTypeProvider diff --git a/oap-server/analyzer/meter-analyzer/CLAUDE.md b/oap-server/analyzer/meter-analyzer/CLAUDE.md new file mode 100644 index 000000000000..45ec834e4e67 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/CLAUDE.md @@ -0,0 +1,162 @@ +# MAL Compiler + +Compiles MAL (Meter Analysis Language) expressions into `MalExpression` implementation classes at runtime using ANTLR4 parsing and Javassist bytecode generation. + +## Compilation Workflow + +``` +MAL expression string + → MALScriptParser.parse(expression) [ANTLR4 lexer/parser → visitor] + → MALExpressionModel.Expr (immutable AST) + → MALClassGenerator.compileFromModel(name, ast) + 1. collectClosures(ast) — pre-scan for closure arguments + 2. addClosureMethod() — add closure body as method on main class + 3. classPool.makeClass() — create main class implementing MalExpression + 4. generateRunMethod() — emit Java source for run(Map<String,SampleFamily>) + 5. ctClass.toClass(MalExpressionPackageHolder.class) — load via package anchor + 6. 
wire closure fields via LambdaMetafactory (no extra .class files) + → MalExpression instance +``` + +The generated class implements `MalExpression`: +```java +SampleFamily run(Map<String, SampleFamily> samples) // pure computation, no side effects +ExpressionMetadata metadata() // compile-time metadata from AST +``` + +## File Structure + +``` +oap-server/analyzer/meter-analyzer/ + src/main/antlr4/.../MALLexer.g4 — ANTLR4 lexer grammar + src/main/antlr4/.../MALParser.g4 — ANTLR4 parser grammar + + src/main/java/.../compiler/ + MALScriptParser.java — ANTLR4 facade: expression → AST + MALExpressionModel.java — Immutable AST model classes + MALClassGenerator.java — Public API, run method codegen, metadata extraction + MALClosureCodegen.java — Closure method codegen (inlined on main class via LambdaMetafactory) + MALCodegenHelper.java — Static utility methods and shared constants + rt/ + MalExpressionPackageHolder.java — Class loading anchor (empty marker) + MalRuntimeHelper.java — Static helpers called by generated code (e.g., divReverse) + + src/test/java/.../compiler/ + MALScriptParserTest.java — 20 parser tests + MALClassGeneratorTest.java — 32 generator tests +``` + +## Package & Class Naming + +All v2 classes live under `org.apache.skywalking.oap.meter.analyzer.v2.*` to avoid FQCN conflicts with the v1 (Groovy) classes. + +| Component | Package / Name | +|-----------|---------------| +| Parser/Model/Generator | `org.apache.skywalking.oap.meter.analyzer.v2.compiler` | +| Generated classes | `org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt.MalExpr_<N>` | +| Package holder | `org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt.MalExpressionPackageHolder` | +| Runtime helper | `org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt.MalRuntimeHelper` | +| Functional interface | `org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression` | + +`<N>` is a global `AtomicInteger` counter. 
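The `LambdaMetafactory` wiring described in step 6 of the workflow can be sketched as a small self-contained example. The class and method names here (`LambdaWiringSketch`, `upperCase`, `wire`) are illustrative stand-ins, not the generator's actual code — the real generator binds instance methods like `_tag_apply(Map)` to interfaces such as `SampleFamilyFunctions$TagFunction`:

```java
import java.lang.invoke.CallSite;
import java.lang.invoke.LambdaMetafactory;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.function.Function;

public class LambdaWiringSketch {
    // Stand-in for a closure body that the generator emits as a plain
    // method on the main class (e.g. _tag_apply). Purely illustrative.
    public static String upperCase(String s) {
        return s.toUpperCase();
    }

    // Bind the method to a functional interface instance via LambdaMetafactory.
    // No .class file is written to disk; the JVM creates a hidden class
    // internally, the same mechanism javac uses for lambda expressions.
    public static Function<String, String> wire() {
        try {
            MethodHandles.Lookup lookup = MethodHandles.lookup();
            MethodHandle impl = lookup.findStatic(
                LambdaWiringSketch.class, "upperCase",
                MethodType.methodType(String.class, String.class));
            CallSite site = LambdaMetafactory.metafactory(
                lookup,
                "apply",                                           // SAM method name
                MethodType.methodType(Function.class),             // factory: () -> Function
                MethodType.methodType(Object.class, Object.class), // erased SAM signature
                impl,                                              // implementation handle
                MethodType.methodType(String.class, String.class)  // instantiated signature
            );
            return (Function<String, String>) site.getTarget().invoke();
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(wire().apply("mal")); // prints "MAL"
    }
}
```

The same idea extends to instance methods: look up the generated closure method with `findVirtual`, bind the expression instance, and assign the resulting interface instance to the typed closure field (e.g. `_tag`) after class loading.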
+ +## Javassist Constraints + +- **No anonymous inner classes**: Javassist cannot compile `new Consumer() { ... }` or `new Function() { ... }` in method bodies. +- **No lambda expressions**: Javassist has no lambda support. +- **Closure approach**: Closure bodies are compiled as methods on the main class (e.g., `_tag_apply(Map)`), then wrapped via `LambdaMetafactory` into functional interface instances. No extra `.class` files are produced — the JVM creates hidden classes internally (same mechanism `javac` uses for lambdas). +- **Inner class notation**: Use `$` not `.` for nested classes (e.g., `SampleFamilyFunctions$TagFunction`). +- **`isPresent()`/`get()` instead of `ifPresent()`**: `ifPresent(Consumer)` would require an anonymous class. Use `Optional.isPresent()` + `Optional.get()` pattern. +- **Closure interface dispatch**: Different closure call sites use different functional interfaces: + - `tag({ ... })` → `SampleFamilyFunctions$TagFunction` + - `forEach(closure)` / `serviceRelation(closure)` etc. → `SampleFamilyFunctions$ForEachFunction` + - `instance(closure)` → `SampleFamilyFunctions$PropertiesExtractor` + - `decorate(closure)` → `SampleFamilyFunctions$DecorateFunction` +- **v2 package isolation**: All v2 classes are under `*.v2.*` packages, so there are no FQCN conflicts with the v1 Groovy module. + +## Example + +**Input**: `instance_jvm_cpu.sum(['service', 'instance'])` + +**Generated `run()` method** (pure computation, no ThreadLocal): +```java +public SampleFamily run(Map samples) { + return ((SampleFamily) samples.getOrDefault("instance_jvm_cpu", SampleFamily.EMPTY)) + .sum(java.util.List.of("service", "instance")); +} +``` + +**Generated `metadata()` method** (returns compile-time facts extracted from AST): +```java +public ExpressionMetadata metadata() { + // samples=["instance_jvm_cpu"], aggregationLabels=["service","instance"], ... 
+ return new ExpressionMetadata(...); +} +``` + +**Input with closure**: `metric.tag({ tags -> tags['k'] = 'v' })` + +One class is generated (`MalExpr_0`): +- Method `_tag_apply(Map tags)` — contains `tags.put("k", "v"); return tags;` +- Field `_tag` — typed as `TagFunction`, wired via `LambdaMetafactory` after class loading +- `run()` body calls `metric.tag(this._tag)` + +## ExpressionMetadata (replaces ExpressionParsingContext) + +Metadata is extracted statically from the AST at compile time by `MALClassGenerator.extractMetadata()`. No ThreadLocal, no dry-run execution. The `Analyzer` calls `expression.metadata()` to get sample names, scope type, aggregation labels, downsampling, histogram/percentile info. + +## Debug Output + +When `SW_DYNAMIC_CLASS_ENGINE_DEBUG=true` environment variable is set, generated `.class` files are written to disk for inspection: + +``` +{skywalking}/mal-rt/ + *.class - Generated MalExpression .class files (one per expression, no separate closure classes) +``` + +This is the same env variable used by OAL. Useful for debugging code generation issues or comparing V1 vs V2 output. In tests, use `setClassOutputDir(dir)` instead. + +## MAL Input Data Mock Principles + +MAL test data lives in `.data.yaml` companion files alongside rule YAML files under `test/script-cases/scripts/mal/`. Each `.data.yaml` has two sections: `input` (mock samples) and `expected` (v1-verified output assertions). + +### Input Section Principles + +1. **Every metric referenced in rule expressions must have samples** — missing metrics produce EMPTY results (hard test failure). +2. **Label variants for filters**: If a rule uses `tagEqual('cpu', 'cpu-total')`, the input must have samples with `cpu: cpu-total`. If another rule in the same file uses `tagNotEqual('cpu', 'cpu-total')`, there must also be samples with a different `cpu` value (e.g., `cpu: cpu0`). +3. 
**`host` label for `service(['host'])`**: Rules with `expSuffix: service(['host'], ...)` derive the service entity name from the `host` label. All input samples should include a `host` label so the entity service name is non-empty. +4. **Numeric YAML keys**: Some configs (e.g., zabbix `agent.yaml`) use numeric label keys like `1`, `2`. YAML parsers read these as `Integer`, not `String`. Test code must use `String.valueOf()` on both keys and values when building label maps. +5. **Auto-generation**: `MalInputDataGenerator` extracts metric names and label requirements from compiled expression metadata. Run `MalInputDataGeneratorTest` to generate `.data.yaml` files for new rules. It skips files that already exist — delete the `.data.yaml` to regenerate. + +### Expected Section Principles + +1. **v1 is the truth**: The expected data is auto-generated by running the v1 (Groovy) engine on the input data. v1 is production-verified, so its output is the ground truth. +2. **Non-empty output required**: If v1 produces EMPTY, the input data has a bug. Fix the input, not skip the test. +3. **Rich assertions**: Expected includes entities (scope, service, instance, endpoint, layer) and samples (labels, value). Not just `min_samples: 1`. +4. **Error markers**: `error: 'v1 not-success'` means v1 failed to execute the expression. Fix the input data so v1 succeeds. +5. **Re-generation**: Run `MalExpectedDataGeneratorTest` to update expected sections after input data changes. 
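The numeric-key pitfall (Input Section principle 4) can be illustrated with a minimal sketch. `LabelKeyNormalizer` and its method name are hypothetical, not part of the test suite — the point is only the `String.valueOf()` coercion on both keys and values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LabelKeyNormalizer {
    // A YAML parser reads a numeric mapping key such as `1:` as Integer,
    // not String. Coerce both keys and values with String.valueOf before
    // building label maps, so lookups like labels.get("1") behave as expected.
    public static Map<String, String> normalize(Map<?, ?> raw) {
        Map<String, String> labels = new LinkedHashMap<>();
        for (Map.Entry<?, ?> e : raw.entrySet()) {
            labels.put(String.valueOf(e.getKey()), String.valueOf(e.getValue()));
        }
        return labels;
    }

    public static void main(String[] args) {
        Map<Object, Object> raw = new LinkedHashMap<>();
        raw.put(1, "mysql");       // Integer key, as YAML `1: mysql` would parse
        raw.put("host", "node1");
        Map<String, String> labels = normalize(raw);
        System.out.println(labels.get("1"));    // prints "mysql"
        System.out.println(labels.get("host")); // prints "node1"
    }
}
```

Without the coercion, `labels.get("1")` would return `null` because the map's key is the `Integer` 1, not the `String` "1".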
+ +### Directory Structure + +| Directory | Source | +|-----------|--------| +| `test-meter-analyzer-config` | `oap-server/server-starter/.../meter-analyzer-config/` | +| `test-otel-rules` | `oap-server/server-starter/.../otel-rules/` | +| `test-envoy-metrics-rules` | `oap-server/server-starter/.../envoy-metrics-rules/` | +| `test-log-mal-rules` | `oap-server/server-starter/.../log-mal-rules/` | +| `test-telegraf-rules` | `oap-server/server-starter/.../telegraf-rules/` | +| `test-zabbix-rules` | `oap-server/server-starter/.../zabbix-rules/` | + +### YAML Key Variants + +| Key | Used by | +|-----|---------| +| `metricsRules` | Standard rule YAMLs (OTEL, meter-analyzer, envoy, log-mal, telegraf) | +| `metrics` | Zabbix `agent.yaml` (production ZabbixConfig maps `metrics` to `getMetricsRules()`) | + +## Dependencies + +All within this module (grammar, compiler, and runtime are merged): +- ANTLR4 grammar → generates lexer/parser at build time +- `MalExpression`, `ExpressionMetadata`, `SampleFamily` — in `dsl` package of this module +- `javassist` — bytecode generation diff --git a/oap-server/analyzer/meter-analyzer/pom.xml b/oap-server/analyzer/meter-analyzer/pom.xml index 945fd061b3a3..57d3fd200e47 100644 --- a/oap-server/analyzer/meter-analyzer/pom.xml +++ b/oap-server/analyzer/meter-analyzer/pom.xml @@ -38,13 +38,37 @@ <artifactId>server-core</artifactId> <version>${project.version}</version> </dependency> - <dependency> - <groupId>org.apache.groovy</groupId> - <artifactId>groovy</artifactId> - </dependency> <dependency> <groupId>io.vavr</groupId> <artifactId>vavr</artifactId> </dependency> + <dependency> + <groupId>org.antlr</groupId> + <artifactId>antlr4-runtime</artifactId> + </dependency> + <dependency> + <groupId>org.javassist</groupId> + <artifactId>javassist</artifactId> + </dependency> </dependencies> + + <build> + <plugins> + <plugin> + <groupId>org.antlr</groupId> + <artifactId>antlr4-maven-plugin</artifactId> + <configuration> + <visitor>true</visitor> + 
</configuration> + <executions> + <execution> + <id>antlr</id> + <goals> + <goal>antlr4</goal> + </goals> + </execution> + </executions> + </plugin> + </plugins> + </build> </project> diff --git a/oap-server/analyzer/meter-analyzer/src/main/antlr4/org/apache/skywalking/mal/rt/grammar/MALLexer.g4 b/oap-server/analyzer/meter-analyzer/src/main/antlr4/org/apache/skywalking/mal/rt/grammar/MALLexer.g4 new file mode 100644 index 000000000000..b8958b2916f9 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/antlr4/org/apache/skywalking/mal/rt/grammar/MALLexer.g4 @@ -0,0 +1,132 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +// Meter Analysis Language lexer +lexer grammar MALLexer; + +@Header {package org.apache.skywalking.mal.rt.grammar;} + +// Operators +PLUS: '+'; +MINUS: '-'; +STAR: '*'; +SLASH: '/'; + +// Comparison +DEQ: '=='; +NEQ: '!='; +AND: '&&'; +OR: '||'; + +// Delimiters +DOT: '.'; +COMMA: ','; +L_PAREN: '('; +R_PAREN: ')'; +L_BRACKET: '['; +R_BRACKET: ']'; +L_BRACE: '{'; +R_BRACE: '}'; +SEMI: ';'; +COLON: ':'; +QUESTION: '?'; +ARROW: '->'; +ASSIGN: '='; +GT: '>'; +LT: '<'; +GTE: '>='; +LTE: '<='; +NOT: '!'; + +// Regex match operator: switches to REGEX_MODE to lex the pattern +REGEX_MATCH: '=~' -> pushMode(REGEX_MODE); + +// Keywords +DEF: 'def'; +IF: 'if'; +ELSE: 'else'; +RETURN: 'return'; +NULL: 'null'; +TRUE: 'true'; +FALSE: 'false'; +IN: 'in'; + +// Literals +NUMBER + : Digit+ ('.' Digit+)? + ; + +STRING + : '\'' (~['\\\r\n] | EscapeSequence)* '\'' + | '"' (~["\\\r\n] | EscapeSequence)* '"' + ; + +// Comments +LINE_COMMENT + : '//' ~[\r\n]* -> channel(HIDDEN) + ; + +BLOCK_COMMENT + : '/*' .*? '*/' -> channel(HIDDEN) + ; + +// Whitespace +WS + : [ \t\r\n]+ -> channel(HIDDEN) + ; + +// Identifiers - must come after keywords +IDENTIFIER + : Letter LetterOrDigit* + ; + +// Fragments +fragment EscapeSequence + : '\\' [btnfr"'\\] + | '\\' ([0-3]? [0-7])? [0-7] + ; + +fragment Digit + : [0-9] + ; + +fragment Letter + : [a-zA-Z_] + ; + +fragment LetterOrDigit + : Letter + | [0-9] + ; + +// ==================== Regex mode ==================== +// Activated after '=~', lexes a /pattern/ regex literal, then pops back. +mode REGEX_MODE; + +REGEX_WS + : [ \t\r\n]+ -> channel(HIDDEN) + ; + +REGEX_LITERAL + : '/' RegexBodyChar+ '/' -> popMode + ; + +fragment RegexBodyChar + : '\\' . // escaped character (e.g. \. 
\( \[ ) + | ~[/\r\n] // anything except / and newline + ; diff --git a/oap-server/analyzer/meter-analyzer/src/main/antlr4/org/apache/skywalking/mal/rt/grammar/MALParser.g4 b/oap-server/analyzer/meter-analyzer/src/main/antlr4/org/apache/skywalking/mal/rt/grammar/MALParser.g4 new file mode 100644 index 000000000000..eaf417e137e7 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/antlr4/org/apache/skywalking/mal/rt/grammar/MALParser.g4 @@ -0,0 +1,282 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +// Meter Analysis Language parser +// +// Covers MAL expression patterns: +// metric_name.tagEqual("k","v").sum(["tag"]).rate("PT1M").service(["svc"], Layer.GENERAL) +// metric1 + metric2, (metric * 100), metric1.div(metric2) +// tag({tags -> tags.key = "val"}), forEach(["prefix"], {prefix, tags -> ...}) +// .valueEqual(1), .retagByK8sMeta("svc", K8sRetagType.Pod2Service, "pod", "ns") +// .histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) +parser grammar MALParser; + +@Header {package org.apache.skywalking.mal.rt.grammar;} + +options { tokenVocab=MALLexer; } + +// ==================== Top-level ==================== + +// A MAL expression: arithmetic tree of postfix-chained metric references +expression + : additiveExpression EOF + ; + +// A standalone filter closure: { tags -> tags.job_name == 'value' } +filterExpression + : closureExpression EOF + ; + +// ==================== Arithmetic ==================== + +additiveExpression + : multiplicativeExpression ((PLUS | MINUS) multiplicativeExpression)* + ; + +multiplicativeExpression + : unaryExpression ((STAR | SLASH) unaryExpression)* + ; + +unaryExpression + : MINUS unaryExpression # unaryNeg + | postfixExpression # unaryPostfix + | NUMBER # unaryNumber + ; + +// ==================== Postfix (method chaining) ==================== + +// primary.method1().method2()... +postfixExpression + : primary (DOT methodCall)* + ; + +primary + : IDENTIFIER // metric name + | functionCall // top-level function: count(metric), topN(...) + | L_PAREN additiveExpression R_PAREN // parenthesized: (metric * 100).sum() + ; + +functionCall + : IDENTIFIER L_PAREN argumentList? R_PAREN + ; + +methodCall + : IDENTIFIER L_PAREN argumentList? 
R_PAREN + ; + +// ==================== Arguments ==================== + +argumentList + : argument (COMMA argument)* + ; + +argument + : additiveExpression // nested expression (metric ref, number, arithmetic) + | stringList // ["tag1", "tag2"] + | numberList // [50, 75, 90, 95, 99] + | L_PAREN stringList R_PAREN // (["tag1", "tag2"]) — extra parens + | L_PAREN numberList R_PAREN // ([50, 75, 90]) — extra parens + | closureExpression // {tags -> ...} + | enumRef // Layer.GENERAL, K8sRetagType.Pod2Service + | STRING // "PT1M", "k8s-key" + | boolLiteral // true, false + | NULL // null + ; + +stringList + : L_BRACKET STRING (COMMA STRING)* R_BRACKET + ; + +numberList + : L_BRACKET NUMBER (COMMA NUMBER)* R_BRACKET + ; + +enumRef + : IDENTIFIER DOT IDENTIFIER + ; + +boolLiteral + : TRUE | FALSE + ; + +// ==================== Closure expressions ==================== +// +// Used in tag(), forEach(), and filter expressions: +// { tags -> tags.key = "val" } +// { prefix, tags -> if (tags[prefix + "_process_id"] != null) { ... } } +// { tags -> tags.job_name == 'mysql-monitoring' } +// { tags -> { tags.cloud_provider == 'aws' && tags.Namespace == 'AWS/S3' } } + +closureExpression + : L_BRACE closureParams? ARROW closureBody R_BRACE + ; + +closureParams + : IDENTIFIER (COMMA IDENTIFIER)* + ; + +closureBody + : closureCondition // bare condition: { tags -> tags.x == 'v' } + | L_BRACE closureCondition R_BRACE // braced condition: { tags -> { tags.x == 'v' } } + | closureStatement+ + | L_BRACE closureStatement+ R_BRACE // optional extra braces: { tags -> { ... } } + ; + +closureStatement + : ifStatement + | returnStatement + | variableDeclaration + | assignmentStatement + | expressionStatement + ; + +// ==================== Variable declarations ==================== +// Groovy-style: String result = "", String protocol = tags['protocol'] +// Also supports array types: String[] parts = ... +// Also supports def keyword: def matcher = ... 
+variableDeclaration + : IDENTIFIER L_BRACKET R_BRACKET IDENTIFIER ASSIGN closureExpr SEMI? + | IDENTIFIER IDENTIFIER ASSIGN closureExpr SEMI? + | DEF IDENTIFIER ASSIGN closureExpr SEMI? + ; + +// ==================== Closure statements ==================== + +ifStatement + : IF L_PAREN closureCondition R_PAREN closureBlock + (ELSE + (ifStatement | closureBlock))? + ; + +closureBlock + : L_BRACE closureStatement* R_BRACE + ; + +returnStatement + : RETURN closureExpr? SEMI? + ; + +assignmentStatement + : closureFieldAccess ASSIGN closureExpr SEMI? + ; + +expressionStatement + : closureExpr SEMI? + ; + +// ==================== Closure expressions (within closures) ==================== + +closureCondition + : closureConditionOr + ; + +closureConditionOr + : closureConditionAnd (OR closureConditionAnd)* + ; + +closureConditionAnd + : closureConditionPrimary (AND closureConditionPrimary)* + ; + +closureConditionPrimary + : NOT closureConditionPrimary # conditionNot + | closureExpr DEQ closureExpr # conditionEq + | closureExpr NEQ closureExpr # conditionNeq + | closureExpr GT closureExpr # conditionGt + | closureExpr LT closureExpr # conditionLt + | closureExpr GTE closureExpr # conditionGte + | closureExpr LTE closureExpr # conditionLte + | closureExpr IN closureListLiteral # conditionIn + | L_PAREN closureCondition R_PAREN # conditionParen + | closureExpr # conditionExpr + ; + +closureExpr + // In ANTLR4 left-recursive rules, alternatives listed FIRST have the + // HIGHEST precedence (tightest binding). Order: MUL/DIV > ADD/SUB > + // regex > ternary/elvis (loosest).
+ : closureExpr STAR closureExpr # closureMul + | closureExpr SLASH closureExpr # closureDiv + | closureExpr PLUS closureExpr # closureAdd + | closureExpr MINUS closureExpr # closureSub + | closureExpr REGEX_MATCH REGEX_LITERAL # closureRegexMatch + | closureExpr compOp closureExpr QUESTION closureExpr COLON closureExpr # closureTernaryComp + | closureExpr QUESTION closureExpr COLON closureExpr # closureTernary + | closureExpr QUESTION COLON closureExpr # closureElvis + | MINUS closureExprPrimary # closureUnaryMinus + | closureExprPrimary # closurePrimary + ; + +closureExprPrimary + : STRING closureChainAccess* # closureString + | NUMBER # closureNumber + | NULL # closureNull + | boolLiteral # closureBool + | closureMapLiteral # closureMap + | closureMethodChain # closureChain + | L_PAREN closureExpr R_PAREN closureChainAccess* # closureParen + ; + +// Groovy map literal: ['key': expr, 'key2': expr2] +closureMapLiteral + : L_BRACKET closureMapEntry (COMMA closureMapEntry)* R_BRACKET + ; + +closureMapEntry + : STRING COLON closureExpr + ; + +closureMethodChain + : closureTarget closureChainAccess* + ; + +closureChainAccess + : DOT closureChainSegment + | safeNav closureChainSegment + | L_BRACKET closureExpr R_BRACKET // direct bracket: tags['key'] + ; + +closureTarget + : IDENTIFIER + ; + +closureChainSegment + : IDENTIFIER L_PAREN closureArgList? R_PAREN # chainMethodCall + | IDENTIFIER # chainFieldAccess + | L_BRACKET closureExpr R_BRACKET # chainIndexAccess + ; + +safeNav + : QUESTION DOT + ; + +closureArgList + : closureExpr (COMMA closureExpr)* + ; + +compOp + : GT | LT | GTE | LTE | DEQ | NEQ + ; + +closureFieldAccess + : IDENTIFIER (DOT IDENTIFIER)* (L_BRACKET closureExpr R_BRACKET)? + ; + +closureListLiteral + : L_BRACKET (STRING (COMMA STRING)*)? 
R_BRACKET + ; diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/Analyzer.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/Analyzer.java new file mode 100644 index 000000000000..d07fcfde4046 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/Analyzer.java @@ -0,0 +1,438 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2; + +import com.google.common.base.Strings; +import com.google.common.collect.ImmutableMap; +import com.google.gson.JsonObject; +import io.vavr.Tuple; +import io.vavr.Tuple2; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.function.Predicate; +import java.util.stream.Stream; +import lombok.AccessLevel; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; +import org.apache.commons.text.CaseUtils; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.DSL; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.DownsamplingType; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Expression; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.ExpressionMetadata; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.FilterExpression; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Result; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.TimeBucket; +import org.apache.skywalking.oap.server.core.analysis.manual.endpoint.EndpointTraffic; +import org.apache.skywalking.oap.server.core.analysis.manual.instance.InstanceTraffic; +import org.apache.skywalking.oap.server.core.analysis.manual.relation.process.ProcessRelationClientSideMetrics; +import org.apache.skywalking.oap.server.core.analysis.manual.relation.process.ProcessRelationServerSideMetrics; +import org.apache.skywalking.oap.server.core.analysis.manual.relation.service.ServiceRelationClientSideMetrics; +import org.apache.skywalking.oap.server.core.analysis.manual.relation.service.ServiceRelationServerSideMetrics; +import org.apache.skywalking.oap.server.core.analysis.manual.service.ServiceTraffic; 
+import org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity; +import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; +import org.apache.skywalking.oap.server.core.analysis.meter.function.AcceptableValue; +import org.apache.skywalking.oap.server.core.analysis.meter.function.BucketedValues; +import org.apache.skywalking.oap.server.core.analysis.meter.function.PercentileArgument; +import org.apache.skywalking.oap.server.core.analysis.metrics.DataLabel; +import org.apache.skywalking.oap.server.core.analysis.metrics.DataTable; +import org.apache.skywalking.oap.server.core.analysis.worker.MetricsStreamProcessor; + +import static com.google.common.collect.ImmutableMap.toImmutableMap; +import static java.util.Objects.requireNonNull; +import static java.util.stream.Collectors.groupingBy; +import static java.util.stream.Collectors.mapping; +import static java.util.stream.Collectors.toList; + +/** + * Analyzer analyses a DSL expression against input samples to generate meter-system metrics. + * + * <p>One Analyzer is created per {@code metricsRules} entry in a MAL config YAML file. + * + * <p>Initialization ({@link #build}): + * <ol> + * <li>Compiles the MAL expression string into a + * {@link org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression MalExpression} + * via ANTLR4 + Javassist.</li> + * <li>Extracts compile-time {@link ExpressionMetadata} from the AST (sample names, scope type, + * aggregation labels, downsampling, histogram/percentile info).</li> + * <li>Registers the metric in {@link MeterSystem} (generates storage class via Javassist).</li> + * </ol> + * + * <p>Runtime ({@link #analyse}): + * <ol> + * <li>Receives the full {@code sampleFamilies} map (all metrics from one scrape).</li> + * <li>Selects only the entries matching {@code this.samples} (e.g., ["node_cpu_seconds_total"]).
+ * This is an O(n) lookup where n is the number of input metric names in the expression + * (typically 1-2), not the size of the full map.</li> + * <li>Applies the optional tag filter (e.g., {@code job_name == 'vm-monitoring'}).</li> + * <li>Executes the compiled MAL expression on the filtered input.</li> + * <li>Sends computed metric values to MeterSystem for storage.</li> + * </ol> + */ +@Slf4j +@RequiredArgsConstructor(access = AccessLevel.PRIVATE) +@ToString(of = { + "metricName", + "expression" +}) +public class Analyzer { + + public static final Tuple2<String, SampleFamily> NIL = Tuple.of("", null); + + public static Analyzer build(final String metricName, + final String filterExpression, + final String expression, + final MeterSystem meterSystem) { + return build(metricName, filterExpression, expression, meterSystem, null); + } + + public static Analyzer build(final String metricName, + final String filterExpression, + final String expression, + final MeterSystem meterSystem, + final String yamlSource) { + Expression e = DSL.parse(metricName, expression, yamlSource); + FilterExpression filter = null; + if (!Strings.isNullOrEmpty(filterExpression)) { + filter = new FilterExpression(filterExpression); + } + ExpressionMetadata ctx = e.parse(); + Analyzer analyzer = new Analyzer(metricName, filter, e, meterSystem, ctx); + analyzer.init(); + return analyzer; + } + + private List<String> samples; + + private final String metricName; + + private final FilterExpression filterExpression; + + private final Expression expression; + + private final MeterSystem meterSystem; + + private final ExpressionMetadata ctx; + + private MetricType metricType; + + private int[] percentiles; + + /** + * Analyse the full sample family map and produce meter-system metrics. + * + * <p>The {@code sampleFamilies} map contains ALL metrics from one scrape batch. 
+ * This method first selects only the entries matching {@code this.samples} + * (the input metric names extracted from the MAL expression AST at compile time), + * then applies the optional filter and runs the expression on the selected subset. + * + * @param sampleFamilies all sample families from one scrape, keyed by metric name. + */ + public void analyse(final ImmutableMap<String, SampleFamily> sampleFamilies) { + // Select only the metric families this expression references (typically 1-2 keys). + Map<String, SampleFamily> input = samples.stream() + .map(s -> Tuple.of(s, sampleFamilies.get(s))) + .filter(t -> t._2 != null) + .collect(toImmutableMap(t -> t._1, t -> t._2)); + if (input.size() < 1) { + if (log.isDebugEnabled()) { + log.debug("{} is ignored due to the lack of {}", expression, samples); + } + return; + } + if (filterExpression != null) { + input = filterExpression.filter(input); + if (input.isEmpty()) { + if (log.isDebugEnabled()) { + log.debug("{} is ignored due to mismatch of filter {}", expression, filterExpression); + } + return; + } + } + if (log.isDebugEnabled()) { + final StringBuilder sb = new StringBuilder(); + input.forEach((k, v) -> { + if (sb.length() > 0) { + sb.append(", "); + } + sb.append(k).append('(').append(v.samples.length).append(" samples)"); + }); + log.debug("[MAL] metric={}, class={}, input=[{}]", + metricName, expression.generatedClassName(), sb); + } + Result r = expression.run(input); + if (!r.isSuccess()) { + return; + } + SampleFamily.RunningContext ctx = r.getData().context; + Map<MeterEntity, Sample[]> meterSamples = ctx.getMeterSamples(); + meterSamples.forEach((meterEntity, ss) -> { + generateTraffic(meterEntity); + switch (metricType) { + case single: + AcceptableValue<Long> sv = meterSystem.buildMetrics(metricName, Long.class); + sv.accept(meterEntity, getValue(ss[0])); + send(sv, ss[0].getTimestamp()); + break; + case labeled: + AcceptableValue<DataTable> lv = meterSystem.buildMetrics(metricName, 
DataTable.class); + DataTable dt = new DataTable(); + // put all labels into the data table. + for (Sample each : ss) { + DataLabel dataLabel = new DataLabel(); + dataLabel.putAll(each.getLabels()); + dt.put(dataLabel, getValue(each)); + } + lv.accept(meterEntity, dt); + send(lv, ss[0].getTimestamp()); + break; + case histogram: + case histogramPercentile: + Stream.of(ss).map(s -> Tuple.of(getDataLabels(s.getLabels(), k -> !Objects.equals("le", k)), s)) + .collect(groupingBy(Tuple2::_1, mapping(Tuple2::_2, toList()))) + .forEach((dataLabel, subSs) -> { + if (subSs.size() < 1) { + return; + } + long[] bb = new long[subSs.size()]; + long[] vv = new long[bb.length]; + for (int i = 0; i < subSs.size(); i++) { + Sample s = subSs.get(i); + final double leVal = Double.parseDouble(s.getLabels().get("le")); + if (leVal == Double.NEGATIVE_INFINITY) { + bb[i] = Long.MIN_VALUE; + } else { + bb[i] = (long) leVal; + } + vv[i] = getValue(s); + } + BucketedValues bv = new BucketedValues(bb, vv); + bv.setLabels(dataLabel); + long time = subSs.get(0).getTimestamp(); + if (metricType == MetricType.histogram) { + AcceptableValue<BucketedValues> v = meterSystem.buildMetrics( + metricName, BucketedValues.class); + v.accept(meterEntity, bv); + send(v, time); + return; + } + AcceptableValue<PercentileArgument> v = meterSystem.buildMetrics( + metricName, PercentileArgument.class); + v.accept(meterEntity, new PercentileArgument(bv, percentiles)); + send(v, time); + }); + break; + } + }); + } + + private long getValue(Sample sample) { + if (sample.getValue() <= 0.0) { + return 0L; + } + if (sample.getValue() < 1.0) { + return 1L; + } + return Math.round(sample.getValue()); + } + + private DataLabel getDataLabels(ImmutableMap<String, String> labels, Predicate<String> filter) { + DataLabel dataLabel = new DataLabel(); + labels.keySet().stream().filter(filter).forEach(k -> dataLabel.put(k, labels.get(k))); + return dataLabel; + } + + @RequiredArgsConstructor + private enum MetricType { + // 
the metric is aggregated by the histogram function. + histogram("histogram"), + // the metric is aggregated by the histogram-based percentile function. + histogramPercentile("histogramPercentile"), + // the metric is aggregated by the labeled function. + labeled("labeled"), + // the metric is aggregated by the single-value function. + single(""); + + private final String literal; + } + + /** + * Initializes runtime state from compile-time metadata. + * + * <p>{@code ctx.getSamples()} provides the Prometheus metric names this expression references + * (e.g., ["node_cpu_seconds_total"]). These are used at runtime to select relevant entries + * from the full sample family map, avoiding unnecessary expression evaluation. + */ + private void init() { + this.samples = ctx.getSamples(); + if (ctx.isHistogram()) { + if (ctx.getPercentiles() != null && ctx.getPercentiles().length > 0) { + metricType = MetricType.histogramPercentile; + this.percentiles = ctx.getPercentiles(); + } else { + metricType = MetricType.histogram; + } + } else { + if (ctx.getLabels().isEmpty()) { + metricType = MetricType.single; + } else { + metricType = MetricType.labeled; + } + } + createMetric(ctx.getScopeType(), metricType.literal, ctx.getDownsampling()); + } + + private void createMetric(final ScopeType scopeType, + final String dataType, + final DownsamplingType downsamplingType) { + String downSamplingStr = CaseUtils.toCamelCase(downsamplingType.toString().toLowerCase(), false, '_'); + String functionName = String.format("%s%s", downSamplingStr, StringUtils.capitalize(dataType)); + meterSystem.create(metricName, functionName, scopeType); + } + + private void send(final AcceptableValue<?> v, final long time) { + v.setTimeBucket(TimeBucket.getMinuteTimeBucket(time)); + meterSystem.doStreamingCalculation(v); + } + + private void generateTraffic(MeterEntity entity) { + if (entity.getDetectPoint() != null) { + switch (entity.getScopeType()) { + case SERVICE_RELATION: + serviceRelationTraffic(entity); + break; + case 
PROCESS_RELATION: + processRelationTraffic(entity); + break; + default: + } + } else { + toService(requireNonNull(entity.getServiceName()), entity.getLayer()); + } + + if (!com.google.common.base.Strings.isNullOrEmpty(entity.getInstanceName())) { + InstanceTraffic instanceTraffic = new InstanceTraffic(); + instanceTraffic.setName(entity.getInstanceName()); + instanceTraffic.setServiceId(entity.serviceId()); + instanceTraffic.setTimeBucket(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + instanceTraffic.setLastPingTimestamp(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + if (entity.getInstanceProperties() != null && !entity.getInstanceProperties().isEmpty()) { + final JsonObject properties = new JsonObject(); + entity.getInstanceProperties().forEach((k, v) -> properties.addProperty(k, v)); + instanceTraffic.setProperties(properties); + } + MetricsStreamProcessor.getInstance().in(instanceTraffic); + } + if (!com.google.common.base.Strings.isNullOrEmpty(entity.getEndpointName())) { + EndpointTraffic endpointTraffic = new EndpointTraffic(); + endpointTraffic.setName(entity.getEndpointName()); + endpointTraffic.setServiceId(entity.serviceId()); + endpointTraffic.setTimeBucket(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + endpointTraffic.setLastPingTimestamp(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + MetricsStreamProcessor.getInstance().in(endpointTraffic); + } + } + + private void toService(String serviceName, Layer layer) { + ServiceTraffic s = new ServiceTraffic(); + s.setName(requireNonNull(serviceName)); + s.setTimeBucket(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + s.setLayer(layer); + MetricsStreamProcessor.getInstance().in(s); + } + + private void serviceRelationTraffic(MeterEntity entity) { + switch (entity.getDetectPoint()) { + case SERVER: + entity.setServiceName(entity.getDestServiceName()); + toService(requireNonNull(entity.getDestServiceName()), entity.getLayer()); + 
serviceRelationServerSide(entity); + break; + case CLIENT: + entity.setServiceName(entity.getSourceServiceName()); + toService(requireNonNull(entity.getSourceServiceName()), entity.getLayer()); + serviceRelationClientSide(entity); + break; + default: + } + } + + private void serviceRelationServerSide(MeterEntity entity) { + ServiceRelationServerSideMetrics metrics = new ServiceRelationServerSideMetrics(); + metrics.setTimeBucket(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + metrics.setSourceServiceId(entity.sourceServiceId()); + metrics.setDestServiceId(entity.destServiceId()); + metrics.getComponentIds().add(entity.getComponentId()); + metrics.setEntityId(entity.id()); + MetricsStreamProcessor.getInstance().in(metrics); + } + + private void serviceRelationClientSide(MeterEntity entity) { + ServiceRelationClientSideMetrics metrics = new ServiceRelationClientSideMetrics(); + metrics.setTimeBucket(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + metrics.setSourceServiceId(entity.sourceServiceId()); + metrics.setDestServiceId(entity.destServiceId()); + metrics.getComponentIds().add(entity.getComponentId()); + metrics.setEntityId(entity.id()); + MetricsStreamProcessor.getInstance().in(metrics); + } + + private void processRelationTraffic(MeterEntity entity) { + switch (entity.getDetectPoint()) { + case SERVER: + processRelationServerSide(entity); + break; + case CLIENT: + processRelationClientSide(entity); + break; + default: + } + } + + private void processRelationServerSide(MeterEntity entity) { + ProcessRelationServerSideMetrics metrics = new ProcessRelationServerSideMetrics(); + metrics.setTimeBucket(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + metrics.setServiceInstanceId(entity.serviceInstanceId()); + metrics.setSourceProcessId(entity.getSourceProcessId()); + metrics.setDestProcessId(entity.getDestProcessId()); + metrics.setEntityId(entity.id()); + metrics.setComponentId(entity.getComponentId()); + 
MetricsStreamProcessor.getInstance().in(metrics); + } + + private void processRelationClientSide(MeterEntity entity) { + ProcessRelationClientSideMetrics metrics = new ProcessRelationClientSideMetrics(); + metrics.setTimeBucket(TimeBucket.getMinuteTimeBucket(System.currentTimeMillis())); + metrics.setServiceInstanceId(entity.serviceInstanceId()); + metrics.setSourceProcessId(entity.getSourceProcessId()); + metrics.setDestProcessId(entity.getDestProcessId()); + metrics.setEntityId(entity.id()); + metrics.setComponentId(entity.getComponentId()); + MetricsStreamProcessor.getInstance().in(metrics); + } + +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/MetricConvert.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/MetricConvert.java new file mode 100644 index 000000000000..2891937864a8 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/MetricConvert.java @@ -0,0 +1,153 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2; + +import com.google.common.base.Preconditions; +import com.google.common.base.Strings; +import com.google.common.collect.ImmutableMap; +import io.vavr.control.Try; +import java.util.List; +import java.util.StringJoiner; +import java.util.stream.IntStream; +import java.util.stream.Stream; +import lombok.extern.slf4j.Slf4j; +import org.apache.commons.lang3.StringUtils; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; + +import static java.util.stream.Collectors.toList; + +/** + * MetricConvert converts a {@link SampleFamily} collection to meter-system metrics, then stores them in backend storage. + * + * <p>One MetricConvert instance is created per MAL config YAML file (e.g., {@code vm.yaml}). + * It holds a list of {@link Analyzer}s, one per {@code metricsRules} entry in the YAML. + * + * <p>Construction (at startup): + * <pre> + * YAML file (e.g., vm.yaml) + * metricPrefix: meter_vm + * expSuffix: service(['host'], Layer.OS_LINUX) + * filter: { tags -> tags.job_name == 'vm-monitoring' } + * metricsRules: + * - name: cpu_total_percentage + * exp: (node_cpu_seconds_total * 100).sum(['host']).rate('PT1M') + * + * MetricConvert(rule, meterSystem) + * for each rule: + * metricName = metricPrefix + "_" + name → "meter_vm_cpu_total_percentage" + * finalExp = (exp).expSuffix → "(...).service(['host'], Layer.OS_LINUX)" + * → Analyzer.build(metricName, filter, finalExp, meterSystem) + * </pre> + * + * <p>Runtime ({@link #toMeter}): receives the full {@code sampleFamilies} map (all metrics + * from one scrape) and broadcasts it to every Analyzer. Each Analyzer self-filters to only + * the input metrics it needs (via {@code this.samples} from compile-time metadata).
+ */ +@Slf4j +public class MetricConvert { + + public static <T> Stream<T> log(Try<T> t, String debugMessage) { + return t + .onSuccess(i -> log.debug(debugMessage + " :{}", i)) + .onFailure(e -> log.debug(debugMessage + " failed", e)) + .toJavaStream(); + } + + private final List<Analyzer> analyzers; + + public MetricConvert(MetricRuleConfig rule, MeterSystem service) { + Preconditions.checkState(!Strings.isNullOrEmpty(rule.getMetricPrefix())); + final String sourceName = rule.getSourceName(); + final List<? extends MetricRuleConfig.RuleConfig> rules = rule.getMetricsRules(); + this.analyzers = IntStream.range(0, rules.size()).mapToObj( + i -> { + final MetricRuleConfig.RuleConfig r = rules.get(i); + final String yamlSource = sourceName != null + ? sourceName + ".yaml:" + i : null; + return buildAnalyzer( + formatMetricName(rule, r.getName()), + rule.getFilter(), + formatExp(rule.getExpPrefix(), rule.getExpSuffix(), r.getExp()), + service, + yamlSource + ); + } + ).collect(toList()); + } + + Analyzer buildAnalyzer(final String metricsName, + final String filter, + final String exp, + final MeterSystem service, + final String yamlSource) { + return Analyzer.build( + metricsName, + filter, + exp, + service, + yamlSource + ); + } + + private String formatExp(final String expPrefix, String expSuffix, String exp) { + String ret = exp; + if (!Strings.isNullOrEmpty(expPrefix)) { + ret = String.format("(%s.%s)", StringUtils.substringBefore(exp, "."), expPrefix); + final String after = StringUtils.substringAfter(exp, "."); + if (!Strings.isNullOrEmpty(after)) { + ret = String.format("(%s.%s)", ret, after); + } + } + if (!Strings.isNullOrEmpty(expSuffix)) { + ret = String.format("(%s).%s", ret, expSuffix); + } + return ret; + } + + /** + * Broadcasts the full sample family map to every Analyzer in this config file. 
+ * + * <p>The map contains ALL metrics from a single scrape batch keyed by Prometheus metric name + * (e.g., "node_cpu_seconds_total", "node_memory_MemTotal_bytes", ...). + * Each Analyzer selects only the entries it needs via O(1) HashMap lookups on + * {@code this.samples} (derived from compile-time AST metadata). + * + * @param sampleFamilies all sample families from one scrape, keyed by metric name. + */ + public void toMeter(final ImmutableMap<String, SampleFamily> sampleFamilies) { + Preconditions.checkNotNull(sampleFamilies); + if (sampleFamilies.size() < 1) { + return; + } + for (Analyzer each : analyzers) { + try { + each.analyse(sampleFamilies); + } catch (Throwable t) { + log.error("Analyze {} error", each, t); + } + } + } + + private String formatMetricName(MetricRuleConfig rule, String meterRuleName) { + StringJoiner metricName = new StringJoiner("_"); + metricName.add(rule.getMetricPrefix()).add(meterRuleName); + return metricName.toString(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/MetricRuleConfig.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/MetricRuleConfig.java new file mode 100644 index 000000000000..42f55eebebd3 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/MetricRuleConfig.java @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2; + +import java.util.List; + +/** + * Configuration of metrics rules that are converted into meter-system metrics. + */ +public interface MetricRuleConfig { + + /** + * Get the metric name prefix. + */ + String getMetricPrefix(); + + /** + * Get the MAL expression suffix. + */ + String getExpSuffix(); + + /** + * Get the MAL expression prefix. + */ + String getExpPrefix(); + + /** + * Get all metrics rules. + */ + List<? extends RuleConfig> getMetricsRules(); + + String getFilter(); + + /** + * Returns the source name of this config (e.g., YAML file name without extension). + * Used to build informative {@code SourceFile} attributes in generated bytecode + * so stack traces show the originating config file. + * + * @return source name, or {@code null} if unknown.
+ */ + default String getSourceName() { + return null; + } + + interface RuleConfig { + /** + * Get the defined metric name. + */ + String getName(); + + /** + * Get the MAL expression that builds the metric. + */ + String getExp(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClassGenerator.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClassGenerator.java new file mode 100644 index 000000000000..e55a50bfe57b --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClassGenerator.java @@ -0,0 +1,1334 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler; + +import java.io.DataOutputStream; +import java.io.File; +import java.io.FileOutputStream; +import java.lang.invoke.MethodHandle; +import java.lang.invoke.MethodHandles; +import java.util.ArrayList; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; +import javassist.ClassPool; +import javassist.CtClass; +import javassist.CtNewConstructor; +import javassist.CtNewMethod; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt.MalExpressionPackageHolder; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.DownsamplingType; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.ExpressionMetadata; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalFilter; +import org.apache.skywalking.oap.server.core.WorkPath; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; +import org.apache.skywalking.oap.server.library.util.StringUtil; + +/** + * Generates {@link MalExpression} implementation classes from + * {@link MALExpressionModel} AST using Javassist bytecode generation. 
+ * + * <p>Each generated class implements: + * <pre> + * SampleFamily run(Map&lt;String, SampleFamily&gt; samples) + * </pre> + */ +@Slf4j +public final class MALClassGenerator { + + private static final AtomicInteger CLASS_COUNTER = new AtomicInteger(0); + + private static final String PACKAGE_PREFIX = + "org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt."; + + private static final String SF = "org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily"; + + private static final Set<String> USED_CLASS_NAMES = + java.util.Collections.synchronizedSet(new java.util.HashSet<>()); + + private final ClassPool classPool; + private List<String> closureFieldNames; + private int closureFieldIndex; + private File classOutputDir; + private String classNameHint; + private String yamlSource; + + public MALClassGenerator() { + this(createClassPool()); + if (StringUtil.isNotEmpty(System.getenv("SW_DYNAMIC_CLASS_ENGINE_DEBUG"))) { + classOutputDir = new File(WorkPath.getPath().getParentFile(), "mal-rt"); + } + } + + private static ClassPool createClassPool() { + final ClassPool pool = new ClassPool(true); + pool.appendClassPath( + new javassist.LoaderClassPath( + Thread.currentThread().getContextClassLoader())); + return pool; + } + + public MALClassGenerator(final ClassPool classPool) { + this.classPool = classPool; + } + + public void setClassOutputDir(final File dir) { + this.classOutputDir = dir; + } + + public void setClassNameHint(final String hint) { + this.classNameHint = hint; + } + + public void setYamlSource(final String yamlSource) { + this.yamlSource = yamlSource; + } + + private String makeClassName(final String defaultPrefix) { + if (classNameHint != null) { + return dedupClassName(PACKAGE_PREFIX + MALCodegenHelper.sanitizeName(classNameHint)); + } + return PACKAGE_PREFIX + defaultPrefix + CLASS_COUNTER.getAndIncrement(); + } + + private String dedupClassName(final String base) { + if (USED_CLASS_NAMES.add(base)) { + return base; + } + for (int i = 2; ; i++) { + 
final String candidate = base + "_" + i; + if (USED_CLASS_NAMES.add(candidate)) { + return candidate; + } + } + } + + void writeClassFile(final CtClass ctClass) { + if (classOutputDir == null) { + return; + } + if (!classOutputDir.exists()) { + classOutputDir.mkdirs(); + } + final File file = new File(classOutputDir, ctClass.getSimpleName() + ".class"); + try (DataOutputStream out = new DataOutputStream(new FileOutputStream(file))) { + ctClass.toBytecode(out); + } catch (Exception e) { + log.warn("Failed to write class file {}: {}", file, e.getMessage()); + } + } + + /** + * Adds a {@code LineNumberTable} attribute to the method by scanning + * bytecode for store instructions to local variable slots ≥ + * {@code firstResultSlot}. Each such store marks the end of a + * source-level statement; the following instruction gets the next + * sequential line number. + * + * <p>This gives meaningful line numbers in stack traces even though + * the generated Java source is compiled in-memory by Javassist + * (which does not produce line numbers on its own). 
+ * + * @param method the compiled method + * @param firstResultSlot the first local variable slot that holds + * a generated result variable (stores to + * earlier slots are parameters and ignored) + */ + void addLineNumberTable(final javassist.CtMethod method, + final int firstResultSlot) { + try { + final javassist.bytecode.MethodInfo mi = method.getMethodInfo(); + final javassist.bytecode.CodeAttribute code = mi.getCodeAttribute(); + if (code == null) { + return; + } + + final List<int[]> entries = new ArrayList<>(); + int line = 1; + boolean nextIsNewLine = true; + + final javassist.bytecode.CodeIterator ci = code.iterator(); + while (ci.hasNext()) { + final int pc = ci.next(); + if (nextIsNewLine) { + entries.add(new int[]{pc, line++}); + nextIsNewLine = false; + } + final int op = ci.byteAt(pc) & 0xFF; + int slot = -1; + // Compact store opcodes: istore_0(59)..astore_3(78) + if (op >= 59 && op <= 78) { + slot = (op - 59) % 4; + } + // One-byte-operand store opcodes: istore(54)..astore(58) + else if (op >= 54 && op <= 58) { + slot = ci.byteAt(pc + 1) & 0xFF; + } + if (slot >= firstResultSlot) { + nextIsNewLine = true; + } + } + + if (entries.isEmpty()) { + return; + } + + // Build LineNumberTable: u2 count, then (u2 start_pc, u2 line_number)[] + final javassist.bytecode.ConstPool cp = mi.getConstPool(); + final byte[] info = new byte[2 + entries.size() * 4]; + info[0] = (byte) (entries.size() >> 8); + info[1] = (byte) entries.size(); + for (int i = 0; i < entries.size(); i++) { + final int off = 2 + i * 4; + info[off] = (byte) (entries.get(i)[0] >> 8); + info[off + 1] = (byte) entries.get(i)[0]; + info[off + 2] = (byte) (entries.get(i)[1] >> 8); + info[off + 3] = (byte) entries.get(i)[1]; + } + code.getAttributes().add( + new javassist.bytecode.AttributeInfo(cp, "LineNumberTable", info)); + } catch (Exception e) { + log.warn("Failed to add LineNumberTable: {}", e.getMessage()); + } + } + + /** + * Builds the SourceFile name for a generated class. 
When YAML source info + * is available, produces {@code "(spring-sleuth[3])metricName.java"}; + * otherwise falls back to {@code "metricName.java"}. + */ + private String formatSourceFileName(final String metricName) { + final String classFile = metricName + ".java"; + if (yamlSource != null) { + return "(" + yamlSource + ")" + classFile; + } + return classFile; + } + + /** + * Sets the {@code SourceFile} attribute of the class to the given name, + * replacing the default (class name + ".java"). This makes stack traces + * show the metric/rule name instead of the generated class name. + */ + private static void setSourceFile(final CtClass ctClass, final String name) { + try { + final javassist.bytecode.ClassFile cf = ctClass.getClassFile(); + final javassist.bytecode.AttributeInfo sf = cf.getAttribute("SourceFile"); + if (sf != null) { + final javassist.bytecode.ConstPool cp = cf.getConstPool(); + final int idx = cp.addUtf8Info(name); + sf.set(new byte[]{(byte) (idx >> 8), (byte) idx}); + } + } catch (Exception e) { + // best-effort — ignore + } + } + + void addLocalVariableTable(final javassist.CtMethod method, + final String className, + final String[][] vars) { + try { + final javassist.bytecode.MethodInfo mi = method.getMethodInfo(); + final javassist.bytecode.CodeAttribute code = mi.getCodeAttribute(); + if (code == null) { + return; + } + final javassist.bytecode.ConstPool cp = mi.getConstPool(); + final int len = code.getCodeLength(); + + final javassist.bytecode.LocalVariableAttribute lva = + new javassist.bytecode.LocalVariableAttribute(cp); + lva.addEntry(0, len, + cp.addUtf8Info("this"), + cp.addUtf8Info("L" + className.replace('.', '/') + ";"), 0); + for (int i = 0; i < vars.length; i++) { + lva.addEntry(0, len, + cp.addUtf8Info(vars[i][0]), + cp.addUtf8Info(vars[i][1]), i + 1); + } + code.getAttributes().add(lva); + } catch (Exception e) { + log.warn("Failed to add LocalVariableTable: {}", e.getMessage()); + } + } + + private void 
addRunLocalVariableTable(final javassist.CtMethod method, + final String className, + final int tempCount) { + final String sfDesc = "L" + SF.replace('.', '/') + ";"; + final String[][] vars = new String[2 + tempCount][]; + vars[0] = new String[]{"samples", "Ljava/util/Map;"}; + vars[1] = new String[]{RUN_VAR, sfDesc}; + for (int i = 0; i < tempCount; i++) { + vars[2 + i] = new String[]{"_t" + i, sfDesc}; + } + addLocalVariableTable(method, className, vars); + } + + /** + * Compiles a MAL expression into a MalExpression implementation. + * + * @param metricName the metric name (used in the generated class name) + * @param expression the MAL expression string + * @return a MalExpression instance + * @throws Exception if parsing or compilation fails + */ + public MalExpression compile(final String metricName, + final String expression) throws Exception { + final MALExpressionModel.Expr ast = MALScriptParser.parse(expression); + final String saved = classNameHint; + if (classNameHint == null) { + classNameHint = metricName; + } + try { + return compileFromModel(metricName, ast); + } finally { + classNameHint = saved; + } + } + + /** + * Compiles a MAL filter closure into a {@link MalFilter} implementation. + * + * @param filterExpression e.g. 
{@code "{ tags -> tags.job_name == 'mysql-monitoring' }"} + * @return a MalFilter instance + * @throws Exception if parsing or compilation fails + */ + @SuppressWarnings("unchecked") + public MalFilter compileFilter(final String filterExpression) throws Exception { + final MALExpressionModel.ClosureArgument closure = + MALScriptParser.parseFilter(filterExpression); + + final String className = makeClassName("MalFilter_"); + + final CtClass ctClass = classPool.makeClass(className); + ctClass.addInterface(classPool.get( + "org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalFilter")); + + ctClass.addConstructor(CtNewConstructor.defaultConstructor(ctClass)); + + final List<String> params = closure.getParams(); + final String paramName = params.isEmpty() ? "it" : params.get(0); + + final MALClosureCodegen cc = new MALClosureCodegen(classPool, this); + final StringBuilder sb = new StringBuilder(); + sb.append("public boolean test(java.util.Map ").append(paramName) + .append(") {\n"); + + final List<MALExpressionModel.ClosureStatement> body = closure.getBody(); + if (body.size() == 1 + && body.get(0) instanceof MALExpressionModel.ClosureExprStatement) { + // Single expression → evaluate as condition and return boolean + final MALExpressionModel.ClosureExpr expr = + ((MALExpressionModel.ClosureExprStatement) body.get(0)).getExpr(); + if (expr instanceof MALExpressionModel.ClosureCondition) { + sb.append(" return "); + cc.generateClosureCondition( + sb, (MALExpressionModel.ClosureCondition) expr, paramName); + sb.append(";\n"); + } else { + // Truthy evaluation of the expression + sb.append(" Object _v = "); + cc.generateClosureExpr(sb, expr, paramName); + sb.append(";\n"); + sb.append(" return _v != null && !Boolean.FALSE.equals(_v);\n"); + } + } else { + // Multi-statement body: generate each statement; fall back to + // returning false if no generated statement returns + for (final MALExpressionModel.ClosureStatement stmt : body) { + cc.generateClosureStatement(sb, stmt, paramName); + } + sb.append(" return 
false;\n"); + } + sb.append("}\n"); + + final String filterBody = sb.toString(); + if (log.isDebugEnabled()) { + log.debug("MAL compileFilter AST: {}", closure); + log.debug("MAL compileFilter test():\n{}", filterBody); + } + + final javassist.CtMethod testMethod = + CtNewMethod.make(filterBody, ctClass); + ctClass.addMethod(testMethod); + addLocalVariableTable(testMethod, className, new String[][]{ + {paramName, "Ljava/util/Map;"} + }); + addLineNumberTable(testMethod, 2); // slot 0=this, 1=closure parameter + + writeClassFile(ctClass); + + final Class<?> clazz = ctClass.toClass(MalExpressionPackageHolder.class); + ctClass.detach(); + return (MalFilter) clazz.getDeclaredConstructor().newInstance(); + } + + /** + * Compiles from a pre-parsed AST model. + * + * @param metricName the metric name (used in the generated class name) + * @param ast the pre-parsed AST model + * @return a MalExpression instance + * @throws Exception if compilation fails + */ + public MalExpression compileFromModel(final String metricName, + final MALExpressionModel.Expr ast) throws Exception { + final String className = makeClassName("MalExpr_"); + + closureFieldIndex = 0; + final MALClosureCodegen cc = new MALClosureCodegen(classPool, this); + final List<MALClosureCodegen.ClosureInfo> closures = new ArrayList<>(); + cc.collectClosures(ast, closures); + + // Build closure field names and determine interface types + final List<String> closureFieldNames = new ArrayList<>(); + final List<String> closureInterfaceTypes = new ArrayList<>(); + final java.util.Map<String, Integer> closureNameCounts = new java.util.HashMap<>(); + for (int i = 0; i < closures.size(); i++) { + final String purpose = closures.get(i).methodName; + final int count = closureNameCounts.getOrDefault(purpose, 0); + closureNameCounts.put(purpose, count + 1); + final String suffix = count == 0 ? 
purpose : purpose + "_" + (count + 1); + closureFieldNames.add("_" + suffix); + closureInterfaceTypes.add(closures.get(i).interfaceType); + } + + final CtClass ctClass = classPool.makeClass(className); + ctClass.addInterface(classPool.get( + "org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression")); + + // Add closure fields typed as functional interfaces (not concrete closure classes) + for (int i = 0; i < closures.size(); i++) { + ctClass.addField(javassist.CtField.make( + "public " + closureInterfaceTypes.get(i) + " " + + closureFieldNames.get(i) + ";", ctClass)); + } + + // Add closure bodies as methods on the main class + final List<String> closureMethodNames = new ArrayList<>(); + for (int i = 0; i < closures.size(); i++) { + final String methodName = cc.addClosureMethod( + ctClass, closureFieldNames.get(i), closures.get(i)); + closureMethodNames.add(methodName); + } + + ctClass.addConstructor(CtNewConstructor.defaultConstructor(ctClass)); + + this.closureFieldNames = closureFieldNames; + this.closureFieldIndex = 0; + final String runBody = generateRunMethod(ast); + final ExpressionMetadata metadata = extractMetadata(ast); + final String metadataBody = generateMetadataMethod(metadata); + + if (log.isDebugEnabled()) { + log.debug("MAL compile [{}] AST: {}", metricName, ast); + log.debug("MAL compile [{}] run():\n{}", metricName, runBody); + log.debug("MAL compile [{}] metadata():\n{}", metricName, metadataBody); + } + + final javassist.CtMethod runMethod = CtNewMethod.make(runBody, ctClass); + ctClass.addMethod(runMethod); + addRunLocalVariableTable(runMethod, className, runTempCounter); + addLineNumberTable(runMethod, 2); // slot 2 = sf + final javassist.CtMethod metaMethod = + CtNewMethod.make(metadataBody, ctClass); + ctClass.addMethod(metaMethod); + addLocalVariableTable(metaMethod, className, new String[][]{ + {"_samples", "Ljava/util/List;"}, + {"_scopeLabels", "Ljava/util/Set;"}, + {"_aggLabels", "Ljava/util/Set;"}, + {"_pct", "[I"} + }); + 
setSourceFile(ctClass, formatSourceFileName(metricName)); + + writeClassFile(ctClass); + + final Class<?> clazz = ctClass.toClass(MalExpressionPackageHolder.class); + ctClass.detach(); + final MalExpression instance = (MalExpression) clazz.getDeclaredConstructor() + .newInstance(); + + // Wire closure fields via LambdaMetafactory — creates functional interface + // instances from method handles pointing to the closure methods on this class. + // No separate .class files are produced (same mechanism as javac lambdas). + if (!closures.isEmpty()) { + final MethodHandles.Lookup lookup = MethodHandles.privateLookupIn( + clazz, MethodHandles.lookup()); + for (int i = 0; i < closures.size(); i++) { + final MALCodegenHelper.ClosureTypeInfo typeInfo = + MALCodegenHelper.getClosureTypeInfo(closureInterfaceTypes.get(i)); + final MethodHandle mh = lookup.findVirtual( + clazz, closureMethodNames.get(i), typeInfo.methodType); + final Object func = MALCodegenHelper.createLambda( + lookup, typeInfo, mh, clazz, instance); + clazz.getField(closureFieldNames.get(i)).set(instance, func); + } + } + + return instance; + } + + private static final String RUN_VAR = "sf"; + + private int runTempCounter; + + private String generateRunMethod(final MALExpressionModel.Expr ast) { + runTempCounter = 0; + final StringBuilder sb = new StringBuilder(); + sb.append("public ").append(SF).append(" run(java.util.Map samples) {\n"); + sb.append(" ").append(SF).append(" ").append(RUN_VAR).append(";\n"); + generateExprStatements(sb, ast); + sb.append(" return ").append(RUN_VAR).append(";\n"); + sb.append("}\n"); + return sb.toString(); + } + + private String nextTemp() { + return "_t" + runTempCounter++; + } + + /** + * Emits the expression as a series of {@code sf = ...;} reassignment statements, + * one per chain call. All results are stored in the single {@code sf} variable. + * For binary SF op SF expressions, a temporary variable saves the left operand. 
+ */ + private void generateExprStatements(final StringBuilder sb, + final MALExpressionModel.Expr expr) { + if (expr instanceof MALExpressionModel.MetricExpr) { + generateMetricExprStatements( + sb, (MALExpressionModel.MetricExpr) expr); + } else if (expr instanceof MALExpressionModel.NumberExpr) { + final double val = ((MALExpressionModel.NumberExpr) expr).getValue(); + sb.append(" ").append(RUN_VAR).append(" = ") + .append(SF).append(".EMPTY.plus(Double.valueOf(") + .append(val).append("));\n"); + } else if (expr instanceof MALExpressionModel.BinaryExpr) { + generateBinaryExprStatements( + sb, (MALExpressionModel.BinaryExpr) expr); + } else if (expr instanceof MALExpressionModel.UnaryNegExpr) { + generateExprStatements( + sb, ((MALExpressionModel.UnaryNegExpr) expr).getOperand()); + sb.append(" ").append(RUN_VAR).append(" = ") + .append(RUN_VAR).append(".negative();\n"); + } else if (expr instanceof MALExpressionModel.FunctionCallExpr) { + generateFunctionCallStatements( + sb, (MALExpressionModel.FunctionCallExpr) expr); + } else if (expr instanceof MALExpressionModel.ParenChainExpr) { + generateParenChainStatements( + sb, (MALExpressionModel.ParenChainExpr) expr); + } else { + throw new IllegalArgumentException("Unknown expr type: " + expr); + } + } + + private void generateMetricExprStatements( + final StringBuilder sb, final MALExpressionModel.MetricExpr expr) { + sb.append(" ").append(RUN_VAR).append(" = ((").append(SF) + .append(") samples.getOrDefault(\"") + .append(MALCodegenHelper.escapeJava(expr.getMetricName())) + .append("\", ").append(SF).append(".EMPTY));\n"); + emitChainStatements(sb, expr.getMethodChain()); + } + + private void generateParenChainStatements( + final StringBuilder sb, final MALExpressionModel.ParenChainExpr expr) { + generateExprStatements(sb, expr.getInner()); + emitChainStatements(sb, expr.getMethodChain()); + } + + private void generateFunctionCallStatements( + final StringBuilder sb, final MALExpressionModel.FunctionCallExpr 
expr) { + final String fn = expr.getFunctionName(); + final List<MALExpressionModel.Argument> args = expr.getArguments(); + + if (("count".equals(fn) || "topN".equals(fn)) && !args.isEmpty()) { + final MALExpressionModel.Argument firstArg = args.get(0); + if (firstArg instanceof MALExpressionModel.ExprArgument) { + generateExprStatements( + sb, ((MALExpressionModel.ExprArgument) firstArg).getExpr()); + } + sb.append(" ").append(RUN_VAR).append(" = ") + .append(RUN_VAR).append('.').append(fn).append('('); + for (int i = 1; i < args.size(); i++) { + if (i > 1) { + sb.append(", "); + } + generateArgument(sb, args.get(i)); + } + sb.append(");\n"); + } else { + sb.append(" ").append(RUN_VAR).append(" = ") + .append(fn).append('('); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateArgument(sb, args.get(i)); + } + sb.append(");\n"); + } + emitChainStatements(sb, expr.getMethodChain()); + } + + private void generateBinaryExprStatements( + final StringBuilder sb, final MALExpressionModel.BinaryExpr expr) { + final MALExpressionModel.Expr left = expr.getLeft(); + final MALExpressionModel.Expr right = expr.getRight(); + final MALExpressionModel.ArithmeticOp op = expr.getOp(); + + final boolean leftIsNumber = left instanceof MALExpressionModel.NumberExpr + || isScalarFunction(left); + final boolean rightIsNumber = right instanceof MALExpressionModel.NumberExpr + || isScalarFunction(right); + + if (leftIsNumber && !rightIsNumber) { + generateExprStatements(sb, right); + sb.append(" ").append(RUN_VAR).append(" = "); + switch (op) { + case ADD: + sb.append(RUN_VAR).append(".plus(Double.valueOf("); + generateScalarExpr(sb, left); + sb.append("))"); + break; + case SUB: + sb.append(RUN_VAR).append(".minus(Double.valueOf("); + generateScalarExpr(sb, left); + sb.append(")).negative()"); + break; + case MUL: + sb.append(RUN_VAR).append(".multiply(Double.valueOf("); + generateScalarExpr(sb, left); + sb.append("))"); + break; + case DIV: + 
sb.append("org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt") + .append(".MalRuntimeHelper.divReverse("); + generateScalarExpr(sb, left); + sb.append(", ").append(RUN_VAR).append(")"); + break; + default: + throw new IllegalArgumentException("Unsupported op: " + op); + } + sb.append(";\n"); + } else if (!leftIsNumber && rightIsNumber) { + generateExprStatements(sb, left); + sb.append(" ").append(RUN_VAR).append(" = ") + .append(RUN_VAR).append(".").append(MALCodegenHelper.opMethodName(op)) + .append("(Double.valueOf("); + generateScalarExpr(sb, right); + sb.append("));\n"); + } else { + // SF op SF: compute left to sf, save to temp, compute right to sf, combine + generateExprStatements(sb, left); + final String temp = nextTemp(); + sb.append(" ").append(SF).append(" ").append(temp) + .append(" = ").append(RUN_VAR).append(";\n"); + generateExprStatements(sb, right); + sb.append(" ").append(RUN_VAR).append(" = ") + .append(temp).append(".").append(MALCodegenHelper.opMethodName(op)) + .append("(").append(RUN_VAR).append(");\n"); + } + } + + /** + * Emits each method chain call as a reassignment of {@code sf}. 
+ */ + private void emitChainStatements(final StringBuilder sb, + final List<MALExpressionModel.MethodCall> chain) { + for (final MALExpressionModel.MethodCall mc : chain) { + sb.append(" ").append(RUN_VAR).append(" = ") + .append(RUN_VAR).append('.').append(mc.getName()).append('('); + final List<MALExpressionModel.Argument> args = mc.getArguments(); + if (MALCodegenHelper.VARARGS_STRING_METHODS.contains(mc.getName()) && !args.isEmpty() + && allStringArgs(args)) { + sb.append("new String[]{"); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateArgument(sb, args.get(i)); + } + sb.append('}'); + } else { + final boolean primitiveDouble = + MALCodegenHelper.PRIMITIVE_DOUBLE_METHODS.contains(mc.getName()); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateMethodCallArg(sb, args.get(i), primitiveDouble); + } + } + sb.append(");\n"); + } + } + + private void generateExpr(final StringBuilder sb, + final MALExpressionModel.Expr expr) { + if (expr instanceof MALExpressionModel.MetricExpr) { + generateMetricExpr(sb, (MALExpressionModel.MetricExpr) expr); + } else if (expr instanceof MALExpressionModel.NumberExpr) { + final double val = ((MALExpressionModel.NumberExpr) expr).getValue(); + sb.append(SF).append(".EMPTY.plus(Double.valueOf(").append(val).append("))"); + } else if (expr instanceof MALExpressionModel.BinaryExpr) { + generateBinaryExpr(sb, (MALExpressionModel.BinaryExpr) expr); + } else if (expr instanceof MALExpressionModel.UnaryNegExpr) { + sb.append("("); + generateExpr(sb, ((MALExpressionModel.UnaryNegExpr) expr).getOperand()); + sb.append(").negative()"); + } else if (expr instanceof MALExpressionModel.FunctionCallExpr) { + generateFunctionCallExpr(sb, (MALExpressionModel.FunctionCallExpr) expr); + } else if (expr instanceof MALExpressionModel.ParenChainExpr) { + generateParenChainExpr(sb, (MALExpressionModel.ParenChainExpr) expr); + } + } + + private void generateMetricExpr(final 
StringBuilder sb, + final MALExpressionModel.MetricExpr expr) { + sb.append("((").append(SF) + .append(") samples.getOrDefault(\"") + .append(MALCodegenHelper.escapeJava(expr.getMetricName())) + .append("\", ").append(SF).append(".EMPTY))"); + generateMethodChain(sb, expr.getMethodChain()); + } + + private void generateFunctionCallExpr(final StringBuilder sb, + final MALExpressionModel.FunctionCallExpr expr) { + // Top-level functions like count(metric), topN(metric, n, Order) + // These are static-style calls on the first argument (SampleFamily) + final String fn = expr.getFunctionName(); + final List<MALExpressionModel.Argument> args = expr.getArguments(); + + if (("count".equals(fn) || "topN".equals(fn)) && !args.isEmpty()) { + // First arg is the SampleFamily + final MALExpressionModel.Argument firstArg = args.get(0); + if (firstArg instanceof MALExpressionModel.ExprArgument) { + generateExpr(sb, + ((MALExpressionModel.ExprArgument) firstArg).getExpr()); + } + sb.append('.').append(fn).append('('); + for (int i = 1; i < args.size(); i++) { + if (i > 1) { + sb.append(", "); + } + generateArgument(sb, args.get(i)); + } + sb.append(')'); + } else { + // Generic function call + sb.append(fn).append('('); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateArgument(sb, args.get(i)); + } + sb.append(')'); + } + generateMethodChain(sb, expr.getMethodChain()); + } + + private void generateParenChainExpr(final StringBuilder sb, + final MALExpressionModel.ParenChainExpr expr) { + sb.append("("); + generateExpr(sb, expr.getInner()); + sb.append(")"); + generateMethodChain(sb, expr.getMethodChain()); + } + + private void generateBinaryExpr(final StringBuilder sb, + final MALExpressionModel.BinaryExpr expr) { + final MALExpressionModel.Expr left = expr.getLeft(); + final MALExpressionModel.Expr right = expr.getRight(); + final MALExpressionModel.ArithmeticOp op = expr.getOp(); + + final boolean leftIsNumber = left instanceof 
MALExpressionModel.NumberExpr + || isScalarFunction(left); + final boolean rightIsNumber = right instanceof MALExpressionModel.NumberExpr + || isScalarFunction(right); + + if (leftIsNumber && !rightIsNumber) { + // N op SF -> swap to SF.op(N) with special handling for SUB and DIV + switch (op) { + case ADD: + sb.append("("); + generateExpr(sb, right); + sb.append(").plus(Double.valueOf("); + generateScalarExpr(sb, left); + sb.append("))"); + break; + case SUB: + sb.append("("); + generateExpr(sb, right); + sb.append(").minus(Double.valueOf("); + generateScalarExpr(sb, left); + sb.append(")).negative()"); + break; + case MUL: + sb.append("("); + generateExpr(sb, right); + sb.append(").multiply(Double.valueOf("); + generateScalarExpr(sb, left); + sb.append("))"); + break; + case DIV: + sb.append("org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt") + .append(".MalRuntimeHelper.divReverse("); + generateScalarExpr(sb, left); + sb.append(", "); + generateExpr(sb, right); + sb.append(")"); + break; + default: + throw new IllegalArgumentException("Unsupported op: " + op); + } + } else if (!leftIsNumber && rightIsNumber) { + // SF op N + sb.append("("); + generateExpr(sb, left); + sb.append(").").append(MALCodegenHelper.opMethodName(op)) + .append("(Double.valueOf("); + generateScalarExpr(sb, right); + sb.append("))"); + } else { + // SF op SF (both non-number) + sb.append("("); + generateExpr(sb, left); + sb.append(").").append(MALCodegenHelper.opMethodName(op)).append("("); + generateExpr(sb, right); + sb.append(")"); + } + } + + private void generateMethodChain(final StringBuilder sb, + final List<MALExpressionModel.MethodCall> chain) { + for (final MALExpressionModel.MethodCall mc : chain) { + sb.append('.').append(mc.getName()).append('('); + final List<MALExpressionModel.Argument> args = mc.getArguments(); + if (MALCodegenHelper.VARARGS_STRING_METHODS.contains(mc.getName()) && !args.isEmpty() + && allStringArgs(args)) { + sb.append("new String[]{"); + for (int i 
= 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateArgument(sb, args.get(i)); + } + sb.append('}'); + } else { + final boolean primitiveDouble = + MALCodegenHelper.PRIMITIVE_DOUBLE_METHODS.contains(mc.getName()); + for (int i = 0; i < args.size(); i++) { + if (i > 0) { + sb.append(", "); + } + generateMethodCallArg(sb, args.get(i), primitiveDouble); + } + } + sb.append(')'); + } + } + + /** + * Generates a method call argument, handling numeric ExprArgument + * specially when the target method expects a primitive double. + */ + private void generateMethodCallArg(final StringBuilder sb, + final MALExpressionModel.Argument arg, + final boolean primitiveDouble) { + if (primitiveDouble + && arg instanceof MALExpressionModel.ExprArgument) { + final MALExpressionModel.Expr innerExpr = + ((MALExpressionModel.ExprArgument) arg).getExpr(); + if (innerExpr instanceof MALExpressionModel.NumberExpr) { + // Emit raw double literal for methods taking primitive double + final double num = + ((MALExpressionModel.NumberExpr) innerExpr).getValue(); + sb.append(num); + return; + } + } + generateArgument(sb, arg); + } + + private static boolean allStringArgs(final List<MALExpressionModel.Argument> args) { + for (final MALExpressionModel.Argument arg : args) { + if (!(arg instanceof MALExpressionModel.StringArgument) + && !(arg instanceof MALExpressionModel.NullArgument)) { + return false; + } + } + return true; + } + + private void generateArgument(final StringBuilder sb, + final MALExpressionModel.Argument arg) { + if (arg instanceof MALExpressionModel.StringArgument) { + sb.append('"') + .append(MALCodegenHelper.escapeJava(((MALExpressionModel.StringArgument) arg).getValue())) + .append('"'); + } else if (arg instanceof MALExpressionModel.StringListArgument) { + final List<String> vals = + ((MALExpressionModel.StringListArgument) arg).getValues(); + sb.append("java.util.List.of("); + for (int i = 0; i < vals.size(); i++) { + if (i > 0) { + sb.append(", "); 
+ } + sb.append('"').append(MALCodegenHelper.escapeJava(vals.get(i))).append('"'); + } + sb.append(')'); + } else if (arg instanceof MALExpressionModel.NumberListArgument) { + final List<Double> vals = + ((MALExpressionModel.NumberListArgument) arg).getValues(); + sb.append("java.util.List.of("); + for (int i = 0; i < vals.size(); i++) { + if (i > 0) { + sb.append(", "); + } + final double v = vals.get(i); + if (v == Math.floor(v) && !Double.isInfinite(v)) { + sb.append("Integer.valueOf(").append((int) v).append(')'); + } else { + sb.append("Double.valueOf(").append(v).append(')'); + } + } + sb.append(')'); + } else if (arg instanceof MALExpressionModel.BoolArgument) { + sb.append(((MALExpressionModel.BoolArgument) arg).isValue()); + } else if (arg instanceof MALExpressionModel.NullArgument) { + sb.append("null"); + } else if (arg instanceof MALExpressionModel.EnumRefArgument) { + final MALExpressionModel.EnumRefArgument enumRef = + (MALExpressionModel.EnumRefArgument) arg; + final String fqcn = MALCodegenHelper.ENUM_FQCN.get(enumRef.getEnumType()); + if (fqcn != null) { + sb.append(fqcn); + } else { + sb.append(enumRef.getEnumType()); + } + sb.append('.').append(enumRef.getEnumValue()); + } else if (arg instanceof MALExpressionModel.ExprArgument) { + final MALExpressionModel.Expr innerExpr = + ((MALExpressionModel.ExprArgument) arg).getExpr(); + if (innerExpr instanceof MALExpressionModel.NumberExpr) { + // Numeric literal argument (e.g., valueEqual(1), multiply(100)) + // Emit as Double.valueOf() to match Number parameter types. 
+ final double num = ((MALExpressionModel.NumberExpr) innerExpr).getValue(); + sb.append("Double.valueOf(").append(num).append(")"); + } else if (innerExpr instanceof MALExpressionModel.MetricExpr + && ((MALExpressionModel.MetricExpr) innerExpr).getMethodChain().isEmpty()) { + // Bare identifier — could be an enum constant like SUM, AVG + final String name = + ((MALExpressionModel.MetricExpr) innerExpr).getMetricName(); + if (MALCodegenHelper.isDownsamplingType(name)) { + sb.append(MALCodegenHelper.ENUM_FQCN.get("DownsamplingType")).append('.').append(name); + } else { + // It's a metric reference used as argument (e.g., div(other_metric)) + generateExpr(sb, innerExpr); + } + } else { + generateExpr(sb, innerExpr); + } + } else if (arg instanceof MALExpressionModel.ClosureArgument) { + generateClosureArgument(sb, (MALExpressionModel.ClosureArgument) arg); + } + } + + private void generateClosureArgument(final StringBuilder sb, + final MALExpressionModel.ClosureArgument closure) { + // Reference pre-compiled closure field + sb.append("this.").append(closureFieldNames.get(closureFieldIndex++)); + } + + // Closure statement/expr/condition generation delegated to MALClosureCodegen. 
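+ // The sample-name collection below is a plain recursive descent over the
+ // expression AST, accumulating metric names in first-seen order via a
+ // LinkedHashSet. A minimal standalone sketch of the same pattern — using
+ // simplified stand-in node types, NOT the real MALExpressionModel classes:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical, simplified AST node types for illustration only; the real
// generator walks MALExpressionModel.MetricExpr / BinaryExpr / etc.
interface Expr { }

final class Metric implements Expr {
    final String name;
    Metric(final String name) { this.name = name; }
}

final class Binary implements Expr {
    final Expr left;
    final Expr right;
    Binary(final Expr left, final Expr right) { this.left = left; this.right = right; }
}

public class CollectNamesSketch {
    // Recursive walk: leaf nodes contribute their name, interior nodes recurse.
    // A LinkedHashSet collapses duplicates while preserving insertion order,
    // which keeps generated code deterministic across runs.
    static void collect(final Expr e, final Set<String> out) {
        if (e instanceof Metric) {
            out.add(((Metric) e).name);
        } else if (e instanceof Binary) {
            collect(((Binary) e).left, out);
            collect(((Binary) e).right, out);
        }
    }

    public static void main(final String[] args) {
        final Expr ast = new Binary(new Metric("a"),
            new Binary(new Metric("b"), new Metric("a")));
        final Set<String> names = new LinkedHashSet<>();
        collect(ast, names);
        System.out.println(names); // duplicate "a" collapses; order preserved
    }
}
```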
+ + private static void collectSampleNames(final MALExpressionModel.Expr expr, + final Set<String> names) { + if (expr instanceof MALExpressionModel.MetricExpr) { + final MALExpressionModel.MetricExpr me = (MALExpressionModel.MetricExpr) expr; + names.add(me.getMetricName()); + collectSampleNamesFromChain(me.getMethodChain(), names); + } else if (expr instanceof MALExpressionModel.BinaryExpr) { + collectSampleNames(((MALExpressionModel.BinaryExpr) expr).getLeft(), names); + collectSampleNames(((MALExpressionModel.BinaryExpr) expr).getRight(), names); + } else if (expr instanceof MALExpressionModel.UnaryNegExpr) { + collectSampleNames( + ((MALExpressionModel.UnaryNegExpr) expr).getOperand(), names); + } else if (expr instanceof MALExpressionModel.ParenChainExpr) { + final MALExpressionModel.ParenChainExpr pce = + (MALExpressionModel.ParenChainExpr) expr; + collectSampleNames(pce.getInner(), names); + collectSampleNamesFromChain(pce.getMethodChain(), names); + } else if (expr instanceof MALExpressionModel.FunctionCallExpr) { + for (final MALExpressionModel.Argument arg : + ((MALExpressionModel.FunctionCallExpr) expr).getArguments()) { + if (arg instanceof MALExpressionModel.ExprArgument) { + collectSampleNames( + ((MALExpressionModel.ExprArgument) arg).getExpr(), names); + } + } + } + } + + private static void collectSampleNamesFromChain( + final List<MALExpressionModel.MethodCall> chain, + final Set<String> names) { + for (final MALExpressionModel.MethodCall mc : chain) { + if ("downsampling".equals(mc.getName())) { + continue; + } + for (final MALExpressionModel.Argument arg : mc.getArguments()) { + if (arg instanceof MALExpressionModel.ExprArgument) { + collectSampleNames( + ((MALExpressionModel.ExprArgument) arg).getExpr(), names); + } + } + } + } + + /** + * Extracts compile-time metadata from the AST by walking all method chains. 
+ */ + static ExpressionMetadata extractMetadata(final MALExpressionModel.Expr ast) { + final Set<String> sampleNames = new LinkedHashSet<>(); + collectSampleNames(ast, sampleNames); + + ScopeType scopeType = null; + final Set<String> scopeLabels = new LinkedHashSet<>(); + final Set<String> aggregationLabels = new LinkedHashSet<>(); + DownsamplingType downsampling = DownsamplingType.AVG; + boolean isHistogram = false; + int[] percentiles = null; + + final List<List<MALExpressionModel.MethodCall>> allChains = new ArrayList<>(); + collectMethodChains(ast, allChains); + + for (final List<MALExpressionModel.MethodCall> chain : allChains) { + for (final MALExpressionModel.MethodCall mc : chain) { + final String name = mc.getName(); + switch (name) { + case "sum": + case "avg": + case "max": + case "min": + addStringListLabels(mc, aggregationLabels); + break; + case "count": + addStringListLabels(mc, aggregationLabels); + break; + case "service": + scopeType = ScopeType.SERVICE; + addStringListLabels(mc, scopeLabels); + break; + case "instance": + scopeType = ScopeType.SERVICE_INSTANCE; + addAllStringListLabels(mc, scopeLabels); + break; + case "endpoint": + scopeType = ScopeType.ENDPOINT; + addAllStringListLabels(mc, scopeLabels); + break; + case "process": + scopeType = ScopeType.PROCESS; + addAllStringListLabels(mc, scopeLabels); + addStringArgLabels(mc, scopeLabels); + break; + case "serviceRelation": + scopeType = ScopeType.SERVICE_RELATION; + addAllStringListLabels(mc, scopeLabels); + addStringArgLabels(mc, scopeLabels); + break; + case "processRelation": + scopeType = ScopeType.PROCESS_RELATION; + addAllStringListLabels(mc, scopeLabels); + addStringArgLabels(mc, scopeLabels); + break; + case "histogram": + isHistogram = true; + break; + case "histogram_percentile": + if (!mc.getArguments().isEmpty() + && mc.getArguments().get(0) instanceof MALExpressionModel.NumberListArgument) { + final List<Double> vals = + ((MALExpressionModel.NumberListArgument) 
mc.getArguments().get(0)).getValues(); + percentiles = new int[vals.size()]; + for (int i = 0; i < vals.size(); i++) { + percentiles[i] = vals.get(i).intValue(); + } + } + break; + case "downsampling": + if (!mc.getArguments().isEmpty()) { + final MALExpressionModel.Argument dsArg = mc.getArguments().get(0); + if (dsArg instanceof MALExpressionModel.EnumRefArgument) { + final String val = + ((MALExpressionModel.EnumRefArgument) dsArg).getEnumValue(); + downsampling = DownsamplingType.valueOf(val); + } else if (dsArg instanceof MALExpressionModel.ExprArgument) { + final MALExpressionModel.Expr dsExpr = + ((MALExpressionModel.ExprArgument) dsArg).getExpr(); + if (dsExpr instanceof MALExpressionModel.MetricExpr) { + final String val = + ((MALExpressionModel.MetricExpr) dsExpr).getMetricName(); + downsampling = DownsamplingType.valueOf(val); + } + } + } + break; + default: + break; + } + } + } + + // Validate decorate() usage: must be after service(), not after + // instance()/endpoint()/etc., and not with histogram metrics + boolean hasDecorate = false; + for (final List<MALExpressionModel.MethodCall> chain : allChains) { + for (final MALExpressionModel.MethodCall mc : chain) { + if ("decorate".equals(mc.getName())) { + hasDecorate = true; + break; + } + } + if (hasDecorate) { + break; + } + } + if (hasDecorate) { + if (scopeType != null && scopeType != ScopeType.SERVICE) { + throw new IllegalStateException( + "decorate() should be invoked after service()"); + } + if (isHistogram) { + throw new IllegalStateException( + "decorate() not supported for histogram metrics"); + } + } + + return new ExpressionMetadata( + new ArrayList<>(sampleNames), + scopeType, + scopeLabels, + aggregationLabels, + downsampling, + isHistogram, + percentiles + ); + } + + private static void addStringListLabels(final MALExpressionModel.MethodCall mc, + final Set<String> target) { + if (!mc.getArguments().isEmpty() + && mc.getArguments().get(0) instanceof MALExpressionModel.StringListArgument) 
{ + target.addAll( + ((MALExpressionModel.StringListArgument) mc.getArguments().get(0)).getValues()); + } + } + + private static void addAllStringListLabels(final MALExpressionModel.MethodCall mc, + final Set<String> target) { + for (final MALExpressionModel.Argument arg : mc.getArguments()) { + if (arg instanceof MALExpressionModel.StringListArgument) { + target.addAll(((MALExpressionModel.StringListArgument) arg).getValues()); + } + } + } + + private static void addStringArgLabels(final MALExpressionModel.MethodCall mc, + final Set<String> target) { + for (final MALExpressionModel.Argument arg : mc.getArguments()) { + if (arg instanceof MALExpressionModel.StringArgument) { + target.add(((MALExpressionModel.StringArgument) arg).getValue()); + } + } + } + + private static void collectMethodChains(final MALExpressionModel.Expr expr, + final List<List<MALExpressionModel.MethodCall>> chains) { + if (expr instanceof MALExpressionModel.MetricExpr) { + chains.add(((MALExpressionModel.MetricExpr) expr).getMethodChain()); + } else if (expr instanceof MALExpressionModel.BinaryExpr) { + collectMethodChains(((MALExpressionModel.BinaryExpr) expr).getLeft(), chains); + collectMethodChains(((MALExpressionModel.BinaryExpr) expr).getRight(), chains); + } else if (expr instanceof MALExpressionModel.UnaryNegExpr) { + collectMethodChains(((MALExpressionModel.UnaryNegExpr) expr).getOperand(), chains); + } else if (expr instanceof MALExpressionModel.ParenChainExpr) { + collectMethodChains(((MALExpressionModel.ParenChainExpr) expr).getInner(), chains); + chains.add(((MALExpressionModel.ParenChainExpr) expr).getMethodChain()); + } else if (expr instanceof MALExpressionModel.FunctionCallExpr) { + for (final MALExpressionModel.Argument arg : + ((MALExpressionModel.FunctionCallExpr) expr).getArguments()) { + if (arg instanceof MALExpressionModel.ExprArgument) { + collectMethodChains(((MALExpressionModel.ExprArgument) arg).getExpr(), chains); + } + } + 
chains.add(((MALExpressionModel.FunctionCallExpr) expr).getMethodChain()); + } + } + + private String generateMetadataMethod(final ExpressionMetadata metadata) { + final StringBuilder sb = new StringBuilder(); + final String mdClass = "org.apache.skywalking.oap.meter.analyzer.v2.dsl.ExpressionMetadata"; + final String scopeTypeClass = "org.apache.skywalking.oap.server.core.analysis.meter.ScopeType"; + final String dsTypeClass = "org.apache.skywalking.oap.meter.analyzer.v2.dsl.DownsamplingType"; + + sb.append("public ").append(mdClass).append(" metadata() {\n"); + + // samples list + sb.append(" java.util.List _samples = new java.util.ArrayList();\n"); + for (final String sample : metadata.getSamples()) { + sb.append(" _samples.add(\"").append(MALCodegenHelper.escapeJava(sample)).append("\");\n"); + } + + // scope labels set + sb.append(" java.util.Set _scopeLabels = new java.util.LinkedHashSet();\n"); + for (final String label : metadata.getScopeLabels()) { + sb.append(" _scopeLabels.add(\"").append(MALCodegenHelper.escapeJava(label)).append("\");\n"); + } + + // aggregation labels set + sb.append(" java.util.Set _aggLabels = new java.util.LinkedHashSet();\n"); + for (final String label : metadata.getAggregationLabels()) { + sb.append(" _aggLabels.add(\"").append(MALCodegenHelper.escapeJava(label)).append("\");\n"); + } + + // percentiles array + if (metadata.getPercentiles() != null) { + sb.append(" int[] _pct = new int[]{"); + for (int i = 0; i < metadata.getPercentiles().length; i++) { + if (i > 0) { + sb.append(", "); + } + sb.append(metadata.getPercentiles()[i]); + } + sb.append("};\n"); + } else { + sb.append(" int[] _pct = null;\n"); + } + + sb.append(" return new ").append(mdClass).append("(\n"); + sb.append(" _samples,\n"); + if (metadata.getScopeType() != null) { + sb.append(" ").append(scopeTypeClass).append('.').append(metadata.getScopeType().name()).append(",\n"); + } else { + sb.append(" null,\n"); + } + sb.append(" _scopeLabels,\n"); + sb.append(" 
_aggLabels,\n"); + sb.append(" ").append(dsTypeClass).append('.').append(metadata.getDownsampling().name()).append(",\n"); + sb.append(" ").append(metadata.isHistogram()).append(",\n"); + sb.append(" _pct\n"); + sb.append(" );\n"); + sb.append("}\n"); + return sb.toString(); + } + + /** + * Whether the expression is a scalar (number-producing) function like {@code time()}. + */ + private static boolean isScalarFunction(final MALExpressionModel.Expr expr) { + if (expr instanceof MALExpressionModel.FunctionCallExpr) { + final String fn = ((MALExpressionModel.FunctionCallExpr) expr).getFunctionName(); + return "time".equals(fn); + } + return false; + } + + /** + * Generate code for a scalar expression (literal number or scalar function). + */ + private void generateScalarExpr(final StringBuilder sb, + final MALExpressionModel.Expr expr) { + if (expr instanceof MALExpressionModel.NumberExpr) { + sb.append(((MALExpressionModel.NumberExpr) expr).getValue()); + } else if (isScalarFunction(expr)) { + final String fn = ((MALExpressionModel.FunctionCallExpr) expr).getFunctionName(); + if ("time".equals(fn)) { + sb.append("(double) java.time.Instant.now().getEpochSecond()"); + } + } + } + + /** + * Generates the Java source body of the run method for debugging/testing. 
+ */ + public String generateSource(final String expression) { + final MALExpressionModel.Expr ast = MALScriptParser.parse(expression); + final MALClosureCodegen cc = new MALClosureCodegen(classPool, this); + final List<MALClosureCodegen.ClosureInfo> closures = new ArrayList<>(); + cc.collectClosures(ast, closures); + // Build field names for source generation + final List<String> fieldNames = new ArrayList<>(); + final java.util.Map<String, Integer> nameCounts = new java.util.HashMap<>(); + for (final MALClosureCodegen.ClosureInfo ci : closures) { + final String purpose = ci.methodName; + final int count = nameCounts.getOrDefault(purpose, 0); + nameCounts.put(purpose, count + 1); + final String suffix = count == 0 ? purpose : purpose + "_" + (count + 1); + fieldNames.add("_" + suffix); + } + this.closureFieldNames = fieldNames; + this.closureFieldIndex = 0; + return generateRunMethod(ast); + } + + /** + * Generates the Java source body of the filter test method for debugging/testing. + */ + public String generateFilterSource(final String filterExpression) { + final MALExpressionModel.ClosureArgument closure = + MALScriptParser.parseFilter(filterExpression); + + final List<String> params = closure.getParams(); + final String paramName = params.isEmpty() ? 
"it" : params.get(0); + + final MALClosureCodegen cc = new MALClosureCodegen(classPool, this); + final StringBuilder sb = new StringBuilder(); + sb.append("public boolean test(java.util.Map ").append(paramName) + .append(") {\n"); + + final List<MALExpressionModel.ClosureStatement> body = closure.getBody(); + if (body.size() == 1 + && body.get(0) instanceof MALExpressionModel.ClosureExprStatement) { + final MALExpressionModel.ClosureExpr expr = + ((MALExpressionModel.ClosureExprStatement) body.get(0)).getExpr(); + if (expr instanceof MALExpressionModel.ClosureCondition) { + sb.append(" return "); + cc.generateClosureCondition( + sb, (MALExpressionModel.ClosureCondition) expr, paramName); + sb.append(";\n"); + } else { + sb.append(" Object _v = "); + cc.generateClosureExpr(sb, expr, paramName); + sb.append(";\n"); + sb.append(" return _v != null && !Boolean.FALSE.equals(_v);\n"); + } + } else { + for (final MALExpressionModel.ClosureStatement stmt : body) { + cc.generateClosureStatement(sb, stmt, paramName); + } + sb.append(" return false;\n"); + } + sb.append("}\n"); + return sb.toString(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClosureCodegen.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClosureCodegen.java new file mode 100644 index 000000000000..4db10cf2b9f0 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClosureCodegen.java @@ -0,0 +1,783 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler; + +import java.util.List; +import javassist.ClassPool; +import javassist.CtClass; +import javassist.CtNewMethod; +import lombok.extern.slf4j.Slf4j; + +/** + * Generates closure classes for MAL expressions using Javassist bytecode generation. + * + * <p>This class handles all closure-related code generation: collecting closures from + * the AST, compiling each closure into a separate class implementing the appropriate + * functional interface, and generating closure statement/expression/condition code. + */ +@Slf4j +final class MALClosureCodegen { + + private final ClassPool classPool; + private final MALClassGenerator generator; + + MALClosureCodegen(final ClassPool classPool, final MALClassGenerator generator) { + this.classPool = classPool; + this.generator = generator; + } + + static final class ClosureInfo { + final MALExpressionModel.ClosureArgument closure; + final String interfaceType; + final String methodName; + int fieldIndex; + + ClosureInfo(final MALExpressionModel.ClosureArgument closure, + final String interfaceType, + final String methodName) { + this.closure = closure; + this.interfaceType = interfaceType; + this.methodName = methodName; + } + } + + void collectClosures(final MALExpressionModel.Expr expr, + final List<ClosureInfo> closures) { + if (expr instanceof MALExpressionModel.MetricExpr) { + collectClosuresFromChain( + ((MALExpressionModel.MetricExpr) expr).getMethodChain(), closures); + } else if (expr instanceof MALExpressionModel.BinaryExpr) { + 
collectClosures(((MALExpressionModel.BinaryExpr) expr).getLeft(), closures); + collectClosures(((MALExpressionModel.BinaryExpr) expr).getRight(), closures); + } else if (expr instanceof MALExpressionModel.UnaryNegExpr) { + collectClosures( + ((MALExpressionModel.UnaryNegExpr) expr).getOperand(), closures); + } else if (expr instanceof MALExpressionModel.ParenChainExpr) { + collectClosures( + ((MALExpressionModel.ParenChainExpr) expr).getInner(), closures); + collectClosuresFromChain( + ((MALExpressionModel.ParenChainExpr) expr).getMethodChain(), closures); + } else if (expr instanceof MALExpressionModel.FunctionCallExpr) { + final MALExpressionModel.FunctionCallExpr fce = + (MALExpressionModel.FunctionCallExpr) expr; + collectClosuresFromArgs(fce.getFunctionName(), + fce.getArguments(), closures); + collectClosuresFromChain( + fce.getMethodChain(), closures); + } + } + + void collectClosuresFromChain(final List<MALExpressionModel.MethodCall> chain, + final List<ClosureInfo> closures) { + for (final MALExpressionModel.MethodCall mc : chain) { + collectClosuresFromArgs(mc.getName(), mc.getArguments(), closures); + } + } + + void collectClosuresFromArgs(final String methodName, + final List<MALExpressionModel.Argument> args, + final List<ClosureInfo> closures) { + for (final MALExpressionModel.Argument arg : args) { + if (arg instanceof MALExpressionModel.ClosureArgument) { + final String interfaceType; + if ("forEach".equals(methodName)) { + interfaceType = "org.apache.skywalking.oap.meter.analyzer.v2.dsl" + + ".SampleFamilyFunctions$ForEachFunction"; + } else if ("instance".equals(methodName)) { + interfaceType = "org.apache.skywalking.oap.meter.analyzer.v2.dsl" + + ".SampleFamilyFunctions$PropertiesExtractor"; + } else if ("decorate".equals(methodName)) { + interfaceType = MALCodegenHelper.DECORATE_FUNCTION_TYPE; + } else { + interfaceType = "org.apache.skywalking.oap.meter.analyzer.v2.dsl" + + ".SampleFamilyFunctions$TagFunction"; + } + final ClosureInfo info = 
new ClosureInfo( + (MALExpressionModel.ClosureArgument) arg, + interfaceType, methodName); + info.fieldIndex = closures.size(); + closures.add(info); + } else if (arg instanceof MALExpressionModel.ExprArgument) { + collectClosures( + ((MALExpressionModel.ExprArgument) arg).getExpr(), closures); + } + } + } + + /** + * Adds a closure method to the main class instead of creating a separate class. + * Returns the generated method name. + */ + String addClosureMethod(final CtClass mainClass, + final String fieldName, + final ClosureInfo info) throws Exception { + final String className = mainClass.getName(); + final MALExpressionModel.ClosureArgument closure = info.closure; + final List<String> params = closure.getParams(); + final boolean isForEach = MALCodegenHelper.FOR_EACH_FUNCTION_TYPE.equals(info.interfaceType); + final boolean isPropertiesExtractor = + MALCodegenHelper.PROPERTIES_EXTRACTOR_TYPE.equals(info.interfaceType); + + if (isForEach) { + final String methodName = fieldName + "_accept"; + final String elementParam = params.size() >= 1 ? params.get(0) : "element"; + final String tagsParam = params.size() >= 2 ? 
params.get(1) : "tags"; + + final StringBuilder sb = new StringBuilder(); + sb.append("public void ").append(methodName).append("(String ") + .append(elementParam).append(", java.util.Map ").append(tagsParam) + .append(") {\n"); + for (final MALExpressionModel.ClosureStatement stmt : closure.getBody()) { + generateClosureStatement(sb, stmt, tagsParam); + } + sb.append("}\n"); + + if (log.isDebugEnabled()) { + log.debug("ForEach closure method:\n{}", sb); + } + final javassist.CtMethod m = CtNewMethod.make(sb.toString(), mainClass); + mainClass.addMethod(m); + generator.addLocalVariableTable(m, className, new String[][]{ + {elementParam, "Ljava/lang/String;"}, + {tagsParam, "Ljava/util/Map;"} + }); + generator.addLineNumberTable(m, 3); // slot 0=this, 1=element, 2=tags + return methodName; + } else if (isPropertiesExtractor) { + final String methodName = fieldName + "_apply"; + final String paramName = params.isEmpty() ? "it" : params.get(0); + + final StringBuilder sb = new StringBuilder(); + sb.append("public java.util.Map ").append(methodName) + .append("(java.util.Map ").append(paramName).append(") {\n"); + + final List<MALExpressionModel.ClosureStatement> body = closure.getBody(); + if (body.size() == 1 + && body.get(0) instanceof MALExpressionModel.ClosureExprStatement + && ((MALExpressionModel.ClosureExprStatement) body.get(0)).getExpr() + instanceof MALExpressionModel.ClosureMapLiteral) { + final MALExpressionModel.ClosureMapLiteral mapLit = + (MALExpressionModel.ClosureMapLiteral) + ((MALExpressionModel.ClosureExprStatement) body.get(0)).getExpr(); + sb.append(" java.util.Map _result = new java.util.HashMap();\n"); + for (final MALExpressionModel.MapEntry entry : mapLit.getEntries()) { + sb.append(" _result.put(\"") + .append(MALCodegenHelper.escapeJava(entry.getKey())).append("\", "); + generateClosureExpr(sb, entry.getValue(), paramName); + sb.append(");\n"); + } + sb.append(" return _result;\n"); + } else { + for (final 
MALExpressionModel.ClosureStatement stmt : body) { + generateClosureStatement(sb, stmt, paramName); + } + sb.append(" return ").append(paramName).append(";\n"); + } + sb.append("}\n"); + + final javassist.CtMethod m = CtNewMethod.make(sb.toString(), mainClass); + mainClass.addMethod(m); + generator.addLocalVariableTable(m, className, new String[][]{ + {paramName, "Ljava/util/Map;"} + }); + generator.addLineNumberTable(m, 2); // slot 0=this, 1=it/param + return methodName; + } else if (MALCodegenHelper.DECORATE_FUNCTION_TYPE.equals(info.interfaceType)) { + final String methodName = fieldName + "_accept"; + final String paramName = params.isEmpty() ? "it" : params.get(0); + + final StringBuilder sb = new StringBuilder(); + sb.append("public void ").append(methodName).append("(Object _arg) {\n"); + sb.append(" ").append(MALCodegenHelper.METER_ENTITY_FQCN).append(" ") + .append(paramName).append(" = (").append(MALCodegenHelper.METER_ENTITY_FQCN) + .append(") _arg;\n"); + for (final MALExpressionModel.ClosureStatement stmt : closure.getBody()) { + generateClosureStatement(sb, stmt, paramName, true); + } + sb.append("}\n"); + + if (log.isDebugEnabled()) { + log.debug("Decorate closure method:\n{}", sb); + } + final javassist.CtMethod m = CtNewMethod.make(sb.toString(), mainClass); + mainClass.addMethod(m); + generator.addLocalVariableTable(m, className, new String[][]{ + {"_arg", "Ljava/lang/Object;"}, + {paramName, "L" + MALCodegenHelper.METER_ENTITY_FQCN.replace('.', '/') + ";"} + }); + generator.addLineNumberTable(m, 2); // slot 0=this, 1=_arg + return methodName; + } else { + // TagFunction: Map<String,String> apply(Map<String,String> tags) + final String methodName = fieldName + "_apply"; + final String paramName = params.isEmpty() ? 
"it" : params.get(0); + + final StringBuilder sb = new StringBuilder(); + sb.append("public java.util.Map ").append(methodName) + .append("(java.util.Map ").append(paramName).append(") {\n"); + for (final MALExpressionModel.ClosureStatement stmt : closure.getBody()) { + generateClosureStatement(sb, stmt, paramName); + } + sb.append(" return ").append(paramName).append(";\n"); + sb.append("}\n"); + + final javassist.CtMethod m = CtNewMethod.make(sb.toString(), mainClass); + mainClass.addMethod(m); + generator.addLocalVariableTable(m, className, new String[][]{ + {paramName, "Ljava/util/Map;"} + }); + generator.addLineNumberTable(m, 2); // slot 0=this, 1=it/param + return methodName; + } + } + + void generateClosureStatement(final StringBuilder sb, + final MALExpressionModel.ClosureStatement stmt, + final String paramName) { + generateClosureStatement(sb, stmt, paramName, false); + } + + void generateClosureStatement(final StringBuilder sb, + final MALExpressionModel.ClosureStatement stmt, + final String paramName, + final boolean beanMode) { + if (stmt instanceof MALExpressionModel.ClosureAssignment) { + final MALExpressionModel.ClosureAssignment assign = + (MALExpressionModel.ClosureAssignment) stmt; + if (beanMode) { + // Bean setter: me.attr0 = 'value' → me.setAttr0("value") + final String keyText = MALCodegenHelper.extractConstantKey(assign.getKeyExpr()); + if (keyText != null) { + sb.append(" ").append(assign.getMapVar()).append(".set") + .append(Character.toUpperCase(keyText.charAt(0))) + .append(keyText.substring(1)).append("("); + generateClosureExpr(sb, assign.getValue(), paramName, beanMode); + sb.append(");\n"); + } else { + // Fallback to map put for dynamic keys + sb.append(" ").append(assign.getMapVar()).append(".put("); + generateClosureExpr(sb, assign.getKeyExpr(), paramName, beanMode); + sb.append(", "); + generateClosureExpr(sb, assign.getValue(), paramName, beanMode); + sb.append(");\n"); + } + } else { + sb.append(" 
").append(assign.getMapVar()).append(".put("); + generateClosureExpr(sb, assign.getKeyExpr(), paramName, beanMode); + sb.append(", "); + generateClosureExpr(sb, assign.getValue(), paramName, beanMode); + sb.append(");\n"); + } + } else if (stmt instanceof MALExpressionModel.ClosureIfStatement) { + final MALExpressionModel.ClosureIfStatement ifStmt = + (MALExpressionModel.ClosureIfStatement) stmt; + sb.append(" if ("); + generateClosureCondition(sb, ifStmt.getCondition(), paramName, beanMode); + sb.append(") {\n"); + for (final MALExpressionModel.ClosureStatement s : ifStmt.getThenBranch()) { + generateClosureStatement(sb, s, paramName, beanMode); + } + sb.append(" }\n"); + if (!ifStmt.getElseBranch().isEmpty()) { + sb.append(" else {\n"); + for (final MALExpressionModel.ClosureStatement s : ifStmt.getElseBranch()) { + generateClosureStatement(sb, s, paramName, beanMode); + } + sb.append(" }\n"); + } + } else if (stmt instanceof MALExpressionModel.ClosureReturnStatement) { + final MALExpressionModel.ClosureReturnStatement retStmt = + (MALExpressionModel.ClosureReturnStatement) stmt; + if (retStmt.getValue() == null) { + sb.append(" return;\n"); + } else { + if (beanMode) { + sb.append(" return "); + } else { + sb.append(" return (java.util.Map) "); + } + generateClosureExpr(sb, retStmt.getValue(), paramName, beanMode); + sb.append(";\n"); + } + } else if (stmt instanceof MALExpressionModel.ClosureVarDecl) { + final MALExpressionModel.ClosureVarDecl vd = + (MALExpressionModel.ClosureVarDecl) stmt; + sb.append(" ").append(vd.getTypeName()).append(" ") + .append(vd.getVarName()).append(" = "); + generateClosureExpr(sb, vd.getInitializer(), paramName, beanMode); + sb.append(";\n"); + } else if (stmt instanceof MALExpressionModel.ClosureVarAssign) { + final MALExpressionModel.ClosureVarAssign va = + (MALExpressionModel.ClosureVarAssign) stmt; + sb.append(" ").append(va.getVarName()).append(" = "); + generateClosureExpr(sb, va.getValue(), paramName, beanMode); + 
sb.append(";\n"); + } else if (stmt instanceof MALExpressionModel.ClosureExprStatement) { + sb.append(" "); + generateClosureExpr(sb, + ((MALExpressionModel.ClosureExprStatement) stmt).getExpr(), paramName, + beanMode); + sb.append(";\n"); + } + } + + void generateClosureExpr(final StringBuilder sb, + final MALExpressionModel.ClosureExpr expr, + final String paramName) { + generateClosureExpr(sb, expr, paramName, false); + } + + void generateClosureExpr(final StringBuilder sb, + final MALExpressionModel.ClosureExpr expr, + final String paramName, + final boolean beanMode) { + if (expr instanceof MALExpressionModel.ClosureStringLiteral) { + sb.append('"') + .append(MALCodegenHelper.escapeJava(((MALExpressionModel.ClosureStringLiteral) expr).getValue())) + .append('"'); + } else if (expr instanceof MALExpressionModel.ClosureNumberLiteral) { + final double val = + ((MALExpressionModel.ClosureNumberLiteral) expr).getValue(); + if (val == (int) val) { + sb.append((int) val); + } else { + sb.append(val); + } + } else if (expr instanceof MALExpressionModel.ClosureBoolLiteral) { + sb.append(((MALExpressionModel.ClosureBoolLiteral) expr).isValue()); + } else if (expr instanceof MALExpressionModel.ClosureNullLiteral) { + sb.append("null"); + } else if (expr instanceof MALExpressionModel.ClosureMapLiteral) { + final MALExpressionModel.ClosureMapLiteral mapLit = + (MALExpressionModel.ClosureMapLiteral) expr; + sb.append("java.util.Map.of("); + for (int i = 0; i < mapLit.getEntries().size(); i++) { + if (i > 0) { + sb.append(", "); + } + final MALExpressionModel.MapEntry entry = mapLit.getEntries().get(i); + sb.append('"').append(MALCodegenHelper.escapeJava(entry.getKey())).append("\", "); + generateClosureExpr(sb, entry.getValue(), paramName, beanMode); + } + sb.append(")"); + } else if (expr instanceof MALExpressionModel.ClosureMethodChain) { + generateClosureMethodChain(sb, + (MALExpressionModel.ClosureMethodChain) expr, paramName, beanMode); + } else if (expr instanceof 
MALExpressionModel.ClosureBinaryExpr) { + final MALExpressionModel.ClosureBinaryExpr bin = + (MALExpressionModel.ClosureBinaryExpr) expr; + sb.append("("); + generateClosureExpr(sb, bin.getLeft(), paramName, beanMode); + switch (bin.getOp()) { + case ADD: + sb.append(" + "); + break; + case SUB: + sb.append(" - "); + break; + case MUL: + sb.append(" * "); + break; + case DIV: + sb.append(" / "); + break; + default: + break; + } + generateClosureExpr(sb, bin.getRight(), paramName, beanMode); + sb.append(")"); + } else if (expr instanceof MALExpressionModel.ClosureCompTernaryExpr) { + final MALExpressionModel.ClosureCompTernaryExpr ct = + (MALExpressionModel.ClosureCompTernaryExpr) expr; + sb.append("("); + generateClosureExpr(sb, ct.getLeft(), paramName, beanMode); + sb.append(MALCodegenHelper.comparisonOperator(ct.getOp())); + generateClosureExpr(sb, ct.getRight(), paramName, beanMode); + sb.append(" ? "); + generateClosureExpr(sb, ct.getTrueExpr(), paramName, beanMode); + sb.append(" : "); + generateClosureExpr(sb, ct.getFalseExpr(), paramName, beanMode); + sb.append(")"); + } else if (expr instanceof MALExpressionModel.ClosureTernaryExpr) { + final MALExpressionModel.ClosureTernaryExpr ternary = + (MALExpressionModel.ClosureTernaryExpr) expr; + sb.append("(((Object)("); + generateClosureExpr(sb, ternary.getCondition(), paramName, beanMode); + sb.append(")) != null ? 
("); + generateClosureExpr(sb, ternary.getTrueExpr(), paramName, beanMode); + sb.append(") : ("); + generateClosureExpr(sb, ternary.getFalseExpr(), paramName, beanMode); + sb.append("))"); + } else if (expr instanceof MALExpressionModel.ClosureElvisExpr) { + final MALExpressionModel.ClosureElvisExpr elvis = + (MALExpressionModel.ClosureElvisExpr) expr; + sb.append("java.util.Optional.ofNullable("); + generateClosureExpr(sb, elvis.getPrimary(), paramName, beanMode); + sb.append(").orElse("); + generateClosureExpr(sb, elvis.getFallback(), paramName, beanMode); + sb.append(")"); + } else if (expr instanceof MALExpressionModel.ClosureRegexMatchExpr) { + final MALExpressionModel.ClosureRegexMatchExpr rm = + (MALExpressionModel.ClosureRegexMatchExpr) expr; + sb.append(MALCodegenHelper.RUNTIME_HELPER_FQCN).append(".regexMatch(String.valueOf("); + generateClosureExpr(sb, rm.getTarget(), paramName, beanMode); + sb.append("), \"").append(MALCodegenHelper.escapeJava(rm.getPattern())).append("\")"); + } else if (expr instanceof MALExpressionModel.ClosureExprChain) { + final MALExpressionModel.ClosureExprChain chain = + (MALExpressionModel.ClosureExprChain) expr; + final StringBuilder local = new StringBuilder(); + // Cast to String when the chain has method calls (e.g., .split(), .toString()) + // so Javassist can resolve the method on the concrete type. 
+ final boolean needsCast = chain.getSegments().stream() + .anyMatch(s -> s instanceof MALExpressionModel.ClosureMethodCallSeg); + if (needsCast) { + local.append("((String) "); + } else { + local.append("("); + } + generateClosureExpr(local, chain.getBase(), paramName, beanMode); + local.append(")"); + for (final MALExpressionModel.ClosureChainSegment seg : chain.getSegments()) { + if (seg instanceof MALExpressionModel.ClosureMethodCallSeg) { + final MALExpressionModel.ClosureMethodCallSeg mc = + (MALExpressionModel.ClosureMethodCallSeg) seg; + if ("size".equals(mc.getName()) && mc.getArguments().isEmpty()) { + local.append(".length"); + } else { + local.append('.').append(mc.getName()).append('('); + for (int i = 0; i < mc.getArguments().size(); i++) { + if (i > 0) { + local.append(", "); + } + generateClosureExpr(local, mc.getArguments().get(i), + paramName, beanMode); + } + local.append(')'); + } + } else if (seg instanceof MALExpressionModel.ClosureFieldAccess) { + local.append('.').append( + ((MALExpressionModel.ClosureFieldAccess) seg).getName()); + } else if (seg instanceof MALExpressionModel.ClosureIndexAccess) { + local.append("[(int) "); + generateClosureExpr(local, + ((MALExpressionModel.ClosureIndexAccess) seg).getIndex(), + paramName, beanMode); + local.append(']'); + } + } + sb.append(local); + } else if (expr instanceof MALExpressionModel.ClosureExprCondition) { + // A bare condition expression used as a statement (e.g., tags.remove('x') + // parsed as closureCondition → conditionExpr). Unwrap and emit the inner + // expression directly — this is a side-effect call, not a boolean test. 
+ generateClosureExpr(sb, + ((MALExpressionModel.ClosureExprCondition) expr).getExpr(), + paramName, beanMode); + } + } + + void generateClosureMethodChain( + final StringBuilder sb, + final MALExpressionModel.ClosureMethodChain chain, + final String paramName, + final boolean beanMode) { + final String target = chain.getTarget(); + final String resolvedTarget = MALCodegenHelper.CLOSURE_CLASS_FQCN.getOrDefault(target, target); + final boolean isClassRef = MALCodegenHelper.CLOSURE_CLASS_FQCN.containsKey(target); + final List<MALExpressionModel.ClosureChainSegment> segs = chain.getSegments(); + + // Static class method call: ProcessRegistry.generateVirtualLocalProcess(...) + if (isClassRef) { + final StringBuilder local = new StringBuilder(); + local.append(resolvedTarget); + for (final MALExpressionModel.ClosureChainSegment seg : segs) { + if (seg instanceof MALExpressionModel.ClosureMethodCallSeg) { + final MALExpressionModel.ClosureMethodCallSeg mc = + (MALExpressionModel.ClosureMethodCallSeg) seg; + local.append('.').append(mc.getName()).append('('); + for (int i = 0; i < mc.getArguments().size(); i++) { + if (i > 0) { + local.append(", "); + } + generateClosureExpr(local, mc.getArguments().get(i), paramName, + beanMode); + } + local.append(')'); + } else if (seg instanceof MALExpressionModel.ClosureFieldAccess) { + local.append('.').append( + ((MALExpressionModel.ClosureFieldAccess) seg).getName()); + } + } + sb.append(local); + return; + } + + if (segs.isEmpty()) { + sb.append(resolvedTarget); + return; + } + + if (beanMode) { + // Bean mode: me.serviceName → me.getServiceName() + // me.layer.name() → me.getLayer().name() + // parts[0] → parts[0] (array index works as-is) + final StringBuilder local = new StringBuilder(); + local.append(resolvedTarget); + for (final MALExpressionModel.ClosureChainSegment seg : segs) { + if (seg instanceof MALExpressionModel.ClosureFieldAccess) { + final String name = + ((MALExpressionModel.ClosureFieldAccess) seg).getName(); + 
if (target.equals(paramName) || local.toString().contains(".get")) { + // Bean property on the closure parameter → getter + local.append(".get") + .append(Character.toUpperCase(name.charAt(0))) + .append(name.substring(1)).append("()"); + } else { + // Field access on a local variable (e.g., parts.length) + local.append('.').append(name); + } + } else if (seg instanceof MALExpressionModel.ClosureIndexAccess) { + local.append('['); + generateClosureExpr(local, + ((MALExpressionModel.ClosureIndexAccess) seg).getIndex(), paramName, + beanMode); + local.append(']'); + } else if (seg instanceof MALExpressionModel.ClosureMethodCallSeg) { + final MALExpressionModel.ClosureMethodCallSeg mc = + (MALExpressionModel.ClosureMethodCallSeg) seg; + // Groovy .size() on arrays → Java .length (for local vars) + if (!target.equals(paramName) + && "size".equals(mc.getName()) + && mc.getArguments().isEmpty()) { + local.append(".length"); + } else { + local.append('.').append(mc.getName()).append('('); + for (int i = 0; i < mc.getArguments().size(); i++) { + if (i > 0) { + local.append(", "); + } + generateClosureExpr(local, mc.getArguments().get(i), + paramName, beanMode); + } + local.append(')'); + } + } + } + sb.append(local); + return; + } + + // Local variable access (not closure param, not a class reference): + // e.g., matcher[0][1] → matcher[(int)0][(int)1] (plain Java array access) + // e.g., parts.length → parts.length (field access) + // e.g., parts.size() → parts.length (Groovy .size() on arrays) + if (!target.equals(paramName) && !isClassRef) { + final StringBuilder local = new StringBuilder(); + local.append(resolvedTarget); + for (final MALExpressionModel.ClosureChainSegment seg : segs) { + if (seg instanceof MALExpressionModel.ClosureIndexAccess) { + local.append("[(int) "); + generateClosureExpr(local, + ((MALExpressionModel.ClosureIndexAccess) seg).getIndex(), paramName, + beanMode); + local.append(']'); + } else if (seg instanceof 
MALExpressionModel.ClosureFieldAccess) { + local.append('.').append( + ((MALExpressionModel.ClosureFieldAccess) seg).getName()); + } else if (seg instanceof MALExpressionModel.ClosureMethodCallSeg) { + final MALExpressionModel.ClosureMethodCallSeg mc = + (MALExpressionModel.ClosureMethodCallSeg) seg; + // Groovy .size() on arrays → Java .length + if ("size".equals(mc.getName()) && mc.getArguments().isEmpty()) { + local.append(".length"); + } else { + local.append('.').append(mc.getName()).append('('); + for (int i = 0; i < mc.getArguments().size(); i++) { + if (i > 0) { + local.append(", "); + } + generateClosureExpr(local, mc.getArguments().get(i), paramName, + beanMode); + } + local.append(')'); + } + } + } + sb.append(local); + return; + } + + // Map mode (original): tags.key → tags.get("key") + if (segs.size() == 1 + && segs.get(0) instanceof MALExpressionModel.ClosureFieldAccess) { + final String key = + ((MALExpressionModel.ClosureFieldAccess) segs.get(0)).getName(); + sb.append("(String) ").append(resolvedTarget).append(".get(\"") + .append(MALCodegenHelper.escapeJava(key)).append("\")"); + } else if (segs.size() == 1 + && segs.get(0) instanceof MALExpressionModel.ClosureIndexAccess) { + sb.append("(String) ").append(resolvedTarget).append(".get("); + generateClosureExpr(sb, + ((MALExpressionModel.ClosureIndexAccess) segs.get(0)).getIndex(), paramName, + beanMode); + sb.append(")"); + } else { + // General chain: build in a local buffer to support safe navigation. + // The first FieldAccess/IndexAccess is a map .get() returning String. + // After that, method calls may return other types (e.g., split() → + // String[]), so subsequent IndexAccess uses array syntax [(int) index]. 
+ final StringBuilder local = new StringBuilder(); + local.append(resolvedTarget); + boolean pastMapAccess = false; + for (final MALExpressionModel.ClosureChainSegment seg : segs) { + if (seg instanceof MALExpressionModel.ClosureFieldAccess) { + final String name = ((MALExpressionModel.ClosureFieldAccess) seg) + .getName(); + if (!pastMapAccess) { + final String prior = local.toString(); + local.setLength(0); + local.append("((String) ").append(prior).append(".get(\"") + .append(MALCodegenHelper.escapeJava(name)).append("\"))"); + pastMapAccess = true; + } else { + local.append('.').append(name); + } + } else if (seg instanceof MALExpressionModel.ClosureIndexAccess) { + if (!pastMapAccess) { + final String prior2 = local.toString(); + local.setLength(0); + local.append("((String) ").append(prior2).append(".get("); + generateClosureExpr(local, + ((MALExpressionModel.ClosureIndexAccess) seg).getIndex(), + paramName, beanMode); + local.append("))"); + pastMapAccess = true; + } else { + local.append("[(int) "); + generateClosureExpr(local, + ((MALExpressionModel.ClosureIndexAccess) seg).getIndex(), + paramName, beanMode); + local.append(']'); + } + } else if (seg instanceof MALExpressionModel.ClosureMethodCallSeg) { + final MALExpressionModel.ClosureMethodCallSeg mc = + (MALExpressionModel.ClosureMethodCallSeg) seg; + if (mc.isSafeNav()) { + final String prior = local.toString(); + local.setLength(0); + local.append("(").append(prior).append(" == null ? 
null : ") + .append("((String) ").append(prior).append(").") + .append(mc.getName()).append('('); + for (int i = 0; i < mc.getArguments().size(); i++) { + if (i > 0) { + local.append(", "); + } + generateClosureExpr(local, mc.getArguments().get(i), paramName, + beanMode); + } + local.append("))"); + } else { + local.append('.').append(mc.getName()).append('('); + for (int i = 0; i < mc.getArguments().size(); i++) { + if (i > 0) { + local.append(", "); + } + generateClosureExpr(local, mc.getArguments().get(i), paramName, + beanMode); + } + local.append(')'); + } + } + } + sb.append(local); + } + } + + void generateClosureCondition(final StringBuilder sb, + final MALExpressionModel.ClosureCondition cond, + final String paramName) { + generateClosureCondition(sb, cond, paramName, false); + } + + void generateClosureCondition(final StringBuilder sb, + final MALExpressionModel.ClosureCondition cond, + final String paramName, + final boolean beanMode) { + if (cond instanceof MALExpressionModel.ClosureComparison) { + final MALExpressionModel.ClosureComparison cc = + (MALExpressionModel.ClosureComparison) cond; + switch (cc.getOp()) { + case EQ: + sb.append("java.util.Objects.equals("); + generateClosureExpr(sb, cc.getLeft(), paramName, beanMode); + sb.append(", "); + generateClosureExpr(sb, cc.getRight(), paramName, beanMode); + sb.append(")"); + break; + case NEQ: + sb.append("!java.util.Objects.equals("); + generateClosureExpr(sb, cc.getLeft(), paramName, beanMode); + sb.append(", "); + generateClosureExpr(sb, cc.getRight(), paramName, beanMode); + sb.append(")"); + break; + default: + generateClosureExpr(sb, cc.getLeft(), paramName, beanMode); + sb.append(MALCodegenHelper.comparisonOperator(cc.getOp())); + generateClosureExpr(sb, cc.getRight(), paramName, beanMode); + break; + } + } else if (cond instanceof MALExpressionModel.ClosureLogical) { + final MALExpressionModel.ClosureLogical lc = + (MALExpressionModel.ClosureLogical) cond; + sb.append("("); + 
generateClosureCondition(sb, lc.getLeft(), paramName, beanMode); + sb.append(lc.getOp() == MALExpressionModel.LogicalOp.AND ? " && " : " || "); + generateClosureCondition(sb, lc.getRight(), paramName, beanMode); + sb.append(")"); + } else if (cond instanceof MALExpressionModel.ClosureNot) { + sb.append("!("); + generateClosureCondition(sb, + ((MALExpressionModel.ClosureNot) cond).getInner(), paramName, beanMode); + sb.append(")"); + } else if (cond instanceof MALExpressionModel.ClosureExprCondition) { + final MALExpressionModel.ClosureExpr condExpr = + ((MALExpressionModel.ClosureExprCondition) cond).getExpr(); + if (MALCodegenHelper.isBooleanExpression(condExpr)) { + generateClosureExpr(sb, condExpr, paramName, beanMode); + } else { + sb.append("("); + generateClosureExpr(sb, condExpr, paramName, beanMode); + sb.append(" != null)"); + } + } else if (cond instanceof MALExpressionModel.ClosureInCondition) { + final MALExpressionModel.ClosureInCondition ic = + (MALExpressionModel.ClosureInCondition) cond; + sb.append("java.util.List.of("); + for (int i = 0; i < ic.getValues().size(); i++) { + if (i > 0) { + sb.append(", "); + } + sb.append('"').append(MALCodegenHelper.escapeJava(ic.getValues().get(i))).append('"'); + } + sb.append(").contains("); + generateClosureExpr(sb, ic.getExpr(), paramName, beanMode); + sb.append(")"); + } + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALCodegenHelper.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALCodegenHelper.java new file mode 100644 index 000000000000..072a7e2b4a25 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALCodegenHelper.java @@ -0,0 +1,318 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler; + +import java.lang.invoke.CallSite; +import java.lang.invoke.LambdaMetafactory; +import java.lang.invoke.MethodHandle; +import java.lang.invoke.MethodHandles; +import java.lang.invoke.MethodType; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; + +/** + * Static utility methods and constants shared across MAL code generation classes. 
+ */ +final class MALCodegenHelper { + + private MALCodegenHelper() { + } + + // ---- Well-known enum types used in MAL expressions ---- + + static final Map<String, String> ENUM_FQCN; + + // ---- Well-known helper classes used inside MAL closures ---- + + static final Map<String, String> CLOSURE_CLASS_FQCN; + + static { + ENUM_FQCN = new HashMap<>(); + ENUM_FQCN.put("Layer", "org.apache.skywalking.oap.server.core.analysis.Layer"); + ENUM_FQCN.put("DetectPoint", + "org.apache.skywalking.oap.server.core.source.DetectPoint"); + ENUM_FQCN.put("K8sRetagType", + "org.apache.skywalking.oap.meter.analyzer.v2.dsl.tagOpt.K8sRetagType"); + ENUM_FQCN.put("DownsamplingType", + "org.apache.skywalking.oap.meter.analyzer.v2.dsl.DownsamplingType"); + + CLOSURE_CLASS_FQCN = new HashMap<>(); + CLOSURE_CLASS_FQCN.put("ProcessRegistry", + "org.apache.skywalking.oap.meter.analyzer.v2.dsl.registry.ProcessRegistry"); + } + + // ---- Closure interface FQCNs ---- + + static final String FOR_EACH_FUNCTION_TYPE = + "org.apache.skywalking.oap.meter.analyzer.v2.dsl" + + ".SampleFamilyFunctions$ForEachFunction"; + + static final String PROPERTIES_EXTRACTOR_TYPE = + "org.apache.skywalking.oap.meter.analyzer.v2.dsl" + + ".SampleFamilyFunctions$PropertiesExtractor"; + + static final String DECORATE_FUNCTION_TYPE = + "org.apache.skywalking.oap.meter.analyzer.v2.dsl" + + ".SampleFamilyFunctions$DecorateFunction"; + + static final String METER_ENTITY_FQCN = + "org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity"; + + static final String RUNTIME_HELPER_FQCN = + "org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt.MalRuntimeHelper"; + + // ---- Method classification sets ---- + + /** + * Methods on SampleFamily that take String[] (varargs). + * Javassist doesn't support varargs syntax, so multiple string args + * must be wrapped in {@code new String[]{}}. 
+ */ + static final Set<String> VARARGS_STRING_METHODS = Set.of( + "tagEqual", "tagNotEqual", "tagMatch", "tagNotMatch" + ); + + /** + * Methods on SampleFamily whose first argument is a primitive {@code double}. + * Javassist cannot auto-unbox {@code Double} to {@code double}, so numeric + * arguments to these methods must be emitted as raw double literals. + */ + static final Set<String> PRIMITIVE_DOUBLE_METHODS = Set.of( + "valueEqual", "valueNotEqual", "valueGreater", + "valueGreaterEqual", "valueLess", "valueLessEqual" + ); + + // ---- Static utility methods ---- + + static String sanitizeName(final String name) { + final StringBuilder sb = new StringBuilder(name.length()); + for (int i = 0; i < name.length(); i++) { + final char c = name.charAt(i); + sb.append(i == 0 + ? (Character.isJavaIdentifierStart(c) ? c : '_') + : (Character.isJavaIdentifierPart(c) ? c : '_')); + } + return sb.length() == 0 ? "Generated" : sb.toString(); + } + + static String escapeJava(final String s) { + return s.replace("\\", "\\\\") + .replace("\"", "\\\"") + .replace("\n", "\\n") + .replace("\r", "\\r") + .replace("\t", "\\t"); + } + + static String opMethodName(final MALExpressionModel.ArithmeticOp op) { + switch (op) { + case ADD: + return "plus"; + case SUB: + return "minus"; + case MUL: + return "multiply"; + case DIV: + return "div"; + default: + throw new IllegalArgumentException("Unknown op: " + op); + } + } + + static String comparisonOperator(final MALExpressionModel.CompareOp op) { + switch (op) { + case GT: + return " > "; + case LT: + return " < "; + case GTE: + return " >= "; + case LTE: + return " <= "; + default: + return " == "; + } + } + + static boolean isDownsamplingType(final String name) { + return "AVG".equals(name) || "SUM".equals(name) || "LATEST".equals(name) + || "SUM_PER_MIN".equals(name) || "MAX".equals(name) || "MIN".equals(name); + } + + /** + * Extracts a constant string key from a closure expression + * (used for bean setter naming). 
+ * Returns the key string if the expression is a string literal, + * or null otherwise. + */ + static String extractConstantKey(final MALExpressionModel.ClosureExpr expr) { + if (expr instanceof MALExpressionModel.ClosureStringLiteral) { + return ((MALExpressionModel.ClosureStringLiteral) expr).getValue(); + } + return null; + } + + // ---- LambdaMetafactory wiring for closure methods ---- + + /** + * Closure type metadata for each functional interface used in MAL closures. + * Used to create LambdaMetafactory-based wrappers from methods on the main class. + */ + static final class ClosureTypeInfo { + final Class<?> interfaceClass; + final String samName; + final MethodType samType; + final MethodType instantiatedType; + final MethodType methodType; + + ClosureTypeInfo(final Class<?> interfaceClass, + final String samName, + final MethodType samType, + final MethodType instantiatedType, + final MethodType methodType) { + this.interfaceClass = interfaceClass; + this.samName = samName; + this.samType = samType; + this.instantiatedType = instantiatedType; + this.methodType = methodType; + } + } + + private static final Map<String, ClosureTypeInfo> CLOSURE_TYPE_INFO; + + static { + CLOSURE_TYPE_INFO = new HashMap<>(); + + // TagFunction extends Function<Map, Map> + // SAM: apply(Object) → Object (erased), instantiated: apply(Map) → Map + CLOSURE_TYPE_INFO.put( + "org.apache.skywalking.oap.meter.analyzer.v2.dsl" + + ".SampleFamilyFunctions$TagFunction", + new ClosureTypeInfo( + org.apache.skywalking.oap.meter.analyzer.v2.dsl + .SampleFamilyFunctions.TagFunction.class, + "apply", + MethodType.methodType(Object.class, Object.class), + MethodType.methodType(Map.class, Map.class), + MethodType.methodType(Map.class, Map.class))); + + // ForEachFunction — not generic, SAM = instantiated + CLOSURE_TYPE_INFO.put(FOR_EACH_FUNCTION_TYPE, + new ClosureTypeInfo( + org.apache.skywalking.oap.meter.analyzer.v2.dsl + .SampleFamilyFunctions.ForEachFunction.class, + "accept", + 
MethodType.methodType(void.class, String.class, Map.class), + MethodType.methodType(void.class, String.class, Map.class), + MethodType.methodType(void.class, String.class, Map.class))); + + // PropertiesExtractor extends Function<Map, Map> + CLOSURE_TYPE_INFO.put(PROPERTIES_EXTRACTOR_TYPE, + new ClosureTypeInfo( + org.apache.skywalking.oap.meter.analyzer.v2.dsl + .SampleFamilyFunctions.PropertiesExtractor.class, + "apply", + MethodType.methodType(Object.class, Object.class), + MethodType.methodType(Map.class, Map.class), + MethodType.methodType(Map.class, Map.class))); + + // DecorateFunction extends Consumer<MeterEntity> + // SAM: accept(Object) → void (erased), instantiated: accept(Object) → void + CLOSURE_TYPE_INFO.put(DECORATE_FUNCTION_TYPE, + new ClosureTypeInfo( + org.apache.skywalking.oap.meter.analyzer.v2.dsl + .SampleFamilyFunctions.DecorateFunction.class, + "accept", + MethodType.methodType(void.class, Object.class), + MethodType.methodType(void.class, Object.class), + MethodType.methodType(void.class, Object.class))); + } + + static ClosureTypeInfo getClosureTypeInfo(final String interfaceType) { + return CLOSURE_TYPE_INFO.get(interfaceType); + } + + /** + * Creates a functional interface instance from a direct (unbound) method handle + * using {@link LambdaMetafactory}, capturing the target instance. This is the + * same mechanism {@code javac} uses to compile lambda expressions, so the JIT + * can fully inline through the call site; no separate {@code .class} file is + * produced on disk. 
+ * + * @param lookup the lookup used to link the lambda call site + * @param typeInfo metadata for the functional interface being implemented + * @param target a direct method handle (not bound via bindTo) + * @param receiverClass the class of the instance to capture + * @param receiver the instance to capture as the lambda's receiver + */ + static Object createLambda(final MethodHandles.Lookup lookup, + final ClosureTypeInfo typeInfo, + final MethodHandle target, + final Class<?> receiverClass, + final Object receiver) throws Exception { + try { + // The factory type captures the receiver: (ReceiverClass) → InterfaceType + final CallSite site = LambdaMetafactory.metafactory( + lookup, + typeInfo.samName, + MethodType.methodType(typeInfo.interfaceClass, receiverClass), + typeInfo.samType, + target, + typeInfo.instantiatedType); + return site.getTarget().invoke(receiver); + } catch (final Exception e) { + // Rethrow checked and unchecked exceptions unchanged for the caller. + throw e; + } catch (final Throwable t) { + // invoke() declares Throwable; wrap Errors and other Throwables as unchecked. + throw new RuntimeException("Failed to create lambda for " + typeInfo.samName, t); + } + } + + /** + * Checks whether a closure expression returns {@code boolean} by inspecting + * the last method call in the chain against {@link String} method signatures. + * MAL closure params are always {@code Map<String, String>}, so chained + * methods operate on {@code String}. 
+ */ + static boolean isBooleanExpression(final MALExpressionModel.ClosureExpr expr) { + String lastMethodName = null; + if (expr instanceof MALExpressionModel.ClosureMethodChain) { + final List<MALExpressionModel.ClosureChainSegment> segs = + ((MALExpressionModel.ClosureMethodChain) expr).getSegments(); + for (int i = segs.size() - 1; i >= 0; i--) { + if (segs.get(i) instanceof MALExpressionModel.ClosureMethodCallSeg) { + lastMethodName = + ((MALExpressionModel.ClosureMethodCallSeg) segs.get(i)) + .getName(); + break; + } + } + } + if (lastMethodName == null) { + return false; + } + for (final java.lang.reflect.Method m : String.class.getMethods()) { + if (m.getName().equals(lastMethodName) + && m.getReturnType() == boolean.class) { + return true; + } + } + return false; + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALExpressionModel.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALExpressionModel.java new file mode 100644 index 000000000000..cd05a0b630a8 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALExpressionModel.java @@ -0,0 +1,638 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler; + +import java.util.Collections; +import java.util.List; +import lombok.Getter; + +/** + * Immutable AST model for MAL (Meter Analysis Language) expressions. + * + * <p>Represents parsed expressions like: + * <pre> + * metric_name.tagEqual("k","v").sum(["tag"]).rate("PT1M").service(["svc"], Layer.GENERAL) + * (metric1 + metric2) * 100 + * metric.tag({tags -> tags.key = "val"}).histogram().histogram_percentile([50,75,90,95,99]) + * </pre> + */ +public final class MALExpressionModel { + + // ==================== Expression nodes ==================== + + /** + * Base interface for all expression AST nodes. + */ + public interface Expr { + } + + /** + * Metric reference with optional method chain: + * {@code metric_name} or {@code metric_name.sum(["tag"]).rate("PT1M")} + */ + @Getter + public static final class MetricExpr implements Expr { + private final String metricName; + private final List<MethodCall> methodChain; + + public MetricExpr(final String metricName, final List<MethodCall> methodChain) { + this.metricName = metricName; + this.methodChain = Collections.unmodifiableList(methodChain); + } + } + + /** + * Top-level function call: {@code count(metric)}, {@code topN(metric, 10, Order.ASC)} + */ + @Getter + public static final class FunctionCallExpr implements Expr { + private final String functionName; + private final List<Argument> arguments; + private final List<MethodCall> methodChain; + + public FunctionCallExpr(final String functionName, + final List<Argument> arguments, + final List<MethodCall> methodChain) { + this.functionName = functionName; + this.arguments = Collections.unmodifiableList(arguments); + this.methodChain = Collections.unmodifiableList(methodChain); + } + } + + /** + * Parenthesized expression with method chain: + * {@code (metric * 100).sum(['tag']).rate('PT1M')} 
+ */ + @Getter + public static final class ParenChainExpr implements Expr { + private final Expr inner; + private final List<MethodCall> methodChain; + + public ParenChainExpr(final Expr inner, final List<MethodCall> methodChain) { + this.inner = inner; + this.methodChain = Collections.unmodifiableList(methodChain); + } + } + + /** + * Binary arithmetic: {@code metric1 + metric2}, {@code (metric * 100)} + */ + @Getter + public static final class BinaryExpr implements Expr { + private final Expr left; + private final ArithmeticOp op; + private final Expr right; + + public BinaryExpr(final Expr left, final ArithmeticOp op, final Expr right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + /** + * Unary negation: {@code -metric} + */ + @Getter + public static final class UnaryNegExpr implements Expr { + private final Expr operand; + + public UnaryNegExpr(final Expr operand) { + this.operand = operand; + } + } + + /** + * Numeric literal: {@code 100}, {@code 3.14} + */ + @Getter + public static final class NumberExpr implements Expr { + private final double value; + + public NumberExpr(final double value) { + this.value = value; + } + } + + // ==================== Method calls ==================== + + /** + * A method call in a chain: {@code .sum(["tag"])}, {@code .rate("PT1M")} + */ + @Getter + public static final class MethodCall { + private final String name; + private final List<Argument> arguments; + + public MethodCall(final String name, final List<Argument> arguments) { + this.name = name; + this.arguments = Collections.unmodifiableList(arguments); + } + } + + // ==================== Arguments ==================== + + /** + * Base interface for method/function arguments. + */ + public interface Argument { + } + + /** + * Expression argument (metric ref, number, arithmetic). 
+ */ + @Getter + public static final class ExprArgument implements Argument { + private final Expr expr; + + public ExprArgument(final Expr expr) { + this.expr = expr; + } + } + + /** + * String list: {@code ["tag1", "tag2"]} + */ + @Getter + public static final class StringListArgument implements Argument { + private final List<String> values; + + public StringListArgument(final List<String> values) { + this.values = Collections.unmodifiableList(values); + } + } + + /** + * Number list: {@code [50, 75, 90, 95, 99]} + */ + @Getter + public static final class NumberListArgument implements Argument { + private final List<Double> values; + + public NumberListArgument(final List<Double> values) { + this.values = Collections.unmodifiableList(values); + } + } + + /** + * String literal: {@code "PT1M"}, {@code 'command'} + */ + @Getter + public static final class StringArgument implements Argument { + private final String value; + + public StringArgument(final String value) { + this.value = value; + } + } + + /** + * Boolean literal: {@code true}, {@code false} + */ + @Getter + public static final class BoolArgument implements Argument { + private final boolean value; + + public BoolArgument(final boolean value) { + this.value = value; + } + } + + /** + * Enum reference: {@code Layer.GENERAL}, {@code K8sRetagType.Pod2Service} + */ + @Getter + public static final class EnumRefArgument implements Argument { + private final String enumType; + private final String enumValue; + + public EnumRefArgument(final String enumType, final String enumValue) { + this.enumType = enumType; + this.enumValue = enumValue; + } + } + + /** + * Null literal argument. 
+ */ + public static final class NullArgument implements Argument { + } + + /** + * Closure expression: {@code {tags -> tags.key = "val"}} + */ + @Getter + public static final class ClosureArgument implements Argument { + private final List<String> params; + private final List<ClosureStatement> body; + + public ClosureArgument(final List<String> params, final List<ClosureStatement> body) { + this.params = Collections.unmodifiableList(params); + this.body = Collections.unmodifiableList(body); + } + } + + // ==================== Closure statements ==================== + + public interface ClosureStatement { + } + + @Getter + public static final class ClosureIfStatement implements ClosureStatement { + private final ClosureCondition condition; + private final List<ClosureStatement> thenBranch; + private final List<ClosureStatement> elseBranch; + + public ClosureIfStatement(final ClosureCondition condition, + final List<ClosureStatement> thenBranch, + final List<ClosureStatement> elseBranch) { + this.condition = condition; + this.thenBranch = Collections.unmodifiableList(thenBranch); + this.elseBranch = elseBranch != null + ? Collections.unmodifiableList(elseBranch) : Collections.emptyList(); + } + } + + @Getter + public static final class ClosureReturnStatement implements ClosureStatement { + private final ClosureExpr value; + + public ClosureReturnStatement(final ClosureExpr value) { + this.value = value; + } + } + + /** + * Assignment statement: {@code tags.key = expr} or {@code tags[expr] = expr} + * + * <p>{@code mapVar} is the variable (e.g. "tags"), {@code keyExpr} is the key + * expression (string literal for field access, or arbitrary expression for bracket access). 
+ */ + @Getter + public static final class ClosureAssignment implements ClosureStatement { + private final String mapVar; + private final ClosureExpr keyExpr; + private final ClosureExpr value; + + public ClosureAssignment(final String mapVar, final ClosureExpr keyExpr, + final ClosureExpr value) { + this.mapVar = mapVar; + this.keyExpr = keyExpr; + this.value = value; + } + } + + /** + * Local variable declaration: {@code String result = ""}, {@code String protocol = tags['protocol']} + */ + @Getter + public static final class ClosureVarDecl implements ClosureStatement { + private final String typeName; + private final String varName; + private final ClosureExpr initializer; + + public ClosureVarDecl(final String typeName, final String varName, + final ClosureExpr initializer) { + this.typeName = typeName; + this.varName = varName; + this.initializer = initializer; + } + } + + /** + * Local variable reassignment: {@code result = '129'} + */ + @Getter + public static final class ClosureVarAssign implements ClosureStatement { + private final String varName; + private final ClosureExpr value; + + public ClosureVarAssign(final String varName, final ClosureExpr value) { + this.varName = varName; + this.value = value; + } + } + + @Getter + public static final class ClosureExprStatement implements ClosureStatement { + private final ClosureExpr expr; + + public ClosureExprStatement(final ClosureExpr expr) { + this.expr = expr; + } + } + + // ==================== Closure expressions ==================== + + public interface ClosureExpr { + } + + @Getter + public static final class ClosureStringLiteral implements ClosureExpr { + private final String value; + + public ClosureStringLiteral(final String value) { + this.value = value; + } + } + + @Getter + public static final class ClosureNumberLiteral implements ClosureExpr { + private final double value; + + public ClosureNumberLiteral(final double value) { + this.value = value; + } + } + + @Getter + public static final class 
ClosureBoolLiteral implements ClosureExpr { + private final boolean value; + + public ClosureBoolLiteral(final boolean value) { + this.value = value; + } + } + + public static final class ClosureNullLiteral implements ClosureExpr { + } + + /** + * Groovy map literal: {@code ['pod': tags.pod, 'namespace': tags.namespace]} + */ + @Getter + public static final class ClosureMapLiteral implements ClosureExpr { + private final List<MapEntry> entries; + + public ClosureMapLiteral(final List<MapEntry> entries) { + this.entries = Collections.unmodifiableList(entries); + } + } + + /** + * Single entry in a Groovy map literal: {@code 'key': expr} + */ + @Getter + public static final class MapEntry { + private final String key; + private final ClosureExpr value; + + public MapEntry(final String key, final ClosureExpr value) { + this.key = key; + this.value = value; + } + } + + /** + * Method chain in closure: {@code tags.service_name}, {@code tags['key']}, + * {@code tags.service?.trim()} + */ + @Getter + public static final class ClosureMethodChain implements ClosureExpr { + private final String target; + private final List<ClosureChainSegment> segments; + + public ClosureMethodChain(final String target, + final List<ClosureChainSegment> segments) { + this.target = target; + this.segments = Collections.unmodifiableList(segments); + } + } + + /** + * Method chain on a non-identifier base expression: + * {@code "str".toString()}, {@code (expr).split(...)}. 
+ */ + @Getter + public static final class ClosureExprChain implements ClosureExpr { + private final ClosureExpr base; + private final List<ClosureChainSegment> segments; + + public ClosureExprChain(final ClosureExpr base, + final List<ClosureChainSegment> segments) { + this.base = base; + this.segments = Collections.unmodifiableList(segments); + } + } + + @Getter + public static final class ClosureBinaryExpr implements ClosureExpr { + private final ClosureExpr left; + private final ArithmeticOp op; + private final ClosureExpr right; + + public ClosureBinaryExpr(final ClosureExpr left, + final ArithmeticOp op, + final ClosureExpr right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + @Getter + public static final class ClosureElvisExpr implements ClosureExpr { + private final ClosureExpr primary; + private final ClosureExpr fallback; + + public ClosureElvisExpr(final ClosureExpr primary, + final ClosureExpr fallback) { + this.primary = primary; + this.fallback = fallback; + } + } + + /** + * Ternary expression: {@code condition ? trueExpr : falseExpr} + */ + @Getter + public static final class ClosureTernaryExpr implements ClosureExpr { + private final ClosureExpr condition; + private final ClosureExpr trueExpr; + private final ClosureExpr falseExpr; + + public ClosureTernaryExpr(final ClosureExpr condition, + final ClosureExpr trueExpr, + final ClosureExpr falseExpr) { + this.condition = condition; + this.trueExpr = trueExpr; + this.falseExpr = falseExpr; + } + } + + /** + * Ternary with explicit comparison condition: {@code left op right ? trueExpr : falseExpr}. + * E.g., {@code parts.length > 0 ? parts[0] : ''}. 
+ */ + @Getter + public static final class ClosureCompTernaryExpr implements ClosureExpr { + private final ClosureExpr left; + private final CompareOp op; + private final ClosureExpr right; + private final ClosureExpr trueExpr; + private final ClosureExpr falseExpr; + + public ClosureCompTernaryExpr(final ClosureExpr left, + final CompareOp op, + final ClosureExpr right, + final ClosureExpr trueExpr, + final ClosureExpr falseExpr) { + this.left = left; + this.op = op; + this.right = right; + this.trueExpr = trueExpr; + this.falseExpr = falseExpr; + } + } + + /** + * Groovy regex match: {@code expr =~ /pattern/}. + * Represents a regex match that produces a {@code java.util.regex.Matcher}. + * The compiler infers {@code String[][]} as the Java type for {@code def} variables initialized by a regex match. + */ + @Getter + public static final class ClosureRegexMatchExpr implements ClosureExpr { + private final ClosureExpr target; + private final String pattern; + + public ClosureRegexMatchExpr(final ClosureExpr target, final String pattern) { + this.target = target; + this.pattern = pattern; + } + } + + // ==================== Closure chain segments ==================== + + public interface ClosureChainSegment { + } + + @Getter + public static final class ClosureFieldAccess implements ClosureChainSegment { + private final String name; + private final boolean safeNav; + + public ClosureFieldAccess(final String name, final boolean safeNav) { + this.name = name; + this.safeNav = safeNav; + } + } + + @Getter + public static final class ClosureMethodCallSeg implements ClosureChainSegment { + private final String name; + private final List<ClosureExpr> arguments; + private final boolean safeNav; + + public ClosureMethodCallSeg(final String name, + final List<ClosureExpr> arguments, + final boolean safeNav) { + this.name = name; + this.arguments = Collections.unmodifiableList(arguments); + this.safeNav = safeNav; + } + } + + @Getter + public static final class ClosureIndexAccess implements ClosureChainSegment { + private final ClosureExpr index; + + public ClosureIndexAccess(final 
ClosureExpr index) { + this.index = index; + } + } + + // ==================== Closure conditions ==================== + + public interface ClosureCondition extends ClosureExpr { + } + + @Getter + public static final class ClosureComparison implements ClosureCondition { + private final ClosureExpr left; + private final CompareOp op; + private final ClosureExpr right; + + public ClosureComparison(final ClosureExpr left, + final CompareOp op, + final ClosureExpr right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + @Getter + public static final class ClosureLogical implements ClosureCondition { + private final ClosureCondition left; + private final LogicalOp op; + private final ClosureCondition right; + + public ClosureLogical(final ClosureCondition left, + final LogicalOp op, + final ClosureCondition right) { + this.left = left; + this.op = op; + this.right = right; + } + } + + @Getter + public static final class ClosureNot implements ClosureCondition { + private final ClosureCondition inner; + + public ClosureNot(final ClosureCondition inner) { + this.inner = inner; + } + } + + @Getter + public static final class ClosureExprCondition implements ClosureCondition { + private final ClosureExpr expr; + + public ClosureExprCondition(final ClosureExpr expr) { + this.expr = expr; + } + } + + @Getter + public static final class ClosureInCondition implements ClosureCondition { + private final ClosureExpr expr; + private final List<String> values; + + public ClosureInCondition(final ClosureExpr expr, final List<String> values) { + this.expr = expr; + this.values = Collections.unmodifiableList(values); + } + } + + // ==================== Enums ==================== + + public enum ArithmeticOp { + ADD, SUB, MUL, DIV + } + + public enum CompareOp { + EQ, NEQ, GT, LT, GTE, LTE + } + + public enum LogicalOp { + AND, OR + } + + private MALExpressionModel() { + } +} diff --git 
a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALScriptParser.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALScriptParser.java new file mode 100644 index 000000000000..f7c608b36438 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALScriptParser.java @@ -0,0 +1,888 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.regex.Matcher; +import java.util.regex.Pattern; +import org.antlr.v4.runtime.BaseErrorListener; +import org.antlr.v4.runtime.CharStreams; +import org.antlr.v4.runtime.CommonTokenStream; +import org.antlr.v4.runtime.RecognitionException; +import org.antlr.v4.runtime.Recognizer; +import org.apache.skywalking.mal.rt.grammar.MALLexer; +import org.apache.skywalking.mal.rt.grammar.MALParser; +import org.apache.skywalking.mal.rt.grammar.MALParserBaseVisitor; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.Argument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ArithmeticOp; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.BinaryExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.BoolArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureAssignment; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureBinaryExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureBoolLiteral; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureChainSegment; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureCondition; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureExprCondition; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureExprStatement; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureFieldAccess; +import 
org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureIfStatement; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureIndexAccess; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureMethodCallSeg; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureMethodChain; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureNullLiteral; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureNumberLiteral; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureReturnStatement; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureStatement; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureStringLiteral; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureVarAssign; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureVarDecl; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.CompareOp; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.EnumRefArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.Expr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ExprArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.FunctionCallExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.LogicalOp; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.MetricExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.MethodCall; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.NumberExpr; +import 
org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.NumberListArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ParenChainExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.StringArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.StringListArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.UnaryNegExpr; + +/** + * Facade: parses MAL expression strings into {@link MALExpressionModel.Expr} AST. + * + * <pre> + * MALExpressionModel.Expr ast = MALScriptParser.parse( + * "metric.sum(['tag']).rate('PT1M').service(['svc'], Layer.GENERAL)"); + * </pre> + */ +public final class MALScriptParser { + + private MALScriptParser() { + } + + /** + * Pre-process expression to convert Groovy regex literals used as method + * arguments into string literals. E.g., {@code split(/\|/, -1)} becomes + * {@code split("\\|", -1)}. Regex literals after {@code =~} are handled + * by the lexer mode and are NOT touched here. + */ + static String preprocessRegexLiterals(final String expression) { + // Match /pattern/ that appears after ( or , (method arg context), + // but NOT after =~ (which is handled by lexer mode) + final Pattern argRegex = Pattern.compile( + "(?<=[,(])\\s*/([^/\\r\\n]+)/"); + final Matcher m = argRegex.matcher(expression); + if (!m.find()) { + return expression; + } + final StringBuffer sb = new StringBuffer(); + m.reset(); + while (m.find()) { + // Escape backslashes: regex literal \| must become string literal \\| + // because the MAL lexer only recognizes \\, \", \' etc. 
as escape sequences + final String body = m.group(1).replace("\\", "\\\\"); + // Preserve leading whitespace from the match + final String leading = m.group().substring(0, m.group().indexOf('/')); + m.appendReplacement(sb, + Matcher.quoteReplacement( + leading + "\"" + body + "\"")); + } + m.appendTail(sb); + return sb.toString(); + } + + public static Expr parse(final String expression) { + final String preprocessed = preprocessRegexLiterals(expression); + final MALLexer lexer = new MALLexer(CharStreams.fromString(preprocessed)); + final CommonTokenStream tokens = new CommonTokenStream(lexer); + final MALParser parser = new MALParser(tokens); + + final List<String> errors = new ArrayList<>(); + parser.removeErrorListeners(); + parser.addErrorListener(new BaseErrorListener() { + @Override + public void syntaxError(final Recognizer<?, ?> recognizer, + final Object offendingSymbol, + final int line, + final int charPositionInLine, + final String msg, + final RecognitionException e) { + errors.add(line + ":" + charPositionInLine + " " + msg); + } + }); + + final MALParser.ExpressionContext tree = parser.expression(); + if (!errors.isEmpty()) { + throw new IllegalArgumentException( + "MAL expression parsing failed: " + String.join("; ", errors) + + " in expression: " + expression); + } + + return new MALExprVisitor().visit(tree.additiveExpression()); + } + + /** + * Parse a standalone filter closure expression into a {@link ClosureArgument}. + * + * @param filterExpression e.g. 
{@code "{ tags -> tags.job_name == 'mysql-monitoring' }"} + */ + public static ClosureArgument parseFilter(final String filterExpression) { + final MALLexer lexer = new MALLexer(CharStreams.fromString(filterExpression)); + final CommonTokenStream tokens = new CommonTokenStream(lexer); + final MALParser parser = new MALParser(tokens); + + final List<String> errors = new ArrayList<>(); + parser.removeErrorListeners(); + parser.addErrorListener(new BaseErrorListener() { + @Override + public void syntaxError(final Recognizer<?, ?> recognizer, + final Object offendingSymbol, + final int line, + final int charPositionInLine, + final String msg, + final RecognitionException e) { + errors.add(line + ":" + charPositionInLine + " " + msg); + } + }); + + final MALParser.FilterExpressionContext tree = parser.filterExpression(); + if (!errors.isEmpty()) { + throw new IllegalArgumentException( + "MAL filter expression parsing failed: " + String.join("; ", errors) + + " in expression: " + filterExpression); + } + + return new ClosureVisitor().visitClosure(tree.closureExpression()); + } + + /** + * Visitor transforming ANTLR4 parse tree into MAL expression AST. + */ + private static final class MALExprVisitor extends MALParserBaseVisitor<Expr> { + + @Override + public Expr visitAdditiveExpression(final MALParser.AdditiveExpressionContext ctx) { + Expr result = visit(ctx.multiplicativeExpression(0)); + for (int i = 1; i < ctx.multiplicativeExpression().size(); i++) { + final ArithmeticOp op = ctx.getChild(2 * i - 1).getText().equals("+") + ? ArithmeticOp.ADD : ArithmeticOp.SUB; + result = new BinaryExpr(result, op, visit(ctx.multiplicativeExpression(i))); + } + return result; + } + + @Override + public Expr visitMultiplicativeExpression( + final MALParser.MultiplicativeExpressionContext ctx) { + Expr result = visit(ctx.unaryExpression(0)); + for (int i = 1; i < ctx.unaryExpression().size(); i++) { + final ArithmeticOp op = ctx.getChild(2 * i - 1).getText().equals("*") + ? 
ArithmeticOp.MUL : ArithmeticOp.DIV; + result = new BinaryExpr(result, op, visit(ctx.unaryExpression(i))); + } + return result; + } + + @Override + public Expr visitUnaryNeg(final MALParser.UnaryNegContext ctx) { + return new UnaryNegExpr(visit(ctx.unaryExpression())); + } + + @Override + public Expr visitUnaryPostfix(final MALParser.UnaryPostfixContext ctx) { + return visit(ctx.postfixExpression()); + } + + @Override + public Expr visitUnaryNumber(final MALParser.UnaryNumberContext ctx) { + return new NumberExpr(Double.parseDouble(ctx.NUMBER().getText())); + } + + @Override + public Expr visitPostfixExpression(final MALParser.PostfixExpressionContext ctx) { + final List<MethodCall> methods = new ArrayList<>(); + for (final MALParser.MethodCallContext mc : ctx.methodCall()) { + methods.add(visitMethodCallNode(mc)); + } + + final MALParser.PrimaryContext primary = ctx.primary(); + if (primary.functionCall() != null) { + final MALParser.FunctionCallContext fc = primary.functionCall(); + final List<Argument> args = fc.argumentList() != null + ? visitArgList(fc.argumentList()) : Collections.emptyList(); + return new FunctionCallExpr(fc.IDENTIFIER().getText(), args, methods); + } + + if (primary.additiveExpression() != null) { + final Expr inner = visit(primary.additiveExpression()); + if (methods.isEmpty()) { + return inner; + } + return new ParenChainExpr(inner, methods); + } + + return new MetricExpr(primary.IDENTIFIER().getText(), methods); + } + + private MethodCall visitMethodCallNode(final MALParser.MethodCallContext ctx) { + final String name = ctx.IDENTIFIER().getText(); + final List<Argument> args = ctx.argumentList() != null + ? 
visitArgList(ctx.argumentList()) : Collections.emptyList(); + return new MethodCall(name, args); + } + + private List<Argument> visitArgList(final MALParser.ArgumentListContext ctx) { + final List<Argument> args = new ArrayList<>(); + for (final MALParser.ArgumentContext argCtx : ctx.argument()) { + args.add(convertArgument(argCtx)); + } + return args; + } + + private Argument convertArgument(final MALParser.ArgumentContext ctx) { + if (ctx.stringList() != null) { + return convertStringList(ctx.stringList()); + } + if (ctx.numberList() != null) { + return convertNumberList(ctx.numberList()); + } + if (ctx.closureExpression() != null) { + return new ClosureVisitor().visitClosure(ctx.closureExpression()); + } + if (ctx.enumRef() != null) { + return new EnumRefArgument( + ctx.enumRef().IDENTIFIER(0).getText(), + ctx.enumRef().IDENTIFIER(1).getText()); + } + if (ctx.STRING() != null) { + return new StringArgument(stripQuotes(ctx.STRING().getText())); + } + if (ctx.boolLiteral() != null) { + return new BoolArgument(ctx.boolLiteral().TRUE() != null); + } + if (ctx.NULL() != null) { + return new MALExpressionModel.NullArgument(); + } + // additiveExpression — nested expression + return new ExprArgument(visit(ctx.additiveExpression())); + } + + private StringListArgument convertStringList(final MALParser.StringListContext ctx) { + final List<String> values = new ArrayList<>(); + ctx.STRING().forEach(s -> values.add(stripQuotes(s.getText()))); + return new StringListArgument(values); + } + + private NumberListArgument convertNumberList(final MALParser.NumberListContext ctx) { + final List<Double> values = new ArrayList<>(); + ctx.NUMBER().forEach(n -> values.add(Double.parseDouble(n.getText()))); + return new NumberListArgument(values); + } + } + + /** + * Visitor for closure expressions within MAL. 
+ */ + private static final class ClosureVisitor { + + ClosureArgument visitClosure(final MALParser.ClosureExpressionContext ctx) { + final List<String> params = new ArrayList<>(); + if (ctx.closureParams() != null) { + ctx.closureParams().IDENTIFIER().forEach(id -> params.add(id.getText())); + } + final List<ClosureStatement> body = convertClosureBody(ctx.closureBody()); + return new ClosureArgument(params, body); + } + + private List<ClosureStatement> convertClosureBody( + final MALParser.ClosureBodyContext ctx) { + // Bare condition or braced condition: { tags -> tags.x == 'v' } + if (ctx.closureCondition() != null) { + final ClosureCondition cond = convertCondition(ctx.closureCondition()); + return List.of(new ClosureExprStatement(cond)); + } + final List<ClosureStatement> stmts = new ArrayList<>(); + for (final MALParser.ClosureStatementContext stmtCtx : ctx.closureStatement()) { + stmts.add(convertClosureStatement(stmtCtx)); + } + return stmts; + } + + private ClosureStatement convertClosureStatement( + final MALParser.ClosureStatementContext ctx) { + if (ctx.ifStatement() != null) { + return convertIfStatement(ctx.ifStatement()); + } + if (ctx.returnStatement() != null) { + final ClosureExpr value = ctx.returnStatement().closureExpr() != null + ? convertClosureExpr(ctx.returnStatement().closureExpr()) : null; + return new ClosureReturnStatement(value); + } + if (ctx.variableDeclaration() != null) { + final MALParser.VariableDeclarationContext vd = ctx.variableDeclaration(); + final String typeName; + final String varName; + if (vd.DEF() != null) { + // def keyword: def matcher = ... + // Infer type from initializer + varName = vd.IDENTIFIER(0).getText(); + final ClosureExpr init = convertClosureExpr(vd.closureExpr()); + typeName = inferDefType(init); + return new ClosureVarDecl(typeName, varName, init); + } + if (vd.L_BRACKET() != null) { + // Array type: String[] parts = ... 
+ typeName = vd.IDENTIFIER(0).getText() + "[]"; + } else { + typeName = vd.IDENTIFIER(0).getText(); + } + return new ClosureVarDecl( + typeName, + vd.IDENTIFIER(1).getText(), + convertClosureExpr(vd.closureExpr())); + } + if (ctx.assignmentStatement() != null) { + final MALParser.ClosureFieldAccessContext fa = + ctx.assignmentStatement().closureFieldAccess(); + final List<org.antlr.v4.runtime.tree.TerminalNode> ids = fa.IDENTIFIER(); + final String firstId = ids.get(0).getText(); + if (ids.size() == 1 && fa.closureExpr() == null) { + // bare variable assignment: result = '129' + final ClosureExpr value = + convertClosureExpr(ctx.assignmentStatement().closureExpr()); + return new ClosureVarAssign(firstId, value); + } + // Map assignment: tags.field = value or tags[expr] = value + final ClosureExpr keyExpr; + if (fa.closureExpr() != null) { + // tags[expr] = value + keyExpr = convertClosureExpr(fa.closureExpr()); + } else { + // tags.field = value — the key is the last IDENTIFIER + keyExpr = new ClosureStringLiteral(ids.get(ids.size() - 1).getText()); + } + final ClosureExpr value = + convertClosureExpr(ctx.assignmentStatement().closureExpr()); + return new ClosureAssignment(firstId, keyExpr, value); + } + // expressionStatement + return new ClosureExprStatement( + convertClosureExpr(ctx.expressionStatement().closureExpr())); + } + + private ClosureIfStatement convertIfStatement( + final MALParser.IfStatementContext ctx) { + final ClosureCondition condition = convertCondition(ctx.closureCondition()); + final List<ClosureStatement> thenBranch = new ArrayList<>(); + if (ctx.closureBlock(0) != null) { + for (final MALParser.ClosureStatementContext stmtCtx : + ctx.closureBlock(0).closureStatement()) { + thenBranch.add(convertClosureStatement(stmtCtx)); + } + } + List<ClosureStatement> elseBranch = null; + // Check for else-if or else + if (ctx.ifStatement() != null) { + elseBranch = new ArrayList<>(); + elseBranch.add(convertIfStatement(ctx.ifStatement())); + } else if 
(ctx.closureBlock().size() > 1) { + elseBranch = new ArrayList<>(); + for (final MALParser.ClosureStatementContext stmtCtx : + ctx.closureBlock(1).closureStatement()) { + elseBranch.add(convertClosureStatement(stmtCtx)); + } + } + return new ClosureIfStatement(condition, thenBranch, elseBranch); + } + + private ClosureCondition convertCondition( + final MALParser.ClosureConditionContext ctx) { + return convertConditionOr(ctx.closureConditionOr()); + } + + private ClosureCondition convertConditionOr( + final MALParser.ClosureConditionOrContext ctx) { + ClosureCondition result = convertConditionAnd(ctx.closureConditionAnd(0)); + for (int i = 1; i < ctx.closureConditionAnd().size(); i++) { + result = new MALExpressionModel.ClosureLogical( + result, LogicalOp.OR, convertConditionAnd(ctx.closureConditionAnd(i))); + } + return result; + } + + private ClosureCondition convertConditionAnd( + final MALParser.ClosureConditionAndContext ctx) { + ClosureCondition result = convertConditionPrimary(ctx.closureConditionPrimary(0)); + for (int i = 1; i < ctx.closureConditionPrimary().size(); i++) { + result = new MALExpressionModel.ClosureLogical( + result, LogicalOp.AND, + convertConditionPrimary(ctx.closureConditionPrimary(i))); + } + return result; + } + + private ClosureCondition convertConditionPrimary( + final MALParser.ClosureConditionPrimaryContext ctx) { + if (ctx instanceof MALParser.ConditionEqContext) { + final MALParser.ConditionEqContext eq = (MALParser.ConditionEqContext) ctx; + return new MALExpressionModel.ClosureComparison( + convertClosureExpr(eq.closureExpr(0)), + CompareOp.EQ, + convertClosureExpr(eq.closureExpr(1))); + } + if (ctx instanceof MALParser.ConditionNeqContext) { + final MALParser.ConditionNeqContext neq = (MALParser.ConditionNeqContext) ctx; + return new MALExpressionModel.ClosureComparison( + convertClosureExpr(neq.closureExpr(0)), + CompareOp.NEQ, + convertClosureExpr(neq.closureExpr(1))); + } + if (ctx instanceof MALParser.ConditionGtContext) { 
+ final MALParser.ConditionGtContext gt = (MALParser.ConditionGtContext) ctx; + return new MALExpressionModel.ClosureComparison( + convertClosureExpr(gt.closureExpr(0)), + CompareOp.GT, + convertClosureExpr(gt.closureExpr(1))); + } + if (ctx instanceof MALParser.ConditionLtContext) { + final MALParser.ConditionLtContext lt = (MALParser.ConditionLtContext) ctx; + return new MALExpressionModel.ClosureComparison( + convertClosureExpr(lt.closureExpr(0)), + CompareOp.LT, + convertClosureExpr(lt.closureExpr(1))); + } + if (ctx instanceof MALParser.ConditionNotContext) { + final MALParser.ConditionNotContext not = (MALParser.ConditionNotContext) ctx; + return new MALExpressionModel.ClosureNot( + convertConditionPrimary(not.closureConditionPrimary())); + } + if (ctx instanceof MALParser.ConditionInContext) { + final MALParser.ConditionInContext in = (MALParser.ConditionInContext) ctx; + final List<String> values = new ArrayList<>(); + if (in.closureListLiteral() != null) { + in.closureListLiteral().STRING().forEach( + s -> values.add(stripQuotes(s.getText()))); + } + return new MALExpressionModel.ClosureInCondition( + convertClosureExpr(in.closureExpr()), values); + } + if (ctx instanceof MALParser.ConditionParenContext) { + final MALParser.ConditionParenContext paren = + (MALParser.ConditionParenContext) ctx; + return convertCondition(paren.closureCondition()); + } + // conditionExpr + final MALParser.ConditionExprContext exprCtx = + (MALParser.ConditionExprContext) ctx; + return new ClosureExprCondition(convertClosureExpr(exprCtx.closureExpr())); + } + + /** + * Infer the Java type for a {@code def} variable declaration from its initializer. 
+ * <ul> + * <li>Regex match ({@code =~}) produces {@code String[][]}</li> + * <li>Method chain ending in {@code .split()} produces {@code String[]}</li> + * <li>Otherwise defaults to {@code Object}</li> + * </ul> + */ + private String inferDefType(final ClosureExpr init) { + if (init instanceof MALExpressionModel.ClosureRegexMatchExpr) { + return "String[][]"; + } + final List<MALExpressionModel.ClosureChainSegment> segs; + if (init instanceof ClosureMethodChain) { + segs = ((ClosureMethodChain) init).getSegments(); + } else if (init instanceof MALExpressionModel.ClosureExprChain) { + segs = ((MALExpressionModel.ClosureExprChain) init).getSegments(); + } else { + segs = Collections.emptyList(); + } + if (!segs.isEmpty()) { + final MALExpressionModel.ClosureChainSegment last = + segs.get(segs.size() - 1); + if (last instanceof MALExpressionModel.ClosureMethodCallSeg + && "split".equals( + ((MALExpressionModel.ClosureMethodCallSeg) last).getName())) { + return "String[]"; + } + } + return "Object"; + } + + private CompareOp convertCompOp(final MALParser.CompOpContext ctx) { + if (ctx.GT() != null) { + return CompareOp.GT; + } + if (ctx.LT() != null) { + return CompareOp.LT; + } + if (ctx.GTE() != null) { + return CompareOp.GTE; + } + if (ctx.LTE() != null) { + return CompareOp.LTE; + } + if (ctx.DEQ() != null) { + return CompareOp.EQ; + } + return CompareOp.NEQ; + } + + private ClosureExpr convertClosureExpr(final MALParser.ClosureExprContext ctx) { + if (ctx instanceof MALParser.ClosureTernaryCompContext) { + final MALParser.ClosureTernaryCompContext tc = + (MALParser.ClosureTernaryCompContext) ctx; + return new MALExpressionModel.ClosureCompTernaryExpr( + convertClosureExpr(tc.closureExpr(0)), + convertCompOp(tc.compOp()), + convertClosureExpr(tc.closureExpr(1)), + convertClosureExpr(tc.closureExpr(2)), + convertClosureExpr(tc.closureExpr(3))); + } + if (ctx instanceof MALParser.ClosureTernaryContext) { + final MALParser.ClosureTernaryContext ternary = + 
(MALParser.ClosureTernaryContext) ctx; + return new MALExpressionModel.ClosureTernaryExpr( + convertClosureExpr(ternary.closureExpr(0)), + convertClosureExpr(ternary.closureExpr(1)), + convertClosureExpr(ternary.closureExpr(2))); + } + if (ctx instanceof MALParser.ClosureRegexMatchContext) { + final MALParser.ClosureRegexMatchContext rm = + (MALParser.ClosureRegexMatchContext) ctx; + final String rawRegex = rm.REGEX_LITERAL().getText(); + // Strip surrounding slashes: /pattern/ → pattern + final String pattern = rawRegex.substring(1, rawRegex.length() - 1); + return new MALExpressionModel.ClosureRegexMatchExpr( + convertClosureExpr(rm.closureExpr()), pattern); + } + if (ctx instanceof MALParser.ClosureElvisContext) { + final MALParser.ClosureElvisContext elvis = + (MALParser.ClosureElvisContext) ctx; + return new MALExpressionModel.ClosureElvisExpr( + convertClosureExpr(elvis.closureExpr(0)), + convertClosureExpr(elvis.closureExpr(1))); + } + if (ctx instanceof MALParser.ClosureAddContext) { + final MALParser.ClosureAddContext add = (MALParser.ClosureAddContext) ctx; + return new ClosureBinaryExpr( + convertClosureExpr(add.closureExpr(0)), + ArithmeticOp.ADD, + convertClosureExpr(add.closureExpr(1))); + } + if (ctx instanceof MALParser.ClosureSubContext) { + final MALParser.ClosureSubContext sub = (MALParser.ClosureSubContext) ctx; + return new ClosureBinaryExpr( + convertClosureExpr(sub.closureExpr(0)), + ArithmeticOp.SUB, + convertClosureExpr(sub.closureExpr(1))); + } + if (ctx instanceof MALParser.ClosureMulContext) { + final MALParser.ClosureMulContext mul = (MALParser.ClosureMulContext) ctx; + return new ClosureBinaryExpr( + convertClosureExpr(mul.closureExpr(0)), + ArithmeticOp.MUL, + convertClosureExpr(mul.closureExpr(1))); + } + if (ctx instanceof MALParser.ClosureDivContext) { + final MALParser.ClosureDivContext div = (MALParser.ClosureDivContext) ctx; + return new ClosureBinaryExpr( + convertClosureExpr(div.closureExpr(0)), + ArithmeticOp.DIV, + 
convertClosureExpr(div.closureExpr(1))); + } + if (ctx instanceof MALParser.ClosureUnaryMinusContext) { + final MALParser.ClosureUnaryMinusContext um = + (MALParser.ClosureUnaryMinusContext) ctx; + final ClosureExpr inner = + convertClosureExprPrimary(um.closureExprPrimary()); + if (inner instanceof ClosureNumberLiteral) { + return new ClosureNumberLiteral( + -((ClosureNumberLiteral) inner).getValue()); + } + return new ClosureBinaryExpr( + new ClosureNumberLiteral(0), + ArithmeticOp.SUB, + inner); + } + // closurePrimary + final MALParser.ClosurePrimaryContext primary = + (MALParser.ClosurePrimaryContext) ctx; + return convertClosureExprPrimary(primary.closureExprPrimary()); + } + + private ClosureExpr convertClosureExprPrimary( + final MALParser.ClosureExprPrimaryContext ctx) { + if (ctx instanceof MALParser.ClosureStringContext) { + final MALParser.ClosureStringContext sc = + (MALParser.ClosureStringContext) ctx; + final String raw = stripQuotes(sc.STRING().getText()); + final ClosureExpr base = expandGString(raw); + return wrapWithChainAccess(base, sc.closureChainAccess()); + } + if (ctx instanceof MALParser.ClosureNumberContext) { + return new ClosureNumberLiteral( + Double.parseDouble( + ((MALParser.ClosureNumberContext) ctx).NUMBER().getText())); + } + if (ctx instanceof MALParser.ClosureNullContext) { + return new ClosureNullLiteral(); + } + if (ctx instanceof MALParser.ClosureBoolContext) { + final MALParser.ClosureBoolContext bc = (MALParser.ClosureBoolContext) ctx; + return new ClosureBoolLiteral(bc.boolLiteral().TRUE() != null); + } + if (ctx instanceof MALParser.ClosureParenContext) { + final MALParser.ClosureParenContext pc = + (MALParser.ClosureParenContext) ctx; + final ClosureExpr base = convertClosureExpr(pc.closureExpr()); + return wrapWithChainAccess(base, pc.closureChainAccess()); + } + if (ctx instanceof MALParser.ClosureMapContext) { + final MALParser.ClosureMapLiteralContext mapCtx = + ((MALParser.ClosureMapContext) ctx).closureMapLiteral(); 
+ final List<MALExpressionModel.MapEntry> entries = new ArrayList<>(); + for (final MALParser.ClosureMapEntryContext entry : + mapCtx.closureMapEntry()) { + entries.add(new MALExpressionModel.MapEntry( + stripQuotes(entry.STRING().getText()), + convertClosureExpr(entry.closureExpr()))); + } + return new MALExpressionModel.ClosureMapLiteral(entries); + } + // closureChain + final MALParser.ClosureChainContext chain = (MALParser.ClosureChainContext) ctx; + return convertClosureMethodChain(chain.closureMethodChain()); + } + + private ClosureMethodChain convertClosureMethodChain( + final MALParser.ClosureMethodChainContext ctx) { + final String target = ctx.closureTarget().IDENTIFIER().getText(); + final List<ClosureChainSegment> segments = new ArrayList<>(); + + for (final MALParser.ClosureChainAccessContext acc : ctx.closureChainAccess()) { + if (acc.closureChainSegment() != null) { + final boolean isSafeNav = acc.safeNav() != null; + segments.add(convertClosureChainSegment( + acc.closureChainSegment(), isSafeNav)); + } else if (acc.closureExpr() != null) { + // Direct bracket access: tags['key'] or tags[expr] + segments.add(new ClosureIndexAccess( + convertClosureExpr(acc.closureExpr()))); + } + } + + return new ClosureMethodChain(target, segments); + } + + private ClosureExpr wrapWithChainAccess( + final ClosureExpr base, + final List<MALParser.ClosureChainAccessContext> accesses) { + if (accesses == null || accesses.isEmpty()) { + return base; + } + final List<ClosureChainSegment> segments = new ArrayList<>(); + for (final MALParser.ClosureChainAccessContext acc : accesses) { + if (acc.closureChainSegment() != null) { + final boolean isSafeNav = acc.safeNav() != null; + segments.add(convertClosureChainSegment( + acc.closureChainSegment(), isSafeNav)); + } else if (acc.closureExpr() != null) { + segments.add(new ClosureIndexAccess( + convertClosureExpr(acc.closureExpr()))); + } + } + return new MALExpressionModel.ClosureExprChain(base, segments); + } + + private 
ClosureChainSegment convertClosureChainSegment( + final MALParser.ClosureChainSegmentContext ctx, + final boolean safeNav) { + if (ctx instanceof MALParser.ChainMethodCallContext) { + final MALParser.ChainMethodCallContext mc = + (MALParser.ChainMethodCallContext) ctx; + final List<ClosureExpr> args = new ArrayList<>(); + if (mc.closureArgList() != null) { + for (final MALParser.ClosureExprContext argCtx : + mc.closureArgList().closureExpr()) { + args.add(convertClosureExpr(argCtx)); + } + } + return new ClosureMethodCallSeg(mc.IDENTIFIER().getText(), args, safeNav); + } + if (ctx instanceof MALParser.ChainIndexAccessContext) { + final MALParser.ChainIndexAccessContext idx = + (MALParser.ChainIndexAccessContext) ctx; + return new ClosureIndexAccess(convertClosureExpr(idx.closureExpr())); + } + // chainFieldAccess + final MALParser.ChainFieldAccessContext fa = + (MALParser.ChainFieldAccessContext) ctx; + return new ClosureFieldAccess(fa.IDENTIFIER().getText(), safeNav); + } + } + + /** + * Expand Groovy GString interpolation: {@code "text ${expr} more"} becomes + * a concatenation chain: {@code "text " + expr + " more"}. + * If the string contains no {@code ${...}} patterns, returns a plain + * {@link ClosureStringLiteral}. 
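+ * <p>For example, {@code "svc: ${tags.service_name}"} expands to the concatenation
+ * {@code "svc: " + tags.service_name} (the label name here is illustrative).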
+ */ + static MALExpressionModel.ClosureExpr expandGString(final String raw) { + if (!raw.contains("${")) { + return new MALExpressionModel.ClosureStringLiteral(raw); + } + + final List<MALExpressionModel.ClosureExpr> parts = new ArrayList<>(); + int pos = 0; + while (pos < raw.length()) { + final int dollarBrace = raw.indexOf("${", pos); + if (dollarBrace < 0) { + // Remaining text + parts.add(new MALExpressionModel.ClosureStringLiteral( + raw.substring(pos))); + break; + } + // Text before ${ + if (dollarBrace > pos) { + parts.add(new MALExpressionModel.ClosureStringLiteral( + raw.substring(pos, dollarBrace))); + } + // Find matching } + int braceDepth = 1; + int i = dollarBrace + 2; + while (i < raw.length() && braceDepth > 0) { + if (raw.charAt(i) == '{') { + braceDepth++; + } else if (raw.charAt(i) == '}') { + braceDepth--; + } + i++; + } + final String innerExpr = raw.substring(dollarBrace + 2, i - 1); + // Parse the inner expression as a mini closure expression + parts.add(parseGStringInterpolation(innerExpr)); + pos = i; + } + + // Build concatenation chain + MALExpressionModel.ClosureExpr result = parts.get(0); + for (int i = 1; i < parts.size(); i++) { + result = new MALExpressionModel.ClosureBinaryExpr( + result, MALExpressionModel.ArithmeticOp.ADD, parts.get(i)); + } + return result; + } + + /** + * Parse a GString interpolation expression like {@code tags.service_name} + * or {@code log.service} into a {@link MALExpressionModel.ClosureMethodChain}. + */ + private static MALExpressionModel.ClosureExpr parseGStringInterpolation( + final String expr) { + // Simple dotted path: tags.service_name, me.serviceName, etc. 
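+ // Illustrative: "tags.service_name" becomes a ClosureMethodChain with
+ // target "tags" and a single field-access segment "service_name".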
+ // Split on dots and build a chain + final String[] dotParts = expr.split("\\."); + if (dotParts.length == 1) { + // Bare variable reference + return new MALExpressionModel.ClosureMethodChain( + dotParts[0], Collections.emptyList()); + } + // Build chain: first part is target, rest are field accesses + final List<MALExpressionModel.ClosureChainSegment> segments = new ArrayList<>(); + for (int i = 1; i < dotParts.length; i++) { + // Check for method call: name() + if (dotParts[i].endsWith("()")) { + final String methodName = dotParts[i].substring( + 0, dotParts[i].length() - 2); + segments.add(new MALExpressionModel.ClosureMethodCallSeg( + methodName, Collections.emptyList(), false)); + } else if (dotParts[i].endsWith(")")) { + // Method with args not supported in GString — treat as field + segments.add(new MALExpressionModel.ClosureFieldAccess( + dotParts[i], false)); + } else { + segments.add(new MALExpressionModel.ClosureFieldAccess( + dotParts[i], false)); + } + } + return new MALExpressionModel.ClosureMethodChain(dotParts[0], segments); + } + + static String stripQuotes(final String s) { + if (s.length() >= 2 && (s.charAt(0) == '\'' || s.charAt(0) == '"')) { + return unescapeString(s.substring(1, s.length() - 1)); + } + return s; + } + + /** + * Interpret Java/Groovy escape sequences in a string literal body. + * ANTLR4 preserves raw source bytes, so {@code "\\|"} yields {@code \\|} + * after quote stripping. This method converts it to the logical value + * {@code \|} so that codegen's escapeJava round-trips correctly. 
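+ * <p>Per the switch below: {@code \n} becomes a real newline, while an unknown
+ * escape such as {@code \q} is preserved verbatim.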
+ */ + private static String unescapeString(final String s) { + if (s.indexOf('\\') < 0) { + return s; + } + final StringBuilder sb = new StringBuilder(s.length()); + for (int i = 0; i < s.length(); i++) { + final char c = s.charAt(i); + if (c == '\\' && i + 1 < s.length()) { + final char next = s.charAt(i + 1); + switch (next) { + case '\\': + sb.append('\\'); + break; + case 'n': + sb.append('\n'); + break; + case 'r': + sb.append('\r'); + break; + case 't': + sb.append('\t'); + break; + case '"': + sb.append('"'); + break; + case '\'': + sb.append('\''); + break; + default: + // Unknown escape — preserve as-is + sb.append(c).append(next); + break; + } + i++; + } else { + sb.append(c); + } + } + return sb.toString(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/rt/MalExpressionPackageHolder.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/rt/MalExpressionPackageHolder.java new file mode 100644 index 000000000000..b3455543fdc4 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/rt/MalExpressionPackageHolder.java @@ -0,0 +1,26 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt; + +/** + * Empty marker class used as the class loading anchor for Javassist + * {@code CtClass.toClass(Class)} on JDK 16+. + * Generated MAL expression classes are loaded in this package. + */ +public class MalExpressionPackageHolder { +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/rt/MalRuntimeHelper.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/rt/MalRuntimeHelper.java new file mode 100644 index 000000000000..1e8dfc0d5f61 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/rt/MalRuntimeHelper.java @@ -0,0 +1,76 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler.rt; + +import java.util.regex.Matcher; +import java.util.regex.Pattern; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder; + +/** + * Static helper methods called by v2-generated {@code MalExpression} classes. + * Keeps new runtime behaviour in the v2 compiler package, avoiding modifications + * to the shared {@link SampleFamily} class. + */ +public final class MalRuntimeHelper { + + private MalRuntimeHelper() { + } + + /** + * Groovy regex match ({@code =~}): returns a {@code String[][]} where each row is + * one match with group 0 (full match) and capture groups 1..N. + * Returns {@code null} if the pattern does not match, so that Groovy-style + * truthiness checks ({@code matcher ? matcher[0][1] : "unknown"}) work via null check. + */ + public static String[][] regexMatch(final String input, final String regex) { + if (input == null) { + return null; + } + final Matcher m = Pattern.compile(regex).matcher(input); + if (!m.find()) { + return null; + } + final int groupCount = m.groupCount(); + final String[] row = new String[groupCount + 1]; + for (int i = 0; i <= groupCount; i++) { + row[i] = m.group(i); + } + return new String[][] {row}; + } + + /** + * Reverse division: computes {@code numerator / v} for each sample value {@code v}. + * Used by generated code for {@code Number / SampleFamily} expressions. 
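+ * <p>For example, a rule expression such as {@code 1000 / some_latency} (metric name
+ * illustrative) maps each sample value {@code v} to {@code 1000 / v}, leaving labels untouched.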
+ */ + public static SampleFamily divReverse(final double numerator, + final SampleFamily sf) { + if (sf == SampleFamily.EMPTY) { + return SampleFamily.EMPTY; + } + final Sample[] original = sf.samples; + final Sample[] result = new Sample[original.length]; + for (int i = 0; i < original.length; i++) { + result[i] = original[i].toBuilder() + .value(numerator / original[i].getValue()) + .build(); + } + return SampleFamilyBuilder.newBuilder(result).build(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DSL.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DSL.java new file mode 100644 index 000000000000..c29c6ba9c21d --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DSL.java @@ -0,0 +1,65 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package org.apache.skywalking.oap.meter.analyzer.v2.dsl;
+
+import lombok.extern.slf4j.Slf4j;
+import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALClassGenerator;
+
+/**
+ * DSL compiles MAL expression strings into {@link Expression} objects
+ * using ANTLR4 parsing and Javassist bytecode generation.
+ */
+@Slf4j
+public final class DSL {
+
+ private static final MALClassGenerator GENERATOR = new MALClassGenerator();
+
+ /**
+ * Parse a string literal into an Expression object, which can be reused.
+ *
+ * @param metricName the name of the metric defined in the MAL rule.
+ * @param expression the string literal representing the DSL expression.
+ * @return an Expression object that can be executed.
+ */
+ public static Expression parse(final String metricName, final String expression) {
+ return parse(metricName, expression, null);
+ }
+
+ /**
+ * Parse a string literal into an Expression object, with YAML source info for
+ * stack trace diagnostics.
+ *
+ * @param metricName the name of the metric defined in the MAL rule.
+ * @param expression the string literal representing the DSL expression.
+ * @param yamlSource YAML source identifier (e.g., "spring-sleuth[3]"), or null.
+ * @return an Expression object that can be executed.
+ */ + public static Expression parse(final String metricName, + final String expression, + final String yamlSource) { + try { + GENERATOR.setYamlSource(yamlSource); + final MalExpression malExpr = GENERATOR.compile(metricName, expression); + return new Expression(metricName, expression, malExpr); + } catch (Exception e) { + throw new IllegalStateException( + "Failed to compile MAL expression for metric: " + metricName + + ", expression: " + expression, e); + } + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DownsamplingType.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DownsamplingType.java new file mode 100644 index 000000000000..6a6f64ede7aa --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DownsamplingType.java @@ -0,0 +1,26 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ *
+ */
+
+package org.apache.skywalking.oap.meter.analyzer.v2.dsl;
+
+/**
+ * DownsamplingType indicates the downsampling type of a meter function.
+ */
+public enum DownsamplingType {
+ AVG, SUM, LATEST, SUM_PER_MIN, MAX, MIN
+}
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Expression.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Expression.java
new file mode 100644
index 000000000000..f7b012a96389
--- /dev/null
+++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Expression.java
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.skywalking.oap.meter.analyzer.v2.dsl;
+
+import java.util.Map;
+import lombok.ToString;
+import lombok.extern.slf4j.Slf4j;
+
+/**
+ * Wraps a compiled {@link MalExpression} with runtime state management.
+ *
+ * <p>Two-phase usage:
+ * <ul>
+ * <li>{@link #parse()} — returns compile-time {@link ExpressionMetadata} extracted from the AST.
+ * Called once at startup by {@link org.apache.skywalking.oap.meter.analyzer.v2.Analyzer#build}
+ * to discover sample names, scope type, aggregation labels, and metric type.</li>
+ * <li>{@link #run(Map)} — executes the compiled expression on actual sample data.
+ * Called at every ingestion cycle. Pure computation, no side effects.</li>
+ * </ul>
+ */
+@Slf4j
+@ToString(of = {"literal"})
+public class Expression {
+
+ private final String metricName;
+ private final String literal;
+ private final MalExpression expression;
+
+ public Expression(final String metricName, final String literal, final MalExpression expression) {
+ this.metricName = metricName;
+ this.literal = literal;
+ this.expression = expression;
+ }
+
+ public String generatedClassName() {
+ return expression.getClass().getName();
+ }
+
+ /**
+ * Returns compile-time metadata extracted from the expression AST.
+ */
+ public ExpressionMetadata parse() {
+ final ExpressionMetadata metadata = expression.metadata();
+ if (metadata.getScopeType() == null) {
+ throw new ExpressionParsingException(
+ literal + ": one of service(), instance() or endpoint() should be invoked");
+ }
+ if (log.isDebugEnabled()) {
+ log.debug("\"{}\" is parsed", literal);
+ }
+ return metadata;
+ }
+
+ /**
+ * Run the expression with a data map.
+ *
+ * @param sampleFamilies a data map that includes all candidate sample families to be analyzed.
+ * @return The result of execution.
+ */ + public Result run(final Map<String, SampleFamily> sampleFamilies) { + try { + for (final SampleFamily s : sampleFamilies.values()) { + if (s != SampleFamily.EMPTY) { + s.context.setMetricName(metricName); + } + } + final SampleFamily sf = expression.run(sampleFamilies); + if (sf == SampleFamily.EMPTY) { + if (log.isDebugEnabled()) { + log.debug("result of {} is empty by \"{}\"", sampleFamilies, literal); + } + return Result.fail("Parsed result is an EMPTY sample family"); + } + return Result.success(sf); + } catch (Throwable t) { + log.error("failed to run \"{}\"", literal, t); + return Result.fail(t); + } + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/ExpressionMetadata.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/ExpressionMetadata.java new file mode 100644 index 000000000000..ceaa58370bc3 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/ExpressionMetadata.java @@ -0,0 +1,66 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; +import lombok.Getter; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; + +/** + * Immutable metadata extracted from a MAL expression at compile time. + * Replaces the ThreadLocal-based {@code ExpressionParsingContext} pattern. + */ +@Getter +public class ExpressionMetadata { + + private final List<String> samples; + private final ScopeType scopeType; + private final Set<String> scopeLabels; + private final Set<String> aggregationLabels; + private final DownsamplingType downsampling; + private final boolean isHistogram; + private final int[] percentiles; + + public ExpressionMetadata(final List<String> samples, + final ScopeType scopeType, + final Set<String> scopeLabels, + final Set<String> aggregationLabels, + final DownsamplingType downsampling, + final boolean isHistogram, + final int[] percentiles) { + this.samples = Collections.unmodifiableList(samples); + this.scopeType = scopeType; + this.scopeLabels = Collections.unmodifiableSet(scopeLabels); + this.aggregationLabels = Collections.unmodifiableSet(aggregationLabels); + this.downsampling = downsampling; + this.isHistogram = isHistogram; + this.percentiles = percentiles; + } + + /** + * Get labels not related to scope (aggregation labels minus scope labels). 
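+ * <p>For example, with aggregation labels {@code [region, instance]} and scope
+ * labels {@code [instance]}, this returns {@code [region]}.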
+ */
+ public List<String> getLabels() {
+ final List<String> result = new ArrayList<>(aggregationLabels);
+ result.removeAll(scopeLabels);
+ return result;
+ }
+}
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/ExpressionParsingException.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/ExpressionParsingException.java
new file mode 100644
index 000000000000..8853625e2d88
--- /dev/null
+++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/ExpressionParsingException.java
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.skywalking.oap.meter.analyzer.v2.dsl;
+
+/**
+ * ExpressionParsingException is thrown during the expression parsing phase.
+ */ +public class ExpressionParsingException extends RuntimeException { + public ExpressionParsingException(final String message) { + super(message); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/FilterExpression.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/FilterExpression.java new file mode 100644 index 000000000000..f5fd74696ae6 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/FilterExpression.java @@ -0,0 +1,64 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; +import lombok.ToString; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALClassGenerator; + +/** + * Compiles a MAL filter closure expression into a {@link MalFilter} + * using ANTLR4 parsing and Javassist bytecode generation. 
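+ * <p>For instance, a rule-level filter such as
+ * {@code { tags -> tags['job_name'] == 'vm-monitoring' } } (label name illustrative)
+ * compiles to a {@link MalFilter} predicate over the sample tag map.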
+ */ +@Slf4j +@ToString(of = {"literal"}) +public class FilterExpression { + private static final MALClassGenerator GENERATOR = new MALClassGenerator(); + + private final String literal; + private final MalFilter malFilter; + + public FilterExpression(final String literal) { + this.literal = literal; + try { + this.malFilter = GENERATOR.compileFilter(literal); + } catch (Exception e) { + throw new IllegalStateException( + "Failed to compile MAL filter expression: " + literal, e); + } + } + + public Map<String, SampleFamily> filter(final Map<String, SampleFamily> sampleFamilies) { + try { + final Map<String, SampleFamily> result = new HashMap<>(); + for (final Map.Entry<String, SampleFamily> entry : sampleFamilies.entrySet()) { + final SampleFamily afterFilter = entry.getValue().filter(malFilter::test); + if (!Objects.equals(afterFilter, SampleFamily.EMPTY)) { + result.put(entry.getKey(), afterFilter); + } + } + return result; + } catch (Throwable t) { + log.error("failed to run \"{}\"", literal, t); + } + return sampleFamilies; + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/MalExpression.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/MalExpression.java new file mode 100644 index 000000000000..32a24dd766f6 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/MalExpression.java @@ -0,0 +1,34 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import java.util.Map; + +/** + * Each compiled MAL expression implements this interface. + */ +public interface MalExpression { + SampleFamily run(Map<String, SampleFamily> samples); + + /** + * Returns compile-time metadata extracted from the expression AST: + * sample names, scope type, aggregation labels, downsampling, etc. + */ + ExpressionMetadata metadata(); +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/MalFilter.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/MalFilter.java new file mode 100644 index 000000000000..585bfb2c2424 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/MalFilter.java @@ -0,0 +1,29 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import java.util.Map; + +/** + * Each compiled MAL filter expression implements this interface. + */ +@FunctionalInterface +public interface MalFilter { + boolean test(Map<String, String> tags); +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Result.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Result.java new file mode 100644 index 000000000000..1154499ebb2d --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Result.java @@ -0,0 +1,80 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import lombok.AccessLevel; +import lombok.EqualsAndHashCode; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.ToString; + +/** + * Result indicates the parsing result of an expression. + */ +@RequiredArgsConstructor(access = AccessLevel.PRIVATE) +@EqualsAndHashCode +@ToString +@Getter +public class Result { + + /** + * fail is a static factory method that builds a failed result from a {@link Throwable}. + * + * @param throwable the cause used to build the failed result. + * @return failed result. + */ + public static Result fail(final Throwable throwable) { + return new Result(false, throwable.getMessage(), SampleFamily.EMPTY); + } + + /** + * fail is a static factory method that builds a failed result from an error message. + * + * @param message the error details explaining why the result failed. + * @return failed result. + */ + public static Result fail(String message) { + return new Result(false, message, SampleFamily.EMPTY); + } + + /** + * fail is a static factory method that builds a failed result without error details. + * + * @return failed result. + */ + public static Result fail() { + return new Result(false, null, SampleFamily.EMPTY); + } + + /** + * success is a static factory method that builds a successful result. + * + * @param sf the parsed result. + * @return successful result. 
+ */ + public static Result success(SampleFamily sf) { + return new Result(true, null, sf); + } + + private final boolean success; + + private final String error; + + private final SampleFamily data; +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Sample.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Sample.java new file mode 100644 index 000000000000..f22d837f77e6 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/Sample.java @@ -0,0 +1,60 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import com.google.common.collect.ImmutableMap; +import io.vavr.Function2; +import io.vavr.Tuple2; +import java.time.Duration; +import java.util.function.Function; +import lombok.Builder; +import lombok.EqualsAndHashCode; +import lombok.Getter; +import lombok.ToString; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.counter.CounterWindow; + +/** + * Sample represents a metric data point within a range of time. 
+ */ +@Builder(toBuilder = true) +@EqualsAndHashCode +@ToString +@Getter +public class Sample { + final String name; + final ImmutableMap<String, String> labels; + final double value; + final long timestamp; + + Sample newValue(Function<Double, Double> transform) { + return toBuilder().value(transform.apply(value)).build(); + } + + Sample increase(String range, String metricName, Function2<Double, Long, Double> transform) { + Tuple2<Long, Double> i = CounterWindow.INSTANCE.increase(metricName, labels, value, Duration.parse(range).toMillis(), timestamp); + double nv = transform.apply(i._2, i._1); + return newValue(ignored -> nv); + } + + Sample increase(String metricName, Function2<Double, Long, Double> transform) { + Tuple2<Long, Double> i = CounterWindow.INSTANCE.pop(metricName, labels, value, timestamp); + double nv = transform.apply(i._2, i._1); + return newValue(ignored -> nv); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamily.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamily.java new file mode 100644 index 000000000000..fa392f9265d7 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamily.java @@ -0,0 +1,829 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import static java.util.function.UnaryOperator.identity; +import static java.util.stream.Collectors.groupingBy; +import static java.util.stream.Collectors.mapping; +import static java.util.stream.Collectors.toList; +import static com.google.common.collect.ImmutableMap.toImmutableMap; + +import org.apache.commons.lang3.StringUtils; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity.EndpointEntityDescription; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity.EntityDescription; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity.InstanceEntityDescription; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity.ProcessEntityDescription; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity.ProcessRelationEntityDescription; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity.ServiceEntityDescription; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity.ServiceRelationEntityDescription; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.tagOpt.K8sRetagType; +import org.apache.skywalking.oap.server.core.Const; +import org.apache.skywalking.oap.server.core.UnexpectedException; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; +import org.apache.skywalking.oap.server.core.source.DetectPoint; +import java.util.Arrays; +import java.util.Collections; +import 
java.util.Comparator; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.function.DoubleBinaryOperator; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import com.google.common.base.Preconditions; +import com.google.common.base.Strings; +import com.google.common.collect.ImmutableMap; +import com.google.common.collect.Maps; +import io.vavr.Function2; +import io.vavr.Function3; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyFunctions.DecorateFunction; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyFunctions.ForEachFunction; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyFunctions.PropertiesExtractor; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyFunctions.SampleFilter; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyFunctions.TagFunction; +import lombok.AccessLevel; +import lombok.Builder; +import lombok.EqualsAndHashCode; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.Setter; +import lombok.ToString; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.server.library.util.StringUtil; + +/** + * SampleFamily represents a collection of {@link Sample}. + */ +@RequiredArgsConstructor(access = AccessLevel.PRIVATE) +@EqualsAndHashCode +@ToString +@Slf4j +public class SampleFamily { + public static final SampleFamily EMPTY = new SampleFamily(new Sample[0], RunningContext.EMPTY); + + static SampleFamily build(RunningContext ctx, Sample... 
samples) { + Preconditions.checkNotNull(samples); + Preconditions.checkArgument(samples.length > 0); + samples = Arrays.stream(samples).filter(sample -> !Double.isNaN(sample.getValue())).toArray(Sample[]::new); + if (samples.length == 0) { + return EMPTY; + } + return new SampleFamily(samples, Optional.ofNullable(ctx).orElseGet(RunningContext::instance)); + } + + public final Sample[] samples; + + public final RunningContext context; + + /** + * Following operations are used in DSL + */ + + /* tag filter operations*/ + public SampleFamily tagEqual(String... labels) { + return match(labels, InternalOps::stringComp); + } + + public SampleFamily tagNotEqual(String[] labels) { + return match(labels, (sv, lv) -> !InternalOps.stringComp(sv, lv)); + } + + public SampleFamily tagMatch(String[] labels) { + return match(labels, String::matches); + } + + public SampleFamily tagNotMatch(String[] labels) { + return match(labels, (sv, lv) -> !sv.matches(lv)); + } + + /* value filter operations*/ + public SampleFamily valueEqual(double compValue) { + return valueMatch(CompType.EQUAL, compValue, InternalOps::doubleComp); + } + + public SampleFamily valueNotEqual(double compValue) { + return valueMatch(CompType.NOT_EQUAL, compValue, InternalOps::doubleComp); + } + + public SampleFamily valueGreater(double compValue) { + return valueMatch(CompType.GREATER, compValue, InternalOps::doubleComp); + } + + public SampleFamily valueGreaterEqual(double compValue) { + return valueMatch(CompType.GREATER_EQUAL, compValue, InternalOps::doubleComp); + } + + public SampleFamily valueLess(double compValue) { + return valueMatch(CompType.LESS, compValue, InternalOps::doubleComp); + } + + public SampleFamily valueLessEqual(double compValue) { + return valueMatch(CompType.LESS_EQUAL, compValue, InternalOps::doubleComp); + } + + /* Binary operator overloading*/ + public SampleFamily plus(Number number) { + return newValue(v -> v + number.doubleValue()); + } + + public SampleFamily minus(Number number) 
{ + return newValue(v -> v - number.doubleValue()); + } + + public SampleFamily multiply(Number number) { + return newValue(v -> v * number.doubleValue()); + } + + public SampleFamily div(Number number) { + return newValue(v -> v / number.doubleValue()); + } + + public SampleFamily negative() { + return newValue(v -> -v); + } + + public SampleFamily plus(SampleFamily another) { + if (this == EMPTY && another == EMPTY) { + return SampleFamily.EMPTY; + } + if (this == EMPTY) { + return another; + } + if (another == EMPTY) { + return this; + } + return newValue(another, Double::sum); + } + + public SampleFamily minus(SampleFamily another) { + if (this == EMPTY && another == EMPTY) { + return SampleFamily.EMPTY; + } + if (this == EMPTY) { + return another.negative(); + } + if (another == EMPTY) { + return this; + } + return newValue(another, (a, b) -> a - b); + } + + public SampleFamily multiply(SampleFamily another) { + if (this == EMPTY || another == EMPTY) { + return SampleFamily.EMPTY; + } + return newValue(another, (a, b) -> a * b); + } + + public SampleFamily div(SampleFamily another) { + if (this == EMPTY) { + return SampleFamily.EMPTY; + } + if (another == EMPTY) { + return div(0.0); + } + return newValue(another, (a, b) -> a / b); + } + + /* Aggregation operators */ + public SampleFamily sum(List<String> by) { + return aggregate(by, Double::sum); + } + + public SampleFamily max(List<String> by) { + return aggregate(by, Double::max); + } + + public SampleFamily min(List<String> by) { + return aggregate(by, Double::min); + } + + public SampleFamily avg(List<String> by) { + if (this == EMPTY) { + return EMPTY; + } + if (by == null) { + double result = Arrays.stream(samples).mapToDouble(Sample::getValue).average().orElse(0.0D); + return SampleFamily.build( + this.context, InternalOps.newSample(samples[0].name, ImmutableMap.of(), samples[0].timestamp, result)); + } + + return SampleFamily.build( + this.context, + Arrays.stream(samples) + .collect(groupingBy(it -> 
InternalOps.getLabels(by, it), mapping(identity(), toList()))) + .entrySet().stream() + .map(entry -> InternalOps.newSample( + entry.getValue().get(0).getName(), + entry.getKey(), + entry.getValue().get(0).getTimestamp(), + entry.getValue().stream().mapToDouble(Sample::getValue).average().orElse(0.0D) + )) + .toArray(Sample[]::new) + ); + } + + public SampleFamily count(List<String> by) { + if (this == EMPTY) { + return EMPTY; + } + if (by == null) { + long result = Arrays.stream(samples).count(); + return SampleFamily.build( + this.context, InternalOps.newSample(samples[0].name, ImmutableMap.of(), samples[0].timestamp, result)); + } + + if (by.size() == 1) { + Set<String> set = Arrays + .stream(samples) + .map(sample -> sample.labels.get(by.get(0))) + .filter(StringUtils::isNotBlank) + .collect(Collectors.toSet()); + + return SampleFamily.build( + this.context, InternalOps.newSample(samples[0].name, ImmutableMap.of(), samples[0].timestamp, set.size())); + } + + Stream<Map.Entry<ImmutableMap<String, String>, List<Sample>>> stream = Arrays + .stream(samples) + .filter(sample -> sample.labels.keySet().containsAll(by)) + .collect(groupingBy(it -> InternalOps.getLabels(by, it))) + .entrySet() + .stream() + .map(entry -> InternalOps.newSample( + entry.getValue().get(0).getName(), + entry.getKey(), + entry.getValue().get(0).getTimestamp(), + entry.getValue().size())) + .collect(groupingBy(it -> InternalOps.groupByExcludedLabel(by.get(by.size() - 1), it), mapping(identity(), toList()))) + .entrySet() + .stream(); + + Sample[] array = stream + .map(entry -> InternalOps.newSample( + entry.getValue().get(0).getName(), + entry.getKey(), + entry.getValue().get(0).getTimestamp(), + entry.getValue().size() + )) + .toArray(Sample[]::new); + + SampleFamily sampleFamily = SampleFamily.build( + this.context, + array + ); + return sampleFamily; + } + + protected SampleFamily aggregate(List<String> by, DoubleBinaryOperator aggregator) { + if (this == EMPTY) { + return EMPTY; + } + if 
(by == null) { + double result = Arrays.stream(samples).mapToDouble(s -> s.value).reduce(aggregator).orElse(0.0D); + return SampleFamily.build( + this.context, InternalOps.newSample(samples[0].name, ImmutableMap.of(), samples[0].timestamp, result)); + } + return SampleFamily.build( + this.context, + Arrays.stream(samples) + .collect(groupingBy(it -> InternalOps.getLabels(by, it), mapping(identity(), toList()))) + .entrySet().stream() + .map(entry -> InternalOps.newSample( + entry.getValue().get(0).getName(), + entry.getKey(), + entry.getValue().get(0).getTimestamp(), + entry.getValue().stream().mapToDouble(Sample::getValue).reduce(aggregator).orElse(0.0D) + )) + .toArray(Sample[]::new) + ); + } + + /* Function */ + public SampleFamily increase(String range) { + Preconditions.checkArgument(!Strings.isNullOrEmpty(range)); + if (this == EMPTY) { + return EMPTY; + } + return SampleFamily.build( + this.context, + Arrays.stream(samples) + .map(sample -> sample.increase( + range, + context.metricName, + (lowerBoundValue, unused) -> sample.value - lowerBoundValue + )) + .toArray(Sample[]::new) + ); + } + + public SampleFamily rate(String range) { + Preconditions.checkArgument(!Strings.isNullOrEmpty(range)); + if (this == EMPTY) { + return EMPTY; + } + return SampleFamily.build( + this.context, + Arrays.stream(samples) + .map(sample -> sample.increase( + range, + context.metricName, + (lowerBoundValue, lowerBoundTime) -> { + final long timeDiff = (sample.timestamp - lowerBoundTime) / 1000; + return timeDiff < 1L ? 0.0 : (sample.value - lowerBoundValue) / timeDiff; + } + )) + .toArray(Sample[]::new) + ); + } + + public SampleFamily irate() { + if (this == EMPTY) { + return EMPTY; + } + return SampleFamily.build( + this.context, + Arrays.stream(samples) + .map(sample -> sample.increase( + context.metricName, + (lowerBoundValue, lowerBoundTime) -> { + final long timeDiff = (sample.timestamp - lowerBoundTime) / 1000; + return timeDiff < 1L ? 
0.0 : (sample.value - lowerBoundValue) / timeDiff; + } + )) + .toArray(Sample[]::new) + ); + } + + @SuppressWarnings(value = "unchecked") + public SampleFamily tag(TagFunction fn) { + if (this == EMPTY) { + return EMPTY; + } + return SampleFamily.build( + this.context, + Arrays.stream(samples) + .map(sample -> { + Map<String, String> arg = Maps.newHashMap(sample.labels); + Map<String, String> r = fn.apply(arg); + return sample.toBuilder() + .labels( + ImmutableMap.copyOf( + Optional.ofNullable(r).orElse(arg))) + .build(); + }).toArray(Sample[]::new) + ); + } + + public SampleFamily filter(SampleFilter filter) { + if (this == EMPTY) { + return EMPTY; + } + final Sample[] filtered = Arrays.stream(samples) + .filter(it -> filter.test(it.labels)) + .toArray(Sample[]::new); + if (filtered.length == 0) { + return EMPTY; + } + return SampleFamily.build(context, filtered); + } + + /* k8s retags*/ + public SampleFamily retagByK8sMeta(String newLabelName, + K8sRetagType type, + String existingLabelName, + String namespaceLabelName) { + Preconditions.checkArgument(!Strings.isNullOrEmpty(newLabelName)); + Preconditions.checkArgument(!Strings.isNullOrEmpty(existingLabelName)); + Preconditions.checkArgument(!Strings.isNullOrEmpty(namespaceLabelName)); + if (this == EMPTY) { + return EMPTY; + } + + return SampleFamily.build( + this.context, type.execute(samples, newLabelName, existingLabelName, namespaceLabelName)); + } + + public SampleFamily histogram() { + return histogram("le", this.context.defaultHistogramBucketUnit); + } + + public SampleFamily histogram(String le) { + return histogram(le, this.context.defaultHistogramBucketUnit); + } + + public SampleFamily histogram(String le, TimeUnit unit) { + long scale = unit.toMillis(1); + Preconditions.checkArgument(scale > 0); + if (this == EMPTY) { + return EMPTY; + } + return SampleFamily.build( + this.context, + Stream.concat( + Arrays.stream(samples).filter(s -> !s.labels.containsKey(le)), + Arrays.stream(samples) + .filter(s 
-> s.labels.containsKey(le)) + .sorted(Comparator.comparingDouble(s -> Double.parseDouble(s.labels.get(le)))) + .map(s -> { + double r = s.value; + ImmutableMap<String, String> ll = ImmutableMap.<String, String>builder() + .putAll(Maps.filterKeys(s.labels, + key -> !Objects.equals( + key, le) + )) + .put( + "le", + String.valueOf((long) ((Double.parseDouble(s.labels.get(le))) * scale))) + .build(); + return InternalOps.newSample(s.name, ll, s.timestamp, r); + }) + ).toArray(Sample[]::new) + ); + } + + public SampleFamily histogram_percentile(List<Integer> percentiles) { + Preconditions.checkArgument(percentiles.size() > 0); + return this; + } + + public SampleFamily service(List<String> labelKeys, Layer layer) { + Preconditions.checkArgument(labelKeys.size() > 0); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new ServiceEntityDescription(labelKeys, layer, Const.POINT)); + } + + public SampleFamily service(List<String> labelKeys, String delimiter, Layer layer) { + Preconditions.checkArgument(labelKeys.size() > 0); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new ServiceEntityDescription(labelKeys, layer, delimiter)); + } + + public SampleFamily instance(List<String> serviceKeys, String serviceDelimiter, + List<String> instanceKeys, String instanceDelimiter, + Layer layer, PropertiesExtractor propertiesExtractor) { + Preconditions.checkArgument(serviceKeys.size() > 0); + Preconditions.checkArgument(instanceKeys.size() > 0); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new InstanceEntityDescription( + serviceKeys, instanceKeys, layer, serviceDelimiter, instanceDelimiter, propertiesExtractor)); + } + + public SampleFamily instance(List<String> serviceKeys, List<String> instanceKeys, Layer layer) { + return instance(serviceKeys, Const.POINT, instanceKeys, Const.POINT, layer, (PropertiesExtractor) null); + } + + public SampleFamily endpoint(List<String> serviceKeys, List<String> endpointKeys, 
String delimiter, Layer layer) { + Preconditions.checkArgument(serviceKeys.size() > 0); + Preconditions.checkArgument(endpointKeys.size() > 0); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new EndpointEntityDescription(serviceKeys, endpointKeys, layer, delimiter)); + } + + public SampleFamily endpoint(List<String> serviceKeys, List<String> endpointKeys, Layer layer) { + return endpoint(serviceKeys, endpointKeys, Const.POINT, layer); + } + + public SampleFamily process(List<String> serviceKeys, List<String> serviceInstanceKeys, List<String> processKeys, String layerKey) { + Preconditions.checkArgument(serviceKeys.size() > 0); + Preconditions.checkArgument(serviceInstanceKeys.size() > 0); + Preconditions.checkArgument(processKeys.size() > 0); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new ProcessEntityDescription(serviceKeys, serviceInstanceKeys, processKeys, layerKey, Const.POINT)); + } + + public SampleFamily serviceRelation(DetectPoint detectPoint, List<String> sourceServiceKeys, List<String> destServiceKeys, Layer layer) { + Preconditions.checkArgument(sourceServiceKeys.size() > 0); + Preconditions.checkArgument(destServiceKeys.size() > 0); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new ServiceRelationEntityDescription(sourceServiceKeys, destServiceKeys, detectPoint, layer, Const.POINT, null)); + } + + public SampleFamily serviceRelation(DetectPoint detectPoint, List<String> sourceServiceKeys, List<String> destServiceKeys, String delimiter, Layer layer, String componentIdKey) { + Preconditions.checkArgument(sourceServiceKeys.size() > 0); + Preconditions.checkArgument(destServiceKeys.size() > 0); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new ServiceRelationEntityDescription(sourceServiceKeys, destServiceKeys, detectPoint, layer, delimiter, componentIdKey)); + } + + public SampleFamily forEach(List<String> array, ForEachFunction each) { + if (this == 
EMPTY) { + return EMPTY; + } + return SampleFamily.build(this.context, Arrays.stream(this.samples).map(sample -> { + Map<String, String> labels = Maps.newHashMap(sample.getLabels()); + for (String element : array) { + each.accept(element, labels); + } + return sample.toBuilder().labels(ImmutableMap.copyOf(labels)).build(); + }).toArray(Sample[]::new)); + } + + public SampleFamily processRelation(String detectPointKey, List<String> serviceKeys, List<String> instanceKeys, String sourceProcessIdKey, String destProcessIdKey, String componentKey) { + Preconditions.checkArgument(serviceKeys.size() > 0); + Preconditions.checkArgument(instanceKeys.size() > 0); + Preconditions.checkArgument(StringUtil.isNotEmpty(sourceProcessIdKey)); + Preconditions.checkArgument(StringUtil.isNotEmpty(destProcessIdKey)); + if (this == EMPTY) { + return EMPTY; + } + return createMeterSamples(new ProcessRelationEntityDescription(serviceKeys, instanceKeys, sourceProcessIdKey, destProcessIdKey, detectPointKey, componentKey, Const.POINT)); + } + + private SampleFamily createMeterSamples(EntityDescription entityDescription) { + Map<MeterEntity, Sample[]> meterSamples = new HashMap<>(); + Arrays.stream(samples) + .collect(groupingBy(it -> InternalOps.getLabels(entityDescription.getLabelKeys(), it), + mapping(identity(), toList()) + )) + .forEach((labels, samples) -> { + MeterEntity meterEntity = InternalOps.buildMeterEntity(samples, entityDescription); + meterSamples.put( + meterEntity, InternalOps.left(samples, entityDescription.getLabelKeys())); + }); + + this.context.setMeterSamples(meterSamples); + // These samples are the originals; the grouped samples are kept in the context, keyed by MeterEntity. + return SampleFamily.build(this.context, samples); + } + + private SampleFamily match(String[] labels, Function2<String, String, Boolean> op) { + Preconditions.checkArgument(labels.length % 2 == 0); + Map<String, String> ll = new HashMap<>(labels.length / 2); + for (int i = 0; i < labels.length; i += 2) { + 
ll.put(labels[i], labels[i + 1]); + } + Sample[] ss = Arrays.stream(samples) + .filter(sample -> ll.entrySet() + .stream() + .allMatch( + entry -> op.apply(sample.labels.getOrDefault(entry.getKey(), ""), + entry.getValue() + ))) + .toArray(Sample[]::new); + return ss.length > 0 ? SampleFamily.build(this.context, ss) : EMPTY; + } + + private SampleFamily valueMatch(CompType compType, + double compValue, + Function3<CompType, Double, Double, Boolean> op) { + Sample[] ss = Arrays.stream(samples) + .filter(sample -> op.apply(compType, sample.value, compValue)).toArray(Sample[]::new); + return ss.length > 0 ? SampleFamily.build(this.context, ss) : EMPTY; + } + + SampleFamily newValue(Function<Double, Double> transform) { + if (this == EMPTY) { + return EMPTY; + } + Sample[] ss = new Sample[samples.length]; + for (int i = 0; i < ss.length; i++) { + ss[i] = samples[i].newValue(transform); + } + return SampleFamily.build(this.context, ss); + } + + private SampleFamily newValue(SampleFamily another, Function2<Double, Double, Double> transform) { + Sample[] ss = Arrays.stream(samples) + .flatMap(cs -> io.vavr.collection.Stream.of(another.samples) + .find(as -> cs.labels.equals(as.labels)) + .map(as -> cs.toBuilder() + .value(transform.apply(cs.value, + as.value + ))) + .map(Sample.SampleBuilder::build) + .toJavaStream()) + .toArray(Sample[]::new); + return ss.length > 0 ? SampleFamily.build(this.context, ss) : EMPTY; + } + + public SampleFamily downsampling(final DownsamplingType type) { + return this; + } + + public SampleFamily decorate(DecorateFunction c) { + if (this == EMPTY) { + return EMPTY; + } + this.context.getMeterSamples().keySet().forEach(meterEntity -> { + if (meterEntity.getScopeType().equals(ScopeType.SERVICE)) { + c.accept(meterEntity); + } + }); + return this; + } + + /** + * The parsing context holds key results more than sample collection. 
+ */ + @ToString + @EqualsAndHashCode(exclude = "metricName") + @Getter + @Setter + @Builder + public static class RunningContext { + + static RunningContext EMPTY = instance(); + + static RunningContext instance() { + return RunningContext.builder() + .defaultHistogramBucketUnit(TimeUnit.SECONDS) + .build(); + } + + private String metricName; + + @Builder.Default + private Map<MeterEntity, Sample[]> meterSamples = new HashMap<>(); + + private TimeUnit defaultHistogramBucketUnit; + } + + private static class InternalOps { + + private static Sample[] left(List<Sample> samples, List<String> labelKeys) { + return samples.stream().map(s -> { + ImmutableMap<String, String> ll = ImmutableMap.<String, String>builder() + .putAll(Maps.filterKeys(s.labels, + key -> !labelKeys.contains(key) + )) + .build(); + return s.toBuilder().labels(ll).build(); + }).toArray(Sample[]::new); + } + + private static String dim(List<Sample> samples, List<String> labelKeys, String delimiter) { + String name = labelKeys.stream() + .map(k -> samples.get(0).labels.getOrDefault(k, "")) + .filter(v -> !StringUtil.isEmpty(v)) + .collect(Collectors.joining(StringUtil.isEmpty(delimiter) ? 
Const.POINT : delimiter)); + return name; + } + + private static MeterEntity buildMeterEntity(List<Sample> samples, + EntityDescription entityDescription) { + switch (entityDescription.getScopeType()) { + case SERVICE: + ServiceEntityDescription serviceEntityDescription = (ServiceEntityDescription) entityDescription; + return MeterEntity.newService( + InternalOps.dim(samples, serviceEntityDescription.getServiceKeys(), serviceEntityDescription.getDelimiter()), + serviceEntityDescription.getLayer() + ); + case SERVICE_INSTANCE: + InstanceEntityDescription instanceEntityDescription = (InstanceEntityDescription) entityDescription; + Map<String, String> properties = null; + if (instanceEntityDescription.getPropertiesExtractor() != null) { + properties = instanceEntityDescription.getPropertiesExtractor().apply(samples.get(0).labels); + } + return MeterEntity.newServiceInstance( + InternalOps.dim(samples, instanceEntityDescription.getServiceKeys(), instanceEntityDescription.getServiceDelimiter()), + InternalOps.dim(samples, instanceEntityDescription.getInstanceKeys(), instanceEntityDescription.getInstanceDelimiter()), + instanceEntityDescription.getLayer(), + properties + ); + case ENDPOINT: + EndpointEntityDescription endpointEntityDescription = (EndpointEntityDescription) entityDescription; + return MeterEntity.newEndpoint( + InternalOps.dim(samples, endpointEntityDescription.getServiceKeys(), endpointEntityDescription.getDelimiter()), + InternalOps.dim(samples, endpointEntityDescription.getEndpointKeys(), endpointEntityDescription.getDelimiter()), + endpointEntityDescription.getLayer() + ); + case PROCESS: + final ProcessEntityDescription processEntityDescription = (ProcessEntityDescription) entityDescription; + return MeterEntity.newProcess( + InternalOps.dim(samples, processEntityDescription.getServiceKeys(), processEntityDescription.getDelimiter()), + InternalOps.dim(samples, processEntityDescription.getServiceInstanceKeys(), 
processEntityDescription.getDelimiter()), + InternalOps.dim(samples, processEntityDescription.getProcessKeys(), processEntityDescription.getDelimiter()), + InternalOps.dim(samples, List.of(processEntityDescription.getLayerKey()), processEntityDescription.getDelimiter()) + ); + case SERVICE_RELATION: + ServiceRelationEntityDescription serviceRelationEntityDescription = (ServiceRelationEntityDescription) entityDescription; + final String serviceRelationComponentValue = InternalOps.dim(samples, + Collections.singletonList(serviceRelationEntityDescription.getComponentIdKey()), serviceRelationEntityDescription.getDelimiter()); + int serviceRelationComponentId = StringUtil.isNotEmpty(serviceRelationComponentValue) ? Integer.parseInt(serviceRelationComponentValue) : 0; + return MeterEntity.newServiceRelation( + InternalOps.dim(samples, serviceRelationEntityDescription.getSourceServiceKeys(), serviceRelationEntityDescription.getDelimiter()), + InternalOps.dim(samples, serviceRelationEntityDescription.getDestServiceKeys(), serviceRelationEntityDescription.getDelimiter()), + serviceRelationEntityDescription.getDetectPoint(), serviceRelationEntityDescription.getLayer(), serviceRelationComponentId + ); + case PROCESS_RELATION: + final ProcessRelationEntityDescription processRelationEntityDescription = (ProcessRelationEntityDescription) entityDescription; + final String detectPointValue = InternalOps.dim(samples, Collections.singletonList(processRelationEntityDescription.getDetectPointKey()), processRelationEntityDescription.getDelimiter()); + DetectPoint point = StringUtils.equalsAnyIgnoreCase(detectPointValue, "server") ? DetectPoint.SERVER : DetectPoint.CLIENT; + final String componentValue = InternalOps.dim(samples, Collections.singletonList(processRelationEntityDescription.getComponentKey()), processRelationEntityDescription.getDelimiter()); + final int componentId = StringUtil.isNotEmpty(componentValue) ? 
Integer.parseInt(componentValue) : 0; + return MeterEntity.newProcessRelation( + InternalOps.dim(samples, processRelationEntityDescription.getServiceKeys(), processRelationEntityDescription.getDelimiter()), + InternalOps.dim(samples, processRelationEntityDescription.getInstanceKeys(), processRelationEntityDescription.getDelimiter()), + InternalOps.dim(samples, Collections.singletonList(processRelationEntityDescription.getSourceProcessIdKey()), processRelationEntityDescription.getDelimiter()), + InternalOps.dim(samples, Collections.singletonList(processRelationEntityDescription.getDestProcessIdKey()), processRelationEntityDescription.getDelimiter()), + componentId, + point + ); + default: + throw new UnexpectedException( + "Unexpected scope type of entityDescription " + entityDescription); + } + } + + private static Sample newSample(String name, + ImmutableMap<String, String> labels, + long timestamp, + double newValue) { + return Sample.builder() + .value(newValue) + .labels(labels) + .timestamp(timestamp) + .name(name) + .build(); + } + + private static boolean stringComp(String a, String b) { + if (Strings.isNullOrEmpty(a) && Strings.isNullOrEmpty(b)) { + return true; + } + if (Strings.isNullOrEmpty(a)) { + return false; + } + return a.equals(b); + } + + private static boolean doubleComp(CompType compType, double a, double b) { + int result = Double.compare(a, b); + switch (compType) { + case EQUAL: + return result == 0; + case NOT_EQUAL: + return result != 0; + case GREATER: + return result == 1; + case GREATER_EQUAL: + return result == 0 || result == 1; + case LESS: + return result == -1; + case LESS_EQUAL: + return result == 0 || result == -1; + } + + return false; + } + + private static ImmutableMap<String, String> getLabels(final List<String> labelKeys, final Sample sample) { + return labelKeys.stream() + .collect(toImmutableMap( + Function.identity(), + labelKey -> sample.labels.getOrDefault(labelKey, "") + )); + } + + private static ImmutableMap<String, 
String> groupByExcludedLabel(final String excludedLabelKey, final Sample sample) { + return sample + .labels + .entrySet() + .stream() + .filter(v -> !v.getKey().equals(excludedLabelKey)) + .collect(toImmutableMap(Map.Entry::getKey, Map.Entry::getValue)); + } + } + + private enum CompType { + EQUAL, NOT_EQUAL, LESS, LESS_EQUAL, GREATER, GREATER_EQUAL + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamilyBuilder.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamilyBuilder.java new file mode 100644 index 000000000000..bf5853bca08b --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamilyBuilder.java @@ -0,0 +1,51 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import java.util.concurrent.TimeUnit; + +/** + * Help to build the {@link SampleFamily}. 
+ */ +public class SampleFamilyBuilder { + private final Sample[] samples; + private final SampleFamily.RunningContext context; + + SampleFamilyBuilder(Sample[] samples, SampleFamily.RunningContext context) { + this.samples = samples; + this.context = context; + } + + public static SampleFamilyBuilder newBuilder(Sample... samples) { + return new SampleFamilyBuilder(samples, SampleFamily.RunningContext.instance()); + } + + public SampleFamilyBuilder defaultHistogramBucketUnit(TimeUnit unit) { + this.context.setDefaultHistogramBucketUnit(unit); + return this; + } + + /** + * Build Sample Family + */ + public SampleFamily build() { + return SampleFamily.build(this.context, this.samples); + } + +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamilyFunctions.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamilyFunctions.java new file mode 100644 index 000000000000..6c6d240270e1 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/SampleFamilyFunctions.java @@ -0,0 +1,70 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import java.util.Map; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.function.Predicate; +import org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity; + +/** + * Functional interfaces used as parameters in {@link SampleFamily} methods. + */ +public final class SampleFamilyFunctions { + + private SampleFamilyFunctions() { + } + + /** + * Receives a mutable label map and returns the (possibly modified) map. + */ + @FunctionalInterface + public interface TagFunction extends Function<Map<String, String>, Map<String, String>> { + } + + /** + * Tests whether a sample's labels match the filter criteria. + */ + @FunctionalInterface + public interface SampleFilter extends Predicate<Map<String, String>> { + } + + /** + * Called for each element in the array with the element value and a mutable labels map. + */ + @FunctionalInterface + public interface ForEachFunction { + void accept(String element, Map<String, String> tags); + } + + /** + * Decorates service meter entities. + */ + @FunctionalInterface + public interface DecorateFunction extends Consumer<MeterEntity> { + } + + /** + * Extracts instance properties from sample labels. + */ + @FunctionalInterface + public interface PropertiesExtractor extends Function<Map<String, String>, Map<String, String>> { + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/counter/CounterWindow.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/counter/CounterWindow.java new file mode 100644 index 000000000000..bf64b7d68bac --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/counter/CounterWindow.java @@ -0,0 +1,88 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.counter; + +import com.google.common.collect.ImmutableMap; +import io.vavr.Tuple; +import io.vavr.Tuple2; +import java.util.Map; +import java.util.PriorityQueue; +import java.util.Queue; +import java.util.concurrent.ConcurrentHashMap; +import lombok.AccessLevel; +import lombok.EqualsAndHashCode; +import lombok.RequiredArgsConstructor; +import lombok.ToString; + +/** + * CounterWindow stores a series of counter samples in order to calculate the increase + * or instant rate of increase. 
+ * + */ +@RequiredArgsConstructor(access = AccessLevel.PRIVATE) +@ToString +@EqualsAndHashCode +public class CounterWindow { + + public static final CounterWindow INSTANCE = new CounterWindow(); + + private final Map<ID, Tuple2<Long, Double>> lastElementMap = new ConcurrentHashMap<>(); + private final Map<ID, Queue<Tuple2<Long, Double>>> windows = new ConcurrentHashMap<>(); + + public Tuple2<Long, Double> increase(String name, ImmutableMap<String, String> labels, Double value, long windowSize, long now) { + ID id = new ID(name, labels); + Queue<Tuple2<Long, Double>> window = windows.computeIfAbsent(id, unused -> new PriorityQueue<>()); + synchronized (window) { + window.offer(Tuple.of(now, value)); + long waterLevel = now - windowSize; + Tuple2<Long, Double> peek = window.peek(); + if (peek._1 > waterLevel) { + return peek; + } + + Tuple2<Long, Double> result = peek; + while (peek._1 < waterLevel) { + result = window.poll(); + peek = window.element(); + } + + // Choose the closest slot to the expected timestamp + if (waterLevel - result._1 <= peek._1 - waterLevel) { + return result; + } + + return peek; + } + } + + public Tuple2<Long, Double> pop(String name, ImmutableMap<String, String> labels, Double value, long now) { + ID id = new ID(name, labels); + + Tuple2<Long, Double> element = Tuple.of(now, value); + Tuple2<Long, Double> result = lastElementMap.put(id, element); + if (result == null) { + return element; + } + return result; + } + + public void reset() { + windows.clear(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/counter/ID.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/counter/ID.java new file mode 100644 index 000000000000..b9bb2340857b --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/counter/ID.java @@ -0,0 +1,34 @@ +/* + * Licensed to the Apache Software
Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.counter; + +import com.google.common.collect.ImmutableMap; +import lombok.EqualsAndHashCode; +import lombok.RequiredArgsConstructor; +import lombok.ToString; + +@RequiredArgsConstructor +@EqualsAndHashCode +@ToString +class ID { + + private final String name; + + private final ImmutableMap<String, String> labels; +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/EndpointEntityDescription.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/EndpointEntityDescription.java new file mode 100644 index 000000000000..0b80b092814e --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/EndpointEntityDescription.java @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity; + +import java.util.List; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; + +@Getter +@RequiredArgsConstructor +@ToString +public class EndpointEntityDescription implements EntityDescription { + private final ScopeType scopeType = ScopeType.ENDPOINT; + private final List<String> serviceKeys; + private final List<String> endpointKeys; + private final Layer layer; + private final String delimiter; + + @Override + public List<String> getLabelKeys() { + return Stream.concat(this.serviceKeys.stream(), this.endpointKeys.stream()).collect(Collectors.toList()); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/EntityDescription.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/EntityDescription.java new file mode 100644 index 000000000000..b8820a24d7a8 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/EntityDescription.java @@ -0,0 +1,28 @@ +/* + * Licensed to the Apache 
Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity; + +import java.util.List; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; + +public interface EntityDescription { + ScopeType getScopeType(); + + List<String> getLabelKeys(); +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/InstanceEntityDescription.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/InstanceEntityDescription.java new file mode 100644 index 000000000000..42d59c9bced8 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/InstanceEntityDescription.java @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity; + +import java.util.List; +import java.util.Map; +import java.util.function.Function; +import java.util.stream.Collectors; +import java.util.stream.Stream; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; + +@Getter +@RequiredArgsConstructor +@ToString +public class InstanceEntityDescription implements EntityDescription { + private final ScopeType scopeType = ScopeType.SERVICE_INSTANCE; + private final List<String> serviceKeys; + private final List<String> instanceKeys; + private final Layer layer; + private final String serviceDelimiter; + private final String instanceDelimiter; + private final Function<Map<String, String>, Map<String, String>> propertiesExtractor; + + @Override + public List<String> getLabelKeys() { + return Stream.concat(this.serviceKeys.stream(), this.instanceKeys.stream()).collect(Collectors.toList()); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ProcessEntityDescription.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ProcessEntityDescription.java new file mode 100644 index 000000000000..1911fdd0ac56 --- /dev/null +++ 
b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ProcessEntityDescription.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity; + +import com.google.common.collect.ImmutableList; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; + +import java.util.List; + +@Getter +@RequiredArgsConstructor +@ToString +public class ProcessEntityDescription implements EntityDescription { + private final ScopeType scopeType = ScopeType.PROCESS; + private final List<String> serviceKeys; + private final List<String> serviceInstanceKeys; + private final List<String> processKeys; + private final String layerKey; + private final String delimiter; + + @Override + public List<String> getLabelKeys() { + return ImmutableList.<String>builder() + .addAll(serviceKeys) + .addAll(serviceInstanceKeys) + .addAll(processKeys) + .add(layerKey) + .build(); + } +} \ No newline at end of file diff --git 
a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ProcessRelationEntityDescription.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ProcessRelationEntityDescription.java new file mode 100644 index 000000000000..d433ef824aca --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ProcessRelationEntityDescription.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity; + +import com.google.common.collect.ImmutableList; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; + +import java.util.List; + +@Getter +@RequiredArgsConstructor +@ToString +public class ProcessRelationEntityDescription implements EntityDescription { + private final ScopeType scopeType = ScopeType.PROCESS_RELATION; + private final List<String> serviceKeys; + private final List<String> instanceKeys; + private final String sourceProcessIdKey; + private final String destProcessIdKey; + private final String detectPointKey; + private final String componentKey; + private final String delimiter; + + @Override + public List<String> getLabelKeys() { + return ImmutableList.<String>builder() + .addAll(serviceKeys) + .addAll(instanceKeys) + .add(detectPointKey, sourceProcessIdKey, destProcessIdKey, componentKey).build(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ServiceEntityDescription.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ServiceEntityDescription.java new file mode 100644 index 000000000000..ccfcc3199c96 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ServiceEntityDescription.java @@ -0,0 +1,41 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity; + +import java.util.List; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; + +@Getter +@RequiredArgsConstructor +@ToString +public class ServiceEntityDescription implements EntityDescription { + private final ScopeType scopeType = ScopeType.SERVICE; + private final List<String> serviceKeys; + private final Layer layer; + private final String delimiter; + + @Override + public List<String> getLabelKeys() { + return serviceKeys; + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ServiceRelationEntityDescription.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ServiceRelationEntityDescription.java new file mode 100644 index 000000000000..be7e82f20557 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/entity/ServiceRelationEntityDescription.java @@ -0,0 +1,53 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.entity; + +import java.util.List; +import com.google.common.collect.ImmutableList; +import lombok.Getter; +import lombok.RequiredArgsConstructor; +import lombok.ToString; +import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType; +import org.apache.skywalking.oap.server.core.source.DetectPoint; +import org.apache.skywalking.oap.server.library.util.StringUtil; + +@Getter +@RequiredArgsConstructor +@ToString +public class ServiceRelationEntityDescription implements EntityDescription { + private final ScopeType scopeType = ScopeType.SERVICE_RELATION; + private final List<String> sourceServiceKeys; + private final List<String> destServiceKeys; + private final DetectPoint detectPoint; + private final Layer layer; + private final String delimiter; + private final String componentIdKey; + + @Override + public List<String> getLabelKeys() { + final ImmutableList.Builder<String> builder = ImmutableList.<String>builder() + .addAll(this.sourceServiceKeys) + .addAll(this.destServiceKeys); + if (StringUtil.isNotEmpty(componentIdKey)) { + builder.add(componentIdKey); + } + return builder.build(); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/registry/ProcessRegistry.java 
b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/registry/ProcessRegistry.java new file mode 100644 index 000000000000..9a0cd6f95952 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/registry/ProcessRegistry.java @@ -0,0 +1,86 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.registry; + +import org.apache.commons.lang3.StringUtils; +import org.apache.skywalking.library.kubernetes.ObjectID; +import org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry; +import org.apache.skywalking.oap.server.core.Const; +import org.apache.skywalking.oap.server.core.analysis.DownSampling; +import org.apache.skywalking.oap.server.core.analysis.IDManager; +import org.apache.skywalking.oap.server.core.analysis.TimeBucket; +import org.apache.skywalking.oap.server.core.analysis.manual.process.ProcessDetectType; +import org.apache.skywalking.oap.server.core.analysis.manual.process.ProcessTraffic; +import org.apache.skywalking.oap.server.core.analysis.worker.MetricsStreamProcessor; + +/** + * The dynamic entity registry for {@link ProcessTraffic}. + */ +public class ProcessRegistry { + + public static final String LOCAL_VIRTUAL_PROCESS = "UNKNOWN_LOCAL"; + public static final String REMOTE_VIRTUAL_PROCESS = "UNKNOWN_REMOTE"; + + /** + * Generate a virtual local process under the given instance. + * @return the process id + */ + public static String generateVirtualLocalProcess(String service, String instance) { + return generateVirtualProcess(service, instance, LOCAL_VIRTUAL_PROCESS); + } + + /** + * Generate a virtual remote process under the given instance, + * resolving the process name from Kubernetes metadata via the remote address when possible. + * @return the process id + */ + public static String generateVirtualRemoteProcess(String service, String instance, String remoteAddress) { + // strip the port from the remote address + String ip = StringUtils.substringBeforeLast(remoteAddress, ":"); + + // look up the remote side through Kubernetes metadata + ObjectID metadata = K8sInfoRegistry.getInstance().findPodByIP(ip); + if (metadata == ObjectID.EMPTY) { + metadata = K8sInfoRegistry.getInstance().findServiceByIP(ip); + } + String name = metadata.toString(); + // if nothing matches, fall back to the unknown-remote name + if (StringUtils.isBlank(name)) { + name =
REMOTE_VIRTUAL_PROCESS; + } + + return generateVirtualProcess(service, instance, name); + } + + public static String generateVirtualProcess(String service, String instance, String processName) { + final ProcessTraffic traffic = new ProcessTraffic(); + final String serviceId = IDManager.ServiceID.buildId(service, true); + traffic.setServiceId(serviceId); + traffic.setInstanceId(IDManager.ServiceInstanceID.buildId(serviceId, instance)); + traffic.setName(processName); + traffic.setAgentId(Const.EMPTY_STRING); + traffic.setLabelsJson(Const.EMPTY_STRING); + traffic.setDetectType(ProcessDetectType.VIRTUAL.value()); + final long timeBucket = TimeBucket.getTimeBucket(System.currentTimeMillis(), DownSampling.Minute); + traffic.setTimeBucket(timeBucket); + traffic.setLastPingTimestamp(timeBucket); + MetricsStreamProcessor.getInstance().in(traffic); + return traffic.id().build(); + } +} \ No newline at end of file diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/tagOpt/K8sRetagType.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/tagOpt/K8sRetagType.java new file mode 100644 index 000000000000..95218ba1893e --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/tagOpt/K8sRetagType.java @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.tagOpt; + +import com.google.common.base.Strings; +import com.google.common.collect.ImmutableMap; +import com.google.common.collect.Maps; +import java.util.Arrays; +import java.util.Map; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry; + +public enum K8sRetagType implements Retag { + Pod2Service { + @Override + public Sample[] execute(final Sample[] ss, + final String newLabelName, + final String existingLabelName, + final String namespaceLabelName) { + return Arrays.stream(ss).map(sample -> { + String podName = sample.getLabels().get(existingLabelName); + String namespace = sample.getLabels().get(namespaceLabelName); + if (!Strings.isNullOrEmpty(podName) && !Strings.isNullOrEmpty(namespace)) { + String serviceName = K8sInfoRegistry.getInstance().findServiceName(namespace, podName); + if (Strings.isNullOrEmpty(serviceName)) { + serviceName = BLANK; + } + Map<String, String> labels = Maps.newHashMap(sample.getLabels()); + labels.put(newLabelName, serviceName); + return sample.toBuilder().labels(ImmutableMap.copyOf(labels)).build(); + } + return sample; + }).toArray(Sample[]::new); + } + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/tagOpt/Retag.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/tagOpt/Retag.java new file mode 100644 index 000000000000..3a84faf822bb --- /dev/null 
+++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/tagOpt/Retag.java @@ -0,0 +1,27 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.tagOpt; + +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; + +public interface Retag { + String BLANK = ""; + + Sample[] execute(Sample[] ss, String newLabelName, String existingLabelName, String namespaceLabelName); +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/k8s/K8sInfoRegistry.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/k8s/K8sInfoRegistry.java new file mode 100644 index 000000000000..6fce51feb5b8 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/k8s/K8sInfoRegistry.java @@ -0,0 +1,161 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.k8s; + +import com.google.common.cache.CacheBuilder; +import com.google.common.cache.CacheLoader; +import com.google.common.cache.LoadingCache; +import io.fabric8.kubernetes.api.model.Pod; +import io.fabric8.kubernetes.api.model.Service; +import lombok.SneakyThrows; +import org.apache.skywalking.library.kubernetes.KubernetesPods; +import org.apache.skywalking.library.kubernetes.KubernetesServices; +import org.apache.skywalking.library.kubernetes.ObjectID; + +import java.time.Duration; +import java.util.Collection; +import java.util.Map; +import java.util.Objects; +import java.util.Optional; + +import static java.util.Objects.requireNonNull; + +public class K8sInfoRegistry { + private final static K8sInfoRegistry INSTANCE = new K8sInfoRegistry(); + private final LoadingCache<ObjectID /* Pod */, ObjectID /* Service */> podServiceMap; + private final LoadingCache<String/* podIP */, ObjectID /* Pod */> ipPodMap; + private final LoadingCache<String/* serviceIP */, ObjectID /* Service */> ipServiceMap; + + private K8sInfoRegistry() { + ipPodMap = CacheBuilder.newBuilder() + .expireAfterWrite(Duration.ofMinutes(3)) + .build(CacheLoader.from(ip -> KubernetesPods.INSTANCE + .findByIP(ip) + .map(it -> ObjectID + .builder() + .name(it.getMetadata().getName()) + .namespace(it.getMetadata().getNamespace()) + .build()) + 
.orElse(ObjectID.EMPTY))); + + ipServiceMap = CacheBuilder.newBuilder() + .expireAfterWrite(Duration.ofMinutes(3)) + .build(CacheLoader.from(ip -> KubernetesServices.INSTANCE + .list() + .stream() + .filter(it -> it.getSpec() != null) + .filter(it -> it.getStatus() != null) + .filter(it -> it.getMetadata() != null) + .filter(it -> (it.getSpec().getClusterIPs() != null && + it.getSpec().getClusterIPs().stream() + .anyMatch(clusterIP -> Objects.equals(clusterIP, ip))) + || (it.getStatus().getLoadBalancer() != null && + it.getStatus().getLoadBalancer().getIngress() != null && + it.getStatus().getLoadBalancer().getIngress().stream() + .anyMatch(ingress -> Objects.equals(ingress.getIp(), ip)))) + .map(it -> ObjectID + .builder() + .name(it.getMetadata().getName()) + .namespace(it.getMetadata().getNamespace()) + .build()) + .findFirst() + .orElse(ObjectID.EMPTY))); + + podServiceMap = CacheBuilder.newBuilder() + .expireAfterWrite(Duration.ofMinutes(3)) + .build(CacheLoader.from(podObjectID -> { + final Optional<Pod> pod = KubernetesPods.INSTANCE + .findByObjectID( + ObjectID + .builder() + .name(podObjectID.name()) + .namespace(podObjectID.namespace()) + .build()); + + if (!pod.isPresent() + || pod.get().getMetadata() == null + || pod.get().getMetadata().getLabels() == null) { + return ObjectID.EMPTY; + } + + final Optional<Service> service = KubernetesServices.INSTANCE + .list() + .stream() + .filter(it -> it.getMetadata() != null) + .filter(it -> Objects.equals(it.getMetadata().getNamespace(), pod.get().getMetadata().getNamespace())) + .filter(it -> it.getSpec() != null) + .filter(it -> requireNonNull(it.getSpec()).getSelector() != null) + .filter(it -> !it.getSpec().getSelector().isEmpty()) + .filter(it -> { + final Map<String, String> labels = pod.get().getMetadata().getLabels(); + final Map<String, String> selector = it.getSpec().getSelector(); + return hasIntersection(selector.entrySet(), labels.entrySet()); + }) + .findFirst(); + if (!service.isPresent()) { + 
return ObjectID.EMPTY; + } + return ObjectID + .builder() + .name(service.get().getMetadata().getName()) + .namespace(service.get().getMetadata().getNamespace()) + .build(); + })); + } + + public static K8sInfoRegistry getInstance() { + return INSTANCE; + } + + @SneakyThrows + public String findServiceName(String namespace, String podName) { + return findService(namespace, podName).toString(); + } + + @SneakyThrows + public ObjectID findService(String namespace, String podName) { + return this.podServiceMap.get( + ObjectID + .builder() + .name(podName) + .namespace(namespace) + .build()); + } + + @SneakyThrows + public ObjectID findPodByIP(String ip) { + return this.ipPodMap.get(ip); + } + + @SneakyThrows + public ObjectID findServiceByIP(String ip) { + return this.ipServiceMap.get(ip); + } + + // Note: despite its name, this returns true only when every element of o is contained in c + // (i.e. o is a subset of c), which is exactly the match a Service selector needs against Pod labels. + private boolean hasIntersection(Collection<?> o, Collection<?> c) { + Objects.requireNonNull(o); + Objects.requireNonNull(c); + for (final Object value : o) { + if (!c.contains(value)) { + return false; + } + } + return true; + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/PrometheusMetricConverter.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/PrometheusMetricConverter.java new file mode 100644 index 000000000000..d63b4efb9a06 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/PrometheusMetricConverter.java @@ -0,0 +1,152 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.prometheus; + +import com.google.common.cache.CacheBuilder; +import com.google.common.cache.CacheLoader; +import com.google.common.cache.LoadingCache; +import com.google.common.collect.ImmutableMap; +import io.vavr.Tuple; +import io.vavr.Tuple2; +import java.util.Collections; +import java.util.Optional; +import java.util.concurrent.ExecutionException; +import java.util.regex.Pattern; +import java.util.stream.Stream; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder; +import org.apache.skywalking.oap.server.library.util.prometheus.metrics.Counter; +import org.apache.skywalking.oap.server.library.util.prometheus.metrics.Gauge; +import org.apache.skywalking.oap.server.library.util.prometheus.metrics.Histogram; +import org.apache.skywalking.oap.server.library.util.prometheus.metrics.Metric; +import org.apache.skywalking.oap.server.library.util.prometheus.metrics.Summary; + +import static com.google.common.collect.ImmutableMap.toImmutableMap; +import static io.vavr.API.$; +import static io.vavr.API.Case; +import static io.vavr.API.Match; +import static io.vavr.Predicates.instanceOf; +import static java.util.stream.Collectors.toList; +import static org.apache.skywalking.oap.meter.analyzer.v2.Analyzer.NIL; + +/** + * PrometheusMetricConverter converts Prometheus metrics to meter-system metrics, then stores them in the backend storage. + */ +@Slf4j +public class PrometheusMetricConverter { + private static final Pattern METRICS_NAME_ESCAPE_PATTERN = Pattern.compile("[/.]"); + + private static final LoadingCache<String, String> ESCAPED_METRICS_NAME_CACHE = + CacheBuilder.newBuilder() + .maximumSize(1000) + .build(new CacheLoader<String, String>() { + @Override + public String load(final String name) { + return METRICS_NAME_ESCAPE_PATTERN.matcher(name).replaceAll("_"); + } + }); + + public static ImmutableMap<String, SampleFamily> convertPromMetricToSampleFamily(Stream<Metric> metricStream) { + return metricStream + .peek(metric -> log.debug("Prom metric to be converted to SampleFamily: {}", metric)) + .flatMap(PrometheusMetricConverter::convertMetric) + .filter(t -> t != NIL && t._2.samples.length > 0) + .peek(t -> log.debug("SampleFamily: {}", t)) + .collect(toImmutableMap(Tuple2::_1, Tuple2::_2, (a, b) -> { + log.debug("merge {} {}", a, b); + Sample[] m = new Sample[a.samples.length + b.samples.length]; + System.arraycopy(a.samples, 0, m, 0, a.samples.length); + System.arraycopy(b.samples, 0, m, a.samples.length, b.samples.length); + return SampleFamilyBuilder.newBuilder(m).build(); + })); + } + + private static Stream<Tuple2<String, SampleFamily>> convertMetric(Metric metric) { + return Match(metric).of( + Case($(instanceOf(Histogram.class)), t -> Stream.of( + Tuple.of(escapedName(metric.getName() + "_count"), SampleFamilyBuilder.newBuilder(Sample.builder().name(escapedName(metric.getName() + "_count")) + .timestamp(metric.getTimestamp()).labels(ImmutableMap.copyOf(metric.getLabels())).value(((Histogram) metric).getSampleCount()).build()).build()), + Tuple.of(escapedName(metric.getName() + "_sum"), SampleFamilyBuilder.newBuilder(Sample.builder().name(escapedName(metric.getName() + "_sum")) + .timestamp(metric.getTimestamp()).labels(ImmutableMap.copyOf(metric.getLabels())).value(((Histogram) metric).getSampleSum()).build()).build()), + 
convertToSample(metric).orElse(NIL))), + Case($(instanceOf(Summary.class)), t -> Stream.of( + Tuple.of(escapedName(metric.getName() + "_count"), SampleFamilyBuilder.newBuilder(Sample.builder().name(escapedName(metric.getName() + "_count")) + .timestamp(metric.getTimestamp()).labels(ImmutableMap.copyOf(metric.getLabels())).value(((Summary) metric).getSampleCount()).build()).build()), + Tuple.of(escapedName(metric.getName() + "_sum"), SampleFamilyBuilder.newBuilder(Sample.builder().name(escapedName(metric.getName() + "_sum")) + .timestamp(metric.getTimestamp()).labels(ImmutableMap.copyOf(metric.getLabels())).value(((Summary) metric).getSampleSum()).build()).build()), + convertToSample(metric).orElse(NIL))), + Case($(), t -> Stream.of(convertToSample(metric).orElse(NIL))) + ); + } + + private static Optional<Tuple2<String, SampleFamily>> convertToSample(Metric metric) { + Sample[] ss = Match(metric).of( + Case($(instanceOf(Counter.class)), t -> Collections.singletonList(Sample.builder() + .name(escapedName(t.getName())) + .labels(ImmutableMap.copyOf(t.getLabels())) + .timestamp(t.getTimestamp()) + .value(t.getValue()) + .build())), + Case($(instanceOf(Gauge.class)), t -> Collections.singletonList(Sample.builder() + .name(escapedName(t.getName())) + .labels(ImmutableMap.copyOf(t.getLabels())) + .timestamp(t.getTimestamp()) + .value(t.getValue()) + .build())), + Case($(instanceOf(Histogram.class)), t -> t.getBuckets() + .entrySet().stream() + .map(b -> Sample.builder() + .name(escapedName(t.getName())) + .labels(ImmutableMap.<String, String>builder() + .putAll(t.getLabels()) + .put("le", b.getKey().toString()) + .build()) + .timestamp(t.getTimestamp()) + .value(b.getValue()) + .build()).collect(toList())), + Case($(instanceOf(Summary.class)), + t -> t.getQuantiles().entrySet().stream() + .map(b -> Sample.builder() + .name(escapedName(t.getName())) + .labels(ImmutableMap.<String, String>builder() + .putAll(t.getLabels()) + .put("quantile", b.getKey().toString()) + 
.build()) + .timestamp(t.getTimestamp()) + .value(b.getValue()) + .build()).collect(toList())) + ).toArray(new Sample[0]); + if (ss.length < 1) { + return Optional.empty(); + } + return Optional.of(Tuple.of(escapedName(metric.getName()), SampleFamilyBuilder.newBuilder(ss).build())); + } + + // Returns the escaped name of the given one, with "." and "/" replaced by "_" + protected static String escapedName(final String name) { + try { + return ESCAPED_METRICS_NAME_CACHE.get(name); + } catch (ExecutionException e) { + log.error("Failed to get escaped metrics name from cache", e); + return METRICS_NAME_ESCAPE_PATTERN.matcher(name).replaceAll("_"); + } + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/MetricsRule.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/MetricsRule.java new file mode 100644 index 000000000000..0a10499f9cda --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/MetricsRule.java @@ -0,0 +1,37 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule; + +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.NoArgsConstructor; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricRuleConfig; + +/** + * MetricsRule holds the parsing expression. + */ +@Data +@Builder +@NoArgsConstructor +@AllArgsConstructor +public class MetricsRule implements MetricRuleConfig.RuleConfig { + private String name; + private String exp; +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/Rule.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/Rule.java new file mode 100644 index 000000000000..930b2e7718a0 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/Rule.java @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule; + +import lombok.Data; +import lombok.NoArgsConstructor; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricRuleConfig; + +import java.util.List; + +/** + * Rule contains the global configuration of prometheus fetcher. + */ +@Data +@NoArgsConstructor +public class Rule implements MetricRuleConfig { + private String name; + private String metricPrefix; + private String expSuffix; + private String expPrefix; + private String filter; + private List<MetricsRule> metricsRules; + + @Override + public String getSourceName() { + return name; + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/Rules.java b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/Rules.java new file mode 100644 index 000000000000..c179e774a70b --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/v2/prometheus/rule/Rules.java @@ -0,0 +1,120 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule; + +import java.io.File; + +import java.io.FileReader; +import java.io.IOException; +import java.io.Reader; + +import java.nio.file.FileSystems; +import java.nio.file.Files; +import java.nio.file.Path; + +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.stream.Collectors; + +import java.util.stream.Stream; + +import org.apache.skywalking.oap.server.core.UnexpectedException; +import org.apache.skywalking.oap.server.library.util.ResourceUtils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.yaml.snakeyaml.Yaml; + +/** + * Rules is a factory that instantiates {@link Rule}s from local rule files. + */ +public class Rules { + private static final Logger LOG = LoggerFactory.getLogger(Rules.class); + + public static List<Rule> loadRules(final String path) throws IOException { + return loadRules(path, Collections.emptyList()); + } + + public static List<Rule> loadRules(final String path, List<String> enabledRules) throws IOException { + + final Path root = ResourceUtils.getPath(path); + + Map<String, Boolean> formedEnabledRules = enabledRules + .stream() + .map(rule -> { + rule = rule.trim(); + if (rule.startsWith("/")) { + rule = rule.substring(1); + } + if (!rule.endsWith(".yaml") && !rule.endsWith(".yml")) { + return rule + "{.yaml,.yml}"; + } + return rule; + }) + .collect(Collectors.toMap(rule -> rule, $ -> false)); + List<Rule> rules; + try (Stream<Path> stream = Files.walk(root)) { + rules = stream + .filter(it -> formedEnabledRules.keySet().stream() + .anyMatch(rule -> { + boolean matches = FileSystems.getDefault().getPathMatcher("glob:" + rule) + .matches(root.relativize(it)); + if (matches) { + formedEnabledRules.put(rule, true); + } + return matches; + })) + .map(pathPointer -> { + // Use relativized file path without suffix as the rule name. 
+ String relativizePath = root.relativize(pathPointer).toString(); + String ruleName = relativizePath.substring(0, relativizePath.lastIndexOf(".")); + return getRulesFromFile(ruleName, pathPointer); + }) + .filter(Objects::nonNull) + .collect(Collectors.toList()); + } + + if (formedEnabledRules.containsValue(false)) { + List<String> rulesNotFound = formedEnabledRules.keySet().stream() + .filter(rule -> !formedEnabledRules.get(rule)) + .collect(Collectors.toList()); + throw new UnexpectedException("Some configuration files of enabled rules are not found, enabled rules: " + rulesNotFound); + } + return rules; + } + + private static Rule getRulesFromFile(String ruleName, Path path) { + File file = path.toFile(); + if (!file.isFile() || file.isHidden()) { + return null; + } + try (Reader r = new FileReader(file)) { + Rule rule = new Yaml().loadAs(r, Rule.class); + if (rule == null) { + return null; + } + rule.setName(ruleName); + return rule; + } catch (IOException e) { + throw new UnexpectedException("Load rule file " + file.getName() + " failed", e); + } + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClassGeneratorTest.java b/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClassGeneratorTest.java new file mode 100644 index 000000000000..a850977a010f --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALClassGeneratorTest.java @@ -0,0 +1,453 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler; + +import javassist.ClassPool; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; + +class MALClassGeneratorTest { + + private MALClassGenerator generator; + + @BeforeEach + void setUp() { + generator = new MALClassGenerator(new ClassPool(true)); + } + + @Test + void compileSimpleMetric() throws Exception { + final MalExpression expr = generator.compile( + "test_metric", "instance_jvm_cpu"); + assertNotNull(expr); + // Run returns SampleFamily.EMPTY since no samples are provided + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileMethodChain() throws Exception { + final MalExpression expr = generator.compile( + "test_sum", + "instance_jvm_cpu.sum(['service', 'instance'])"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileArithmeticAdd() throws Exception { + final MalExpression expr = generator.compile( + "test_add", "metric_a + metric_b"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileNumberTimesMetric() throws Exception { + final MalExpression expr = generator.compile( + "test_mul", "100 * process_cpu_seconds_total"); + assertNotNull(expr); + 
assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileParenChainExpr() throws Exception { + final MalExpression expr = generator.compile( + "test_paren", + "(process_cpu_seconds_total * 100).sum(['service', 'instance']).rate('PT1M')"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileWithEnumRef() throws Exception { + final MalExpression expr = generator.compile( + "test_enum", + "instance_jvm_cpu.sum(['service']).service(['service'], Layer.GENERAL)"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileWithDownsamplingType() throws Exception { + final MalExpression expr = generator.compile( + "test_ds", + "instance_jvm_cpu.sum(['service']).downsampling(SUM)"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileWithClosureTag() throws Exception { + final MalExpression expr = generator.compile( + "test_closure", + "instance_jvm_cpu.tag({tags -> tags.service_name = 'svc1'})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void generateSourceReturnsJavaCode() { + final String source = generator.generateSource( + "instance_jvm_cpu.sum(['service'])"); + assertNotNull(source); + // Generated source should contain getOrDefault for the metric + org.junit.jupiter.api.Assertions.assertTrue( + source.contains("getOrDefault")); + } + + @Test + void filterSafeNavCompiles() throws Exception { + final String source = generator.generateFilterSource( + "{ tags -> tags.job_name == 'aws-cloud-eks-monitoring'" + + " && tags.Service?.trim() }"); + assertNotNull(source); + assertTrue(source.contains("trim"), "Generated source should contain trim()"); + assertNotNull(generator.compileFilter( + "{ tags -> tags.job_name == 'aws-cloud-eks-monitoring'" + + " && tags.Service?.trim() }")); + } + + @Test + void compileValueEqual() throws Exception { + final MalExpression expr = generator.compile( 
+ "test_value_equal", + "kube_node_status_condition.valueEqual(1).sum(['cluster','node','condition'])"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void compileMethodCallMultiply() throws Exception { + final MalExpression expr = generator.compile( + "test_multiply", + "process_cpu_usage.multiply(100)"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + // ==================== Error handling tests ==================== + + @Test + void emptyExpressionThrows() { + // Demo error: MAL expression parsing failed: 1:0 mismatched input '<EOF>' + // expecting {IDENTIFIER, NUMBER, '(', '-'} + assertThrows(Exception.class, () -> generator.compile("test", "")); + } + + @Test + void malformedExpressionThrows() { + // Demo error: MAL expression parsing failed: 1:7 token recognition error at: '@' + assertThrows(Exception.class, + () -> generator.compile("test", "metric.@invalid")); + } + + @Test + void unclosedParenthesisThrows() { + // Demo error: MAL expression parsing failed: 1:8 mismatched input '<EOF>' + // expecting {')', '+', '-', '*', '/'} + assertThrows(Exception.class, + () -> generator.compile("test", "(metric1 ")); + } + + @Test + void invalidFilterClosureThrows() { + // Demo error: MAL filter parsing failed: 1:0 mismatched input 'invalid' + // expecting '{' + assertThrows(Exception.class, + () -> generator.compileFilter("invalid filter")); + } + + @Test + void emptyFilterBodyThrows() { + // Demo error: MAL filter parsing failed: 1:1 mismatched input '}' + // expecting {IDENTIFIER, ...} + assertThrows(Exception.class, + () -> generator.compileFilter("{ }")); + } + + // ==================== Closure key extraction tests ==================== + + @Test + void tagClosurePutsCorrectKey() throws Exception { + // Issue: tags.cluster = expr should generate tags.put("cluster", ...) + // NOT tags.put("tags.cluster", ...) 
+ final MalExpression expr = generator.compile( + "test_key", + "metric.tag({tags -> tags.cluster = 'activemq::' + tags.cluster})"); + assertNotNull(expr); + final String source = generator.generateSource( + "metric.tag({tags -> tags.cluster = 'activemq::' + tags.cluster})"); + assertTrue(source.contains("this._tag"), + "Generated source should reference pre-compiled closure"); + } + + @Test + void tagClosureKeyExtractionViaGeneratedCode() throws Exception { + // Verify the closure generates correct put("cluster", ...) not put("tags.cluster", ...) + final MalExpression expr = generator.compile( + "test_key_gen", + "metric.tag({tags -> tags.service_name = 'svc1'})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void tagClosureBracketAssignment() throws Exception { + // tags['key_name'] = 'value' should also use correct key + final MalExpression expr = generator.compile( + "test_bracket", + "metric.tag({tags -> tags['my_key'] = 'my_value'})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + // ==================== forEach closure tests ==================== + + @Test + void forEachClosureCompiles() throws Exception { + // forEach requires ForEachFunction.accept(String, Map), not TagFunction.apply(Map) + final MalExpression expr = generator.compile( + "test_foreach", + "metric.forEach(['client', 'server'], {prefix, tags ->" + + " tags[prefix + '_name'] = 'value'})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void forEachClosureWithBareReturn() throws Exception { + // forEach with bare return (void method) — should not throw + final MalExpression expr = generator.compile( + "test_foreach_return", + "metric.forEach(['x'], {prefix, tags ->\n" + + " if (tags[prefix + '_id'] != null) {\n" + + " return\n" + + " }\n" + + " tags[prefix + '_id'] = 'default'\n" + + "})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void 
forEachClosureWithVarDeclAndElseIf() throws Exception { + // Full pattern from network-profiling.yaml second closure + final MalExpression expr = generator.compile( + "test_foreach_vars", + "metric.forEach(['component'], {key, tags ->\n" + + " String result = \"\"\n" + + " String protocol = tags['protocol']\n" + + " String ssl = tags['is_ssl']\n" + + " if (protocol == 'http' && ssl == 'true') {\n" + + " result = '129'\n" + + " } else if (protocol == 'http') {\n" + + " result = '49'\n" + + " } else if (ssl == 'true') {\n" + + " result = '130'\n" + + " } else {\n" + + " result = '110'\n" + + " }\n" + + " tags[key] = result\n" + + "})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + // ==================== ProcessRegistry FQCN resolution tests ==================== + + @Test + void processRegistryResolvedToFQCN() throws Exception { + // ProcessRegistry.generateVirtualLocalProcess() should resolve to FQCN + final MalExpression expr = generator.compile( + "test_registry", + "metric.forEach(['client'], {prefix, tags ->\n" + + " tags[prefix + '_process_id'] = " + + "ProcessRegistry.generateVirtualLocalProcess(tags.service, tags.instance)\n" + + "})"); + assertNotNull(expr); + // We can't easily execute this (needs ProcessRegistry runtime) but compile should succeed + } + + // ==================== Network-profiling full expression tests ==================== + + @Test + void networkProfilingFirstClosureCompiles() throws Exception { + // Full first closure from network-profiling.yaml expPrefix + final MalExpression expr = generator.compile( + "test_np1", + "metric.forEach(['client', 'server'], { prefix, tags ->\n" + + " if (tags[prefix + '_process_id'] != null) {\n" + + " return\n" + + " }\n" + + " if (tags[prefix + '_local'] == 'true') {\n" + + " tags[prefix + '_process_id'] = ProcessRegistry" + + ".generateVirtualLocalProcess(tags.service, tags.instance)\n" + + " return\n" + + " }\n" + + " tags[prefix + '_process_id'] = ProcessRegistry" + + 
".generateVirtualRemoteProcess(tags.service, tags.instance," + + " tags[prefix + '_address'])\n" + + " })"); + assertNotNull(expr); + } + + @Test + void networkProfilingSecondClosureCompiles() throws Exception { + // Full second closure from network-profiling.yaml expPrefix + final MalExpression expr = generator.compile( + "test_np2", + "metric.forEach(['component'], { key, tags ->\n" + + " String result = \"\"\n" + + " // protocol are defined in the component-libraries.yml\n" + + " String protocol = tags['protocol']\n" + + " String ssl = tags['is_ssl']\n" + + " if (protocol == 'http' && ssl == 'true') {\n" + + " result = '129'\n" + + " } else if (protocol == 'http') {\n" + + " result = '49'\n" + + " } else if (ssl == 'true') {\n" + + " result = '130'\n" + + " } else {\n" + + " result = '110'\n" + + " }\n" + + " tags[key] = result\n" + + " })"); + assertNotNull(expr); + } + + // ==================== String concatenation in closures ==================== + + @Test + void apisixExpressionCompiles() throws Exception { + // The APISIX expression that originally triggered the E2E failure: + // safe navigation + elvis + bracket access + string concat + final MalExpression expr = generator.compile( + "test_apisix", + "metric.tag({tags -> tags.service_name = 'APISIX::'" + + "+(tags['skywalking_service']?.trim()?:'APISIX')})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void closureStringConcatenation() throws Exception { + // APISIX-style: tags.service_name = 'APISIX::' + tags.service + final MalExpression expr = generator.compile( + "test_concat", + "metric.tag({tags -> tags.service_name = 'APISIX::' + tags.service})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void regexMatchWithDefCompiles() throws Exception { + // envoy-ca pattern: def + regex match + ternary with chained indexing + final MalExpression expr = generator.compile( + "test_regex", + "metric.tag({tags ->\n" + + " def 
matcher = (tags.metrics_name =~ /\\.ssl\\.certificate\\.([^.]+)\\.expiration/)\n" + + " tags.secret_name = matcher ? matcher[0][1] : \"unknown\"\n" + + "})"); + assertNotNull(expr); + assertNotNull(expr.run(java.util.Map.of())); + } + + @Test + void envoyCAExpressionCompiles() throws Exception { + // Full envoy-ca.yaml expression with regex closure, subtraction of time(), and service + final MalExpression expr = generator.compile( + "test_envoy_ca", + "(metric.tagMatch('metrics_name', '.*ssl.*expiration_unix_time_seconds')" + + ".tag({tags ->\n" + + " def matcher = (tags.metrics_name =~ /\\.ssl\\.certificate\\.([^.]+)" + + "\\.expiration_unix_time_seconds/)\n" + + " tags.secret_name = matcher ? matcher[0][1] : \"unknown\"\n" + + "}).min(['app', 'secret_name']) - time())" + + ".downsampling(MIN).service(['app'], Layer.MESH_DP)"); + assertNotNull(expr); + } + + @Test + void timeScalarFunctionHandledInMetadata() throws Exception { + // time() should not appear as a sample name and should be treated as scalar + final MalExpression expr = generator.compile( + "test_time", + "(metric.sum(['app']) - time()).service(['app'], Layer.GENERAL)"); + assertNotNull(expr); + assertNotNull(expr.metadata()); + // time() should not be in sample names + assertTrue(expr.metadata().getSamples().contains("metric")); + assertTrue(expr.metadata().getSamples().size() == 1); + } + + @Test + void runMethodHasLocalVariableTable() throws Exception { + // Compile a class that writes its .class file for inspection + final java.io.File tmpDir = java.nio.file.Files.createTempDirectory("mal-lvt").toFile(); + try { + final ClassPool pool = new ClassPool(true); + final MALClassGenerator gen = new MALClassGenerator(pool); + gen.setClassOutputDir(tmpDir); + final MalExpression expr = gen.compile( + "test_lvt", "instance_jvm_cpu.sum(['service', 'instance'])"); + assertNotNull(expr); + // Read the .class file bytecode and verify LVT + final java.io.File[] classFiles = tmpDir.listFiles((d, n) -> 
n.endsWith(".class")); + assertNotNull(classFiles); + assertTrue(classFiles.length > 0, "Should have generated .class file"); + // Use javassist to read back and check for LocalVariableTable + final javassist.bytecode.ClassFile cf; + try (java.io.DataInputStream in = new java.io.DataInputStream( + new java.io.FileInputStream(classFiles[0]))) { + cf = new javassist.bytecode.ClassFile(in); + } + final javassist.bytecode.MethodInfo runMi = cf.getMethod("run"); + assertNotNull(runMi, "Should have run() method"); + final javassist.bytecode.CodeAttribute code = runMi.getCodeAttribute(); + assertNotNull(code, "run() should have CodeAttribute"); + final javassist.bytecode.LocalVariableAttribute lva = + (javassist.bytecode.LocalVariableAttribute) + code.getAttribute(javassist.bytecode.LocalVariableAttribute.tag); + assertNotNull(lva, "run() should have LocalVariableTable attribute"); + // Verify the LVT contains entries named "samples" and "sf" + boolean foundSamples = false; + boolean foundSf = false; + for (int i = 0; i < lva.tableLength(); i++) { + final String name = lva.variableName(i); + if ("samples".equals(name)) { + foundSamples = true; + } + if ("sf".equals(name)) { + foundSf = true; + } + } + assertTrue(foundSamples, "LVT should contain 'samples'"); + assertTrue(foundSf, "LVT should contain 'sf'"); + } finally { + for (final java.io.File f : tmpDir.listFiles()) { + f.delete(); + } + tmpDir.delete(); + } + } + +} diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALScriptParserTest.java b/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALScriptParserTest.java new file mode 100644 index 000000000000..e203c364bed6 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/compiler/MALScriptParserTest.java @@ -0,0 +1,406 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.compiler; + +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.BinaryExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ClosureArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.EnumRefArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ExprArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.MetricExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.NumberExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.NumberListArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.ParenChainExpr; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.StringArgument; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALExpressionModel.StringListArgument; +import org.junit.jupiter.api.Test; + +import java.util.List; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertInstanceOf; +import static org.junit.jupiter.api.Assertions.assertThrows; + +class 
MALScriptParserTest { + + @Test + void parseSimpleMetric() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "instance_golang_heap_alloc"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals("instance_golang_heap_alloc", metric.getMetricName()); + assertEquals(0, metric.getMethodChain().size()); + } + + @Test + void parseMethodChain() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "jvm_memory_bytes_used.sum(['service', 'host_name', 'area'])"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals("jvm_memory_bytes_used", metric.getMetricName()); + assertEquals(1, metric.getMethodChain().size()); + assertEquals("sum", metric.getMethodChain().get(0).getName()); + + final StringListArgument sl = + (StringListArgument) metric.getMethodChain().get(0).getArguments().get(0); + assertEquals(List.of("service", "host_name", "area"), sl.getValues()); + } + + @Test + void parseTagEqualWithRateAndService() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "mysql_global_status_commands_total" + + ".tagEqual('command','insert')" + + ".sum(['service_instance_id','host_name'])" + + ".rate('PT1M')"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals(3, metric.getMethodChain().size()); + assertEquals("tagEqual", metric.getMethodChain().get(0).getName()); + assertEquals("sum", metric.getMethodChain().get(1).getName()); + assertEquals("rate", metric.getMethodChain().get(2).getName()); + + // Check tagEqual arguments + final StringArgument arg0 = + (StringArgument) metric.getMethodChain().get(0).getArguments().get(0); + assertEquals("command", arg0.getValue()); + final StringArgument arg1 = + (StringArgument) metric.getMethodChain().get(0).getArguments().get(1); + assertEquals("insert", arg1.getValue()); + } + + @Test + void parseHistogramPercentile() { + final MALExpressionModel.Expr 
ast = MALScriptParser.parse( + "metric.sum(['le']).histogram().histogram_percentile([50,75,90,95,99])"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals(3, metric.getMethodChain().size()); + assertEquals("histogram", metric.getMethodChain().get(1).getName()); + assertEquals("histogram_percentile", metric.getMethodChain().get(2).getName()); + + final NumberListArgument nl = + (NumberListArgument) metric.getMethodChain().get(2).getArguments().get(0); + assertEquals(List.of(50.0, 75.0, 90.0, 95.0, 99.0), nl.getValues()); + } + + @Test + void parseArithmeticAdd() { + final MALExpressionModel.Expr ast = MALScriptParser.parse("metric1 + metric2"); + assertInstanceOf(BinaryExpr.class, ast); + final BinaryExpr bin = (BinaryExpr) ast; + assertEquals(MALExpressionModel.ArithmeticOp.ADD, bin.getOp()); + assertEquals("metric1", ((MetricExpr) bin.getLeft()).getMetricName()); + assertEquals("metric2", ((MetricExpr) bin.getRight()).getMetricName()); + } + + @Test + void parseArithmeticMultiply() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "(process_cpu_seconds_total * 100).sum(['service', 'host_name']).rate('PT1M')"); + assertInstanceOf(ParenChainExpr.class, ast); + final ParenChainExpr parenChain = (ParenChainExpr) ast; + + // Inner expression is (metric * 100) + assertInstanceOf(BinaryExpr.class, parenChain.getInner()); + final BinaryExpr inner = (BinaryExpr) parenChain.getInner(); + assertEquals(MALExpressionModel.ArithmeticOp.MUL, inner.getOp()); + assertEquals("process_cpu_seconds_total", ((MetricExpr) inner.getLeft()).getMetricName()); + assertEquals(100.0, ((NumberExpr) inner.getRight()).getValue()); + + // Method chain: .sum(['service', 'host_name']).rate('PT1M') + assertEquals(2, parenChain.getMethodChain().size()); + assertEquals("sum", parenChain.getMethodChain().get(0).getName()); + assertEquals("rate", parenChain.getMethodChain().get(1).getName()); + } + + @Test + void 
parseNumberTimesMetric() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "100 * metrics_aggregation_queue_used_percentage" + + ".sum(['service', 'host_name', 'level', 'slot'])"); + assertInstanceOf(BinaryExpr.class, ast); + final BinaryExpr bin = (BinaryExpr) ast; + assertEquals(MALExpressionModel.ArithmeticOp.MUL, bin.getOp()); + assertInstanceOf(NumberExpr.class, bin.getLeft()); + assertEquals(100.0, ((NumberExpr) bin.getLeft()).getValue()); + assertInstanceOf(MetricExpr.class, bin.getRight()); + } + + @Test + void parseEnumRefArgument() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.service(['svc'], Layer.GENERAL)"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + final EnumRefArgument enumRef = + (EnumRefArgument) metric.getMethodChain().get(0).getArguments().get(1); + assertEquals("Layer", enumRef.getEnumType()); + assertEquals("GENERAL", enumRef.getEnumValue()); + } + + @Test + void parseDownsampling() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM)"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals(3, metric.getMethodChain().size()); + assertEquals("downsampling", metric.getMethodChain().get(2).getName()); + // SUM is parsed as an expression argument (identifier) + // In the grammar, it matches additiveExpression -> metric reference + final ExprArgument exprArg = + (ExprArgument) metric.getMethodChain().get(2).getArguments().get(0); + assertInstanceOf(MetricExpr.class, exprArg.getExpr()); + assertEquals("SUM", ((MetricExpr) exprArg.getExpr()).getMetricName()); + } + + @Test + void parseValueEqual() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "kube_node_status_condition.valueEqual(1).sum(['cluster','node','condition'])"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) 
ast; + assertEquals(2, metric.getMethodChain().size()); + assertEquals("valueEqual", metric.getMethodChain().get(0).getName()); + } + + @Test + void parseDivTwoMetrics() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric_sum.div(metric_count)"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals(1, metric.getMethodChain().size()); + assertEquals("div", metric.getMethodChain().get(0).getName()); + final ExprArgument divArg = + (ExprArgument) metric.getMethodChain().get(0).getArguments().get(0); + assertInstanceOf(MetricExpr.class, divArg.getExpr()); + assertEquals("metric_count", ((MetricExpr) divArg.getExpr()).getMetricName()); + } + + @Test + void parseClosureTag() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.tag({tags -> tags.service_name = 'svc1'})"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals(1, metric.getMethodChain().size()); + assertEquals("tag", metric.getMethodChain().get(0).getName()); + + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(0); + assertEquals(List.of("tags"), closure.getParams()); + assertEquals(1, closure.getBody().size()); + } + + @Test + void parseRetagByK8sMeta() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "kube_pod_status_phase" + + ".retagByK8sMeta('service', K8sRetagType.Pod2Service, 'pod', 'namespace')" + + ".tagNotEqual('service', '')" + + ".valueEqual(1)" + + ".sum(['cluster', 'service', 'phase'])"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals(4, metric.getMethodChain().size()); + assertEquals("retagByK8sMeta", metric.getMethodChain().get(0).getName()); + + // Check K8sRetagType.Pod2Service argument + final EnumRefArgument enumArg = + (EnumRefArgument) metric.getMethodChain().get(0).getArguments().get(1); + assertEquals("K8sRetagType", 
enumArg.getEnumType()); + assertEquals("Pod2Service", enumArg.getEnumValue()); + } + + @Test + void parseTagAssignmentExtractsCorrectKey() { + // Issue: tags.cluster = expr should produce key "cluster", not "tags.cluster" + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.tag({tags -> tags.cluster = 'activemq::' + tags.cluster})"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(0); + assertEquals(1, closure.getBody().size()); + + final MALExpressionModel.ClosureAssignment assign = + (MALExpressionModel.ClosureAssignment) closure.getBody().get(0); + assertEquals("tags", assign.getMapVar()); + // Key should be "cluster", not "tags.cluster" + assertInstanceOf(MALExpressionModel.ClosureStringLiteral.class, assign.getKeyExpr()); + assertEquals("cluster", + ((MALExpressionModel.ClosureStringLiteral) assign.getKeyExpr()).getValue()); + } + + @Test + void parseTagBracketAssignment() { + // tags[prefix + '_process_id'] = expr + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.tag({prefix, tags -> tags[prefix + '_id'] = 'val'})"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(0); + assertEquals(List.of("prefix", "tags"), closure.getParams()); + + final MALExpressionModel.ClosureAssignment assign = + (MALExpressionModel.ClosureAssignment) closure.getBody().get(0); + assertEquals("tags", assign.getMapVar()); + // Key is a binary expression (prefix + '_id') + assertInstanceOf(MALExpressionModel.ClosureBinaryExpr.class, assign.getKeyExpr()); + } + + @Test + void parseForEachClosure() { + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.forEach(['client', 'server'], {prefix, tags -> tags.key = prefix})"); + 
assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + assertEquals("forEach", metric.getMethodChain().get(0).getName()); + + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(1); + assertEquals(List.of("prefix", "tags"), closure.getParams()); + } + + @Test + void parseVariableDeclaration() { + // String result = "" — Groovy local variable declaration + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.forEach(['x'], {key, tags ->\n" + + " String result = \"\"\n" + + " tags[key] = result\n" + + "})"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(1); + assertEquals(2, closure.getBody().size()); + // First statement: variable declaration + assertInstanceOf(MALExpressionModel.ClosureVarDecl.class, closure.getBody().get(0)); + final MALExpressionModel.ClosureVarDecl vd = + (MALExpressionModel.ClosureVarDecl) closure.getBody().get(0); + assertEquals("String", vd.getTypeName()); + assertEquals("result", vd.getVarName()); + } + + @Test + void parseBareReturn() { + // return with no expression (void return) + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.forEach(['x'], {prefix, tags ->\n" + + " if (tags[prefix + '_id'] != null) {\n" + + " return\n" + + " }\n" + + " tags[prefix + '_id'] = 'default'\n" + + "})"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(1); + // First statement is if, which contains a bare return + assertInstanceOf(MALExpressionModel.ClosureIfStatement.class, closure.getBody().get(0)); + final MALExpressionModel.ClosureIfStatement ifStmt = + (MALExpressionModel.ClosureIfStatement) closure.getBody().get(0); + 
assertInstanceOf(MALExpressionModel.ClosureReturnStatement.class, + ifStmt.getThenBranch().get(0)); + final MALExpressionModel.ClosureReturnStatement ret = + (MALExpressionModel.ClosureReturnStatement) ifStmt.getThenBranch().get(0); + // Bare return — value should be null + assertEquals(null, ret.getValue()); + } + + @Test + void parseStaticMethodCall() { + // ProcessRegistry.generateVirtualLocalProcess(tags.service, tags.instance) + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.tag({tags -> " + + "tags.pid = ProcessRegistry.generateVirtualLocalProcess(tags.service, tags.instance)" + + "})"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(0); + final MALExpressionModel.ClosureAssignment assign = + (MALExpressionModel.ClosureAssignment) closure.getBody().get(0); + // RHS should be a ClosureMethodChain with target "ProcessRegistry" + assertInstanceOf(MALExpressionModel.ClosureMethodChain.class, assign.getValue()); + final MALExpressionModel.ClosureMethodChain chain = + (MALExpressionModel.ClosureMethodChain) assign.getValue(); + assertEquals("ProcessRegistry", chain.getTarget()); + assertEquals(1, chain.getSegments().size()); + assertInstanceOf(MALExpressionModel.ClosureMethodCallSeg.class, + chain.getSegments().get(0)); + } + + @Test + void parseDefWithRegexMatch() { + // def matcher = (tags.metrics_name =~ /pattern/) + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "metric.tag({tags ->\n" + + " def matcher = (tags.metrics_name =~ /\\.ssl\\.([^.]+)/)\n" + + " tags.secret_name = matcher ? 
matcher[0][1] : \"unknown\"\n" + + "})"); + assertInstanceOf(MetricExpr.class, ast); + final MetricExpr metric = (MetricExpr) ast; + final ClosureArgument closure = + (ClosureArgument) metric.getMethodChain().get(0).getArguments().get(0); + assertEquals(2, closure.getBody().size()); + + // First statement: def variable declaration + assertInstanceOf(MALExpressionModel.ClosureVarDecl.class, closure.getBody().get(0)); + final MALExpressionModel.ClosureVarDecl vd = + (MALExpressionModel.ClosureVarDecl) closure.getBody().get(0); + assertEquals("String[][]", vd.getTypeName()); + assertEquals("matcher", vd.getVarName()); + // Initializer should be a regex match expression + assertInstanceOf(MALExpressionModel.ClosureRegexMatchExpr.class, vd.getInitializer()); + final MALExpressionModel.ClosureRegexMatchExpr rm = + (MALExpressionModel.ClosureRegexMatchExpr) vd.getInitializer(); + assertEquals("\\.ssl\\.([^.]+)", rm.getPattern()); + + // Second statement: ternary with chained indexing + assertInstanceOf(MALExpressionModel.ClosureAssignment.class, closure.getBody().get(1)); + } + + @Test + void parseTimeFunctionCall() { + // (expr - time()).downsampling(MIN) + final MALExpressionModel.Expr ast = MALScriptParser.parse( + "(metric.min(['app']) - time()).downsampling(MIN).service(['app'], Layer.MESH_DP)"); + assertInstanceOf(ParenChainExpr.class, ast); + final ParenChainExpr pce = (ParenChainExpr) ast; + assertInstanceOf(BinaryExpr.class, pce.getInner()); + final BinaryExpr bin = (BinaryExpr) pce.getInner(); + assertEquals(MALExpressionModel.ArithmeticOp.SUB, bin.getOp()); + assertInstanceOf(MALExpressionModel.FunctionCallExpr.class, bin.getRight()); + final MALExpressionModel.FunctionCallExpr timeFn = + (MALExpressionModel.FunctionCallExpr) bin.getRight(); + assertEquals("time", timeFn.getFunctionName()); + assertEquals(0, timeFn.getArguments().size()); + } + + @Test + void parseSyntaxErrorThrows() { + assertThrows(IllegalArgumentException.class, + () -> 
MALScriptParser.parse("metric.sum(")); + } +} diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DSLV2Test.java b/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DSLV2Test.java new file mode 100644 index 000000000000..b1a66e894fd1 --- /dev/null +++ b/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/DSLV2Test.java @@ -0,0 +1,92 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl; + +import com.google.common.collect.ImmutableMap; +import java.util.Map; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; + +class DSLV2Test { + + @Test + void parseCompilesSimpleExpression() { + final Expression expr = DSL.parse("test_metric", "test_metric.sum(['service'])"); + assertNotNull(expr); + } + + @Test + void parseThrowsOnInvalidExpression() { + assertThrows(IllegalStateException.class, + () -> DSL.parse("bad", "??? invalid !!!")); + } + + @Test + void expressionRunWithCompiledExpression() { + final Expression expr = DSL.parse("test_metric", + "test_metric.service(['service'], Layer.GENERAL)"); + + // Run with empty map should return fail (EMPTY) + final Result emptyResult = expr.run(Map.of()); + assertNotNull(emptyResult); + assertFalse(emptyResult.isSuccess()); + } + + @Test + void metadataExtraction() { + final Expression expr = DSL.parse("test_metric", + "test_metric.sum(['service', 'instance']).service(['service'], Layer.GENERAL)"); + + final ExpressionMetadata metadata = expr.parse(); + assertNotNull(metadata); + assertTrue(metadata.getSamples().contains("test_metric")); + assertNotNull(metadata.getScopeType()); + } + + @Test + void filterExpressionWithMalFilter() { + final MalFilter filter = tags -> "svc1".equals(tags.get("service")); + + final Sample sample1 = Sample.builder() + .name("metric") + .labels(ImmutableMap.of("service", "svc1")) + .value(10.0) + .timestamp(System.currentTimeMillis()) + .build(); + final Sample sample2 = Sample.builder() + .name("metric") + .labels(ImmutableMap.of("service", "svc2")) + .value(20.0) + .timestamp(System.currentTimeMillis()) + .build(); + + final 
SampleFamily sf = SampleFamily.build( + SampleFamily.RunningContext.instance(), sample1, sample2); + + final SampleFamily filtered = sf.filter(filter::test); + assertNotNull(filtered); + assertTrue(filtered != SampleFamily.EMPTY); + assertEquals(1, filtered.samples.length); + assertEquals(10.0, filtered.samples[0].getValue()); + } +} diff --git a/oap-server/analyzer/pom.xml b/oap-server/analyzer/pom.xml index 9dca94257fea..8cad4dff5a9d 100644 --- a/oap-server/analyzer/pom.xml +++ b/oap-server/analyzer/pom.xml @@ -30,9 +30,10 @@ <modules> <module>agent-analyzer</module> - <module>log-analyzer</module> - <module>meter-analyzer</module> <module>event-analyzer</module> + <module>meter-analyzer</module> + <module>log-analyzer</module> + <module>hierarchy</module> </modules> <dependencies> diff --git a/oap-server/exporter/pom.xml b/oap-server/exporter/pom.xml index 85dc662d5177..33392941e077 100644 --- a/oap-server/exporter/pom.xml +++ b/oap-server/exporter/pom.xml @@ -47,6 +47,12 @@ <artifactId>grpc-testing</artifactId> <scope>test</scope> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> <build> diff --git a/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterProviderTest.java b/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterProviderTest.java index d2e473004c11..5a17c50ac027 100644 --- a/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterProviderTest.java +++ b/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterProviderTest.java @@ -31,7 +31,7 @@ import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Disabled; import org.junit.jupiter.api.Test; -import org.powermock.reflect.Whitebox; +import 
org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.Iterator; import java.util.ServiceLoader; @@ -97,7 +97,7 @@ public void notifyAfterCompleted() throws ServiceNotProvidedException, ModuleSta doNothing().when(exporter).fetchSubscriptionList(); grpcExporterProvider.setManager(manager); - Whitebox.setInternalState(grpcExporterProvider, "grpcMetricsExporter", exporter); + ReflectUtil.setInternalState(grpcExporterProvider, "grpcMetricsExporter", exporter); grpcExporterProvider.notifyAfterCompleted(); } diff --git a/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterTest.java b/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterTest.java index 770c55465a3e..e7af072752ac 100644 --- a/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterTest.java +++ b/oap-server/exporter/src/test/java/org/apache/skywalking/oap/server/exporter/provider/grpc/GRPCExporterTest.java @@ -46,7 +46,7 @@ import org.junit.jupiter.api.Test; import org.mockito.MockedStatic; import org.mockito.Mockito; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import static org.apache.skywalking.oap.server.core.exporter.ExportEvent.EventType.INCREMENT; import static org.mockito.Mockito.when; @@ -88,8 +88,8 @@ public void setUp() throws Exception { serviceRegistry.addService(service); blockingStub = MetricExportServiceGrpc.newBlockingStub(channel); futureStub = MetricExportServiceGrpc.newStub(channel); - Whitebox.setInternalState(exporter, "blockingStub", blockingStub); - Whitebox.setInternalState(exporter, "exportServiceFutureStub", futureStub); + ReflectUtil.setInternalState(exporter, "blockingStub", blockingStub); + ReflectUtil.setInternalState(exporter, "exportServiceFutureStub", futureStub); defineMockedStatic = Mockito.mockStatic(DefaultScopeDefine.class); 
when(DefaultScopeDefine.inServiceCatalog(1)).thenReturn(true); } @@ -120,7 +120,7 @@ public void export() { exporter.fetchSubscriptionList(); ExportEvent event = new ExportEvent(new MockExporterMetrics(), INCREMENT); exporter.export(event); - List<SubscriptionMetric> subscriptionList = Whitebox.getInternalState(exporter, "subscriptionList"); + List<SubscriptionMetric> subscriptionList = ReflectUtil.getInternalState(exporter, "subscriptionList"); Assertions.assertEquals("mock-metrics", subscriptionList.get(0).getMetricName()); Assertions.assertEquals("int-mock-metrics", subscriptionList.get(1).getMetricName()); Assertions.assertEquals("long-mock-metrics", subscriptionList.get(2).getMetricName()); @@ -138,7 +138,7 @@ public MetricsMetaInfo getMeta() { @Test public void initSubscriptionList() { exporter.fetchSubscriptionList(); - List<SubscriptionMetric> subscriptionList = Whitebox.getInternalState(exporter, "subscriptionList"); + List<SubscriptionMetric> subscriptionList = ReflectUtil.getInternalState(exporter, "subscriptionList"); Assertions.assertEquals("mock-metrics", subscriptionList.get(0).getMetricName()); Assertions.assertEquals("int-mock-metrics", subscriptionList.get(1).getMetricName()); Assertions.assertEquals("long-mock-metrics", subscriptionList.get(2).getMetricName()); diff --git a/oap-server/microbench/pom.xml b/oap-server/microbench/pom.xml deleted file mode 100644 index b720d7e84cdc..000000000000 --- a/oap-server/microbench/pom.xml +++ /dev/null @@ -1,105 +0,0 @@ -<?xml version="1.0" encoding="UTF-8"?> -<!-- - ~ Licensed to the Apache Software Foundation (ASF) under one or more - ~ contributor license agreements. See the NOTICE file distributed with - ~ this work for additional information regarding copyright ownership. - ~ The ASF licenses this file to You under the Apache License, Version 2.0 - ~ (the "License"); you may not use this file except in compliance with - ~ the License. 
You may obtain a copy of the License at - ~ - ~ http://www.apache.org/licenses/LICENSE-2.0 - ~ - ~ Unless required by applicable law or agreed to in writing, software - ~ distributed under the License is distributed on an "AS IS" BASIS, - ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - ~ See the License for the specific language governing permissions and - ~ limitations under the License. - ~ - --> - -<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> - <parent> - <artifactId>oap-server</artifactId> - <groupId>org.apache.skywalking</groupId> - <version>9.6.0-SNAPSHOT</version> - </parent> - <modelVersion>4.0.0</modelVersion> - <artifactId>microbench</artifactId> - - <properties> - <jmh.version>1.36</jmh.version> - <slf4j.version>1.7.30</slf4j.version> - <uberjar.name>benchmarks</uberjar.name> - <maven-shade-plugin.version>3.2.3</maven-shade-plugin.version> - </properties> - - <dependencies> - <dependency> - <groupId>org.apache.skywalking</groupId> - <artifactId>server-core</artifactId> - <version>${project.version}</version> - </dependency> - <dependency> - <groupId>org.apache.skywalking</groupId> - <artifactId>library-util</artifactId> - <version>${project.version}</version> - </dependency> - <!--JMH--> - <dependency> - <groupId>org.openjdk.jmh</groupId> - <artifactId>jmh-core</artifactId> - <version>${jmh.version}</version> - </dependency> - <dependency> - <groupId>org.openjdk.jmh</groupId> - <artifactId>jmh-generator-annprocess</artifactId> - <version>${jmh.version}</version> - <scope>provided</scope> - </dependency> - <!--SLF4j--> - <dependency> - <groupId>org.slf4j</groupId> - <artifactId>slf4j-api</artifactId> - <version>${slf4j.version}</version> - </dependency> - <!--JUNIT--> - <dependency> - <groupId>org.junit.jupiter</groupId> - <artifactId>junit-jupiter</artifactId> - 
<scope>compile</scope> - </dependency> - </dependencies> - <build> - <plugins> - <plugin> - <groupId>org.apache.maven.plugins</groupId> - <artifactId>maven-shade-plugin</artifactId> - <version>${maven-shade-plugin.version}</version> - <executions> - <execution> - <phase>package</phase> - <goals> - <goal>shade</goal> - </goals> - <configuration> - <finalName>${uberjar.name}</finalName> - <transformers> - <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> - <mainClass>org.openjdk.jmh.Main</mainClass> - </transformer> - </transformers> - <filters> - <filter> - <artifact>*:*</artifact> - <excludes> - <exclude>**/Log4j2Plugins.dat</exclude> - </excludes> - </filter> - </filters> - </configuration> - </execution> - </executions> - </plugin> - </plugins> - </build> -</project> diff --git a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/base/AbstractMicrobenchmark.java b/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/base/AbstractMicrobenchmark.java deleted file mode 100644 index 6e55b3a05a98..000000000000 --- a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/base/AbstractMicrobenchmark.java +++ /dev/null @@ -1,116 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package org.apache.skywalking.oap.server.microbench.base; - -import java.io.File; -import java.io.IOException; -import java.util.concurrent.Executors; -import java.util.concurrent.LinkedBlockingQueue; -import java.util.concurrent.ThreadPoolExecutor; -import java.util.concurrent.TimeUnit; - -import org.junit.jupiter.api.Test; -import org.openjdk.jmh.annotations.Fork; -import org.openjdk.jmh.annotations.Measurement; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.Warmup; -import org.openjdk.jmh.profile.GCProfiler; -import org.openjdk.jmh.results.format.ResultFormatType; -import org.openjdk.jmh.runner.Runner; -import org.openjdk.jmh.runner.options.ChainedOptionsBuilder; -import org.openjdk.jmh.runner.options.OptionsBuilder; - -import lombok.extern.slf4j.Slf4j; - -/** - * All JMH tests need to extend this class to make it easier for you to complete JMHTest, you can also choose to - * customize runtime conditions (Measurement, Fork, Warmup, etc.) - * <p> - * You can run any of the JMH tests as a normal UT, or you can package it and get all the reported results via `java - * -jar benchmark.jar`, or get the results of a particular Test via `java -jar /benchmarks.jar exampleClassName`. 
- */ -@Warmup(iterations = AbstractMicrobenchmark.DEFAULT_WARMUP_ITERATIONS) -@Measurement(iterations = AbstractMicrobenchmark.DEFAULT_MEASURE_ITERATIONS) -@Fork(AbstractMicrobenchmark.DEFAULT_FORKS) -@State(Scope.Thread) -@Slf4j -public abstract class AbstractMicrobenchmark { - static final int DEFAULT_WARMUP_ITERATIONS = 10; - - static final int DEFAULT_MEASURE_ITERATIONS = 10; - - static final int DEFAULT_FORKS = 2; - - public static class JmhThreadExecutor extends ThreadPoolExecutor { - public JmhThreadExecutor(int size, String name) { - super(size, size, 10, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), Executors.defaultThreadFactory()); - } - } - - private ChainedOptionsBuilder newOptionsBuilder() { - - String className = getClass().getSimpleName(); - - ChainedOptionsBuilder optBuilder = new OptionsBuilder() - // set benchmark class name - .include(".*" + className + ".*") - // add GC profiler - .addProfiler(GCProfiler.class) - //set jvm args - .jvmArgsAppend("-Xmx512m", "-Xms512m", "-XX:MaxDirectMemorySize=512m", - "-XX:BiasedLockingStartupDelay=0", - "-Djmh.executor=CUSTOM", - "-Djmh.executor.class=org.apache.skywalking.oap.server.microbench.base.AbstractMicrobenchmark$JmhThreadExecutor" - ); - - String output = getReportDir(); - if (output != null) { - boolean writeFileStatus; - String filePath = getReportDir() + className + ".json"; - File file = new File(filePath); - - if (file.exists()) { - writeFileStatus = file.delete(); - } else { - writeFileStatus = file.getParentFile().mkdirs(); - try { - writeFileStatus = file.createNewFile(); - } catch (IOException e) { - log.warn("jmh test create file error", e); - } - } - if (writeFileStatus) { - optBuilder.resultFormat(ResultFormatType.JSON) - .result(filePath); - } - } - return optBuilder; - } - - @Test - public void run() throws Exception { - new Runner(newOptionsBuilder().build()).run(); - } - - private static String getReportDir() { - return System.getProperty("perfReportDir"); - } - -} diff --git 
a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/config/group/openapi/EndpointGrouping4OpenapiBenchmark.java b/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/config/group/openapi/EndpointGrouping4OpenapiBenchmark.java deleted file mode 100644 index c5ed373d17bd..000000000000 --- a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/config/group/openapi/EndpointGrouping4OpenapiBenchmark.java +++ /dev/null @@ -1,138 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * - */ - -package org.apache.skywalking.oap.server.microbench.core.config.group.openapi; - -import org.apache.skywalking.oap.server.core.config.group.openapi.EndpointGroupingRule4Openapi; -import org.apache.skywalking.oap.server.core.config.group.openapi.EndpointGroupingRuleReader4Openapi; -import org.apache.skywalking.oap.server.library.util.StringFormatGroup.FormatResult; -import org.apache.skywalking.oap.server.microbench.base.AbstractMicrobenchmark; - -import java.util.Collections; -import java.util.Map; - -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.BenchmarkMode; -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.Threads; -import org.openjdk.jmh.infra.Blackhole; - -@BenchmarkMode({Mode.Throughput}) -@Threads(4) -public class EndpointGrouping4OpenapiBenchmark extends AbstractMicrobenchmark { - private static final String APT_TEST_DATA = " /products1/{id}/%d:\n" + " get:\n" + " post:\n" - + " /products2/{id}/%d:\n" + " get:\n" + " post:\n" - + " /products3/{id}/%d:\n" + " get:\n"; - - private static Map<String, String> createTestFile(int size) { - StringBuilder stringBuilder = new StringBuilder(); - stringBuilder.append("paths:\n"); - for (int i = 0; i <= size; i++) { - stringBuilder.append(String.format(APT_TEST_DATA, i, i, i)); - } - return Collections.singletonMap("whatever", stringBuilder.toString()); - } - - @State(Scope.Benchmark) - public static class FormatClassPaths20 { - private final EndpointGroupingRule4Openapi rule = new EndpointGroupingRuleReader4Openapi(createTestFile(3)).read(); - - public FormatResult format(String serviceName, String endpointName) { - return rule.format(serviceName, endpointName); - } - } - - @State(Scope.Benchmark) - public static class FormatClassPaths50 { - private final EndpointGroupingRule4Openapi rule = new EndpointGroupingRuleReader4Openapi(createTestFile(9)).read(); - 
- public FormatResult format(String serviceName, String endpointName) { - return rule.format(serviceName, endpointName); - } - } - - @State(Scope.Benchmark) - public static class FormatClassPaths200 { - private final EndpointGroupingRule4Openapi rule = new EndpointGroupingRuleReader4Openapi(createTestFile(39)).read(); - - public FormatResult format(String serviceName, String endpointName) { - return rule.format(serviceName, endpointName); - } - } - - @Benchmark - public void formatEndpointNameMatchedPaths20(Blackhole bh, FormatClassPaths20 formatClass) { - bh.consume(formatClass.format("serviceA", "GET:/products1/123")); - } - - @Benchmark - public void formatEndpointNameMatchedPaths50(Blackhole bh, FormatClassPaths50 formatClass) { - bh.consume(formatClass.format("serviceA", "GET:/products1/123")); - } - - @Benchmark - public void formatEndpointNameMatchedPaths200(Blackhole bh, FormatClassPaths200 formatClass) { - bh.consume(formatClass.format("serviceA", "GET:/products1/123")); - } - -} - -/* -* The test is assumed each endpoint need to run all match within it's rules group. 
-* -# JMH version: 1.21 -# VM version: JDK 1.8.0_292, OpenJDK 64-Bit Server VM, 25.292-b10 -# VM invoker: /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/bin/java -# VM options: -javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=58702:/Applications/IntelliJ IDEA CE.app/Contents/bin -Dfile.encoding=UTF-8 -Xmx512m -Xms512m -# Warmup: 5 iterations, 10 s each -# Measurement: 5 iterations, 10 s each -# Timeout: 10 min per iteration -# Threads: 4 threads, will synchronize iterations -# Benchmark mode: Throughput, ops/time - -Benchmark Mode Cnt Score Error Units -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20 thrpt 5 4318121.026 ± 529374.132 ops/s -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.alloc.rate thrpt 5 4579.740 ± 561.095 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.alloc.rate.norm thrpt 5 1168.000 ± 0.001 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.churn.PS_Eden_Space thrpt 5 4604.284 ± 560.596 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.churn.PS_Eden_Space.norm thrpt 5 1174.266 ± 6.626 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.churn.PS_Survivor_Space thrpt 5 0.476 ± 0.122 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.churn.PS_Survivor_Space.norm thrpt 5 0.121 ± 0.031 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.count thrpt 5 1427.000 counts -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths20:·gc.time thrpt 5 839.000 ms -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200 thrpt 5 551316.187 ± 60567.899 ops/s -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.alloc.rate thrpt 5 3912.675 ± 429.916 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.alloc.rate.norm thrpt 5 7816.000 ± 0.001 B/op 
-EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.churn.PS_Eden_Space thrpt 5 3932.895 ± 421.307 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.churn.PS_Eden_Space.norm thrpt 5 7856.526 ± 45.989 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.churn.PS_Survivor_Space thrpt 5 0.396 ± 0.101 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.churn.PS_Survivor_Space.norm thrpt 5 0.791 ± 0.172 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.count thrpt 5 1219.000 counts -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths200:·gc.time thrpt 5 737.000 ms -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50 thrpt 5 2163149.470 ± 67179.001 ops/s -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.alloc.rate thrpt 5 4508.870 ± 141.755 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.alloc.rate.norm thrpt 5 2296.000 ± 0.001 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.churn.PS_Eden_Space thrpt 5 4532.354 ± 146.421 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.churn.PS_Eden_Space.norm thrpt 5 2307.956 ± 10.377 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.churn.PS_Survivor_Space thrpt 5 0.454 ± 0.116 MB/sec -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.churn.PS_Survivor_Space.norm thrpt 5 0.231 ± 0.066 B/op -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.count thrpt 5 1405.000 counts -EndpointGroupingBenchmark4Openapi.formatEndpointNameMatchedPaths50:·gc.time thrpt 5 841.000 ms - */ diff --git a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/config/group/uri/RegexVSQuickMatchBenchmark.java 
b/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/config/group/uri/RegexVSQuickMatchBenchmark.java deleted file mode 100644 index 60e676783ec6..000000000000 --- a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/config/group/uri/RegexVSQuickMatchBenchmark.java +++ /dev/null @@ -1,195 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * - */ - -package org.apache.skywalking.oap.server.microbench.core.config.group.uri; - -import org.apache.skywalking.oap.server.core.config.group.EndpointGroupingRule; -import org.apache.skywalking.oap.server.core.config.group.uri.quickmatch.QuickUriGroupingRule; -import org.apache.skywalking.oap.server.library.util.StringFormatGroup; -import org.apache.skywalking.oap.server.microbench.base.AbstractMicrobenchmark; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.BenchmarkMode; -import org.openjdk.jmh.annotations.Fork; -import org.openjdk.jmh.annotations.Measurement; -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.Threads; -import org.openjdk.jmh.annotations.Warmup; -import org.openjdk.jmh.infra.Blackhole; - -@Warmup(iterations = 1) -@Measurement(iterations = 1) -@Fork(1) -@State(Scope.Thread) -@BenchmarkMode({Mode.Throughput}) -@Threads(4) -public class RegexVSQuickMatchBenchmark extends AbstractMicrobenchmark { - - @State(Scope.Benchmark) - public static class RegexMatch { - private final EndpointGroupingRule rule = new EndpointGroupingRule(); - - public RegexMatch() { - rule.addRule("service1", "/products/{var}", "/products/.+"); - rule.addRule("service1", "/products/{var}/detail", "/products/.+/detail"); - rule.addRule("service1", "/sales/{var}/1", "/sales/.+/1"); - rule.addRule("service1", "/sales/{var}/2", "/sales/.+/2"); - rule.addRule("service1", "/sales/{var}/3", "/sales/.+/3"); - rule.addRule("service1", "/sales/{var}/4", "/sales/.+/4"); - rule.addRule("service1", "/sales/{var}/5", "/sales/.+/5"); - rule.addRule("service1", "/sales/{var}/6", "/sales/.+/6"); - rule.addRule("service1", "/sales/{var}/7", "/sales/.+/7"); - rule.addRule("service1", "/sales/{var}/8", "/sales/.+/8"); - rule.addRule("service1", "/sales/{var}/9", "/sales/.+/9"); - rule.addRule("service1", "/sales/{var}/10", "/sales/.+/10"); - 
rule.addRule("service1", "/sales/{var}/11", "/sales/.+/11"); - rule.addRule("service1", "/employees/{var}/profile", "/employees/.+/profile"); - } - - public StringFormatGroup.FormatResult match(String serviceName, String endpointName) { - return rule.format(serviceName, endpointName); - } - } - - @State(Scope.Benchmark) - public static class QuickMatch { - private final QuickUriGroupingRule rule = new QuickUriGroupingRule(); - - public QuickMatch() { - rule.addRule("service1", "/products/{var}"); - rule.addRule("service1", "/products/{var}/detail"); - rule.addRule("service1", "/sales/{var}/1"); - rule.addRule("service1", "/sales/{var}/2"); - rule.addRule("service1", "/sales/{var}/3"); - rule.addRule("service1", "/sales/{var}/4"); - rule.addRule("service1", "/sales/{var}/5"); - rule.addRule("service1", "/sales/{var}/6"); - rule.addRule("service1", "/sales/{var}/7"); - rule.addRule("service1", "/sales/{var}/8"); - rule.addRule("service1", "/sales/{var}/9"); - rule.addRule("service1", "/sales/{var}/10"); - rule.addRule("service1", "/sales/{var}/11"); - rule.addRule("service1", "/employees/{var}/profile"); - } - - public StringFormatGroup.FormatResult match(String serviceName, String endpointName) { - return rule.format(serviceName, endpointName); - } - } - - @Benchmark - public void matchFirstRegex(Blackhole bh, RegexVSQuickMatchBenchmark.RegexMatch formatClass) { - bh.consume(formatClass.match("service1", "/products/123")); - } - - @Benchmark - public void matchFirstQuickUriGrouping(Blackhole bh, RegexVSQuickMatchBenchmark.QuickMatch formatClass) { - bh.consume(formatClass.match("service1", "/products/123")); - } - - @Benchmark - public void matchFourthRegex(Blackhole bh, RegexVSQuickMatchBenchmark.RegexMatch formatClass) { - bh.consume(formatClass.match("service1", "/sales/123/2")); - } - - @Benchmark - public void matchFourthQuickUriGrouping(Blackhole bh, RegexVSQuickMatchBenchmark.QuickMatch formatClass) { - bh.consume(formatClass.match("service1", 
"/sales/123/2")); - } - - @Benchmark - public void notMatchRegex(Blackhole bh, RegexVSQuickMatchBenchmark.RegexMatch formatClass) { - bh.consume(formatClass.match("service1", "/employees/123")); - } - - @Benchmark - public void notMatchQuickUriGrouping(Blackhole bh, RegexVSQuickMatchBenchmark.QuickMatch formatClass) { - bh.consume(formatClass.match("service1", "/employees/123")); - } -} - -/** - * # JMH version: 1.25 - * # VM version: JDK 16.0.1, OpenJDK 64-Bit Server VM, 16.0.1+9-24 - * # VM invoker: C:\Users\Sky\.jdks\openjdk-16.0.1\bin\java.exe - * # VM options: -ea --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED -Didea.test.cyclic.buffer.size=1048576 -javaagent:Y:\jetbrains\apps\IDEA-U\ch-0\231.8109.175\lib\idea_rt.jar=54938:Y:\jetbrains\apps\IDEA-U\ch-0\231.8109.175\bin -Dfile.encoding=UTF-8 -Xmx512m -Xms512m -XX:MaxDirectMemorySize=512m -XX:BiasedLockingStartupDelay=0 -Djmh.executor=CUSTOM -Djmh.executor.class=org.apache.skywalking.oap.server.microbench.base.AbstractMicrobenchmark$JmhThreadExecutor - * # Warmup: 1 iterations, 10 s each - * # Measurement: 1 iterations, 10 s each - * # Timeout: 10 min per iteration - * # Threads: 4 threads, will synchronize iterations - * # Benchmark mode: Throughput, ops/time - * # Benchmark: org.apache.skywalking.oap.server.microbench.core.config.group.uri.RegexVSQuickMatchBenchmark.notMatchRegex - * Benchmark Mode Cnt Score Error Units - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping thrpt 48317763.786 ops/s - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.alloc.rate thrpt 8773.225 MB/sec - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.alloc.rate.norm thrpt 200.014 B/op - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.churn.G1_Eden_Space thrpt 8807.405 MB/sec - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.churn.G1_Eden_Space.norm thrpt 200.794 B/op - * 
RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.churn.G1_Survivor_Space thrpt 0.050 MB/sec - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.churn.G1_Survivor_Space.norm thrpt 0.001 B/op - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.count thrpt 303.000 counts - * RegexVSQuickMatchBenchmark.matchFirstQuickUriGrouping:·gc.time thrpt 325.000 ms - * RegexVSQuickMatchBenchmark.matchFirstRegex thrpt 41040542.288 ops/s - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.alloc.rate thrpt 8348.690 MB/sec - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.alloc.rate.norm thrpt 224.016 B/op - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.churn.G1_Eden_Space thrpt 8378.454 MB/sec - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.churn.G1_Eden_Space.norm thrpt 224.815 B/op - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.churn.G1_Survivor_Space thrpt 0.057 MB/sec - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.churn.G1_Survivor_Space.norm thrpt 0.002 B/op - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.count thrpt 288.000 counts - * RegexVSQuickMatchBenchmark.matchFirstRegex:·gc.time thrpt 282.000 ms - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping thrpt 35658131.267 ops/s - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.alloc.rate thrpt 8020.546 MB/sec - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.alloc.rate.norm thrpt 248.018 B/op - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.churn.G1_Eden_Space thrpt 8043.279 MB/sec - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.churn.G1_Eden_Space.norm thrpt 248.721 B/op - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.churn.G1_Survivor_Space thrpt 0.045 MB/sec - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.churn.G1_Survivor_Space.norm thrpt 0.001 B/op - * RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.count thrpt 277.000 counts - * 
RegexVSQuickMatchBenchmark.matchFourthQuickUriGrouping:·gc.time thrpt 302.000 ms - * RegexVSQuickMatchBenchmark.matchFourthRegex thrpt 11066068.208 ops/s - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.alloc.rate thrpt 8273.312 MB/sec - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.alloc.rate.norm thrpt 824.060 B/op - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.churn.G1_Eden_Space thrpt 8279.984 MB/sec - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.churn.G1_Eden_Space.norm thrpt 824.724 B/op - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.churn.G1_Survivor_Space thrpt 0.052 MB/sec - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.churn.G1_Survivor_Space.norm thrpt 0.005 B/op - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.count thrpt 285.000 counts - * RegexVSQuickMatchBenchmark.matchFourthRegex:·gc.time thrpt 324.000 ms - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping thrpt 45843193.472 ops/s - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.alloc.rate thrpt 8653.215 MB/sec - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.alloc.rate.norm thrpt 208.015 B/op - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.churn.G1_Eden_Space thrpt 8652.365 MB/sec - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.churn.G1_Eden_Space.norm thrpt 207.995 B/op - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.churn.G1_Survivor_Space thrpt 0.048 MB/sec - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.churn.G1_Survivor_Space.norm thrpt 0.001 B/op - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.count thrpt 298.000 counts - * RegexVSQuickMatchBenchmark.notMatchQuickUriGrouping:·gc.time thrpt 358.000 ms - * RegexVSQuickMatchBenchmark.notMatchRegex thrpt 3434953.426 ops/s - * RegexVSQuickMatchBenchmark.notMatchRegex:·gc.alloc.rate thrpt 8898.075 MB/sec - * RegexVSQuickMatchBenchmark.notMatchRegex:·gc.alloc.rate.norm thrpt 2856.206 B/op - * 
RegexVSQuickMatchBenchmark.notMatchRegex:·gc.churn.G1_Eden_Space thrpt 8886.568 MB/sec - * RegexVSQuickMatchBenchmark.notMatchRegex:·gc.churn.G1_Eden_Space.norm thrpt 2852.512 B/op - * RegexVSQuickMatchBenchmark.notMatchRegex:·gc.churn.G1_Survivor_Space thrpt 0.052 MB/sec - * RegexVSQuickMatchBenchmark.notMatchRegex:·gc.churn.G1_Survivor_Space.norm thrpt 0.017 B/op - * RegexVSQuickMatchBenchmark.notMatchRegex:·gc.count thrpt 306.000 counts - * RegexVSQuickMatchBenchmark.notMatchRegex:·gc.time thrpt 377.000 ms - * - * Process finished with exit code 0 - */ \ No newline at end of file diff --git a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/profiling/ebpf/EBPFProfilingAnalyzerBenchmark.java b/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/profiling/ebpf/EBPFProfilingAnalyzerBenchmark.java deleted file mode 100644 index 3ec3fa838817..000000000000 --- a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/core/profiling/ebpf/EBPFProfilingAnalyzerBenchmark.java +++ /dev/null @@ -1,409 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * - */ - -package org.apache.skywalking.oap.server.microbench.core.profiling.ebpf; - -import org.apache.skywalking.oap.server.core.profiling.ebpf.analyze.EBPFProfilingAnalyzer; -import org.apache.skywalking.oap.server.core.profiling.ebpf.analyze.EBPFProfilingStack; -import org.apache.skywalking.oap.server.core.profiling.ebpf.storage.EBPFProfilingStackType; -import org.apache.skywalking.oap.server.core.query.type.EBPFProfilingAnalyzation; -import org.apache.skywalking.oap.server.microbench.base.AbstractMicrobenchmark; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.BenchmarkMode; -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.Threads; - -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Random; -import java.util.concurrent.TimeUnit; - -@BenchmarkMode({Mode.Throughput}) -@Threads(4) -public class EBPFProfilingAnalyzerBenchmark extends AbstractMicrobenchmark { - - private static final Random RANDOM = new Random(System.currentTimeMillis()); - private static final int SYMBOL_LENGTH = 10; - private static final char[] SYMBOL_TABLE = "abcdefgABCDEFG1234567890_[]<>.".toCharArray(); - private static final EBPFProfilingStackType[] STACK_TYPES = new EBPFProfilingStackType[]{ - EBPFProfilingStackType.KERNEL_SPACE, EBPFProfilingStackType.USER_SPACE}; - - private static List<EBPFProfilingStack> generateStacks(int totalStackCount, - int perStackMinDepth, int perStackMaxDepth, - double[] stackSymbolDuplicateRate, - double stackDuplicateRate) { - int uniqStackCount = (int) (100 / stackDuplicateRate); - final List<EBPFProfilingStack> stacks = new ArrayList<>(totalStackCount); - final StackSymbolGenerator stackSymbolGenerator = new StackSymbolGenerator(stackSymbolDuplicateRate, perStackMaxDepth); - for (int inx = 0; inx < uniqStackCount; inx++) { - final 
EBPFProfilingStack s = generateStack(perStackMinDepth, perStackMaxDepth, stackSymbolGenerator); - stacks.add(s); - } - for (int inx = uniqStackCount; inx < totalStackCount; inx++) { - stacks.add(stacks.get(RANDOM.nextInt(uniqStackCount))); - } - return stacks; - } - - private static class StackSymbolGenerator { - private final Map<Integer, Integer> stackDepthSymbolCount; - private final Map<Integer, List<String>> existingSymbolMap; - - public StackSymbolGenerator(double[] stackSymbolDuplicateRate, int maxDepth) { - this.stackDepthSymbolCount = new HashMap<>(); - for (int depth = 0; depth < maxDepth; depth++) { - double rate = stackSymbolDuplicateRate[stackSymbolDuplicateRate.length - 1]; - if (stackSymbolDuplicateRate.length > depth) { - rate = stackSymbolDuplicateRate[depth]; - } - int uniqStackCount = (int) (100 / rate); - stackDepthSymbolCount.put(depth, uniqStackCount); - } - this.existingSymbolMap = new HashMap<>(); - } - - public String generate(int depth) { - List<String> symbols = existingSymbolMap.get(depth); - if (symbols == null) { - existingSymbolMap.put(depth, symbols = new ArrayList<>()); - } - final Integer needCount = this.stackDepthSymbolCount.get(depth); - if (symbols.size() < needCount) { - final StringBuilder sb = new StringBuilder(SYMBOL_LENGTH); - for (int j = 0; j < SYMBOL_LENGTH; j++) { - sb.append(SYMBOL_TABLE[RANDOM.nextInt(SYMBOL_TABLE.length)]); - } - final String s = sb.toString(); - symbols.add(s); - return s; - } else { - return symbols.get(RANDOM.nextInt(symbols.size())); - } - } - } - - private static EBPFProfilingStack generateStack(int stackMinDepth, int stackMaxDepth, - StackSymbolGenerator stackSymbolGenerator) { - int stackDepth = stackMinDepth + RANDOM.nextInt(stackMaxDepth - stackMinDepth); - final List<EBPFProfilingStack.Symbol> symbols = new ArrayList<>(stackDepth); - for (int i = 0; i < stackDepth; i++) { - final EBPFProfilingStack.Symbol symbol = new EBPFProfilingStack.Symbol( - stackSymbolGenerator.generate(i), 
buildStackType(i, stackDepth)); - symbols.add(symbol); - } - final EBPFProfilingStack stack = new EBPFProfilingStack(); - stack.setDumpCount(RANDOM.nextInt(100)); - stack.setSymbols(symbols); - return stack; - } - - private static EBPFProfilingStackType buildStackType(int currentDepth, int totalDepth) { - final int partition = totalDepth / STACK_TYPES.length; - for (int i = 1; i <= STACK_TYPES.length; i++) { - if (currentDepth < i * partition) { - return STACK_TYPES[i - 1]; - } - } - return STACK_TYPES[STACK_TYPES.length - 1]; - } - - public static class DataSource { - private final List<EBPFProfilingStack> stackStream; - - public DataSource(List<EBPFProfilingStack> stackStream) { - this.stackStream = stackStream; - } - - public void analyze() { - new EBPFProfilingAnalyzer(null, 100, 5).generateTrees(new EBPFProfilingAnalyzation(), stackStream.parallelStream()); - } - } - - private static int calculateStackCount(int stackReportPeriodSecond, int totalTimeMinute, int combineInstanceCount) { - return (int) (TimeUnit.MINUTES.toSeconds(totalTimeMinute) / stackReportPeriodSecond * combineInstanceCount); - } - - @State(Scope.Benchmark) - public static class LowDataSource extends DataSource { - // rover report period: 5s - // dump duration: 60m - // 10 instance analyze - // stack depth range: 15, 30 - // stack duplicate rate: 5% - // stack symbol duplicate rate: 100%, 40%, 35%, 30%, 15%, 10%, 7%, 5% - public LowDataSource() { - super(generateStacks(calculateStackCount(5, 60, 10), 15, 30, - new double[]{100, 50, 45, 40, 35, 30, 15, 10, 5}, 5)); - } - } - - @State(Scope.Benchmark) - public static class MedianDatasource extends DataSource { - // rover report period: 5s - // dump duration: 100m - // 200 instance analyze - // stack depth range: 15, 30 - // stack duplicate rate: 3% - // stack symbol duplicate rate: 50%, 40%, 35%, 30%, 20%, 10%, 7%, 5%, 2% - public MedianDatasource() { - super(generateStacks(calculateStackCount(5, 100, 200), 15, 30, - new double[]{50, 40, 35, 30, 
20, 10, 7, 5, 2}, 3)); - } - } - - @State(Scope.Benchmark) - public static class HighDatasource extends DataSource { - // rover report period: 5s - // dump time: 2h - // 2000 instance analyze - // stack depth range: 15, 40 - // stack duplicate rate: 1% - // stack symbol duplicate rate: 30%, 27%, 25%, 20%, 17%, 15%, 10%, 7%, 5%, 2%, 1% - public HighDatasource() { - super(generateStacks(calculateStackCount(5, 2 * 60, 2000), 15, 40, - new double[]{30, 27, 25, 20, 17, 15, 10, 7, 5, 2, 1}, 1)); - } - } - - @Benchmark - public void analyzeLowDataSource(LowDataSource lowDataSource) { - lowDataSource.analyze(); - } - - @Benchmark - public void analyzeMedianDataSource(MedianDatasource medianDatasource) { - medianDatasource.analyze(); - } - - @Benchmark - public void analyzeMaxDataSource(HighDatasource highDataSource) { - highDataSource.analyze(); - } - -} - -/* -# JMH version: 1.25 -# VM version: JDK 1.8.0_292, OpenJDK 64-Bit Server VM, 25.292-b10 -# VM invoker: /Users/hanliu/.sdkman/candidates/java/8.0.292.hs-adpt/jre/bin/java -# VM options: <none> -# Warmup: 10 iterations, 10 s each -# Measurement: 10 iterations, 10 s each -# Timeout: 10 min per iteration -# Threads: 4 threads, will synchronize iterations -# Benchmark mode: Throughput, ops/time -# Benchmark: org.apache.skywalking.oap.server.microbench.core.profiling.ebpf.EBPFProfilingAnalyzerBenchmark.analyzeLowDataSource - -# Run progress: 0.00% complete, ETA 00:20:00 -# Fork: 1 of 2 -# Warmup Iteration 1: 2774.619 ops/s -# Warmup Iteration 2: 2652.912 ops/s -# Warmup Iteration 3: 2651.943 ops/s -# Warmup Iteration 4: 2670.755 ops/s -# Warmup Iteration 5: 2632.884 ops/s -# Warmup Iteration 6: 2597.808 ops/s -# Warmup Iteration 7: 2256.900 ops/s -# Warmup Iteration 8: 2105.842 ops/s -# Warmup Iteration 9: 2084.963 ops/s -# Warmup Iteration 10: 2142.089 ops/s -Iteration 1: 2168.913 ops/s -Iteration 2: 2161.030 ops/s -Iteration 3: 2170.136 ops/s -Iteration 4: 2161.335 ops/s -Iteration 5: 2167.978 ops/s -Iteration 6: 
2154.508 ops/s -Iteration 7: 2136.985 ops/s -Iteration 8: 2107.246 ops/s -Iteration 9: 2084.855 ops/s -Iteration 10: 2071.664 ops/s - -# Run progress: 16.67% complete, ETA 00:16:44 -# Fork: 2 of 2 -# Warmup Iteration 1: 2094.858 ops/s -# Warmup Iteration 2: 2324.678 ops/s -# Warmup Iteration 3: 2238.370 ops/s -# Warmup Iteration 4: 2252.727 ops/s -# Warmup Iteration 5: 2149.959 ops/s -# Warmup Iteration 6: 2155.332 ops/s -# Warmup Iteration 7: 2141.820 ops/s -# Warmup Iteration 8: 2154.514 ops/s -# Warmup Iteration 9: 2145.600 ops/s -# Warmup Iteration 10: 2129.701 ops/s -Iteration 1: 2157.904 ops/s -Iteration 2: 2145.461 ops/s -Iteration 3: 2155.163 ops/s -Iteration 4: 2154.556 ops/s -Iteration 5: 2161.428 ops/s -Iteration 6: 2150.353 ops/s -Iteration 7: 2161.267 ops/s -Iteration 8: 2092.811 ops/s -Iteration 9: 2059.780 ops/s -Iteration 10: 2061.371 ops/s - - -Result "org.apache.skywalking.oap.server.microbench.core.profiling.ebpf.EBPFProfilingAnalyzerBenchmark.analyzeLowDataSource": - 2134.237 ±(99.9%) 33.583 ops/s [Average] - (min, avg, max) = (2059.780, 2134.237, 2170.136), stdev = 38.674 - CI (99.9%): [2100.654, 2167.820] (assumes normal distribution) - - -# JMH version: 1.25 -# VM version: JDK 1.8.0_292, OpenJDK 64-Bit Server VM, 25.292-b10 -# VM invoker: /Users/hanliu/.sdkman/candidates/java/8.0.292.hs-adpt/jre/bin/java -# VM options: <none> -# Warmup: 10 iterations, 10 s each -# Measurement: 10 iterations, 10 s each -# Timeout: 10 min per iteration -# Threads: 4 threads, will synchronize iterations -# Benchmark mode: Throughput, ops/time -# Benchmark: org.apache.skywalking.oap.server.microbench.core.profiling.ebpf.EBPFProfilingAnalyzerBenchmark.analyzeMaxDataSource - -# Run progress: 33.33% complete, ETA 00:13:24 -# Fork: 1 of 2 -# Warmup Iteration 1: 6.534 ops/s -# Warmup Iteration 2: 6.695 ops/s -# Warmup Iteration 3: 6.722 ops/s -# Warmup Iteration 4: 6.473 ops/s -# Warmup Iteration 5: 6.431 ops/s -# Warmup Iteration 6: 6.391 ops/s -# Warmup Iteration 7: 
6.401 ops/s -# Warmup Iteration 8: 6.290 ops/s -# Warmup Iteration 9: 6.087 ops/s -# Warmup Iteration 10: 6.143 ops/s -Iteration 1: 5.989 ops/s -Iteration 2: 6.386 ops/s -Iteration 3: 6.397 ops/s -Iteration 4: 6.395 ops/s -Iteration 5: 6.374 ops/s -Iteration 6: 6.192 ops/s -Iteration 7: 6.111 ops/s -Iteration 8: 6.049 ops/s -Iteration 9: 6.104 ops/s -Iteration 10: 6.130 ops/s - -# Run progress: 50.00% complete, ETA 00:10:20 -# Fork: 2 of 2 -# Warmup Iteration 1: 5.981 ops/s -# Warmup Iteration 2: 6.433 ops/s -# Warmup Iteration 3: 6.421 ops/s -# Warmup Iteration 4: 6.215 ops/s -# Warmup Iteration 5: 6.139 ops/s -# Warmup Iteration 6: 6.165 ops/s -# Warmup Iteration 7: 6.153 ops/s -# Warmup Iteration 8: 6.123 ops/s -# Warmup Iteration 9: 6.107 ops/s -# Warmup Iteration 10: 6.044 ops/s -Iteration 1: 5.869 ops/s -Iteration 2: 5.837 ops/s -Iteration 3: 5.836 ops/s -Iteration 4: 5.994 ops/s -Iteration 5: 6.187 ops/s -Iteration 6: 6.129 ops/s -Iteration 7: 6.111 ops/s -Iteration 8: 6.150 ops/s -Iteration 9: 6.154 ops/s -Iteration 10: 6.165 ops/s - - -Result "org.apache.skywalking.oap.server.microbench.core.profiling.ebpf.EBPFProfilingAnalyzerBenchmark.analyzeMaxDataSource": - 6.128 ±(99.9%) 0.149 ops/s [Average] - (min, avg, max) = (5.836, 6.128, 6.397), stdev = 0.172 - CI (99.9%): [5.979, 6.277] (assumes normal distribution) - - -# JMH version: 1.25 -# VM version: JDK 1.8.0_292, OpenJDK 64-Bit Server VM, 25.292-b10 -# VM invoker: /Users/hanliu/.sdkman/candidates/java/8.0.292.hs-adpt/jre/bin/java -# VM options: <none> -# Warmup: 10 iterations, 10 s each -# Measurement: 10 iterations, 10 s each -# Timeout: 10 min per iteration -# Threads: 4 threads, will synchronize iterations -# Benchmark mode: Throughput, ops/time -# Benchmark: org.apache.skywalking.oap.server.microbench.core.profiling.ebpf.EBPFProfilingAnalyzerBenchmark.analyzeMedianDataSource - -# Run progress: 66.67% complete, ETA 00:06:59 -# Fork: 1 of 2 -# Warmup Iteration 1: 98.581 ops/s -# Warmup Iteration 2: 
101.972 ops/s -# Warmup Iteration 3: 102.758 ops/s -# Warmup Iteration 4: 102.755 ops/s -# Warmup Iteration 5: 102.637 ops/s -# Warmup Iteration 6: 102.341 ops/s -# Warmup Iteration 7: 101.472 ops/s -# Warmup Iteration 8: 101.128 ops/s -# Warmup Iteration 9: 97.455 ops/s -# Warmup Iteration 10: 96.327 ops/s -Iteration 1: 95.448 ops/s -Iteration 2: 100.029 ops/s -Iteration 3: 101.103 ops/s -Iteration 4: 101.236 ops/s -Iteration 5: 100.893 ops/s -Iteration 6: 101.052 ops/s -Iteration 7: 100.859 ops/s -Iteration 8: 101.174 ops/s -Iteration 9: 101.237 ops/s -Iteration 10: 101.146 ops/s - -# Run progress: 83.33% complete, ETA 00:03:28 -# Fork: 2 of 2 -# Warmup Iteration 1: 92.453 ops/s -# Warmup Iteration 2: 95.494 ops/s -# Warmup Iteration 3: 95.363 ops/s -# Warmup Iteration 4: 95.391 ops/s -# Warmup Iteration 5: 95.126 ops/s -# Warmup Iteration 6: 94.867 ops/s -# Warmup Iteration 7: 94.034 ops/s -# Warmup Iteration 8: 89.720 ops/s -# Warmup Iteration 9: 87.873 ops/s -# Warmup Iteration 10: 89.747 ops/s -Iteration 1: 93.948 ops/s -Iteration 2: 93.365 ops/s -Iteration 3: 94.219 ops/s -Iteration 4: 94.004 ops/s -Iteration 5: 94.352 ops/s -Iteration 6: 94.299 ops/s -Iteration 7: 94.336 ops/s -Iteration 8: 93.926 ops/s -Iteration 9: 93.592 ops/s -Iteration 10: 92.966 ops/s - - -Result "org.apache.skywalking.oap.server.microbench.core.profiling.ebpf.EBPFProfilingAnalyzerBenchmark.analyzeMedianDataSource": - 97.159 ±(99.9%) 3.105 ops/s [Average] - (min, avg, max) = (92.966, 97.159, 101.237), stdev = 3.575 - CI (99.9%): [94.055, 100.264] (assumes normal distribution) - - -# Run complete. Total time: 00:20:43 - -REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on -why the numbers are the way they are. 
Use profilers (see -prof, -lprof), design factorial -experiments, perform baseline and negative tests that provide experimental control, make sure -the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts. -Do not assume the numbers tell you what you want them to tell. - -Benchmark Mode Cnt Score Error Units -EBPFProfilingAnalyzerBenchmark.analyzeLowDataSource thrpt 20 2134.237 ± 33.583 ops/s -EBPFProfilingAnalyzerBenchmark.analyzeMaxDataSource thrpt 20 6.128 ± 0.149 ops/s -EBPFProfilingAnalyzerBenchmark.analyzeMedianDataSource thrpt 20 97.159 ± 3.105 ops/s - */ diff --git a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/library/util/StringFormatGroupBenchmark.java b/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/library/util/StringFormatGroupBenchmark.java deleted file mode 100644 index 71837e69dc83..000000000000 --- a/oap-server/microbench/src/main/java/org/apache/skywalking/oap/server/microbench/library/util/StringFormatGroupBenchmark.java +++ /dev/null @@ -1,124 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * - */ - -package org.apache.skywalking.oap.server.microbench.library.util; - -import org.apache.skywalking.oap.server.library.util.StringFormatGroup; -import org.apache.skywalking.oap.server.microbench.base.AbstractMicrobenchmark; -import org.junit.jupiter.api.Assertions; -import org.junit.jupiter.api.Test; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.BenchmarkMode; -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.annotations.OutputTimeUnit; - -import java.util.concurrent.TimeUnit; - -@BenchmarkMode(Mode.AverageTime) -@OutputTimeUnit(TimeUnit.MICROSECONDS) -public class StringFormatGroupBenchmark extends AbstractMicrobenchmark { - @Benchmark - @Test - public void testMatch() { - StringFormatGroup group = new StringFormatGroup(); - group.addRule("/name/*/add", "/name/.+/add"); - Assertions.assertEquals("/name/*/add", group.format("/name/test/add").getName()); - - group = new StringFormatGroup(); - group.addRule("/name/*/add/{orderId}", "/name/.+/add/.*"); - Assertions.assertEquals("/name/*/add/{orderId}", group.format("/name/test/add/12323").getName()); - } - - @Benchmark - @Test - public void test100Rule() { - StringFormatGroup group = new StringFormatGroup(); - group.addRule("/name/*/add/{orderId}", "/name/.+/add/.*"); - for (int i = 0; i < 100; i++) { - group.addRule("/name/*/add/{orderId}" + "/" + 1, "/name/.+/add/.*" + "/abc"); - } - Assertions.assertEquals("/name/*/add/{orderId}", group.format("/name/test/add/12323").getName()); - } - - /********************************* - * # JMH version: 1.21 - * # VM version: JDK 1.8.0_91, Java HotSpot(TM) 64-Bit Server VM, 25.91-b14 - * # VM invoker: /Users/wusheng/Documents/applications/jdk1.8.0_91.jdk/Contents/Home/jre/bin/java - * # VM options: -ea -Didea.test.cyclic.buffer.size=1048576 -javaagent:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar=54841:/Applications/IntelliJ IDEA.app/Contents/bin -Dfile.encoding=UTF-8 - * # Warmup: <none> - * # 
Measurement: 5 iterations, 10 s each - * # Timeout: 10 min per iteration - * # Threads: 1 thread, will synchronize iterations - * # Benchmark mode: Throughput, ops/time - * # Benchmark: org.apache.skywalking.apm.util.StringFormatGroupTest.test100Rule - * - * # Run progress: 0.00% complete, ETA 00:01:40 - * # Fork: 1 of 1 - * Iteration 1: 32016.496 ops/s - * Iteration 2: 36703.873 ops/s - * Iteration 3: 37121.543 ops/s - * Iteration 4: 36898.225 ops/s - * Iteration 5: 34712.564 ops/s - * - * - * Result "org.apache.skywalking.apm.util.StringFormatGroupTest.test100Rule": - * 35490.540 ±(99.9%) 8345.368 ops/s [Average] - * (min, avg, max) = (32016.496, 35490.540, 37121.543), stdev = 2167.265 - * CI (99.9%): [27145.173, 43835.908] (assumes normal distribution) - * - * - * # JMH version: 1.21 - * # VM version: JDK 1.8.0_91, Java HotSpot(TM) 64-Bit Server VM, 25.91-b14 - * # VM invoker: /Users/wusheng/Documents/applications/jdk1.8.0_91.jdk/Contents/Home/jre/bin/java - * # VM options: -ea -Didea.test.cyclic.buffer.size=1048576 -javaagent:/Applications/IntelliJ IDEA.app/Contents/lib/idea_rt.jar=54841:/Applications/IntelliJ IDEA.app/Contents/bin -Dfile.encoding=UTF-8 - * # Warmup: <none> - * # Measurement: 5 iterations, 10 s each - * # Timeout: 10 min per iteration - * # Threads: 1 thread, will synchronize iterations - * # Benchmark mode: Throughput, ops/time - * # Benchmark: org.apache.skywalking.apm.util.StringFormatGroupTest.testMatch - * - * # Run progress: 50.00% complete, ETA 00:00:50 - * # Fork: 1 of 1 - * Iteration 1: 1137158.205 ops/s - * Iteration 2: 1192936.161 ops/s - * Iteration 3: 1218773.403 ops/s - * Iteration 4: 1222966.452 ops/s - * Iteration 5: 1235609.354 ops/s - * - * - * Result "org.apache.skywalking.apm.util.StringFormatGroupTest.testMatch": - * 1201488.715 ±(99.9%) 150813.461 ops/s [Average] - * (min, avg, max) = (1137158.205, 1201488.715, 1235609.354), stdev = 39165.777 - * CI (99.9%): [1050675.254, 1352302.176] (assumes normal distribution) - * - * 
- * # Run complete. Total time: 00:01:41 - * - * REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on - * why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial - * experiments, perform baseline and negative tests that provide experimental control, make sure - * the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts. - * Do not assume the numbers tell you what you want them to tell. - * - * Benchmark Mode Cnt Score Error Units - * StringFormatGroupTest.test100Rule thrpt 5 35490.540 ± 8345.368 ops/s - * StringFormatGroupTest.testMatch thrpt 5 1201488.715 ± 150813.461 ops/s - * - */ -} diff --git a/oap-server/oal-rt/CLAUDE.md b/oap-server/oal-rt/CLAUDE.md index a40ec28db22c..d0c476589e55 100644 --- a/oap-server/oal-rt/CLAUDE.md +++ b/oap-server/oal-rt/CLAUDE.md @@ -88,7 +88,7 @@ model.getMetricsClassName(); // e.g., "LongAvgMetrics" ## Debug Output -When `SW_OAL_ENGINE_DEBUG=true` environment variable is set, generated `.class` files are written to disk for inspection: +When `SW_DYNAMIC_CLASS_ENGINE_DEBUG=true` environment variable is set, generated `.class` files are written to disk for inspection: ``` {skywalking}/oal-rt/ diff --git a/oap-server/oal-rt/src/main/java/org/apache/skywalking/oal/v2/generator/OALClassGeneratorV2.java b/oap-server/oal-rt/src/main/java/org/apache/skywalking/oal/v2/generator/OALClassGeneratorV2.java index 8cc9b414f273..25aefe9018d7 100644 --- a/oap-server/oal-rt/src/main/java/org/apache/skywalking/oal/v2/generator/OALClassGeneratorV2.java +++ b/oap-server/oal-rt/src/main/java/org/apache/skywalking/oal/v2/generator/OALClassGeneratorV2.java @@ -109,7 +109,7 @@ public OALClassGeneratorV2(OALDefine define) { * Constructor with custom ClassPool for test isolation. 
*/ public OALClassGeneratorV2(OALDefine define, ClassPool classPool) { - openEngineDebug = StringUtil.isNotEmpty(System.getenv("SW_OAL_ENGINE_DEBUG")); + openEngineDebug = StringUtil.isNotEmpty(System.getenv("SW_DYNAMIC_CLASS_ENGINE_DEBUG")); this.classPool = classPool; oalDefine = define; @@ -257,7 +257,9 @@ private Class generateMetricsClass(CodeGenModel model) throws OALCompileExceptio StringWriter methodEntity = new StringWriter(); try { configuration.getTemplate("metrics/" + method + ".ftl").process(model, methodEntity); - metricsClass.addMethod(CtNewMethod.make(methodEntity.toString(), metricsClass)); + javassist.CtMethod m = CtNewMethod.make(methodEntity.toString(), metricsClass); + metricsClass.addMethod(m); + addLineNumberTable(m, 1); } catch (Exception e) { log.error("Can't generate method " + method + " for " + className + ".", e); throw new OALCompileException(e.getMessage(), e); @@ -277,6 +279,8 @@ private Class generateMetricsClass(CodeGenModel model) throws OALCompileExceptio annotationsAttribute.addAnnotation(streamAnnotation); metricsClassClassFile.addAttribute(annotationsAttribute); + setSourceFile(metricsClass, formatSourceFileName(model, "Metrics")); + Class targetClass; try { targetClass = metricsClass.toClass(MetricClassPackageHolder.class); @@ -322,13 +326,17 @@ private void generateMetricsBuilderClass(CodeGenModel model) throws OALCompileEx configuration .getTemplate(storageBuilderFactory.builderTemplate().getTemplatePath() + "/" + method + ".ftl") .process(model, methodEntity); - metricsBuilderClass.addMethod(CtNewMethod.make(methodEntity.toString(), metricsBuilderClass)); + javassist.CtMethod m = CtNewMethod.make(methodEntity.toString(), metricsBuilderClass); + metricsBuilderClass.addMethod(m); + addLineNumberTable(m, 1); } catch (Exception e) { log.error("Can't generate method " + method + " for " + className + ".", e); throw new OALCompileException(e.getMessage(), e); } } + setSourceFile(metricsBuilderClass, formatSourceFileName(model, 
"MetricsBuilder")); + try { metricsBuilderClass.toClass(MetricBuilderClassPackageHolder.class); } catch (CannotCompileException e) { @@ -377,7 +385,9 @@ private Class generateDispatcherClass(String scopeName, DispatcherContextV2 disp StringWriter methodEntity = new StringWriter(); try { configuration.getTemplate("dispatcher/doMetrics.ftl").process(metric, methodEntity); - dispatcherClass.addMethod(CtNewMethod.make(methodEntity.toString(), dispatcherClass)); + javassist.CtMethod m = CtNewMethod.make(methodEntity.toString(), dispatcherClass); + dispatcherClass.addMethod(m); + addLineNumberTable(m, 1); } catch (Exception e) { log.error("Can't generate method do" + metric.getMetricsName() + " for " + className + ".", e); log.error("Method body: {}", methodEntity); @@ -389,12 +399,27 @@ private Class generateDispatcherClass(String scopeName, DispatcherContextV2 disp try { StringWriter methodEntity = new StringWriter(); configuration.getTemplate("dispatcher/dispatch.ftl").process(dispatcherContext, methodEntity); - dispatcherClass.addMethod(CtNewMethod.make(methodEntity.toString(), dispatcherClass)); + javassist.CtMethod m = CtNewMethod.make(methodEntity.toString(), dispatcherClass); + dispatcherClass.addMethod(m); + addLineNumberTable(m, 1); } catch (Exception e) { log.error("Can't generate method dispatch for " + className + ".", e); throw new OALCompileException(e.getMessage(), e); } + // Use first metric's location for dispatcher SourceFile + if (!dispatcherContext.getMetrics().isEmpty()) { + final CodeGenModel first = dispatcherContext.getMetrics().get(0); + final org.apache.skywalking.oal.v2.model.SourceLocation loc = + first.getMetricDefinition().getLocation(); + final String dispatcherFile = scopeName + "Dispatcher.java"; + if (loc != null && loc != org.apache.skywalking.oal.v2.model.SourceLocation.UNKNOWN) { + setSourceFile(dispatcherClass, "(" + loc.getFileName() + ")" + dispatcherFile); + } else { + setSourceFile(dispatcherClass, dispatcherFile); + } + } + 
Class targetClass; try { targetClass = dispatcherClass.toClass(DispatcherClassPackageHolder.class); @@ -437,6 +462,96 @@ public void prepareRTTempFolder() { } } + /** + * Builds the SourceFile name for a generated metrics/builder class. + * Format: {@code (core.oal:20)ServiceRespTime.java} when location is known, + * or {@code ServiceRespTime.java} as fallback. + */ + private String formatSourceFileName(final CodeGenModel model, final String classSuffix) { + final String classFile = model.getMetricsName() + classSuffix + ".java"; + final org.apache.skywalking.oal.v2.model.SourceLocation loc = + model.getMetricDefinition().getLocation(); + if (loc != null && loc != org.apache.skywalking.oal.v2.model.SourceLocation.UNKNOWN) { + return "(" + loc.getFileName() + ":" + loc.getLine() + ")" + classFile; + } + return classFile; + } + + /** + * Sets the {@code SourceFile} attribute of the class to the given name. + */ + private static void setSourceFile(final CtClass ctClass, final String name) { + try { + final javassist.bytecode.ClassFile cf = ctClass.getClassFile(); + final javassist.bytecode.AttributeInfo sf = cf.getAttribute("SourceFile"); + if (sf != null) { + final javassist.bytecode.ConstPool cp = cf.getConstPool(); + final int idx = cp.addUtf8Info(name); + sf.set(new byte[]{(byte) (idx >> 8), (byte) idx}); + } + } catch (Exception e) { + // best-effort + } + } + + /** + * Adds a {@code LineNumberTable} attribute by scanning bytecode for + * store instructions to local variable slots ≥ {@code firstResultSlot}. 
+ */ + private void addLineNumberTable(final javassist.CtMethod method, + final int firstResultSlot) { + try { + final javassist.bytecode.MethodInfo mi = method.getMethodInfo(); + final javassist.bytecode.CodeAttribute code = mi.getCodeAttribute(); + if (code == null) { + return; + } + + final java.util.ArrayList<int[]> entries = new java.util.ArrayList<>(); + int line = 1; + boolean nextIsNewLine = true; + + final javassist.bytecode.CodeIterator ci = code.iterator(); + while (ci.hasNext()) { + final int pc = ci.next(); + if (nextIsNewLine) { + entries.add(new int[]{pc, line++}); + nextIsNewLine = false; + } + final int op = ci.byteAt(pc) & 0xFF; + int slot = -1; + if (op >= 59 && op <= 78) { + slot = (op - 59) % 4; + } else if (op >= 54 && op <= 58) { + slot = ci.byteAt(pc + 1) & 0xFF; + } + if (slot >= firstResultSlot) { + nextIsNewLine = true; + } + } + + if (entries.isEmpty()) { + return; + } + + final javassist.bytecode.ConstPool cp = mi.getConstPool(); + final byte[] info = new byte[2 + entries.size() * 4]; + info[0] = (byte) (entries.size() >> 8); + info[1] = (byte) entries.size(); + for (int i = 0; i < entries.size(); i++) { + final int off = 2 + i * 4; + info[off] = (byte) (entries.get(i)[0] >> 8); + info[off + 1] = (byte) entries.get(i)[0]; + info[off + 2] = (byte) (entries.get(i)[1] >> 8); + info[off + 3] = (byte) entries.get(i)[1]; + } + code.getAttributes().add( + new javassist.bytecode.AttributeInfo(cp, "LineNumberTable", info)); + } catch (Exception e) { + log.warn("Failed to add LineNumberTable: {}", e.getMessage()); + } + } + private void writeGeneratedFile(CtClass ctClass, String type) throws OALCompileException { if (openEngineDebug) { String className = ctClass.getSimpleName(); diff --git a/oap-server/pom.xml b/oap-server/pom.xml index c457b9039172..93cc75452b97 100755 --- a/oap-server/pom.xml +++ b/oap-server/pom.xml @@ -51,15 +51,6 @@ <module>mqe-rt</module> </modules> - <profiles> - <profile> - <id>benchmark</id> - <modules> - 
<module>microbench</module> - </modules> - </profile> - </profiles> - <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> diff --git a/oap-server/server-alarm-plugin/pom.xml b/oap-server/server-alarm-plugin/pom.xml index b31548980aec..5fd7206274c3 100644 --- a/oap-server/server-alarm-plugin/pom.xml +++ b/oap-server/server-alarm-plugin/pom.xml @@ -58,6 +58,12 @@ <artifactId>armeria-junit5</artifactId> <scope>test</scope> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> <build> diff --git a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmCoreTest.java b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmCoreTest.java index 68a415d5d8d7..2919602b3035 100644 --- a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmCoreTest.java +++ b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmCoreTest.java @@ -22,7 +22,7 @@ import org.junit.jupiter.api.Assertions; import org.junit.jupiter.api.Test; import org.mockito.stubbing.Answer; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.ArrayList; import java.util.LinkedList; @@ -51,7 +51,7 @@ public void testTriggerTimePoint() throws InterruptedException { emptyRules.setRules(new ArrayList<>(0)); AlarmCore core = new AlarmCore(new AlarmRulesWatcher(emptyRules, null, null)); - Map<String, List<RunningRule>> runningContext = Whitebox.getInternalState(core, "runningContext"); + Map<String, List<RunningRule>> runningContext = ReflectUtil.getInternalState(core, "runningContext"); List<RunningRule> rules = new ArrayList<>(1); RunningRule mockRule = 
mock(RunningRule.class); diff --git a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmModuleProviderTest.java b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmModuleProviderTest.java index e1c35965f5bd..6eb3f3628c13 100644 --- a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmModuleProviderTest.java +++ b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/AlarmModuleProviderTest.java @@ -29,7 +29,7 @@ import org.apache.skywalking.oap.server.library.module.ModuleProvider; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import static org.junit.jupiter.api.Assertions.assertArrayEquals; import static org.junit.jupiter.api.Assertions.assertEquals; @@ -73,7 +73,7 @@ public void notifyAfterCompleted() throws Exception { NotifyHandler handler = mock(NotifyHandler.class); - Whitebox.setInternalState(moduleProvider, "notifyHandler", handler); + ReflectUtil.setInternalState(moduleProvider, "notifyHandler", handler); moduleProvider.notifyAfterCompleted(); } diff --git a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/NotifyHandlerTest.java b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/NotifyHandlerTest.java index 0e3583a1ecd5..3c80dbd7c5fd 100644 --- a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/NotifyHandlerTest.java +++ b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/NotifyHandlerTest.java @@ -43,7 +43,7 @@ import org.mockito.junit.jupiter.MockitoExtension; import org.mockito.junit.jupiter.MockitoSettings; import 
org.mockito.quality.Strictness; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.List; @@ -289,7 +289,7 @@ public void doAlarmRecovery(List<AlarmMessage> alarmResolvedMessages) throws Exc when(core.findRunningRule(anyString())).thenReturn(Lists.newArrayList(rule)); - Whitebox.setInternalState(notifyHandler, "core", core); + ReflectUtil.setInternalState(notifyHandler, "core", core); } public abstract static class MockMetrics extends Metrics implements WithMetadata { diff --git a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/RunningRuleTest.java b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/RunningRuleTest.java index 69d425cc43fc..0b7c3b8f090f 100644 --- a/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/RunningRuleTest.java +++ b/oap-server/server-alarm-plugin/src/test/java/org/apache/skywalking/oap/server/core/alarm/provider/RunningRuleTest.java @@ -42,7 +42,7 @@ import org.junit.jupiter.api.Assertions; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.HashMap; import java.util.LinkedList; @@ -88,12 +88,12 @@ public void testInitAndStart() throws IllegalExpressionException { runningRule.in(getMetaInAlarm(123), getMetrics(timeInPeriod1, 70)); - Map<AlarmEntity, RunningRule.Window> windows = Whitebox.getInternalState(runningRule, "windows"); + Map<AlarmEntity, RunningRule.Window> windows = ReflectUtil.getInternalState(runningRule, "windows"); RunningRule.Window window = windows.get(getAlarmEntity(123)); - LocalDateTime endTime = Whitebox.getInternalState(window, "endTime"); - int additionalPeriod = Whitebox.getInternalState(window, "additionalPeriod"); - LinkedList<Metrics> metricsBuffer = 
Whitebox.getInternalState(window, "values"); + LocalDateTime endTime = ReflectUtil.getInternalState(window, "endTime"); + int additionalPeriod = ReflectUtil.getInternalState(window, "additionalPeriod"); + LinkedList<Metrics> metricsBuffer = ReflectUtil.getInternalState(window, "values"); Assertions.assertTrue(targetTime.equals(endTime.toDateTime())); Assertions.assertEquals(5, additionalPeriod); diff --git a/oap-server/server-cluster-plugin/cluster-consul-plugin/pom.xml b/oap-server/server-cluster-plugin/cluster-consul-plugin/pom.xml index 549696a697e7..27fcdc9bb31f 100644 --- a/oap-server/server-cluster-plugin/cluster-consul-plugin/pom.xml +++ b/oap-server/server-cluster-plugin/cluster-consul-plugin/pom.xml @@ -48,5 +48,11 @@ </exclusion> </exclusions> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderFunctionalIT.java b/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderFunctionalIT.java index 7905979c6b90..03685436a247 100644 --- a/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderFunctionalIT.java +++ b/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderFunctionalIT.java @@ -41,7 +41,7 @@ import org.mockito.Mock; import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import 
org.testcontainers.containers.GenericContainer; import org.testcontainers.containers.wait.strategy.Wait; import org.testcontainers.junit.jupiter.Container; @@ -79,7 +79,7 @@ public void before() { Mockito.when(telemetryProvider.getService(MetricsCreator.class)) .thenReturn(new MetricsCreatorNoop()); TelemetryModule telemetryModule = Mockito.spy(TelemetryModule.class); - Whitebox.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); + ReflectUtil.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); Mockito.when(moduleManager.find(TelemetryModule.NAME)).thenReturn(telemetryModule); consulAddress = container.getHost() + ":" + container.getMappedPort(8500); } @@ -220,7 +220,7 @@ public void unregisterRemoteOfCluster() throws Exception { assertEquals(2, queryRemoteNodes(providerB, 2).size()); // unregister A - Consul client = Whitebox.getInternalState(providerA, "client"); + Consul client = ReflectUtil.getInternalState(providerA, "client"); AgentClient agentClient = client.agentClient(); agentClient.deregister(instanceA.getAddress().toString()); diff --git a/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderTest.java b/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderTest.java index 6316b8948f29..c6112aef5bae 100644 --- a/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderTest.java +++ b/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ClusterModuleConsulProviderTest.java @@ -35,7 +35,7 @@ import org.mockito.MockedStatic; import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import 
org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.Collection; import java.util.List; @@ -64,9 +64,9 @@ public class ClusterModuleConsulProviderTest { @BeforeEach public void before() { TelemetryModule telemetryModule = Mockito.spy(TelemetryModule.class); - Whitebox.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); + ReflectUtil.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); provider.setManager(moduleManager); - Whitebox.setInternalState(provider, "config", new ClusterModuleConsulConfig()); + ReflectUtil.setInternalState(provider, "config", new ClusterModuleConsulConfig()); } @Test @@ -89,7 +89,7 @@ public void prepareWithNonHost() throws Exception { public void prepare() throws Exception { ClusterModuleConsulConfig consulConfig = new ClusterModuleConsulConfig(); consulConfig.setHostPort("10.0.0.1:1000,10.0.0.2:1001"); - Whitebox.setInternalState(provider, "config", consulConfig); + ReflectUtil.setInternalState(provider, "config", consulConfig); Consul consulClient = mock(Consul.class); Consul.Builder builder = mock(Consul.Builder.class); @@ -119,7 +119,7 @@ public void prepare() throws Exception { public void prepareSingle() throws Exception { ClusterModuleConsulConfig consulConfig = new ClusterModuleConsulConfig(); consulConfig.setHostPort("10.0.0.1:1000"); - Whitebox.setInternalState(provider, "config", consulConfig); + ReflectUtil.setInternalState(provider, "config", consulConfig); Consul consulClient = mock(Consul.class); Consul.Builder builder = mock(Consul.Builder.class); diff --git a/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ConsulCoordinatorTest.java b/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ConsulCoordinatorTest.java index fe8662f3f418..b2d94e39c07d 100644 --- 
a/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ConsulCoordinatorTest.java +++ b/oap-server/server-cluster-plugin/cluster-consul-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/consul/ConsulCoordinatorTest.java @@ -32,7 +32,7 @@ import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.mockito.ArgumentCaptor; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.Collections; import java.util.LinkedList; @@ -71,7 +71,7 @@ public void setUp() { consulConfig.setServiceName(SERVICE_NAME); ModuleDefineHolder manager = mock(ModuleDefineHolder.class); coordinator = new ConsulCoordinator(manager, consulConfig, consul); - Whitebox.setInternalState(coordinator, "healthChecker", healthChecker); + ReflectUtil.setInternalState(coordinator, "healthChecker", healthChecker); consulResponse = mock(ConsulResponse.class); HealthClient healthClient = mock(HealthClient.class); diff --git a/oap-server/server-cluster-plugin/cluster-etcd-plugin/pom.xml b/oap-server/server-cluster-plugin/cluster-etcd-plugin/pom.xml index ce2d3ab655b6..810ab6f64f24 100644 --- a/oap-server/server-cluster-plugin/cluster-etcd-plugin/pom.xml +++ b/oap-server/server-cluster-plugin/cluster-etcd-plugin/pom.xml @@ -80,5 +80,11 @@ <groupId>org.yaml</groupId> <artifactId>snakeyaml</artifactId> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterEtcdPluginIT.java b/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterEtcdPluginIT.java index 
79941b6a23cb..8404e7555976 100644 --- a/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterEtcdPluginIT.java +++ b/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterEtcdPluginIT.java @@ -29,7 +29,7 @@ import org.apache.skywalking.oap.server.telemetry.api.HealthCheckMetrics; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import org.testcontainers.containers.GenericContainer; import org.testcontainers.containers.wait.strategy.Wait; import org.testcontainers.junit.jupiter.Container; @@ -89,8 +89,8 @@ public void before() throws Exception { ModuleDefineHolder manager = mock(ModuleDefineHolder.class); coordinator = new EtcdCoordinator(manager, etcdConfig); - client = Whitebox.getInternalState(coordinator, "client"); - Whitebox.setInternalState(coordinator, "healthChecker", healthChecker); + client = ReflectUtil.getInternalState(coordinator, "client"); + ReflectUtil.setInternalState(coordinator, "healthChecker", healthChecker); } @Test diff --git a/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterModuleEtcdProviderFunctionalIT.java b/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterModuleEtcdProviderFunctionalIT.java index 5192e4ca3cce..ea1f9aecdf97 100644 --- a/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterModuleEtcdProviderFunctionalIT.java +++ b/oap-server/server-cluster-plugin/cluster-etcd-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/etcd/ClusterModuleEtcdProviderFunctionalIT.java @@ -36,7 +36,7 @@ import 
org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.mockito.Mockito; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import org.testcontainers.containers.GenericContainer; import org.testcontainers.containers.wait.strategy.Wait; import org.testcontainers.junit.jupiter.Container; @@ -211,7 +211,7 @@ public void unregisterRemoteOfCluster() throws Exception { assertEquals(2, queryRemoteNodes(providerB, 2).size()); // unregister A - Client client = Whitebox.getInternalState(coordinatorA, "client"); + Client client = ReflectUtil.getInternalState(coordinatorA, "client"); client.close(); // only B @@ -247,7 +247,7 @@ private ClusterModuleEtcdProvider createProvider(String serviceName, String inte config.setInternalComPort(internalComPort); } TelemetryModule telemetryModule = Mockito.spy(TelemetryModule.class); - Whitebox.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); + ReflectUtil.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); ModuleManager manager = mock(ModuleManager.class); Mockito.when(manager.find(TelemetryModule.NAME)).thenReturn(telemetryModule); diff --git a/oap-server/server-cluster-plugin/cluster-kubernetes-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/kubernetes/ClusterModuleKubernetesProviderTest.java b/oap-server/server-cluster-plugin/cluster-kubernetes-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/kubernetes/ClusterModuleKubernetesProviderTest.java index 11d5da56b1f2..13f35df6b7cb 100644 --- a/oap-server/server-cluster-plugin/cluster-kubernetes-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/kubernetes/ClusterModuleKubernetesProviderTest.java +++ b/oap-server/server-cluster-plugin/cluster-kubernetes-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/kubernetes/ClusterModuleKubernetesProviderTest.java @@ -29,7 +29,7 @@ import 
org.mockito.Mock; import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import static org.junit.jupiter.api.Assertions.assertArrayEquals; import static org.junit.jupiter.api.Assertions.assertEquals; @@ -49,9 +49,9 @@ public void before() { final var config = new ClusterModuleKubernetesConfig(); config.setLabelSelector("app=oap"); TelemetryModule telemetryModule = Mockito.spy(TelemetryModule.class); - Whitebox.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); + ReflectUtil.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); provider.setManager(moduleManager); - Whitebox.setInternalState(provider, "config", config); + ReflectUtil.setInternalState(provider, "config", config); } @Test diff --git a/oap-server/server-cluster-plugin/cluster-nacos-plugin/pom.xml b/oap-server/server-cluster-plugin/cluster-nacos-plugin/pom.xml index 8e37fc189f46..f2a0242ba464 100644 --- a/oap-server/server-cluster-plugin/cluster-nacos-plugin/pom.xml +++ b/oap-server/server-cluster-plugin/cluster-nacos-plugin/pom.xml @@ -41,5 +41,11 @@ <groupId>org.apache.httpcomponents</groupId> <artifactId>httpcore-nio</artifactId> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/ClusterModuleNacosProviderFunctionalIT.java b/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/ClusterModuleNacosProviderFunctionalIT.java index 322d87de3e62..839fe7ad308f 100644 --- 
a/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/ClusterModuleNacosProviderFunctionalIT.java +++ b/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/ClusterModuleNacosProviderFunctionalIT.java @@ -43,7 +43,7 @@ import org.mockito.Mock; import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import org.testcontainers.containers.GenericContainer; import org.testcontainers.containers.wait.strategy.Wait; import org.testcontainers.junit.jupiter.Container; @@ -78,7 +78,7 @@ public void before() { Mockito.when(telemetryProvider.getService(MetricsCreator.class)) .thenReturn(new MetricsCreatorNoop()); TelemetryModule telemetryModule = Mockito.spy(TelemetryModule.class); - Whitebox.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); + ReflectUtil.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); Mockito.when(moduleManager.find(TelemetryModule.NAME)).thenReturn(telemetryModule); nacosAddress = container.getHost() + ":" + container.getMappedPort(8848); Integer nacosPortOffset = container.getMappedPort(9848) - container.getMappedPort(8848); @@ -219,7 +219,7 @@ public void deregisterRemoteOfCluster() throws Exception { assertEquals(2, queryRemoteNodes(providerB, 2).size()); // deregister A - NamingService namingServiceA = Whitebox.getInternalState(coordinatorA, "namingService"); + NamingService namingServiceA = ReflectUtil.getInternalState(coordinatorA, "namingService"); namingServiceA.deregisterInstance(serviceName, addressA.getHost(), addressA.getPort()); // only B diff --git a/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/NacosCoordinatorTest.java 
b/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/NacosCoordinatorTest.java index 326dd7e15a2d..f0e7ec4b30a6 100644 --- a/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/NacosCoordinatorTest.java +++ b/oap-server/server-cluster-plugin/cluster-nacos-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/nacos/NacosCoordinatorTest.java @@ -28,7 +28,7 @@ import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.mockito.ArgumentCaptor; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.ArrayList; import java.util.Collections; @@ -60,7 +60,7 @@ public void setUp() throws NacosException { nacosConfig.setServiceName(SERVICE_NAME); ModuleDefineHolder manager = mock(ModuleDefineHolder.class); coordinator = new NacosCoordinator(manager, namingService, nacosConfig); - Whitebox.setInternalState(coordinator, "healthChecker", healthChecker); + ReflectUtil.setInternalState(coordinator, "healthChecker", healthChecker); } @Test diff --git a/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/pom.xml b/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/pom.xml index c6b2f4328049..86ef4215fa84 100644 --- a/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/pom.xml +++ b/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/pom.xml @@ -46,5 +46,11 @@ <groupId>org.apache.curator</groupId> <artifactId>curator-test</artifactId> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git 
a/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ClusterModuleZookeeperProviderFunctionalIT.java b/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ClusterModuleZookeeperProviderFunctionalIT.java index 171f125aa5b4..beac2532d5b6 100644 --- a/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ClusterModuleZookeeperProviderFunctionalIT.java +++ b/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ClusterModuleZookeeperProviderFunctionalIT.java @@ -41,7 +41,7 @@ import org.mockito.Mock; import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import org.testcontainers.containers.GenericContainer; import org.testcontainers.containers.wait.strategy.Wait; import org.testcontainers.junit.jupiter.Container; @@ -74,7 +74,7 @@ public void init() { Mockito.when(telemetryProvider.getService(MetricsCreator.class)) .thenReturn(new MetricsCreatorNoop()); TelemetryModule telemetryModule = Mockito.spy(TelemetryModule.class); - Whitebox.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); + ReflectUtil.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); Mockito.when(moduleManager.find(TelemetryModule.NAME)).thenReturn(telemetryModule); zkAddress = container.getHost() + ":" + container.getMappedPort(2181); } @@ -210,7 +210,7 @@ public void unregisterRemoteOfCluster() throws Exception { assertEquals(2, queryRemoteNodes(providerB, 2).size()); // unregister A - ServiceDiscovery<RemoteInstance> discoveryA = Whitebox.getInternalState(providerA, "serviceDiscovery"); + ServiceDiscovery<RemoteInstance> 
discoveryA = ReflectUtil.getInternalState(providerA, "serviceDiscovery"); discoveryA.close(); // only B diff --git a/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ZookeeperCoordinatorTest.java b/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ZookeeperCoordinatorTest.java index d2a883289593..6bfe608640d0 100644 --- a/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ZookeeperCoordinatorTest.java +++ b/oap-server/server-cluster-plugin/cluster-zookeeper-plugin/src/test/java/org/apache/skywalking/oap/server/cluster/plugin/zookeeper/ZookeeperCoordinatorTest.java @@ -30,7 +30,7 @@ import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.mockito.ArgumentCaptor; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import static org.junit.jupiter.api.Assertions.assertEquals; import static org.junit.jupiter.api.Assertions.assertTrue; @@ -69,7 +69,7 @@ public void setUp() throws Exception { doNothing().when(healthChecker).health(); ModuleDefineHolder manager = mock(ModuleDefineHolder.class); coordinator = new ZookeeperCoordinator(manager, config, serviceDiscovery); - Whitebox.setInternalState(coordinator, "healthChecker", healthChecker); + ReflectUtil.setInternalState(coordinator, "healthChecker", healthChecker); } @Test diff --git a/oap-server/server-core/pom.xml b/oap-server/server-core/pom.xml index 056f3c9fa3d0..484507e63bd0 100644 --- a/oap-server/server-core/pom.xml +++ b/oap-server/server-core/pom.xml @@ -104,12 +104,13 @@ <scope>test</scope> </dependency> <dependency> - <groupId>io.zipkin.zipkin2</groupId> - <artifactId>zipkin</artifactId> + <groupId>org.openjdk.jmh</groupId> + <artifactId>jmh-generator-annprocess</artifactId> + 
<scope>test</scope> </dependency> <dependency> - <groupId>org.apache.groovy</groupId> - <artifactId>groovy</artifactId> + <groupId>io.zipkin.zipkin2</groupId> + <artifactId>zipkin</artifactId> </dependency> </dependencies> diff --git a/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/config/HierarchyDefinitionService.java b/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/config/HierarchyDefinitionService.java index fbec34cf6cb4..9a114f84415f 100644 --- a/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/config/HierarchyDefinitionService.java +++ b/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/config/HierarchyDefinitionService.java @@ -18,55 +18,132 @@ package org.apache.skywalking.oap.server.core.config; -import groovy.lang.Closure; -import groovy.lang.GroovyShell; import java.io.FileNotFoundException; import java.io.Reader; import java.util.HashMap; import java.util.Map; +import java.util.ServiceLoader; +import java.util.function.BiFunction; import lombok.Getter; import lombok.extern.slf4j.Slf4j; import org.apache.skywalking.oap.server.core.CoreModuleConfig; import org.apache.skywalking.oap.server.core.UnexpectedException; import org.apache.skywalking.oap.server.core.analysis.Layer; +import org.apache.skywalking.oap.server.core.query.type.Service; import org.apache.skywalking.oap.server.library.util.ResourceUtils; import org.yaml.snakeyaml.Yaml; import static java.util.stream.Collectors.toMap; +/** + * Loads hierarchy definitions from {@code hierarchy-definition.yml} and compiles + * matching rules into executable {@code BiFunction<Service, Service, Boolean>} + * matchers via a pluggable {@link HierarchyRuleProvider} (discovered through Java SPI). 
+ * + * <p>Initialization (at startup, in CoreModuleProvider): + * <ol> + * <li>Reads {@code hierarchy-definition.yml} containing three sections: + * {@code hierarchy} (layer-to-lower-layer mapping with rule names), + * {@code auto-matching-rules} (rule name to expression string), + * and {@code layer-levels} (layer to numeric level).</li> + * <li>Discovers a {@link HierarchyRuleProvider} via {@code ServiceLoader} + * (e.g., {@code CompiledHierarchyRuleProvider} from the hierarchy module).</li> + * <li>Calls {@link HierarchyRuleProvider#buildRules} which compiles each rule + * expression (e.g., {@code "{ (u, l) -> u.name == l.name }"}) into a + * {@code BiFunction} via ANTLR4 + Javassist.</li> + * <li>Wraps each compiled matcher in a {@link MatchingRule} and maps them + * to the layer hierarchy structure.</li> + * <li>Validates all layers exist in the {@code Layer} enum and that upper + * layers have higher level numbers than their lower layers.</li> + * </ol> + * + * <p>The resulting {@link #getHierarchyDefinition()} map is consumed by + * {@link org.apache.skywalking.oap.server.core.hierarchy.HierarchyService} + * for runtime service matching. + */ @Slf4j public class HierarchyDefinitionService implements org.apache.skywalking.oap.server.library.module.Service { + /** + * Functional interface for building hierarchy matching rules. + * Discovered via Java SPI ({@code ServiceLoader}). 
+ */ + @FunctionalInterface + public interface HierarchyRuleProvider { + Map<String, BiFunction<Service, Service, Boolean>> buildRules(Map<String, String> ruleExpressions); + } + @Getter private final Map<String, Map<String, MatchingRule>> hierarchyDefinition; @Getter private Map<String, Integer> layerLevels; private Map<String, MatchingRule> matchingRules; - public HierarchyDefinitionService(CoreModuleConfig moduleConfig) { + public HierarchyDefinitionService(final CoreModuleConfig moduleConfig, + final HierarchyRuleProvider ruleProvider) { this.hierarchyDefinition = new HashMap<>(); this.layerLevels = new HashMap<>(); if (moduleConfig.isEnableHierarchy()) { - this.init(); + this.init(ruleProvider); this.checkLayers(); } } + /** + * Convenience constructor that discovers a {@link HierarchyRuleProvider} + * via Java SPI ({@code ServiceLoader}). Only loads the provider when + * hierarchy is enabled. + */ + public HierarchyDefinitionService(final CoreModuleConfig moduleConfig) { + this.hierarchyDefinition = new HashMap<>(); + this.layerLevels = new HashMap<>(); + if (moduleConfig.isEnableHierarchy()) { + this.init(loadProvider()); + this.checkLayers(); + } + } + + /** + * Discovers a {@link HierarchyRuleProvider} via Java SPI. The provider is registered in + * {@code META-INF/services/...HierarchyDefinitionService$HierarchyRuleProvider} by the + * hierarchy analyzer module ({@code CompiledHierarchyRuleProvider}). + * Takes the first provider found; fails fast if none is on the classpath. + */ + private static HierarchyRuleProvider loadProvider() { + final ServiceLoader<HierarchyRuleProvider> loader = + ServiceLoader.load(HierarchyRuleProvider.class); + for (final HierarchyRuleProvider provider : loader) { + log.info("Using hierarchy rule provider: {}", provider.getClass().getName()); + return provider; + } + throw new IllegalStateException( + "No HierarchyRuleProvider found on classpath. 
" + + "Ensure the hierarchy analyzer module is included."); + } + @SuppressWarnings("unchecked") - private void init() { + private void init(final HierarchyRuleProvider ruleProvider) { try { - Reader applicationReader = ResourceUtils.read("hierarchy-definition.yml"); - Yaml yaml = new Yaml(); - Map<String, Map> config = yaml.loadAs(applicationReader, Map.class); - Map<String, Map<String, String>> hierarchy = (Map<String, Map<String, String>>) config.get("hierarchy"); - Map<String, String> matchingRules = (Map<String, String>) config.get("auto-matching-rules"); + final Reader applicationReader = ResourceUtils.read("hierarchy-definition.yml"); + final Yaml yaml = new Yaml(); + final Map<String, Map> config = yaml.loadAs(applicationReader, Map.class); + final Map<String, Map<String, String>> hierarchy = (Map<String, Map<String, String>>) config.get("hierarchy"); + final Map<String, String> ruleExpressions = (Map<String, String>) config.get("auto-matching-rules"); this.layerLevels = (Map<String, Integer>) config.get("layer-levels"); - this.matchingRules = matchingRules.entrySet().stream().map(entry -> { - MatchingRule matchingRule = new MatchingRule(entry.getKey(), entry.getValue()); + + final Map<String, BiFunction<Service, Service, Boolean>> builtRules = ruleProvider.buildRules(ruleExpressions); + + this.matchingRules = ruleExpressions.entrySet().stream().map(entry -> { + final BiFunction<Service, Service, Boolean> matcher = builtRules.get(entry.getKey()); + if (matcher == null) { + throw new IllegalStateException( + "HierarchyRuleProvider did not produce a matcher for rule: " + entry.getKey()); + } + final MatchingRule matchingRule = new MatchingRule(entry.getKey(), entry.getValue(), matcher); return Map.entry(entry.getKey(), matchingRule); }).collect(toMap(Map.Entry::getKey, Map.Entry::getValue)); hierarchy.forEach((layer, lowerLayers) -> { - Map<String, MatchingRule> rules = new HashMap<>(); + final Map<String, MatchingRule> rules = new HashMap<>(); 
lowerLayers.forEach((lowerLayer, ruleName) -> { rules.put(lowerLayer, this.matchingRules.get(ruleName)); }); @@ -85,14 +162,14 @@ private void checkLayers() { } }); this.hierarchyDefinition.forEach((layer, lowerLayers) -> { - Integer layerLevel = this.layerLevels.get(layer); + final Integer layerLevel = this.layerLevels.get(layer); if (this.layerLevels.get(layer) == null) { throw new IllegalArgumentException( "hierarchy-definition.yml layer-levels: " + layer + " is not defined"); } - for (String lowerLayer : lowerLayers.keySet()) { - Integer lowerLayerLevel = this.layerLevels.get(lowerLayer); + for (final String lowerLayer : lowerLayers.keySet()) { + final Integer lowerLayerLevel = this.layerLevels.get(lowerLayer); if (lowerLayerLevel == null) { throw new IllegalArgumentException( "hierarchy-definition.yml layer-levels: " + lowerLayer + " is not defined."); @@ -109,14 +186,31 @@ private void checkLayers() { public static class MatchingRule { private final String name; private final String expression; - private final Closure<Boolean> closure; + private final BiFunction<Service, Service, Boolean> matcher; - @SuppressWarnings("unchecked") - public MatchingRule(final String name, final String expression) { + public MatchingRule(final String name, final String expression, + final BiFunction<Service, Service, Boolean> matcher) { this.name = name; this.expression = expression; - GroovyShell sh = new GroovyShell(); - closure = (Closure<Boolean>) sh.evaluate(expression); + this.matcher = matcher; + } + + /** + * Evaluates the compiled matching rule against two services. + * The {@code matcher} is a Javassist-generated {@code BiFunction} compiled + * from the expression in {@code hierarchy-definition.yml} at startup. + * If the expression throws at runtime (e.g., NPE from null shortName), + * the exception propagates to the caller + * ({@link org.apache.skywalking.oap.server.core.hierarchy.HierarchyService}). 
+ */ + public boolean match(final Service upper, final Service lower) { + if (log.isDebugEnabled()) { + log.debug("[Hierarchy] rule={}, class={}, upper=[{}, {}], lower=[{}, {}]", + name, matcher.getClass().getName(), + upper.getName(), upper.getShortName(), + lower.getName(), lower.getShortName()); + } + return matcher.apply(upper, lower); } } } diff --git a/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/hierarchy/HierarchyService.java b/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/hierarchy/HierarchyService.java index 02e4014229ae..6f18f28995ea 100644 --- a/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/hierarchy/HierarchyService.java +++ b/oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/hierarchy/HierarchyService.java @@ -38,6 +38,33 @@ import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.library.util.RunnableWithExceptionProtection; +/** + * Runtime service that builds hierarchy relations between services and instances. + * + * <p>Uses the compiled matching rules from + * {@link HierarchyDefinitionService} to determine if two services are + * hierarchically related (e.g., a MESH service sitting above a K8S_SERVICE). 
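The `MatchingRule.match` change above swaps the former Groovy `Closure.call` for a plain `BiFunction.apply`. A minimal, self-contained sketch of that shape — using a hypothetical `Svc` stand-in for the real `Service` type, and a hand-written lambda in place of the Javassist-generated matcher (neither of which is shown in this diff):

```java
import java.util.function.BiFunction;

public class MatchingRuleSketch {
    // Hypothetical stand-in for the OAP Service type; the real class carries more fields.
    static final class Svc {
        final String name;
        final String shortName;

        Svc(String name, String shortName) {
            this.name = name;
            this.shortName = shortName;
        }
    }

    public static void main(String[] args) {
        // A rule expression like "{ (u, l) -> u.shortName == l.shortName }" becomes,
        // conceptually, a plain BiFunction once the rule provider compiles it.
        // This lambda is a hand-written illustration, not the generated class.
        BiFunction<Svc, Svc, Boolean> shortNameRule =
            (upper, lower) -> upper.shortName.equals(lower.shortName);

        Svc mesh = new Svc("mesh::gateway", "gateway");
        Svc k8s = new Svc("k8s::gateway", "gateway");

        // MatchingRule.match(upper, lower) simply delegates to matcher.apply(upper, lower).
        System.out.println(shortNameRule.apply(mesh, k8s)); // prints "true"
    }
}
```

The design benefit the diff is after: the matcher's type is checked at compile time and any runtime exception (e.g. an NPE from a null `shortName`) propagates to the caller, rather than surfacing from inside an interpreted Groovy closure.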
+ * + * <p>Two paths for creating relations: + * <ol> + * <li><b>Explicit</b> (from agent telemetry): receivers call + * {@link #toServiceHierarchyRelation} or {@link #toInstanceHierarchyRelation} + * when agents report detected service-to-service relationships.</li> + * <li><b>Auto-matching</b> (scheduled background task): {@link #startAutoMatchingServiceHierarchy()} + * starts a background task that runs every 20 seconds, comparing all known + * service pairs against the compiled hierarchy rules: + * <ul> + * <li>Retrieves all services from {@link MetadataQueryService}.</li> + * <li>For each pair (i, j), checks if hierarchy rules exist for + * layer[i]→layer[j] or layer[j]→layer[i].</li> + * <li>Invokes {@link HierarchyDefinitionService.MatchingRule#match} + * which executes the compiled {@code BiFunction}.</li> + * <li>If matched, creates a {@link ServiceHierarchyRelation} and sends it + * to {@link SourceReceiver} for persistence.</li> + * </ul> + * </li> + * </ol> + */ @Slf4j public class HierarchyService implements org.apache.skywalking.oap.server.library.module.Service { private final ModuleManager moduleManager; @@ -199,8 +226,7 @@ private void autoMatchingServiceRelation() { if (lowerLayers != null && lowerLayers.get(comparedServiceLayer) != null) { try { if (lowerLayers.get(comparedServiceLayer) - .getClosure() - .call(service, comparedService)) { + .match(service, comparedService)) { autoMatchingServiceRelation(service.getName(), Layer.nameOf(serviceLayer), comparedService.getName(), Layer.nameOf(comparedServiceLayer) @@ -208,7 +234,7 @@ private void autoMatchingServiceRelation() { } } catch (Throwable e) { log.error( - "Auto matching service hierarchy from service traffic failure. Upper layer {}, lower layer {}, closure{}", + "Auto matching service hierarchy from service traffic failure. 
Upper layer {}, lower layer {}, rule {}", serviceLayer, comparedServiceLayer, lowerLayers.get(comparedServiceLayer).getExpression(), e @@ -218,8 +244,7 @@ private void autoMatchingServiceRelation() { } else if (comparedLowerLayers != null && comparedLowerLayers.get(serviceLayer) != null) { try { if (comparedLowerLayers.get(serviceLayer) - .getClosure() - .call(comparedService, service)) { + .match(comparedService, service)) { autoMatchingServiceRelation( comparedService.getName(), Layer.nameOf(comparedServiceLayer), @@ -229,7 +254,7 @@ private void autoMatchingServiceRelation() { } } catch (Throwable e) { log.error( - "Auto matching service hierarchy from service traffic failure. Upper layer {}, lower layer {}, closure{}", + "Auto matching service hierarchy from service traffic failure. Upper layer {}, lower layer {}, rule {}", comparedServiceLayer, serviceLayer, comparedLowerLayers.get(serviceLayer).getExpression(), e diff --git a/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/config/group/openapi/EndpointGroupingBenchmark4Openapi.java b/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/config/group/openapi/EndpointGroupingBenchmark4Openapi.java new file mode 100644 index 000000000000..15c1e7a62306 --- /dev/null +++ b/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/config/group/openapi/EndpointGroupingBenchmark4Openapi.java @@ -0,0 +1,110 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.server.core.config.group.openapi; + +import org.apache.skywalking.oap.server.library.util.StringFormatGroup.FormatResult; + +import java.util.Collections; +import java.util.Map; + +import org.junit.jupiter.api.Test; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Threads; +import org.openjdk.jmh.annotations.Warmup; +import org.openjdk.jmh.infra.Blackhole; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.options.OptionsBuilder; + +@Warmup(iterations = 10) +@Measurement(iterations = 10) +@Fork(2) +@State(Scope.Thread) +@BenchmarkMode({Mode.Throughput}) +@Threads(4) +public class EndpointGroupingBenchmark4Openapi { + private static final String APT_TEST_DATA = " /products1/{id}/%d:\n" + " get:\n" + " post:\n" + + " /products2/{id}/%d:\n" + " get:\n" + " post:\n" + + " /products3/{id}/%d:\n" + " get:\n"; + + private static Map<String, String> createTestFile(int size) { + StringBuilder stringBuilder = new StringBuilder(); + stringBuilder.append("paths:\n"); + for (int i = 0; i <= size; i++) { + stringBuilder.append(String.format(APT_TEST_DATA, i, i, i)); + } + return Collections.singletonMap("whatever", stringBuilder.toString()); + } + + @State(Scope.Benchmark) + public static class 
FormatClassPaths20 { + private final EndpointGroupingRule4Openapi rule = new EndpointGroupingRuleReader4Openapi(createTestFile(3)).read(); + + public FormatResult format(String serviceName, String endpointName) { + return rule.format(serviceName, endpointName); + } + } + + @State(Scope.Benchmark) + public static class FormatClassPaths50 { + private final EndpointGroupingRule4Openapi rule = new EndpointGroupingRuleReader4Openapi(createTestFile(9)).read(); + + public FormatResult format(String serviceName, String endpointName) { + return rule.format(serviceName, endpointName); + } + } + + @State(Scope.Benchmark) + public static class FormatClassPaths200 { + private final EndpointGroupingRule4Openapi rule = new EndpointGroupingRuleReader4Openapi(createTestFile(39)).read(); + + public FormatResult format(String serviceName, String endpointName) { + return rule.format(serviceName, endpointName); + } + } + + @Benchmark + public void formatEndpointNameMatchedPaths20(Blackhole bh, FormatClassPaths20 formatClass) { + bh.consume(formatClass.format("serviceA", "GET:/products1/123")); + } + + @Benchmark + public void formatEndpointNameMatchedPaths50(Blackhole bh, FormatClassPaths50 formatClass) { + bh.consume(formatClass.format("serviceA", "GET:/products1/123")); + } + + @Benchmark + public void formatEndpointNameMatchedPaths200(Blackhole bh, FormatClassPaths200 formatClass) { + bh.consume(formatClass.format("serviceA", "GET:/products1/123")); + } + + @Test + public void run() throws Exception { + new Runner(new OptionsBuilder() + .include(".*" + getClass().getSimpleName() + ".*") + .jvmArgsAppend("-Xmx512m", "-Xms512m") + .build()).run(); + } + +} diff --git a/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/config/group/uri/RegexVSQuickMatchBenchmark.java b/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/config/group/uri/RegexVSQuickMatchBenchmark.java new file mode 100644 index 000000000000..d4525bfe5aa0 --- /dev/null +++ 
b/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/config/group/uri/RegexVSQuickMatchBenchmark.java @@ -0,0 +1,135 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.server.core.config.group.uri; + +import org.apache.skywalking.oap.server.core.config.group.EndpointGroupingRule; +import org.apache.skywalking.oap.server.core.config.group.uri.quickmatch.QuickUriGroupingRule; +import org.apache.skywalking.oap.server.library.util.StringFormatGroup; +import org.junit.jupiter.api.Test; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Threads; +import org.openjdk.jmh.annotations.Warmup; +import org.openjdk.jmh.infra.Blackhole; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.options.OptionsBuilder; + +@Warmup(iterations = 1) +@Measurement(iterations = 1) +@Fork(1) +@State(Scope.Thread) +@BenchmarkMode({Mode.Throughput}) +@Threads(4) 
+public class RegexVSQuickMatchBenchmark { + + @State(Scope.Benchmark) + public static class RegexMatch { + private final EndpointGroupingRule rule = new EndpointGroupingRule(); + + public RegexMatch() { + rule.addRule("service1", "/products/{var}", "/products/.+"); + rule.addRule("service1", "/products/{var}/detail", "/products/.+/detail"); + rule.addRule("service1", "/sales/{var}/1", "/sales/.+/1"); + rule.addRule("service1", "/sales/{var}/2", "/sales/.+/2"); + rule.addRule("service1", "/sales/{var}/3", "/sales/.+/3"); + rule.addRule("service1", "/sales/{var}/4", "/sales/.+/4"); + rule.addRule("service1", "/sales/{var}/5", "/sales/.+/5"); + rule.addRule("service1", "/sales/{var}/6", "/sales/.+/6"); + rule.addRule("service1", "/sales/{var}/7", "/sales/.+/7"); + rule.addRule("service1", "/sales/{var}/8", "/sales/.+/8"); + rule.addRule("service1", "/sales/{var}/9", "/sales/.+/9"); + rule.addRule("service1", "/sales/{var}/10", "/sales/.+/10"); + rule.addRule("service1", "/sales/{var}/11", "/sales/.+/11"); + rule.addRule("service1", "/employees/{var}/profile", "/employees/.+/profile"); + } + + public StringFormatGroup.FormatResult match(String serviceName, String endpointName) { + return rule.format(serviceName, endpointName); + } + } + + @State(Scope.Benchmark) + public static class QuickMatch { + private final QuickUriGroupingRule rule = new QuickUriGroupingRule(); + + public QuickMatch() { + rule.addRule("service1", "/products/{var}"); + rule.addRule("service1", "/products/{var}/detail"); + rule.addRule("service1", "/sales/{var}/1"); + rule.addRule("service1", "/sales/{var}/2"); + rule.addRule("service1", "/sales/{var}/3"); + rule.addRule("service1", "/sales/{var}/4"); + rule.addRule("service1", "/sales/{var}/5"); + rule.addRule("service1", "/sales/{var}/6"); + rule.addRule("service1", "/sales/{var}/7"); + rule.addRule("service1", "/sales/{var}/8"); + rule.addRule("service1", "/sales/{var}/9"); + rule.addRule("service1", "/sales/{var}/10"); + 
rule.addRule("service1", "/sales/{var}/11"); + rule.addRule("service1", "/employees/{var}/profile"); + } + + public StringFormatGroup.FormatResult match(String serviceName, String endpointName) { + return rule.format(serviceName, endpointName); + } + } + + @Benchmark + public void matchFirstRegex(Blackhole bh, RegexMatch formatClass) { + bh.consume(formatClass.match("service1", "/products/123")); + } + + @Benchmark + public void matchFirstQuickUriGrouping(Blackhole bh, QuickMatch formatClass) { + bh.consume(formatClass.match("service1", "/products/123")); + } + + @Benchmark + public void matchFourthRegex(Blackhole bh, RegexMatch formatClass) { + bh.consume(formatClass.match("service1", "/sales/123/2")); + } + + @Benchmark + public void matchFourthQuickUriGrouping(Blackhole bh, QuickMatch formatClass) { + bh.consume(formatClass.match("service1", "/sales/123/2")); + } + + @Benchmark + public void notMatchRegex(Blackhole bh, RegexMatch formatClass) { + bh.consume(formatClass.match("service1", "/employees/123")); + } + + @Benchmark + public void notMatchQuickUriGrouping(Blackhole bh, QuickMatch formatClass) { + bh.consume(formatClass.match("service1", "/employees/123")); + } + + @Test + public void run() throws Exception { + new Runner(new OptionsBuilder() + .include(".*" + getClass().getSimpleName() + ".*") + .jvmArgsAppend("-Xmx512m", "-Xms512m") + .build()).run(); + } +} diff --git a/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/profiling/ebpf/analyze/EBPFProfilingAnalyzerBenchmark.java b/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/profiling/ebpf/analyze/EBPFProfilingAnalyzerBenchmark.java new file mode 100644 index 000000000000..c359f4cdc193 --- /dev/null +++ b/oap-server/server-core/src/test/java/org/apache/skywalking/oap/server/core/profiling/ebpf/analyze/EBPFProfilingAnalyzerBenchmark.java @@ -0,0 +1,199 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor 
license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.server.core.profiling.ebpf.analyze; + +import org.apache.skywalking.oap.server.core.profiling.ebpf.storage.EBPFProfilingStackType; +import org.apache.skywalking.oap.server.core.query.type.EBPFProfilingAnalyzation; +import org.junit.jupiter.api.Test; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Threads; +import org.openjdk.jmh.annotations.Warmup; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.options.OptionsBuilder; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Random; +import java.util.concurrent.TimeUnit; + +@Warmup(iterations = 10) +@Measurement(iterations = 10) +@Fork(2) +@State(Scope.Thread) +@BenchmarkMode({Mode.Throughput}) +@Threads(4) +public class EBPFProfilingAnalyzerBenchmark { + + private static final Random RANDOM = new Random(System.currentTimeMillis()); + private static final int SYMBOL_LENGTH = 10; + 
private static final char[] SYMBOL_TABLE = "abcdefgABCDEFG1234567890_[]<>.".toCharArray(); + private static final EBPFProfilingStackType[] STACK_TYPES = new EBPFProfilingStackType[]{ + EBPFProfilingStackType.KERNEL_SPACE, EBPFProfilingStackType.USER_SPACE}; + + private static List<EBPFProfilingStack> generateStacks(int totalStackCount, + int perStackMinDepth, int perStackMaxDepth, + double[] stackSymbolDuplicateRate, + double stackDuplicateRate) { + int uniqStackCount = (int) (100 / stackDuplicateRate); + final List<EBPFProfilingStack> stacks = new ArrayList<>(totalStackCount); + final StackSymbolGenerator stackSymbolGenerator = new StackSymbolGenerator(stackSymbolDuplicateRate, perStackMaxDepth); + for (int inx = 0; inx < uniqStackCount; inx++) { + final EBPFProfilingStack s = generateStack(perStackMinDepth, perStackMaxDepth, stackSymbolGenerator); + stacks.add(s); + } + for (int inx = uniqStackCount; inx < totalStackCount; inx++) { + stacks.add(stacks.get(RANDOM.nextInt(uniqStackCount))); + } + return stacks; + } + + private static class StackSymbolGenerator { + private final Map<Integer, Integer> stackDepthSymbolCount; + private final Map<Integer, List<String>> existingSymbolMap; + + public StackSymbolGenerator(double[] stackSymbolDuplicateRate, int maxDepth) { + this.stackDepthSymbolCount = new HashMap<>(); + for (int depth = 0; depth < maxDepth; depth++) { + double rate = stackSymbolDuplicateRate[stackSymbolDuplicateRate.length - 1]; + if (stackSymbolDuplicateRate.length > depth) { + rate = stackSymbolDuplicateRate[depth]; + } + int uniqStackCount = (int) (100 / rate); + stackDepthSymbolCount.put(depth, uniqStackCount); + } + this.existingSymbolMap = new HashMap<>(); + } + + public String generate(int depth) { + List<String> symbols = existingSymbolMap.get(depth); + if (symbols == null) { + existingSymbolMap.put(depth, symbols = new ArrayList<>()); + } + final Integer needCount = this.stackDepthSymbolCount.get(depth); + if (symbols.size() < needCount) { + 
final StringBuilder sb = new StringBuilder(SYMBOL_LENGTH); + for (int j = 0; j < SYMBOL_LENGTH; j++) { + sb.append(SYMBOL_TABLE[RANDOM.nextInt(SYMBOL_TABLE.length)]); + } + final String s = sb.toString(); + symbols.add(s); + return s; + } else { + return symbols.get(RANDOM.nextInt(symbols.size())); + } + } + } + + private static EBPFProfilingStack generateStack(int stackMinDepth, int stackMaxDepth, + StackSymbolGenerator stackSymbolGenerator) { + int stackDepth = stackMinDepth + RANDOM.nextInt(stackMaxDepth - stackMinDepth); + final List<EBPFProfilingStack.Symbol> symbols = new ArrayList<>(stackDepth); + for (int i = 0; i < stackDepth; i++) { + final EBPFProfilingStack.Symbol symbol = new EBPFProfilingStack.Symbol( + stackSymbolGenerator.generate(i), buildStackType(i, stackDepth)); + symbols.add(symbol); + } + final EBPFProfilingStack stack = new EBPFProfilingStack(); + stack.setDumpCount(RANDOM.nextInt(100)); + stack.setSymbols(symbols); + return stack; + } + + private static EBPFProfilingStackType buildStackType(int currentDepth, int totalDepth) { + final int partition = totalDepth / STACK_TYPES.length; + for (int i = 1; i <= STACK_TYPES.length; i++) { + if (currentDepth < i * partition) { + return STACK_TYPES[i - 1]; + } + } + return STACK_TYPES[STACK_TYPES.length - 1]; + } + + public static class DataSource { + private final List<EBPFProfilingStack> stackStream; + + public DataSource(List<EBPFProfilingStack> stackStream) { + this.stackStream = stackStream; + } + + public void analyze() { + new EBPFProfilingAnalyzer(null, 100, 5).generateTrees(new EBPFProfilingAnalyzation(), stackStream.parallelStream()); + } + } + + private static int calculateStackCount(int stackReportPeriodSecond, int totalTimeMinute, int combineInstanceCount) { + return (int) (TimeUnit.MINUTES.toSeconds(totalTimeMinute) / stackReportPeriodSecond * combineInstanceCount); + } + + @State(Scope.Benchmark) + public static class LowDataSource extends DataSource { + public LowDataSource() { + 
super(generateStacks(calculateStackCount(5, 60, 10), 15, 30, + new double[]{100, 50, 45, 40, 35, 30, 15, 10, 5}, 5)); + } + } + + @State(Scope.Benchmark) + public static class MedianDatasource extends DataSource { + public MedianDatasource() { + super(generateStacks(calculateStackCount(5, 100, 200), 15, 30, + new double[]{50, 40, 35, 30, 20, 10, 7, 5, 2}, 3)); + } + } + + @State(Scope.Benchmark) + public static class HighDatasource extends DataSource { + public HighDatasource() { + super(generateStacks(calculateStackCount(5, 2 * 60, 2000), 15, 40, + new double[]{30, 27, 25, 20, 17, 15, 10, 7, 5, 2, 1}, 1)); + } + } + + @Benchmark + public void analyzeLowDataSource(LowDataSource lowDataSource) { + lowDataSource.analyze(); + } + + @Benchmark + public void analyzeMedianDataSource(MedianDatasource medianDatasource) { + medianDatasource.analyze(); + } + + @Benchmark + public void analyzeMaxDataSource(HighDatasource highDataSource) { + highDataSource.analyze(); + } + + @Test + public void run() throws Exception { + new Runner(new OptionsBuilder() + .include(".*" + getClass().getSimpleName() + ".*") + .jvmArgsAppend("-Xmx512m", "-Xms512m") + .build()).run(); + } + +} diff --git a/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/pom.xml b/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/pom.xml index 8323c63081d4..231b537a3975 100644 --- a/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/pom.xml +++ b/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/pom.xml @@ -49,5 +49,11 @@ <groupId>com.linecorp.armeria</groupId> <artifactId>armeria-grpc</artifactId> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> \ No newline at end of file diff --git 
a/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/fetcher/cilium/nodes/CiliumNodeManagerTest.java b/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/fetcher/cilium/nodes/CiliumNodeManagerTest.java index acec03ff1aa2..69f4694c40b7 100644 --- a/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/fetcher/cilium/nodes/CiliumNodeManagerTest.java +++ b/oap-server/server-fetcher-plugin/cilium-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/fetcher/cilium/nodes/CiliumNodeManagerTest.java @@ -29,7 +29,7 @@ import org.junit.jupiter.params.provider.MethodSource; import org.mockito.Mock; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.ArrayList; import java.util.Arrays; @@ -114,8 +114,8 @@ public void test(String name, List<RemoteInstance> allOAPInstances, List<CiliumNode> allCiliumNodes, List<CiliumNode> shouldMonitorNodeBySelf) { - Whitebox.setInternalState(ciliumNodeManager, "remoteInstances", allOAPInstances); - Whitebox.setInternalState(ciliumNodeManager, "allNodes", allCiliumNodes); + ReflectUtil.setInternalState(ciliumNodeManager, "remoteInstances", allOAPInstances); + ReflectUtil.setInternalState(ciliumNodeManager, "allNodes", allCiliumNodes); ciliumNodeManager.refreshUsingNodes(); final List<CiliumNode> nodes = nodeUpdateListener.getNodes(); nodes.sort(Comparator.comparing(CiliumNode::getAddress)); diff --git a/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/KafkaFetcherProvider.java b/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/KafkaFetcherProvider.java index e45953c62cf5..14187e0178b8 100644 --- 
a/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/KafkaFetcherProvider.java +++ b/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/KafkaFetcherProvider.java @@ -19,7 +19,7 @@ package org.apache.skywalking.oap.server.analyzer.agent.kafka.provider; import lombok.extern.slf4j.Slf4j; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; import org.apache.skywalking.oap.server.analyzer.agent.kafka.KafkaFetcherHandlerRegister; import org.apache.skywalking.oap.server.analyzer.agent.kafka.module.KafkaFetcherConfig; import org.apache.skywalking.oap.server.analyzer.agent.kafka.module.KafkaFetcherModule; diff --git a/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandler.java b/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandler.java index c563d7a5cc60..9a1f67113124 100644 --- a/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandler.java +++ b/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/main/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandler.java @@ -21,8 +21,8 @@ import org.apache.kafka.clients.consumer.ConsumerRecord; import org.apache.kafka.common.utils.Bytes; import org.apache.skywalking.apm.network.logging.v3.LogData; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import org.apache.skywalking.oap.log.analyzer.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import 
org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; import org.apache.skywalking.oap.server.analyzer.agent.kafka.module.KafkaFetcherConfig; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.telemetry.TelemetryModule; diff --git a/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandlerTest.java b/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandlerTest.java index 43ae1c729a70..9f9f474e9f51 100644 --- a/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandlerTest.java +++ b/oap-server/server-fetcher-plugin/kafka-fetcher-plugin/src/test/java/org/apache/skywalking/oap/server/analyzer/agent/kafka/provider/handler/LogHandlerTest.java @@ -18,8 +18,8 @@ package org.apache.skywalking.oap.server.analyzer.agent.kafka.provider.handler; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import org.apache.skywalking.oap.log.analyzer.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; import org.apache.skywalking.oap.server.analyzer.agent.kafka.module.KafkaFetcherConfig; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.telemetry.TelemetryModule; diff --git a/oap-server/server-library/library-batch-queue/src/main/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueue.java b/oap-server/server-library/library-batch-queue/src/main/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueue.java index f0b3ff560d08..b1d16e6ad8d1 100644 --- 
a/oap-server/server-library/library-batch-queue/src/main/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueue.java +++ b/oap-server/server-library/library-batch-queue/src/main/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueue.java @@ -39,7 +39,7 @@ /** * A partitioned, self-draining queue with type-based dispatch. * - * <h3>Usage</h3> + * <h2>Usage</h2> * <pre> * BatchQueue queue = BatchQueueManager.create(name, config); * queue.addHandler(TypeA.class, handlerA); // register metric types @@ -52,7 +52,7 @@ * partition array as needed. The thread count is resolved at construction time * and remains fixed. * - * <h3>Produce workflow</h3> + * <h2>Produce workflow</h2> * <pre> * produce(data) * | @@ -68,7 +68,7 @@ * +-- return true/false * </pre> * - * <h3>Consume workflow (drain loop, runs on scheduler threads)</h3> + * <h2>Consume workflow (drain loop, runs on scheduler threads)</h2> * <pre> * scheduleDrain(taskIndex) // schedule with adaptive backoff delay * | @@ -91,11 +91,11 @@ * +-- finally: scheduleDrain(taskIndex) // re-schedule self * </pre> * - * <h3>Adaptive backoff</h3> + * <h2>Adaptive backoff</h2> * Delay doubles on each consecutive idle cycle: {@code minIdleMs * 2^idleCount}, * capped at {@code maxIdleMs}. Resets to {@code minIdleMs} on first non-empty drain. 
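The adaptive-backoff formula documented above can be sanity-checked with a small standalone sketch. This is illustrative only — `nextDelayMs` and the sample config values (`minIdleMs=5`, `maxIdleMs=100`) are hypothetical, not actual `BatchQueue` members:

```java
public class BackoffSketch {
    // Adaptive backoff as documented: delay = minIdleMs * 2^idleCount, capped
    // at maxIdleMs. The drain loop resets idleCount to 0 (delay back to
    // minIdleMs) on the first non-empty drain.
    static long nextDelayMs(long minIdleMs, long maxIdleMs, int idleCount) {
        // Guard the shift amount so a large idleCount cannot overflow the long.
        long delay = minIdleMs << Math.min(idleCount, 30);
        return Math.min(delay, maxIdleMs);
    }

    public static void main(String[] args) {
        System.out.println(nextDelayMs(5, 100, 0)); // 5   (first idle cycle)
        System.out.println(nextDelayMs(5, 100, 3)); // 40  (5 * 2^3)
        System.out.println(nextDelayMs(5, 100, 6)); // 100 (5 * 2^6 = 320, capped)
    }
}
```

With these numbers a consumer that stays idle backs off 5 → 10 → 20 → 40 → 80 → 100 ms and then holds at the cap until data arrives.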
* - * <h3>Use case examples</h3> + * <h2>Use case examples</h2> * <pre> * dedicated fixed(1), partitions=1, one consumer --> I/O queue (gRPC, Kafka, JDBC) * dedicated fixed(1), partitions=1, many handlers --> TopN (all types share 1 thread) diff --git a/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueueBenchmark.java b/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueueBenchmark.java index ba65a94a30f2..d98913b90250 100644 --- a/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueueBenchmark.java +++ b/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/BatchQueueBenchmark.java @@ -36,7 +36,7 @@ * <p>Run with: mvn test -pl oap-server/server-library/library-batch-queue * -Dtest=BatchQueueBenchmark -DfailIfNoTests=false * - * <h3>Reference results (Apple M3 Max, 128 GB RAM, macOS 26.2, JDK 17)</h3> + * <h2>Reference results (Apple M3 Max, 128 GB RAM, macOS 26.2, JDK 17)</h2> * * <p><b>Fixed partitions (typeHash selector):</b> * <pre> diff --git a/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/RebalanceBenchmark.java b/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/RebalanceBenchmark.java index ef3ee33c5b7f..69cd2f9c25ad 100644 --- a/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/RebalanceBenchmark.java +++ b/oap-server/server-library/library-batch-queue/src/test/java/org/apache/skywalking/oap/server/library/batchqueue/RebalanceBenchmark.java @@ -35,7 +35,7 @@ * Benchmark comparing throughput with and without partition rebalancing * under skewed load simulating OAP L2 persistence. 
* - * <h3>Scenario: L2 entity-count-driven imbalance</h3> + * <h2>Scenario: L2 entity-count-driven imbalance</h2> * After L1 merge, each metric type produces one item per unique entity per minute. * Endpoint-scoped metrics see many more entities than service-scoped metrics: * <pre> @@ -50,7 +50,7 @@ * The throughput-weighted rebalancer fixes this by reassigning partitions based * on observed throughput. * - * <h3>What this benchmark measures</h3> + * <h2>What this benchmark measures</h2> * <ol> * <li><b>Static vs rebalanced throughput:</b> total consumed items/sec with * BLOCKING strategy and simulated consumer work (~500ns/item). With imbalance, @@ -61,7 +61,7 @@ * per-thread load ratio over multiple intervals.</li> * </ol> * - * <h3>Results (4 drain threads, 16 producers, 100 types, 500 LCG iters/item)</h3> + * <h2>Results (4 drain threads, 16 producers, 100 types, 500 LCG iters/item)</h2> * <pre> * Static Rebalanced * Throughput: 7,211,794 8,729,310 items/sec @@ -69,7 +69,7 @@ * Improvement: +21.0% * </pre> * - * <h3>Stability (20 sec, sampled every 2 sec after initial rebalance)</h3> + * <h2>Stability (20 sec, sampled every 2 sec after initial rebalance)</h2> * <pre> * Interval Throughput Ratio * 0- 2s 8,915,955 1.00x diff --git a/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/grpc/GRPCServer.java b/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/grpc/GRPCServer.java index a1ec4ff4ae1c..af39628c3ef8 100644 --- a/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/grpc/GRPCServer.java +++ b/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/grpc/GRPCServer.java @@ -44,7 +44,7 @@ * gRPC server backed by Netty. Used by up to 4 OAP server endpoints (core-grpc, * receiver-grpc, ebpf-grpc, als-grpc). gRPC is the primary telemetry ingestion path. 
* - * <h3>Thread model</h3> + * <h2>Thread model</h2> * gRPC-netty uses a three-tier thread model: * <ol> * <li><b>Boss event loop</b> — 1 thread. Accepts TCP connections, creates Netty channels, @@ -61,7 +61,7 @@ * between messages the thread returns to the pool.</li> * </ol> * - * <h3>Application executor</h3> + * <h2>Application executor</h2> * gRPC's default application executor is an <b>unbounded {@code CachedThreadPool}</b> * ({@code Executors.newCachedThreadPool()}, named {@code grpc-default-executor}). * gRPC chose this for safety — application code may block (JDBC, file I/O, synchronized), @@ -81,7 +81,7 @@ * {@code BatchQueue.produce()} with {@code BLOCKING} strategy which can block the thread * — that would freeze the event loop and stall all connections. * - * <h3>Thread policies</h3> + * <h2>Thread policies</h2> * <pre> * gRPC default SkyWalking * Boss EL: 1, shared (unchanged) @@ -90,7 +90,7 @@ * JDK <25: gRPC default (unchanged) * </pre> * - * <h4>Worker event loop: {@code cores}, shared by gRPC (default, unchanged)</h4> + * <h2>Worker event loop: {@code cores}, shared by gRPC (default, unchanged)</h2> * <pre> * cores: 2 4 8 10 24 * threads: 2 4 8 10 24 @@ -100,7 +100,7 @@ * all {@code NettyServerBuilder} instances that use the default. No custom configuration * needed. * - * <h3>Comparison with HTTP (Armeria)</h3> + * <h2>Comparison with HTTP (Armeria)</h2> * <pre> * gRPC HTTP (Armeria) * Event loop: cores, shared (gRPC default) min(5, cores), shared @@ -111,7 +111,7 @@ * handlers may block on long I/O (storage queries, extension callbacks). On JDK 25+, * virtual threads replace both pools. * - * <h3>User-configured thread pool</h3> + * <h2>User-configured thread pool</h2> * When {@code threadPoolSize > 0} is set via config, it overrides the default with a * per-server fixed pool of that size. On JDK 25+ it is ignored — virtual threads * are always used. 
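The executor precedence described in the javadoc above — virtual threads always win on JDK 25+, a user-configured `threadPoolSize > 0` selects a per-server fixed pool on older JDKs, and gRPC's unbounded cached pool remains the fallback — can be sketched as a pure decision function. `pickExecutor` and its return labels are illustrative, not the actual `GRPCServer` implementation:

```java
public class GrpcExecutorPolicySketch {
    // Mirrors the documented precedence: JDK 25+ always uses virtual threads
    // (any configured threadPoolSize is ignored); on older JDKs a positive
    // threadPoolSize yields a per-server fixed pool; otherwise gRPC's default
    // unbounded CachedThreadPool ("grpc-default-executor") stands.
    static String pickExecutor(int jdkFeatureVersion, int threadPoolSize) {
        if (jdkFeatureVersion >= 25) {
            return "virtual-threads";
        }
        if (threadPoolSize > 0) {
            return "fixed(" + threadPoolSize + ")";
        }
        return "grpc-default-cached";
    }

    public static void main(String[] args) {
        System.out.println(pickExecutor(25, 16)); // virtual-threads (config ignored)
        System.out.println(pickExecutor(17, 16)); // fixed(16)
        System.out.println(pickExecutor(17, 0));  // grpc-default-cached
    }
}
```

Keeping the gate in one place like this makes the JDK-version behavior easy to unit-test without starting a real server.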
diff --git a/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/http/HTTPServer.java b/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/http/HTTPServer.java index 9c1b0ac389ec..440bd9e2dd0e 100644 --- a/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/http/HTTPServer.java +++ b/oap-server/server-library/library-server/src/main/java/org/apache/skywalking/oap/server/library/server/http/HTTPServer.java @@ -50,7 +50,7 @@ * Armeria-based HTTP server shared by all OAP HTTP endpoints (core-http, receiver-http, * promql-http, logql-http, zipkin-query-http, zipkin-http, firehose-http — up to 7 servers). * - * <h3>Thread model</h3> + * <h2>Thread model</h2> * Armeria uses a two-tier thread model: * <ul> * <li><b>Event loop threads</b> — non-blocking I/O multiplexers (epoll/kqueue). Handle @@ -66,7 +66,7 @@ * requests spend most of their time (waiting on I/O), while event loop threads just * shuttle bytes and are immediately available for the next connection. * - * <h3>Thread policies</h3> + * <h2>Thread policies</h2> * <pre> * Armeria default SkyWalking * Event loop: cores * 2 per server min(5, cores) shared across all servers @@ -74,7 +74,7 @@ * JDK <25: Armeria default (unchanged) * </pre> * - * <h4>Event loop: {@code min(5, cores)}, shared</h4> + * <h2>Event loop: {@code min(5, cores)}, shared</h2> * <pre> * cores: 2 4 8 10 24 * threads: 2 4 5 5 5 @@ -83,7 +83,7 @@ * HTTP servers means 7 * cores * 2 = 140 threads on 10-core — far more than needed for * HTTP traffic. All servers share one {@link EventLoopGroup} with min(5, cores) threads. * - * <h4>Blocking executor: Armeria default on JDK <25, virtual threads on JDK 25+</h4> + * <h2>Blocking executor: Armeria default on JDK <25, virtual threads on JDK 25+</h2> * On JDK <25, Armeria's default cached pool (up to 200 on-demand threads) is kept * unchanged. 
HTTP handlers block on storage/DB queries (BanyanDB, Elasticsearch) which * can take 10ms–seconds. A bounded pool would cause request queuing and UI timeouts @@ -92,7 +92,7 @@ * On JDK 25+, virtual threads replace this pool entirely — each blocking request * gets its own virtual thread backed by ~cores shared carrier threads. * - * <h3>Comparison with gRPC</h3> + * <h2>Comparison with gRPC</h2> * gRPC is the primary telemetry ingestion path. HTTP is secondary (UI queries, PromQL, * LogQL, and optionally telemetry), so it uses fewer event loop threads. * <pre> diff --git a/oap-server/server-library/library-util/pom.xml b/oap-server/server-library/library-util/pom.xml index 56481e27b300..82f81740a8a7 100644 --- a/oap-server/server-library/library-util/pom.xml +++ b/oap-server/server-library/library-util/pom.xml @@ -61,5 +61,10 @@ <artifactId>system-stubs-jupiter</artifactId> <scope>test</scope> </dependency> + <dependency> + <groupId>org.openjdk.jmh</groupId> + <artifactId>jmh-generator-annprocess</artifactId> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-library/library-util/src/test/java/org/apache/skywalking/oap/server/library/util/StringFormatGroupBenchmark.java b/oap-server/server-library/library-util/src/test/java/org/apache/skywalking/oap/server/library/util/StringFormatGroupBenchmark.java new file mode 100644 index 000000000000..ab7005b4179a --- /dev/null +++ b/oap-server/server-library/library-util/src/test/java/org/apache/skywalking/oap/server/library/util/StringFormatGroupBenchmark.java @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.server.library.util; + +import org.junit.jupiter.api.Assertions; +import org.junit.jupiter.api.Test; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.OutputTimeUnit; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Warmup; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.options.OptionsBuilder; + +import java.util.concurrent.TimeUnit; + +@Warmup(iterations = 10) +@Measurement(iterations = 10) +@Fork(2) +@State(Scope.Thread) +@BenchmarkMode(Mode.AverageTime) +@OutputTimeUnit(TimeUnit.MICROSECONDS) +public class StringFormatGroupBenchmark { + @Benchmark + @Test + public void testMatch() { + StringFormatGroup group = new StringFormatGroup(); + group.addRule("/name/*/add", "/name/.+/add"); + Assertions.assertEquals("/name/*/add", group.format("/name/test/add").getName()); + + group = new StringFormatGroup(); + group.addRule("/name/*/add/{orderId}", "/name/.+/add/.*"); + Assertions.assertEquals("/name/*/add/{orderId}", group.format("/name/test/add/12323").getName()); + } + + @Benchmark + @Test + public void test100Rule() { + StringFormatGroup group = new StringFormatGroup(); + group.addRule("/name/*/add/{orderId}", "/name/.+/add/.*"); + for (int i = 0; i < 100; i++) { + 
group.addRule("/name/*/add/{orderId}" + "/" + i, "/name/.+/add/.*" + "/abc"); + } + Assertions.assertEquals("/name/*/add/{orderId}", group.format("/name/test/add/12323").getName()); + } + + @Test + public void run() throws Exception { + new Runner(new OptionsBuilder() + .include(".*" + getClass().getSimpleName() + ".*") + .jvmArgsAppend("-Xmx512m", "-Xms512m") + .build()).run(); + } +} diff --git a/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQuery.java b/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQuery.java index bc57bb0a4688..e537613422e8 100644 --- a/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQuery.java +++ b/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQuery.java @@ -28,11 +28,11 @@ import lombok.RequiredArgsConstructor; import org.apache.skywalking.apm.network.logging.v3.LogData; import org.apache.skywalking.apm.network.logging.v3.LogTags; -import org.apache.skywalking.oap.log.analyzer.dsl.Binding; -import org.apache.skywalking.oap.log.analyzer.dsl.DSL; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleConfig; -import org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext; +import org.apache.skywalking.oap.log.analyzer.v2.dsl.DSL; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider; import org.apache.skywalking.oap.query.graphql.GraphQLQueryConfig; import 
org.apache.skywalking.oap.query.graphql.type.LogTestRequest; import org.apache.skywalking.oap.query.graphql.type.LogTestResponse; @@ -69,20 +69,19 @@ public LogTestResponse test(LogTestRequest request) throws Exception { .provider(); final LogAnalyzerModuleConfig config = provider.getModuleConfig(); final DSL dsl = DSL.of(moduleManager, config, request.getDsl()); - final Binding binding = new Binding(); + final ExecutionContext ctx = new ExecutionContext(); final LogData.Builder log = LogData.newBuilder(); ProtoBufJsonUtils.fromJSON(request.getLog(), log); - binding.log(log); + ctx.log(log); - binding.logContainer(new AtomicReference<>()); - binding.metricsContainer(new ArrayList<>()); + ctx.logContainer(new AtomicReference<>()); + ctx.metricsContainer(new ArrayList<>()); - dsl.bind(binding); - dsl.evaluate(); + dsl.evaluate(ctx); final LogTestResponse.LogTestResponseBuilder builder = LogTestResponse.builder(); - binding.logContainer().map(AtomicReference::get).ifPresent(it -> { + ctx.logContainer().map(AtomicReference::get).ifPresent(it -> { final Log l = new Log(); if (isNotBlank(it.getServiceId())) { @@ -118,7 +117,7 @@ public LogTestResponse test(LogTestRequest request) throws Exception { builder.log(l); }); - binding.metricsContainer().ifPresent(it -> { + ctx.metricsContainer().ifPresent(it -> { final List<Metrics> samples = it.stream() .flatMap(s -> Arrays.stream(s.samples)) diff --git a/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/PprofQuery.java b/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/PprofQuery.java index 9b1096cfbdec..3cd830875c08 100644 --- a/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/PprofQuery.java +++ b/oap-server/server-query-plugin/query-graphql-plugin/src/main/java/org/apache/skywalking/oap/query/graphql/resolver/PprofQuery.java @@ 
-19,7 +19,7 @@ package org.apache.skywalking.oap.query.graphql.resolver; import org.apache.skywalking.oap.server.core.CoreModule; -import groovy.util.logging.Slf4j; +import lombok.extern.slf4j.Slf4j; import org.apache.skywalking.oap.server.library.module.ModuleManager; import graphql.kickstart.tools.GraphQLQueryResolver; import org.apache.skywalking.oap.server.core.profiling.pprof.PprofQueryService; diff --git a/oap-server/server-query-plugin/query-graphql-plugin/src/test/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQueryTest.java b/oap-server/server-query-plugin/query-graphql-plugin/src/test/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQueryTest.java index 035e6568e467..5c4403e07f8f 100644 --- a/oap-server/server-query-plugin/query-graphql-plugin/src/test/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQueryTest.java +++ b/oap-server/server-query-plugin/query-graphql-plugin/src/test/java/org/apache/skywalking/oap/query/graphql/resolver/LogTestQueryTest.java @@ -18,8 +18,8 @@ package org.apache.skywalking.oap.query.graphql.resolver; -import org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleConfig; -import org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider; import org.apache.skywalking.oap.query.graphql.GraphQLQueryConfig; import org.apache.skywalking.oap.query.graphql.type.LogTestRequest; import org.apache.skywalking.oap.query.graphql.type.LogTestResponse; @@ -120,7 +120,7 @@ public void test() throws Exception { " extractor {\n" + " metrics {\n" + " timestamp log.timestamp as Long\n" + - " labels level: parsed.level, service: log.service, instance: log.serviceInstance\n" + + " labels service: log.service, instance: log.serviceInstance\n" + " name 'log_count'\n" + " value 1\n" + " }\n" + diff --git 
a/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/pom.xml b/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/pom.xml index 734cbefbe307..9992c7ca62fd 100644 --- a/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/pom.xml +++ b/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/pom.xml @@ -32,5 +32,11 @@ <artifactId>skywalking-sharing-server-plugin</artifactId> <version>${project.version}</version> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/configuration/discovery/AgentConfigurationsWatcherTest.java b/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/configuration/discovery/AgentConfigurationsWatcherTest.java index 90c5336ba0c2..335bd29d2a6d 100644 --- a/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/configuration/discovery/AgentConfigurationsWatcherTest.java +++ b/oap-server/server-receiver-plugin/configuration-discovery-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/configuration/discovery/AgentConfigurationsWatcherTest.java @@ -25,7 +25,7 @@ import org.junit.jupiter.api.Test; import org.mockito.MockitoAnnotations; import org.mockito.Spy; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.io.IOException; import java.io.Reader; @@ -45,7 +45,7 @@ public void setUp() { @Test public void testConfigModifyEvent() throws IOException { - AgentConfigurationsTable agentConfigurationsTable = 
Whitebox.getInternalState( + AgentConfigurationsTable agentConfigurationsTable = ReflectUtil.getInternalState( agentConfigurationsWatcher, "agentConfigurationsTable"); assertTrue(agentConfigurationsTable.getAgentConfigurationsCache().isEmpty()); @@ -58,7 +58,7 @@ public void testConfigModifyEvent() throws IOException { ConfigChangeWatcher.EventType.MODIFY )); - AgentConfigurationsTable modifyAgentConfigurationsTable = Whitebox.getInternalState( + AgentConfigurationsTable modifyAgentConfigurationsTable = ReflectUtil.getInternalState( agentConfigurationsWatcher, "agentConfigurationsTable"); Map<String, AgentConfigurations> configurationCache = modifyAgentConfigurationsTable.getAgentConfigurationsCache(); Assertions.assertEquals(2, configurationCache.size()); @@ -90,7 +90,7 @@ public void testConfigDeleteEvent() throws IOException { Reader reader = ResourceUtils.read("agent-dynamic-configuration.yml"); agentConfigurationsWatcher = spy(new AgentConfigurationsWatcher(null)); - Whitebox.setInternalState( + ReflectUtil.setInternalState( agentConfigurationsWatcher, "agentConfigurationsTable", new AgentConfigurationsReader(reader).readAgentConfigurationsTable() ); @@ -98,7 +98,7 @@ public void testConfigDeleteEvent() throws IOException { agentConfigurationsWatcher.notify( new ConfigChangeWatcher.ConfigChangeEvent("whatever", ConfigChangeWatcher.EventType.DELETE)); - AgentConfigurationsTable agentConfigurationsTable = Whitebox.getInternalState( + AgentConfigurationsTable agentConfigurationsTable = ReflectUtil.getInternalState( agentConfigurationsWatcher, "agentConfigurationsTable"); Map<String, AgentConfigurations> configurationCache = agentConfigurationsTable.getAgentConfigurationsCache(); diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/pom.xml b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/pom.xml index 34666f9b6980..c8d73ea91850 100644 --- a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/pom.xml +++ 
b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/pom.xml @@ -81,5 +81,11 @@ <groupId>commons-net</groupId> <artifactId>commons-net</artifactId> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/EnvoyHTTPLALSourceTypeProvider.java b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/EnvoyHTTPLALSourceTypeProvider.java new file mode 100644 index 000000000000..9d42d0992762 --- /dev/null +++ b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/EnvoyHTTPLALSourceTypeProvider.java @@ -0,0 +1,39 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.receiver.envoy; + +import io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry; +import org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider; +import org.apache.skywalking.oap.server.core.analysis.Layer; + +/** + * Declares {@link HTTPAccessLogEntry} as the extra log type for the + * {@link Layer#MESH} layer, enabling the LAL compiler to generate direct + * proto getter calls for envoy access log rules. + */ +public class EnvoyHTTPLALSourceTypeProvider implements LALSourceTypeProvider { + @Override + public Layer layer() { + return Layer.MESH; + } + + @Override + public Class<?> extraLogType() { + return HTTPAccessLogEntry.class; + } +} diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/EnvoyMetricReceiverConfig.java b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/EnvoyMetricReceiverConfig.java index f325a85d21b9..e8fa61b883d2 100644 --- a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/EnvoyMetricReceiverConfig.java +++ b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/EnvoyMetricReceiverConfig.java @@ -28,8 +28,8 @@ import java.util.Set; import java.util.stream.Collectors; import lombok.Getter; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rule; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rules; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rule; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rules; import org.apache.skywalking.oap.server.library.module.ModuleConfig; import org.apache.skywalking.oap.server.library.module.ModuleStartException; import 
org.apache.skywalking.oap.server.receiver.envoy.metrics.adapters.ClusterManagerMetricsAdapter; diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/MetricServiceGRPCHandler.java b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/MetricServiceGRPCHandler.java index 921edfee10c4..f07092dc5e96 100644 --- a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/MetricServiceGRPCHandler.java +++ b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/MetricServiceGRPCHandler.java @@ -32,10 +32,10 @@ import java.util.stream.Collectors; import lombok.SneakyThrows; import lombok.extern.slf4j.Slf4j; -import org.apache.skywalking.oap.meter.analyzer.MetricConvert; -import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; import org.apache.skywalking.oap.server.library.util.StringUtil; -import org.apache.skywalking.oap.meter.analyzer.prometheus.PrometheusMetricConverter; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.PrometheusMetricConverter; import org.apache.skywalking.oap.server.core.CoreModule; import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; import org.apache.skywalking.oap.server.library.module.ModuleManager; diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/LogsPersistence.java b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/LogsPersistence.java index 00bb3c1cca3c..e796f889e7f4 100644 --- 
a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/LogsPersistence.java +++ b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/LogsPersistence.java @@ -23,8 +23,8 @@ import lombok.extern.slf4j.Slf4j; import org.apache.skywalking.apm.network.logging.v3.LogData; import org.apache.skywalking.apm.network.servicemesh.v3.HTTPServiceMeshMetric; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import org.apache.skywalking.oap.log.analyzer.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; import org.apache.skywalking.oap.server.core.analysis.Layer; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.library.module.ModuleStartException; diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/TCPLogsPersistence.java b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/TCPLogsPersistence.java index ab792eeb0d5d..85d91831e5c4 100644 --- a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/TCPLogsPersistence.java +++ b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/envoy/persistence/TCPLogsPersistence.java @@ -20,8 +20,8 @@ import org.apache.skywalking.apm.network.logging.v3.LogData; import org.apache.skywalking.apm.network.servicemesh.v3.TCPServiceMeshMetric; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import 
org.apache.skywalking.oap.log.analyzer.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; import org.apache.skywalking.oap.server.core.analysis.Layer; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.library.module.ModuleStartException; diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider new file mode 100644 index 000000000000..2b8d3067c636 --- /dev/null +++ b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/main/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider @@ -0,0 +1,19 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# + +org.apache.skywalking.oap.server.receiver.envoy.EnvoyHTTPLALSourceTypeProvider diff --git a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/envoy/ClusterManagerMetricsAdapterTest.java b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/envoy/ClusterManagerMetricsAdapterTest.java index 0e0aa3439af5..4fc66fcd04e2 100644 --- a/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/envoy/ClusterManagerMetricsAdapterTest.java +++ b/oap-server/server-receiver-plugin/envoy-metrics-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/envoy/ClusterManagerMetricsAdapterTest.java @@ -24,7 +24,7 @@ import org.apache.skywalking.oap.server.receiver.envoy.metrics.adapters.ClusterManagerMetricsAdapter; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.HashMap; @@ -42,7 +42,7 @@ public class ClusterManagerMetricsAdapterTest { @SneakyThrows @BeforeEach public void setUp() { - Whitebox.setInternalState(FieldsHelper.forClass(this.getClass()), "initialized", false); + ReflectUtil.setInternalState(FieldsHelper.forClass(this.getClass()), "initialized", false); EnvoyMetricReceiverConfig config = new EnvoyMetricReceiverConfig(); clusterManagerMetricsAdapter = new ClusterManagerMetricsAdapter(config); FieldsHelper.forClass(config.serviceMetaInfoFactory().clazz()).init("metadata-service-mapping.yaml"); diff --git a/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryLogHandler.java b/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryLogHandler.java 
index cc1d75e025c9..28afc17b1bfa 100644 --- a/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryLogHandler.java +++ b/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryLogHandler.java @@ -33,8 +33,8 @@ import org.apache.skywalking.apm.network.logging.v3.LogDataBody; import org.apache.skywalking.apm.network.logging.v3.LogTags; import org.apache.skywalking.apm.network.logging.v3.TextLog; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import org.apache.skywalking.oap.log.analyzer.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; import org.apache.skywalking.oap.server.core.server.GRPCHandlerRegister; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.library.module.ModuleStartException; diff --git a/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryMetricRequestProcessor.java b/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryMetricRequestProcessor.java index ae60bdbade3c..6b26a8665631 100644 --- a/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryMetricRequestProcessor.java +++ b/oap-server/server-receiver-plugin/otel-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/otel/otlp/OpenTelemetryMetricRequestProcessor.java @@ -29,11 +29,11 @@ import lombok.Getter; import lombok.RequiredArgsConstructor; import lombok.extern.slf4j.Slf4j; -import org.apache.skywalking.oap.meter.analyzer.MetricConvert; -import 
org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily; -import org.apache.skywalking.oap.meter.analyzer.prometheus.PrometheusMetricConverter; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rule; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rules; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.PrometheusMetricConverter; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rule; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rules; import org.apache.skywalking.oap.server.core.CoreModule; import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; import org.apache.skywalking.oap.server.library.module.ModuleManager; diff --git a/oap-server/server-receiver-plugin/skywalking-ebpf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/ebpf/provider/handler/AccessLogServiceHandler.java b/oap-server/server-receiver-plugin/skywalking-ebpf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/ebpf/provider/handler/AccessLogServiceHandler.java index 785a23435950..4b2a3023bc22 100644 --- a/oap-server/server-receiver-plugin/skywalking-ebpf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/ebpf/provider/handler/AccessLogServiceHandler.java +++ b/oap-server/server-receiver-plugin/skywalking-ebpf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/ebpf/provider/handler/AccessLogServiceHandler.java @@ -56,7 +56,7 @@ import org.apache.skywalking.apm.network.servicemesh.v3.TCPServiceMeshMetric; import org.apache.skywalking.apm.network.servicemesh.v3.TCPServiceMeshMetrics; import org.apache.skywalking.library.kubernetes.ObjectID; -import org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry; +import org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry; 
import org.apache.skywalking.oap.server.core.Const; import org.apache.skywalking.oap.server.core.CoreModule; import org.apache.skywalking.oap.server.core.analysis.Layer; diff --git a/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/LogModuleProvider.java b/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/LogModuleProvider.java index 4014daa3e2ae..67090dc3965c 100644 --- a/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/LogModuleProvider.java +++ b/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/LogModuleProvider.java @@ -19,7 +19,7 @@ import com.linecorp.armeria.common.HttpMethod; import java.util.Collections; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; import org.apache.skywalking.oap.server.core.CoreModule; import org.apache.skywalking.oap.server.core.server.GRPCHandlerRegister; import org.apache.skywalking.oap.server.core.server.HTTPHandlerRegister; diff --git a/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/grpc/LogReportServiceGrpcHandler.java b/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/grpc/LogReportServiceGrpcHandler.java index 2e0b853461cf..ad019c968b82 100644 --- a/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/grpc/LogReportServiceGrpcHandler.java +++ 
b/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/grpc/LogReportServiceGrpcHandler.java @@ -23,8 +23,8 @@ import org.apache.skywalking.apm.network.logging.v3.LogData; import org.apache.skywalking.apm.network.logging.v3.LogReportServiceGrpc; import org.apache.skywalking.oap.server.library.util.StringUtil; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import org.apache.skywalking.oap.log.analyzer.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.library.server.grpc.GRPCHandler; import org.apache.skywalking.oap.server.telemetry.TelemetryModule; diff --git a/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/rest/LogReportServiceHTTPHandler.java b/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/rest/LogReportServiceHTTPHandler.java index 0ed2e44dd63c..007b3d36b36e 100644 --- a/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/rest/LogReportServiceHTTPHandler.java +++ b/oap-server/server-receiver-plugin/skywalking-log-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/log/provider/handler/rest/LogReportServiceHTTPHandler.java @@ -22,8 +22,8 @@ import lombok.extern.slf4j.Slf4j; import org.apache.skywalking.apm.network.common.v3.Commands; import org.apache.skywalking.apm.network.logging.v3.LogData; -import org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule; -import 
org.apache.skywalking.oap.log.analyzer.provider.log.ILogAnalyzerService; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.log.ILogAnalyzerService; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.telemetry.TelemetryModule; import org.apache.skywalking.oap.server.telemetry.api.CounterMetrics; diff --git a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/pom.xml b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/pom.xml index 4477f41b725b..e092ace960fb 100644 --- a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/pom.xml +++ b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/pom.xml @@ -39,5 +39,11 @@ <artifactId>skywalking-sharing-server-plugin</artifactId> <version>${project.version}</version> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> \ No newline at end of file diff --git a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/TelegrafReceiverProvider.java b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/TelegrafReceiverProvider.java index 871294afa932..c40edef1bad0 100644 --- a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/TelegrafReceiverProvider.java +++ b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/TelegrafReceiverProvider.java @@ -20,8 +20,8 @@ import 
com.google.common.base.Splitter; import com.linecorp.armeria.common.HttpMethod; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rule; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rules; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rule; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rules; import org.apache.skywalking.oap.server.core.CoreModule; import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; import org.apache.skywalking.oap.server.core.server.HTTPHandlerRegister; diff --git a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/handler/TelegrafServiceHandler.java b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/handler/TelegrafServiceHandler.java index cd4dde417006..0006b0be269d 100644 --- a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/handler/TelegrafServiceHandler.java +++ b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/telegraf/provider/handler/TelegrafServiceHandler.java @@ -23,11 +23,11 @@ import com.linecorp.armeria.server.annotation.RequestConverter; import lombok.extern.slf4j.Slf4j; import org.apache.skywalking.apm.network.common.v3.Commands; -import org.apache.skywalking.oap.meter.analyzer.MetricConvert; -import org.apache.skywalking.oap.meter.analyzer.dsl.Sample; -import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily; -import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rule; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; +import 
org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rule; import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; import org.apache.skywalking.oap.server.library.module.ModuleManager; import org.apache.skywalking.oap.server.receiver.telegraf.provider.handler.pojo.TelegrafData; diff --git a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/telegraf/TelegrafMetricsTest.java b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/telegraf/TelegrafMetricsTest.java index 4a5e1513480e..c6e54d77574f 100644 --- a/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/telegraf/TelegrafMetricsTest.java +++ b/oap-server/server-receiver-plugin/skywalking-telegraf-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/telegraf/TelegrafMetricsTest.java @@ -19,10 +19,10 @@ package org.apache.skywalking.oap.server.receiver.telegraf; import com.google.common.collect.ImmutableMap; -import org.apache.skywalking.oap.meter.analyzer.dsl.Sample; -import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rule; -import org.apache.skywalking.oap.meter.analyzer.prometheus.rule.Rules; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rule; +import org.apache.skywalking.oap.meter.analyzer.v2.prometheus.rule.Rules; import org.apache.skywalking.oap.server.core.CoreModule; import 
org.apache.skywalking.oap.server.core.CoreModuleProvider; import org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity; @@ -49,7 +49,7 @@ import org.junit.jupiter.api.extension.ExtendWith; import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import org.testcontainers.shaded.com.fasterxml.jackson.core.JsonParseException; import org.testcontainers.shaded.com.fasterxml.jackson.databind.ObjectMapper; @@ -108,14 +108,14 @@ protected void register() { // FIX 1: Removed spy() wrapper. // We use the instance directly. If it is a Mock (from other tests), using it directly is fine. - Whitebox.setInternalState(MetricsStreamProcessor.class, "PROCESSOR", + ReflectUtil.setInternalState(MetricsStreamProcessor.class, "PROCESSOR", MetricsStreamProcessor.getInstance()); // FIX 2: Changed spy(CoreModule.class) to mock(CoreModule.class) // Spying on a Class literal is invalid in modern Mockito. 
CoreModule coreModule = Mockito.mock(CoreModule.class); - Whitebox.setInternalState(coreModule, "loadedProvider", moduleProvider); + ReflectUtil.setInternalState(coreModule, "loadedProvider", moduleProvider); telegrafServiceHandler = buildTelegrafServiceHandler(); } diff --git a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/pom.xml b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/pom.xml index 94ea11676f94..353513ffc80f 100644 --- a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/pom.xml +++ b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/pom.xml @@ -38,5 +38,11 @@ <artifactId>meter-analyzer</artifactId> <version>${project.version}</version> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetrics.java b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetrics.java index 8d43b9ada64a..c05772029fe7 100644 --- a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetrics.java +++ b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetrics.java @@ -30,10 +30,10 @@ import org.apache.commons.lang3.time.StopWatch; import org.apache.commons.text.StringTokenizer; import org.apache.skywalking.oap.server.library.util.StringUtil; -import org.apache.skywalking.oap.meter.analyzer.MetricConvert; -import org.apache.skywalking.oap.meter.analyzer.dsl.Sample; -import 
org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily; -import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricConvert; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder; import org.apache.skywalking.oap.server.core.analysis.meter.MeterSystem; import org.apache.skywalking.oap.server.library.util.CollectionUtils; import org.apache.skywalking.oap.server.receiver.zabbix.provider.config.ZabbixConfig; diff --git a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/config/ZabbixConfig.java b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/config/ZabbixConfig.java index c419d38d1522..6081212f8877 100644 --- a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/config/ZabbixConfig.java +++ b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/main/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/config/ZabbixConfig.java @@ -19,7 +19,7 @@ package org.apache.skywalking.oap.server.receiver.zabbix.provider.config; import lombok.Data; -import org.apache.skywalking.oap.meter.analyzer.MetricRuleConfig; +import org.apache.skywalking.oap.meter.analyzer.v2.MetricRuleConfig; import java.util.List; @@ -30,7 +30,6 @@ public class ZabbixConfig implements MetricRuleConfig { private String expSuffix; private String expPrefix; private String filter; - private String initExp; private Entities entities; private List<String> requiredZabbixItemKeys; private List<Metric> metrics; diff --git 
a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetricsTest.java b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetricsTest.java index e1d15ca5b63c..8455baf56810 100644 --- a/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetricsTest.java +++ b/oap-server/server-receiver-plugin/skywalking-zabbix-receiver-plugin/src/test/java/org/apache/skywalking/oap/server/receiver/zabbix/provider/ZabbixMetricsTest.java @@ -45,7 +45,7 @@ import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; import org.mockito.junit.jupiter.MockitoSettings; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.ArrayList; import java.util.Arrays; @@ -91,12 +91,12 @@ public void setupMetrics() throws Throwable { // prepare the context meterSystem = Mockito.spy(new MeterSystem(moduleManager)); - Whitebox.setInternalState(MetricsStreamProcessor.class, "PROCESSOR", + ReflectUtil.setInternalState(MetricsStreamProcessor.class, "PROCESSOR", Mockito.spy(MetricsStreamProcessor.getInstance())); doNothing().when(MetricsStreamProcessor.getInstance()).create(any(), (StreamDefinition) any(), any()); CoreModule coreModule = Mockito.spy(CoreModule.class); - Whitebox.setInternalState(coreModule, "loadedProvider", moduleProvider); + ReflectUtil.setInternalState(coreModule, "loadedProvider", moduleProvider); when(moduleManager.find(CoreModule.NAME)).thenReturn(coreModule); when(moduleProvider.getService(MeterSystem.class)).thenReturn(meterSystem); @@ -106,7 +106,7 @@ public void setupMetrics() throws Throwable { map.put("avgLabeled", AvgLabeledFunction.class); map.put("avgHistogram", AvgHistogramFunction.class); 
map.put("avgHistogramPercentile", AvgHistogramPercentileFunction.class); - Whitebox.setInternalState(meterSystem, "functionRegister", map); + ReflectUtil.setInternalState(meterSystem, "functionRegister", map); super.setupMetrics(); } diff --git a/oap-server/server-starter/pom.xml b/oap-server/server-starter/pom.xml index a1949dcc5c34..ce1601659e81 100644 --- a/oap-server/server-starter/pom.xml +++ b/oap-server/server-starter/pom.xml @@ -47,6 +47,13 @@ </dependency> <!-- OAL runtime core --> + <!-- Hierarchy rule compiler (SPI-loaded by server-core) --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>hierarchy</artifactId> + <version>${project.version}</version> + </dependency> + <!-- cluster module --> <dependency> <groupId>org.apache.skywalking</groupId> diff --git a/oap-server/server-starter/src/main/resources/hierarchy-definition.yml b/oap-server/server-starter/src/main/resources/hierarchy-definition.yml index 1f44cf5630b3..d8540809f683 100644 --- a/oap-server/server-starter/src/main/resources/hierarchy-definition.yml +++ b/oap-server/server-starter/src/main/resources/hierarchy-definition.yml @@ -81,8 +81,11 @@ hierarchy: CILIUM_SERVICE: K8S_SERVICE: short-name -# Use Groovy script to define the matching rules, the input parameters are the upper service(u) and the lower service(l) and the return value is a boolean, -# which are used to match the relation between the upper service(u) and the lower service(l) on the different layers. +# Define the matching rules as expressions. The input parameters are the upper service(u) and the lower service(l), +# and the return value is a boolean, used to match the relation between the upper service(u) and the lower service(l) +# on different layers. Rules support property access (e.g., u.name, l.shortName), String method calls +# (e.g., substring, lastIndexOf, concat), if/else, return, comparison (==, !=, >, <), logical operators (&&, ||, !), +# arithmetic (+, -), and string/number/boolean literals. 
auto-matching-rules: # the name of the upper service is equal to the name of the lower service name: "{ (u, l) -> u.name == l.name }" diff --git a/oap-server/server-starter/src/main/resources/lal/envoy-als.yaml b/oap-server/server-starter/src/main/resources/lal/envoy-als.yaml index e6530c3fa713..31843e8888ef 100644 --- a/oap-server/server-starter/src/main/resources/lal/envoy-als.yaml +++ b/oap-server/server-starter/src/main/resources/lal/envoy-als.yaml @@ -14,6 +14,11 @@ # limitations under the License. rules: + # The envoy-als rule has no json/yaml/text parser — it accesses protobuf fields + # directly via parsed.* (e.g. parsed?.response?.responseCode?.value). + # The extra log type (HTTPAccessLogEntry) is registered via SPI by + # EnvoyHTTPLALSourceTypeProvider in envoy-metrics-receiver-plugin, + # enabling the compiler to generate direct proto getter calls. - name: envoy-als layer: MESH dsl: | diff --git a/oap-server/server-storage-plugin/storage-banyandb-plugin/pom.xml b/oap-server/server-storage-plugin/storage-banyandb-plugin/pom.xml index 14558eca719d..f98243e86ea4 100644 --- a/oap-server/server-storage-plugin/storage-banyandb-plugin/pom.xml +++ b/oap-server/server-storage-plugin/storage-banyandb-plugin/pom.xml @@ -50,5 +50,11 @@ <version>${project.version}</version> <scope>test</scope> </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-storage-plugin/storage-banyandb-plugin/src/test/java/org/apache/skywalking/oap/server/storage/plugin/banyandb/BanyanDBIT.java b/oap-server/server-storage-plugin/storage-banyandb-plugin/src/test/java/org/apache/skywalking/oap/server/storage/plugin/banyandb/BanyanDBIT.java index 8539e4c52e4c..a8a0d3fc0e7a 100644 --- 
a/oap-server/server-storage-plugin/storage-banyandb-plugin/src/test/java/org/apache/skywalking/oap/server/storage/plugin/banyandb/BanyanDBIT.java +++ b/oap-server/server-storage-plugin/storage-banyandb-plugin/src/test/java/org/apache/skywalking/oap/server/storage/plugin/banyandb/BanyanDBIT.java @@ -65,7 +65,7 @@ import org.junit.jupiter.api.Test; import org.mockito.MockedStatic; import org.mockito.Mockito; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import org.testcontainers.containers.GenericContainer; import org.testcontainers.containers.wait.strategy.Wait; import org.testcontainers.junit.jupiter.Container; @@ -115,7 +115,7 @@ protected void setUpConnection() throws Exception { Mockito.when(telemetryProvider.getService(MetricsCreator.class)) .thenReturn(new MetricsCreatorNoop()); TelemetryModule telemetryModule = Mockito.spy(TelemetryModule.class); - Whitebox.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); + ReflectUtil.setInternalState(telemetryModule, "loadedProvider", telemetryProvider); Mockito.when(moduleManager.find(TelemetryModule.NAME)).thenReturn(telemetryModule); log.info("create BanyanDB client and try to connect"); config = new BanyanDBConfigLoader(provider).loadConfig(); diff --git a/oap-server/server-testing/src/main/java/org/apache/skywalking/oap/server/testing/util/ReflectUtil.java b/oap-server/server-testing/src/main/java/org/apache/skywalking/oap/server/testing/util/ReflectUtil.java new file mode 100644 index 000000000000..fd9371ba4ad2 --- /dev/null +++ b/oap-server/server-testing/src/main/java/org/apache/skywalking/oap/server/testing/util/ReflectUtil.java @@ -0,0 +1,157 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.testing.util; + +import java.lang.reflect.Field; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; + +/** + * Reflection utilities for test code. Replaces {@code org.powermock.reflect.Whitebox}. + * + * <p>Uses {@code sun.misc.Unsafe} to write {@code static final} fields, which standard + * reflection cannot modify on JDK 12+. + */ +public final class ReflectUtil { + + private static final sun.misc.Unsafe UNSAFE; + + static { + try { + final Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe"); + f.setAccessible(true); + UNSAFE = (sun.misc.Unsafe) f.get(null); + } catch (Exception e) { + throw new ExceptionInInitializerError(e); + } + } + + private ReflectUtil() { + } + + /** + * Set a field value on an object instance, searching up the class hierarchy. 
+ */ + public static void setInternalState(final Object target, final String fieldName, + final Object value) { + final Field field = findField(target.getClass(), fieldName); + if (Modifier.isFinal(field.getModifiers())) { + final long offset = UNSAFE.objectFieldOffset(field); + UNSAFE.putObject(target, offset, value); + } else { + field.setAccessible(true); + try { + field.set(target, value); + } catch (IllegalAccessException e) { + throw new RuntimeException("Failed to set field '" + fieldName + "'", e); + } + } + } + + /** + * Set a static field value on a class, searching up the class hierarchy. + * Uses {@code Unsafe} for final fields since {@code Field.set()} is blocked on JDK 12+. + */ + public static void setInternalState(final Class<?> clazz, final String fieldName, + final Object value) { + final Field field = findField(clazz, fieldName); + if (Modifier.isFinal(field.getModifiers())) { + final Object base = UNSAFE.staticFieldBase(field); + final long offset = UNSAFE.staticFieldOffset(field); + UNSAFE.putObject(base, offset, value); + } else { + field.setAccessible(true); + try { + field.set(null, value); + } catch (IllegalAccessException e) { + throw new RuntimeException("Failed to set static field '" + fieldName + "'", e); + } + } + } + + /** + * Get a field value from an object instance, searching up the class hierarchy. + */ + @SuppressWarnings("unchecked") + public static <T> T getInternalState(final Object target, final String fieldName) { + final Field field = findField(target.getClass(), fieldName); + field.setAccessible(true); + try { + return (T) field.get(target); + } catch (IllegalAccessException e) { + throw new RuntimeException("Failed to get field '" + fieldName + "'", e); + } + } + + /** + * Invoke a method on an object instance by name, searching up the class hierarchy. + */ + @SuppressWarnings("unchecked") + public static <T> T invokeMethod(final Object target, final String methodName, + final Object... 
args) throws Exception { + final Class<?>[] paramTypes = new Class[args.length]; + for (int i = 0; i < args.length; i++) { + paramTypes[i] = args[i] != null ? args[i].getClass() : Object.class; + } + final Method method = findMethod(target.getClass(), methodName, paramTypes); + method.setAccessible(true); + return (T) method.invoke(target, args); + } + + private static Field findField(final Class<?> clazz, final String fieldName) { + Class<?> current = clazz; + while (current != null) { + try { + return current.getDeclaredField(fieldName); + } catch (NoSuchFieldException e) { + current = current.getSuperclass(); + } + } + throw new RuntimeException( + "Field '" + fieldName + "' not found in " + clazz.getName() + " or its superclasses"); + } + + private static Method findMethod(final Class<?> clazz, final String methodName, + final Class<?>[] paramTypes) { + Class<?> current = clazz; + while (current != null) { + for (final Method m : current.getDeclaredMethods()) { + if (!m.getName().equals(methodName)) { + continue; + } + if (m.getParameterCount() != paramTypes.length) { + continue; + } + boolean match = true; + final Class<?>[] declared = m.getParameterTypes(); + for (int i = 0; i < declared.length; i++) { + if (!declared[i].isAssignableFrom(paramTypes[i])) { + match = false; + break; + } + } + if (match) { + return m; + } + } + current = current.getSuperclass(); + } + throw new RuntimeException( + "Method '" + methodName + "' not found in " + clazz.getName() + " or its superclasses"); + } +} diff --git a/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/pom.xml b/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/pom.xml index abc029fe9bb7..015ae9fb7fa2 100644 --- a/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/pom.xml +++ b/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/pom.xml @@ -42,5 +42,11 @@ <version>${project.version}</version> </dependency> <!-- core module 
--> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> </dependencies> </project> diff --git a/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/src/test/java/org/apache/skywalking/oap/server/tool/profile/exporter/test/ProfileSnapshotExporterTest.java b/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/src/test/java/org/apache/skywalking/oap/server/tool/profile/exporter/test/ProfileSnapshotExporterTest.java index bb0518d99535..4793af1bffdd 100644 --- a/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/src/test/java/org/apache/skywalking/oap/server/tool/profile/exporter/test/ProfileSnapshotExporterTest.java +++ b/oap-server/server-tools/profile-exporter/tool-profile-snapshot-bootstrap/src/test/java/org/apache/skywalking/oap/server/tool/profile/exporter/test/ProfileSnapshotExporterTest.java @@ -45,7 +45,7 @@ import org.mockito.Mock; import org.mockito.Mockito; import org.mockito.junit.jupiter.MockitoExtension; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import org.yaml.snakeyaml.Yaml; import java.io.File; @@ -69,8 +69,8 @@ public class ProfileSnapshotExporterTest { public void init() throws IOException { CoreModule coreModule = Mockito.spy(CoreModule.class); StorageModule storageModule = Mockito.spy(StorageModule.class); - Whitebox.setInternalState(coreModule, "loadedProvider", moduleProvider); - Whitebox.setInternalState(storageModule, "loadedProvider", moduleProvider); + ReflectUtil.setInternalState(coreModule, "loadedProvider", moduleProvider); + ReflectUtil.setInternalState(storageModule, "loadedProvider", moduleProvider); Mockito.when(moduleManager.find(CoreModule.NAME)).thenReturn(coreModule); Mockito.when(moduleManager.find(StorageModule.NAME)).thenReturn(storageModule); final ProfileTaskQueryService 
taskQueryService = new ProfileTaskQueryService(moduleManager, coreModuleConfig); diff --git a/pom.xml b/pom.xml index dec2041fc663..6bc9b49f46f0 100755 --- a/pom.xml +++ b/pom.xml @@ -83,6 +83,7 @@ <modules> <module>oap-server</module> <module>oap-server-bom</module> + <module>test/script-cases/script-runtime-with-groovy</module> </modules> </profile> <profile> @@ -156,13 +157,12 @@ <project.build.outputTimestamp>1715298980</project.build.outputTimestamp> <!-- Compiling and test stages tools. --> - <powermock.version>2.0.9</powermock.version> <checkstyle.version>6.18</checkstyle.version> <junit.version>5.9.2</junit.version> <mockito-core.version>5.11.0</mockito-core.version> <system-stubs.version>2.1.4</system-stubs.version> <lombok.version>1.18.40</lombok.version> - <byte-buddy.version>1.17.0</byte-buddy.version> + <byte-buddy.version>1.18.7</byte-buddy.version> <!-- core lib dependency --> <grpc.version>1.70.0</grpc.version> @@ -193,14 +193,14 @@ <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version> <build-helper-maven-plugin.version>3.2.0</build-helper-maven-plugin.version> <maven-checkstyle-plugin.version>3.1.0</maven-checkstyle-plugin.version> - <jmh.version>1.21</jmh.version> + <jmh.version>1.37</jmh.version> <checkstyle.fails.on.error>true</checkstyle.fails.on.error> <assertj-core.version>3.20.2</assertj-core.version> <cyclonedx-maven-plugin.version>2.8.0</cyclonedx-maven-plugin.version> <flatten-plugin-version>1.6.0</flatten-plugin-version> <skipUTs>false</skipUTs> - <argLine>--add-opens java.base/java.lang=ALL-UNNAMED</argLine> + <argLine>-javaagent:${settings.localRepository}/net/bytebuddy/byte-buddy-agent/${byte-buddy.version}/byte-buddy-agent-${byte-buddy.version}.jar --add-opens java.base/java.lang=ALL-UNNAMED</argLine> <delombok.output.dir>${project.build.directory}/delombok</delombok.output.dir> </properties> @@ -227,11 +227,6 @@ <scope>test</scope> </dependency> - <dependency> - <groupId>org.powermock</groupId> - 
<artifactId>powermock-reflect</artifactId> - <scope>test</scope> - </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> @@ -303,12 +298,6 @@ <scope>test</scope> </dependency> - <dependency> - <groupId>org.powermock</groupId> - <artifactId>powermock-reflect</artifactId> - <version>${powermock.version}</version> - <scope>test</scope> - </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> diff --git a/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/pom.xml b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/pom.xml new file mode 100644 index 000000000000..8f9d222aae22 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/pom.xml @@ -0,0 +1,94 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Licensed to the Apache Software Foundation (ASF) under one or more + ~ contributor license agreements. See the NOTICE file distributed with + ~ this work for additional information regarding copyright ownership. + ~ The ASF licenses this file to You under the Apache License, Version 2.0 + ~ (the "License"); you may not use this file except in compliance with + ~ the License. You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. 
+ ~ + --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>script-runtime-with-groovy</artifactId> + <groupId>org.apache.skywalking</groupId> + <version>${revision}</version> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>hierarchy-v1-v2-checker</artifactId> + <description>Dual-path comparison tests: Groovy hierarchy rules (v1) vs compiler-generated Javassist hierarchy rules (v2)</description> + + <dependencies> + <!-- V1 Groovy hierarchy rules --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>hierarchy-v1-with-groovy</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <!-- V2 hierarchy rule compiler (ANTLR4 + Javassist, merged into hierarchy) --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>hierarchy</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-core</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <dependency> + <groupId>org.openjdk.jmh</groupId> + <artifactId>jmh-generator-annprocess</artifactId> + <scope>test</scope> + </dependency> + </dependencies> + + <build> + <plugins> + <plugin> + <groupId>org.apache.maven.plugins</groupId> + <artifactId>maven-compiler-plugin</artifactId> + <configuration> + <annotationProcessorPaths> + <path> + <groupId>org.projectlombok</groupId> + <artifactId>lombok</artifactId> + <version>${lombok.version}</version> + </path> + <path> + <groupId>org.openjdk.jmh</groupId> + <artifactId>jmh-generator-annprocess</artifactId> + <version>${jmh.version}</version> + </path> + </annotationProcessorPaths> + </configuration> + </plugin> + <plugin> + 
<artifactId>maven-clean-plugin</artifactId> + <configuration> + <filesets> + <fileset> + <directory>${project.basedir}/../../scripts</directory> + <includes> + <include>**/*.generated-classes/**</include> + </includes> + </fileset> + </filesets> + </configuration> + </plugin> + </plugins> + </build> +</project> diff --git a/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/core/config/HierarchyBenchmark.java b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/core/config/HierarchyBenchmark.java new file mode 100644 index 000000000000..c1263efd9c83 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/core/config/HierarchyBenchmark.java @@ -0,0 +1,233 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.core.config; + +import java.io.FileReader; +import java.io.Reader; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; +import java.util.function.BiFunction; +import org.apache.skywalking.oap.server.core.config.v2.compiler.HierarchyRuleClassGenerator; +import org.apache.skywalking.oap.server.core.query.type.Service; +import org.junit.jupiter.api.Test; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Level; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.OutputTimeUnit; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Warmup; +import org.openjdk.jmh.infra.Blackhole; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.options.Options; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.yaml.snakeyaml.Yaml; + +/** + * JMH benchmark comparing Hierarchy v1 (Groovy) vs v2 (ANTLR4 + Javassist) + * compilation and execution performance using test-hierarchy-definition.yml + * (4 matching rules, 23 test pairs). 
+ * + * <p>Run: mvn test -pl test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker + * -Dtest=HierarchyBenchmark#runBenchmark -DfailIfNoTests=false + * + * <h2>Reference results (Apple M3 Max, 128 GB RAM, macOS 26.2, JDK 25)</h2> + * <pre> + * Benchmark Mode Cnt Score Error Units + * HierarchyBenchmark.compileV1 avgt 5 2333.266 ± 285.446 us/op + * HierarchyBenchmark.compileV2 avgt 5 2482.365 ± 2467.419 us/op + * HierarchyBenchmark.executeV1 avgt 5 0.958 ± 0.095 us/op + * HierarchyBenchmark.executeV2 avgt 5 0.370 ± 0.006 us/op + * </pre> + * + * <p>Execute speedup: v2 is ~2.6x faster than v1. + * Compile times are comparable (only 4 short rules). + */ +@State(Scope.Thread) +@BenchmarkMode(Mode.AverageTime) +@OutputTimeUnit(TimeUnit.MICROSECONDS) +@Warmup(iterations = 3, time = 2) +@Measurement(iterations = 5, time = 5) +@Fork(1) +public class HierarchyBenchmark { + + private Map<String, String> ruleExpressions; + + // Pre-compiled rules for execute benchmarks + private Map<String, BiFunction<Service, Service, Boolean>> v1Rules; + private Map<String, BiFunction<Service, Service, Boolean>> v2Rules; + + // Test pairs per rule + private Map<String, List<ServicePair>> testPairs; + + @Setup(Level.Trial) + @SuppressWarnings("unchecked") + public void setup() throws Exception { + final Path hierarchyYml = findHierarchyDefinition(); + final Reader reader = new FileReader(hierarchyYml.toFile()); + final Yaml yaml = new Yaml(); + final Map<String, Map> config = yaml.loadAs(reader, Map.class); + ruleExpressions = (Map<String, String>) config.get("auto-matching-rules"); + + // Load test pairs + testPairs = loadTestPairs(hierarchyYml); + + // Pre-compile for execute benchmarks + final GroovyHierarchyRuleProvider groovyProvider = new GroovyHierarchyRuleProvider(); + v1Rules = groovyProvider.buildRules(ruleExpressions); + + v2Rules = new HashMap<>(); + final HierarchyRuleClassGenerator gen = new HierarchyRuleClassGenerator(); + for (final Map.Entry<String, String> 
entry : ruleExpressions.entrySet()) { + v2Rules.put(entry.getKey(), gen.compile(entry.getKey(), entry.getValue())); + } + } + + @Benchmark + public void compileV1(final Blackhole bh) { + final GroovyHierarchyRuleProvider provider = new GroovyHierarchyRuleProvider(); + bh.consume(provider.buildRules(ruleExpressions)); + } + + @Benchmark + public void compileV2(final Blackhole bh) { + final HierarchyRuleClassGenerator gen = new HierarchyRuleClassGenerator(); + for (final Map.Entry<String, String> entry : ruleExpressions.entrySet()) { + try { + bh.consume(gen.compile(entry.getKey(), entry.getValue())); + } catch (Exception ignored) { + } + } + } + + @Benchmark + public void executeV1(final Blackhole bh) { + for (final Map.Entry<String, BiFunction<Service, Service, Boolean>> entry : + v1Rules.entrySet()) { + final List<ServicePair> pairs = testPairs.get(entry.getKey()); + if (pairs == null) { + continue; + } + for (final ServicePair pair : pairs) { + bh.consume(entry.getValue().apply(pair.upper, pair.lower)); + } + } + } + + @Benchmark + public void executeV2(final Blackhole bh) { + for (final Map.Entry<String, BiFunction<Service, Service, Boolean>> entry : + v2Rules.entrySet()) { + final List<ServicePair> pairs = testPairs.get(entry.getKey()); + if (pairs == null) { + continue; + } + for (final ServicePair pair : pairs) { + bh.consume(entry.getValue().apply(pair.upper, pair.lower)); + } + } + } + + // ==================== Data loading ==================== + + @SuppressWarnings("unchecked") + private Map<String, List<ServicePair>> loadTestPairs(final Path hierarchyYml) throws Exception { + final String baseName = hierarchyYml.getFileName().toString() + .replaceFirst("\\.(yaml|yml)$", ""); + final Path dataPath = hierarchyYml.getParent().resolve(baseName + ".data.yaml"); + final Map<String, List<ServicePair>> result = new HashMap<>(); + if (!Files.isRegularFile(dataPath)) { + return result; + } + final Yaml yaml = new Yaml(); + final Map<String, Object> dataConfig = 
yaml.load(Files.readString(dataPath)); + if (dataConfig == null || !dataConfig.containsKey("input")) { + return result; + } + final Map<String, List<Map<String, Object>>> input = + (Map<String, List<Map<String, Object>>>) dataConfig.get("input"); + for (final Map.Entry<String, List<Map<String, Object>>> entry : input.entrySet()) { + final List<ServicePair> pairs = new ArrayList<>(); + for (final Map<String, Object> pairDef : entry.getValue()) { + final Map<String, String> upperDef = + (Map<String, String>) pairDef.get("upper"); + final Map<String, String> lowerDef = + (Map<String, String>) pairDef.get("lower"); + pairs.add(new ServicePair( + svc(upperDef.getOrDefault("name", ""), + upperDef.getOrDefault("shortName", "")), + svc(lowerDef.getOrDefault("name", ""), + lowerDef.getOrDefault("shortName", "")) + )); + } + result.put(entry.getKey(), pairs); + } + return result; + } + + private static Service svc(final String name, final String shortName) { + final Service s = new Service(); + s.setName(name); + s.setShortName(shortName); + return s; + } + + private Path findHierarchyDefinition() { + final String[] candidates = { + "test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.yml", + "../../scripts/hierarchy-rule/test-hierarchy-definition.yml" + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isRegularFile(path)) { + return path; + } + } + throw new IllegalStateException( + "Cannot find test-hierarchy-definition.yml in scripts/hierarchy-rule/"); + } + + private static class ServicePair { + final Service upper; + final Service lower; + + ServicePair(final Service upper, final Service lower) { + this.upper = upper; + this.lower = lower; + } + } + + // ==================== JMH launcher ==================== + + @Test + void runBenchmark() throws Exception { + final Options opt = new OptionsBuilder() + .include(getClass().getSimpleName()) + .build(); + new Runner(opt).run(); + } +} diff --git 
a/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/core/config/HierarchyRuleComparisonTest.java b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/core/config/HierarchyRuleComparisonTest.java new file mode 100644 index 000000000000..1f6a7a3e3a6c --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/core/config/HierarchyRuleComparisonTest.java @@ -0,0 +1,191 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.core.config; + +import java.io.File; +import java.io.FileReader; +import java.io.Reader; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.Map; +import java.util.function.BiFunction; +import org.apache.skywalking.oap.server.core.query.type.Service; +import org.junit.jupiter.api.DynamicTest; +import org.junit.jupiter.api.TestFactory; +import org.apache.skywalking.oap.server.core.config.v2.compiler.HierarchyRuleClassGenerator; +import org.yaml.snakeyaml.Yaml; + +import static org.junit.jupiter.api.Assertions.assertEquals; + +/** + * Dual-path comparison test for hierarchy matching rules. + * Verifies that Groovy-based rules (v1) produce identical results + * to pure Java rules (v2) for all service pair combinations. + * + * <p>Test pairs are loaded from a companion {@code .data.yaml} file + * alongside the hierarchy definition YAML. + */ +class HierarchyRuleComparisonTest { + + private static Service svc(final String name, final String shortName) { + final Service s = new Service(); + s.setName(name); + s.setShortName(shortName); + return s; + } + + private static class TestPair { + final String description; + final Service upper; + final Service lower; + final Boolean expected; + + TestPair(final String description, final Service upper, + final Service lower, final Boolean expected) { + this.description = description; + this.upper = upper; + this.lower = lower; + this.expected = expected; + } + } + + @SuppressWarnings("unchecked") + @TestFactory + Collection<DynamicTest> allRulesProduceIdenticalResults() throws Exception { + final Path hierarchyYml = findHierarchyDefinition(); + final Reader reader = new FileReader(hierarchyYml.toFile()); + final Yaml yaml = new Yaml(); + final Map<String, Map> config = yaml.loadAs(reader, Map.class); + final Map<String, String> ruleExpressions = + (Map<String, String>) 
config.get("auto-matching-rules"); + + // Load companion .data.yaml + final Map<String, List<TestPair>> testPairsByRule = loadInputData(hierarchyYml); + + final GroovyHierarchyRuleProvider groovyProvider = new GroovyHierarchyRuleProvider(); + + final Map<String, BiFunction<Service, Service, Boolean>> v1Rules = + groovyProvider.buildRules(ruleExpressions); + + // Build v2 rules with class output + final String baseName = hierarchyYml.getFileName().toString() + .replaceFirst("\\.(yaml|yml)$", ""); + final File classBaseDir = new File(hierarchyYml.getParent().toFile(), + baseName + ".generated-classes"); + final HierarchyRuleClassGenerator generator = new HierarchyRuleClassGenerator(); + generator.setClassOutputDir(classBaseDir); + final java.util.Map<String, BiFunction<Service, Service, Boolean>> v2Rules = + new java.util.HashMap<>(); + for (final Map.Entry<String, String> entry : ruleExpressions.entrySet()) { + final String ruleName = entry.getKey(); + generator.setClassNameHint(ruleName); + v2Rules.put(ruleName, generator.compile(ruleName, entry.getValue())); + } + + final List<DynamicTest> tests = new ArrayList<>(); + for (final Map.Entry<String, String> entry : ruleExpressions.entrySet()) { + final String ruleName = entry.getKey(); + final BiFunction<Service, Service, Boolean> v1 = v1Rules.get(ruleName); + final BiFunction<Service, Service, Boolean> v2 = v2Rules.get(ruleName); + + final List<TestPair> pairs = testPairsByRule.get(ruleName); + if (pairs == null || pairs.isEmpty()) { + continue; + } + for (final TestPair pair : pairs) { + tests.add(DynamicTest.dynamicTest( + ruleName + " | " + pair.description, + () -> { + final boolean v1Result = v1.apply(pair.upper, pair.lower); + final boolean v2Result = v2.apply(pair.upper, pair.lower); + assertEquals(v1Result, v2Result, + "Rule '" + ruleName + "' diverged for " + pair.description + + ": v1=" + v1Result + ", v2=" + v2Result); + if (pair.expected != null) { + assertEquals(pair.expected, v1Result, + "Rule '" + 
ruleName + "' expected " + pair.expected + + " for " + pair.description + + " but v1=" + v1Result); + } + } + )); + } + } + return tests; + } + + @SuppressWarnings("unchecked") + private Map<String, List<TestPair>> loadInputData(final Path hierarchyYml) throws Exception { + final String baseName = hierarchyYml.getFileName().toString() + .replaceFirst("\\.(yaml|yml)$", ""); + final Path inputPath = hierarchyYml.getParent().resolve(baseName + ".data.yaml"); + + final Map<String, List<TestPair>> result = new java.util.HashMap<>(); + if (!Files.isRegularFile(inputPath)) { + return result; + } + + final Yaml yaml = new Yaml(); + final String content = Files.readString(inputPath); + final Map<String, Object> inputConfig = yaml.load(content); + if (inputConfig == null || !inputConfig.containsKey("input")) { + return result; + } + + final Map<String, List<Map<String, Object>>> input = + (Map<String, List<Map<String, Object>>>) inputConfig.get("input"); + for (final Map.Entry<String, List<Map<String, Object>>> entry : input.entrySet()) { + final String ruleName = entry.getKey(); + final List<TestPair> pairs = new ArrayList<>(); + for (final Map<String, Object> pairDef : entry.getValue()) { + final String description = (String) pairDef.getOrDefault("description", ""); + final Map<String, String> upperDef = (Map<String, String>) pairDef.get("upper"); + final Map<String, String> lowerDef = (Map<String, String>) pairDef.get("lower"); + final Boolean expected = pairDef.containsKey("expected") + ? 
(Boolean) pairDef.get("expected") : null; + final Service upper = svc( + upperDef.getOrDefault("name", ""), + upperDef.getOrDefault("shortName", "")); + final Service lower = svc( + lowerDef.getOrDefault("name", ""), + lowerDef.getOrDefault("shortName", "")); + pairs.add(new TestPair(description, upper, lower, expected)); + } + result.put(ruleName, pairs); + } + return result; + } + + private Path findHierarchyDefinition() { + final String[] candidates = { + "test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.yml", + "../../scripts/hierarchy-rule/test-hierarchy-definition.yml" + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isRegularFile(path)) { + return path; + } + } + throw new IllegalStateException( + "Cannot find test-hierarchy-definition.yml in scripts/hierarchy-rule/"); + } +} diff --git a/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/resources/hierarchy-definition.yml b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/resources/hierarchy-definition.yml new file mode 100644 index 000000000000..1f44cf5630b3 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-v2-checker/src/test/resources/hierarchy-definition.yml @@ -0,0 +1,123 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +# Define the hierarchy of service layers; the layers listed under a specific layer are its related lower layers. +# A relation can carry a matching rule for auto matching; these rules are defined in the `auto-matching-rules` section. +# All the layers are defined in the file `org.apache.skywalking.oap.server.core.analysis.Layers.java`. +# Notice: some hierarchy relations and auto-matching rules only work in k8s environments. + +hierarchy: + MESH: + MESH_DP: name + K8S_SERVICE: short-name + + MESH_DP: + K8S_SERVICE: short-name + + GENERAL: + APISIX: lower-short-name-remove-ns + K8S_SERVICE: lower-short-name-remove-ns + KONG: lower-short-name-remove-ns + + MYSQL: + K8S_SERVICE: short-name + + POSTGRESQL: + K8S_SERVICE: short-name + + APISIX: + K8S_SERVICE: short-name + + NGINX: + K8S_SERVICE: short-name + + SO11Y_OAP: + K8S_SERVICE: short-name + + ROCKETMQ: + K8S_SERVICE: short-name + + RABBITMQ: + K8S_SERVICE: short-name + + KAFKA: + K8S_SERVICE: short-name + + CLICKHOUSE: + K8S_SERVICE: short-name + + PULSAR: + K8S_SERVICE: short-name + + ACTIVEMQ: + K8S_SERVICE: short-name + + KONG: + K8S_SERVICE: short-name + + VIRTUAL_DATABASE: + MYSQL: lower-short-name-with-fqdn + POSTGRESQL: lower-short-name-with-fqdn + CLICKHOUSE: lower-short-name-with-fqdn + + VIRTUAL_MQ: + ROCKETMQ: lower-short-name-with-fqdn + RABBITMQ: lower-short-name-with-fqdn + KAFKA: lower-short-name-with-fqdn + PULSAR: lower-short-name-with-fqdn + + CILIUM_SERVICE: + K8S_SERVICE: short-name + +# Use Groovy scripts to define the matching rules. The input parameters are the upper service (u) and the lower service (l), and the return value is a boolean +# indicating whether the upper service (u) and the lower service (l) match across the different layers.
+auto-matching-rules: + # the name of the upper service is equal to the name of the lower service + name: "{ (u, l) -> u.name == l.name }" + # the short name of the upper service is equal to the short name of the lower service + short-name: "{ (u, l) -> u.shortName == l.shortName }" + # remove the k8s namespace from the lower service short name + # this rule only works in k8s environments. + lower-short-name-remove-ns: "{ (u, l) -> { if(l.shortName.lastIndexOf('.') > 0) return u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')); return false; } }" + # the short name of the upper service, with its port removed, is equal to the short name of the lower service with the fqdn suffix + # this rule only works in k8s environments. + lower-short-name-with-fqdn: "{ (u, l) -> { if(u.shortName.lastIndexOf(':') > 0) return u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local'); return false; } }" + +# The hierarchy level of each service layer; the level defines the order of the service layers for UI presentation. +# The level of an upper service must be greater than the level of its lower services in the `hierarchy` section. +layer-levels: + MESH: 3 + GENERAL: 3 + SO11Y_OAP: 3 + VIRTUAL_DATABASE: 3 + VIRTUAL_MQ: 3 + + MYSQL: 2 + POSTGRESQL: 2 + APISIX: 2 + NGINX: 2 + ROCKETMQ: 2 + CLICKHOUSE: 2 + RABBITMQ: 2 + KAFKA: 2 + PULSAR: 2 + ACTIVEMQ: 2 + KONG: 2 + + MESH_DP: 1 + CILIUM_SERVICE: 1 + + K8S_SERVICE: 0 + diff --git a/test/script-cases/script-runtime-with-groovy/hierarchy-v1-with-groovy/pom.xml b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-with-groovy/pom.xml new file mode 100644 index 000000000000..c5efa459d62f --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-with-groovy/pom.xml @@ -0,0 +1,42 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Licensed to the Apache Software Foundation (ASF) under one or more + ~ contributor license agreements. 
See the NOTICE file distributed with + ~ this work for additional information regarding copyright ownership. + ~ The ASF licenses this file to You under the Apache License, Version 2.0 + ~ (the "License"); you may not use this file except in compliance with + ~ the License. You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. + ~ + --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>script-runtime-with-groovy</artifactId> + <groupId>org.apache.skywalking</groupId> + <version>${revision}</version> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>hierarchy-v1-with-groovy</artifactId> + <description>Groovy-based hierarchy rule provider (for checker module only, not runtime)</description> + + <dependencies> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-core</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>org.apache.groovy</groupId> + <artifactId>groovy</artifactId> + </dependency> + </dependencies> +</project> diff --git a/test/script-cases/script-runtime-with-groovy/hierarchy-v1-with-groovy/src/main/java/org/apache/skywalking/oap/server/core/config/GroovyHierarchyRuleProvider.java b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-with-groovy/src/main/java/org/apache/skywalking/oap/server/core/config/GroovyHierarchyRuleProvider.java new file mode 100644 index 000000000000..c8e8af43056c --- /dev/null +++ 
b/test/script-cases/script-runtime-with-groovy/hierarchy-v1-with-groovy/src/main/java/org/apache/skywalking/oap/server/core/config/GroovyHierarchyRuleProvider.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.core.config; + +import groovy.lang.Closure; +import groovy.lang.GroovyShell; +import java.util.HashMap; +import java.util.Map; +import java.util.function.BiFunction; +import org.apache.skywalking.oap.server.core.query.type.Service; + +/** + * Groovy-based hierarchy rule provider. Uses GroovyShell.evaluate() to compile + * hierarchy matching rule closures from YAML expressions. + * + * <p>This provider is NOT included in the runtime classpath. It is only used + * by the hierarchy-v1-v2-checker module for CI validation against the pure Java + * provider (hierarchy-v2). 
+ */ +public final class GroovyHierarchyRuleProvider implements HierarchyDefinitionService.HierarchyRuleProvider { + + @Override + @SuppressWarnings("unchecked") + public Map<String, BiFunction<Service, Service, Boolean>> buildRules( + final Map<String, String> ruleExpressions) { + final Map<String, BiFunction<Service, Service, Boolean>> rules = new HashMap<>(); + final GroovyShell sh = new GroovyShell(); + ruleExpressions.forEach((name, expression) -> { + final Closure<Boolean> closure = (Closure<Boolean>) sh.evaluate(expression); + rules.put(name, (u, l) -> closure.call(u, l)); + }); + return rules; + } +} diff --git a/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/pom.xml b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/pom.xml new file mode 100644 index 000000000000..eb1e2c836498 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/pom.xml @@ -0,0 +1,52 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Licensed to the Apache Software Foundation (ASF) under one or more + ~ contributor license agreements. See the NOTICE file distributed with + ~ this work for additional information regarding copyright ownership. + ~ The ASF licenses this file to You under the Apache License, Version 2.0 + ~ (the "License"); you may not use this file except in compliance with + ~ the License. You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. 
+ --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>script-runtime-with-groovy</artifactId> + <groupId>org.apache.skywalking</groupId> + <version>${revision}</version> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>lal-v1-with-groovy</artifactId> + <packaging>jar</packaging> + + <dependencies> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>log-analyzer</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>mal-v1-with-groovy</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>org.apache.groovy</groupId> + <artifactId>groovy</artifactId> + </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + </dependencies> +</project> diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/Binding.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/Binding.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/Binding.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/Binding.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/DSL.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/DSL.java similarity index 100% rename from 
oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/DSL.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/DSL.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/LALPrecompiledExtension.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/LALPrecompiledExtension.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/LALPrecompiledExtension.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/LALPrecompiledExtension.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/AbstractSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/AbstractSpec.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/AbstractSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/AbstractSpec.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/LALDelegatingScript.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/LALDelegatingScript.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/LALDelegatingScript.java rename to 
test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/LALDelegatingScript.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/ExtractorSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/ExtractorSpec.java similarity index 80% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/ExtractorSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/ExtractorSpec.java index ee9e58e72e74..2a51d10d4f1b 100644 --- a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/ExtractorSpec.java +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/ExtractorSpec.java @@ -29,6 +29,7 @@ import java.util.Map; import java.util.Objects; import java.util.Optional; +import java.util.function.Consumer; import java.util.stream.Collectors; import lombok.experimental.Delegate; import org.apache.commons.lang3.StringUtils; @@ -333,6 +334,93 @@ public void sampledTrace(@DelegatesTo(SampledTraceSpec.class) final Closure<?> c sourceReceiver.receive(entity); } + public void metrics(final Consumer<SampleBuilder> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + final SampleBuilder builder = new SampleBuilder(); + consumer.accept(builder); + + final Sample sample = builder.build(); + final SampleFamily sampleFamily = SampleFamilyBuilder.newBuilder(sample).build(); + + final Optional<List<SampleFamily>> possibleMetricsContainer = BINDING.get().metricsContainer(); + + if (possibleMetricsContainer.isPresent()) { + 
possibleMetricsContainer.get().add(sampleFamily); + } else { + metricConverts.forEach(it -> it.toMeter( + ImmutableMap.<String, SampleFamily>builder() + .put(sample.getName(), sampleFamily) + .build() + )); + } + } + + public void slowSql(final Consumer<SlowSqlSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + LogData.Builder log = BINDING.get().log(); + if (log.getLayer() == null + || log.getService() == null + || log.getTimestamp() < 1) { + LOGGER.warn("SlowSql extracts failed, maybe something is not configured."); + return; + } + DatabaseSlowStatementBuilder builder = new DatabaseSlowStatementBuilder(namingControl); + builder.setLayer(Layer.nameOf(log.getLayer())); + + builder.setServiceName(log.getService()); + + BINDING.get().databaseSlowStatement(builder); + + consumer.accept(slowSql); + + if (builder.getId() == null + || builder.getLatency() < 1 + || builder.getStatement() == null) { + LOGGER.warn("SlowSql extracts failed, maybe something is not configured."); + return; + } + + long timeBucketForDB = TimeBucket.getTimeBucket(log.getTimestamp(), DownSampling.Second); + builder.setTimeBucket(timeBucketForDB); + builder.setTimestamp(log.getTimestamp()); + + builder.prepare(); + sourceReceiver.receive(builder.toDatabaseSlowStatement()); + + ServiceMeta serviceMeta = new ServiceMeta(); + serviceMeta.setName(builder.getServiceName()); + serviceMeta.setLayer(builder.getLayer()); + long timeBucket = TimeBucket.getTimeBucket(log.getTimestamp(), DownSampling.Minute); + serviceMeta.setTimeBucket(timeBucket); + sourceReceiver.receive(serviceMeta); + } + + public void sampledTrace(final Consumer<SampledTraceSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + LogData.Builder log = BINDING.get().log(); + SampledTraceBuilder builder = new SampledTraceBuilder(namingControl); + builder.setLayer(log.getLayer()); + builder.setTimestamp(log.getTimestamp()); + builder.setServiceName(log.getService()); + 
builder.setServiceInstanceName(log.getServiceInstance()); + builder.setTraceId(log.getTraceContext().getTraceId()); + BINDING.get().sampledTrace(builder); + + consumer.accept(sampledTrace); + + builder.validate(); + final Record record = builder.toRecord(); + final ISource entity = builder.toEntity(); + RecordStreamProcessor.getInstance().in(record); + sourceReceiver.receive(entity); + } + public static class SampleBuilder { @Delegate private final Sample.SampleBuilder sampleBuilder = Sample.builder(); diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/sampledtrace/SampledTraceSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/sampledtrace/SampledTraceSpec.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/sampledtrace/SampledTraceSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/sampledtrace/SampledTraceSpec.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/slowsql/SlowSqlSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/slowsql/SlowSqlSpec.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/slowsql/SlowSqlSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/extractor/slowsql/SlowSqlSpec.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/filter/FilterSpec.java 
b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/filter/FilterSpec.java similarity index 56% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/filter/FilterSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/filter/FilterSpec.java index 7fb7557b7558..b83e71b8b849 100644 --- a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/filter/FilterSpec.java +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/filter/FilterSpec.java @@ -28,6 +28,7 @@ import java.util.Map; import java.util.Optional; import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Consumer; import org.apache.skywalking.apm.network.logging.v3.LogData; import org.apache.skywalking.oap.log.analyzer.dsl.Binding; @@ -189,4 +190,169 @@ public void sink(@DelegatesTo(SinkSpec.class) final Closure<?> cl) { public void filter(final Closure<?> cl) { cl.call(); } + + public void text(final Consumer<TextParserSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + consumer.accept(textParser); + } + + public void text() { + if (BINDING.get().shouldAbort()) { + return; + } + } + + public void json(final Consumer<JsonParserSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + consumer.accept(jsonParser); + + final LogData.Builder logData = BINDING.get().log(); + try { + final Map<String, Object> parsed = jsonParser.create().readValue( + logData.getBody().getJson().getJson(), parsedType + ); + BINDING.get().parsed(parsed); + } catch (final Exception e) { + if (jsonParser.abortOnFailure()) { + BINDING.get().abort(); + } + } + } + + public void json() { + if (BINDING.get().shouldAbort()) { + return; + } + + final 
LogData.Builder logData = BINDING.get().log(); + try { + final Map<String, Object> parsed = jsonParser.create().readValue( + logData.getBody().getJson().getJson(), parsedType + ); + BINDING.get().parsed(parsed); + } catch (final Exception e) { + if (jsonParser.abortOnFailure()) { + BINDING.get().abort(); + } + } + } + + public void yaml(final Consumer<YamlParserSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + consumer.accept(yamlParser); + + final LogData.Builder logData = BINDING.get().log(); + try { + final Map<String, Object> parsed = yamlParser.create().load( + logData.getBody().getYaml().getYaml() + ); + BINDING.get().parsed(parsed); + } catch (final Exception e) { + if (yamlParser.abortOnFailure()) { + BINDING.get().abort(); + } + } + } + + public void yaml() { + if (BINDING.get().shouldAbort()) { + return; + } + + final LogData.Builder logData = BINDING.get().log(); + try { + final Map<String, Object> parsed = yamlParser.create().load( + logData.getBody().getYaml().getYaml() + ); + BINDING.get().parsed(parsed); + } catch (final Exception e) { + if (yamlParser.abortOnFailure()) { + BINDING.get().abort(); + } + } + } + + public void extractor(final Consumer<ExtractorSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + consumer.accept(extractor); + } + + public void sink(final Consumer<SinkSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + consumer.accept(sink); + + final Binding b = BINDING.get(); + final LogData.Builder logData = b.log(); + final Message extraLog = b.extraLog(); + + if (!b.shouldSave()) { + if (LOGGER.isDebugEnabled()) { + LOGGER.debug("Log is dropped: {}", TextFormat.shortDebugString(logData)); + } + return; + } + + final Optional<AtomicReference<Log>> container = BINDING.get().logContainer(); + if (container.isPresent()) { + sinkListenerFactories.stream() + .map(LogSinkListenerFactory::create) + .filter(it -> it instanceof RecordSinkListener) + .map(it -> it.parse(logData, 
extraLog)) + .map(it -> (RecordSinkListener) it) + .map(RecordSinkListener::getLog) + .findFirst() + .ifPresent(log -> container.get().set(log)); + } else { + sinkListenerFactories.stream() + .map(LogSinkListenerFactory::create) + .forEach(it -> it.parse(logData, extraLog).build()); + } + } + + public void sink() { + if (BINDING.get().shouldAbort()) { + return; + } + + final Binding b = BINDING.get(); + final LogData.Builder logData = b.log(); + final Message extraLog = b.extraLog(); + + if (!b.shouldSave()) { + if (LOGGER.isDebugEnabled()) { + LOGGER.debug("Log is dropped: {}", TextFormat.shortDebugString(logData)); + } + return; + } + + final Optional<AtomicReference<Log>> container = BINDING.get().logContainer(); + if (container.isPresent()) { + sinkListenerFactories.stream() + .map(LogSinkListenerFactory::create) + .filter(it -> it instanceof RecordSinkListener) + .map(it -> it.parse(logData, extraLog)) + .map(it -> (RecordSinkListener) it) + .map(RecordSinkListener::getLog) + .findFirst() + .ifPresent(log -> container.get().set(log)); + } else { + sinkListenerFactories.stream() + .map(LogSinkListenerFactory::create) + .forEach(it -> it.parse(logData, extraLog).build()); + } + } + + public void abort() { + BINDING.get().abort(); + } } diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/AbstractParserSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/AbstractParserSpec.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/AbstractParserSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/AbstractParserSpec.java diff --git 
a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/JsonParserSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/JsonParserSpec.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/JsonParserSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/JsonParserSpec.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/TextParserSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/TextParserSpec.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/TextParserSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/TextParserSpec.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/YamlParserSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/YamlParserSpec.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/YamlParserSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/parser/YamlParserSpec.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SamplerSpec.java 
b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SamplerSpec.java similarity index 75% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SamplerSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SamplerSpec.java index 97b69d0b472a..e500f3c60c82 100644 --- a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SamplerSpec.java +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SamplerSpec.java @@ -23,6 +23,7 @@ import groovy.lang.GString; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; +import java.util.function.Consumer; import org.apache.skywalking.oap.log.analyzer.dsl.spec.AbstractSpec; import org.apache.skywalking.oap.log.analyzer.dsl.spec.sink.sampler.PossibilitySampler; import org.apache.skywalking.oap.log.analyzer.dsl.spec.sink.sampler.RateLimitingSampler; @@ -32,6 +33,7 @@ public class SamplerSpec extends AbstractSpec { private final Map<GString, Sampler> rateLimitSamplers; + private final Map<String, Sampler> rateLimitSamplersByString; private final Map<Integer, Sampler> possibilitySamplers; private final RateLimitingSampler.ResetHandler rlsResetHandler; @@ -40,6 +42,7 @@ public SamplerSpec(final ModuleManager moduleManager, super(moduleManager, moduleConfig); rateLimitSamplers = new ConcurrentHashMap<>(); + rateLimitSamplersByString = new ConcurrentHashMap<>(); possibilitySamplers = new ConcurrentHashMap<>(); rlsResetHandler = new RateLimitingSampler.ResetHandler(); } @@ -58,6 +61,35 @@ public void rateLimit(final GString id, @DelegatesTo(RateLimitingSampler.class) sampleWith(sampler); } + @SuppressWarnings("unused") + public void rateLimit(final String id, 
@DelegatesTo(RateLimitingSampler.class) final Closure<?> cl) { + if (BINDING.get().shouldAbort()) { + return; + } + + final Sampler sampler = rateLimitSamplersByString.computeIfAbsent( + id, $ -> new RateLimitingSampler(rlsResetHandler).start()); + + cl.setDelegate(sampler); + cl.call(); + + sampleWith(sampler); + } + + @SuppressWarnings("unused") + public void rateLimit(final String id, final Consumer<RateLimitingSampler> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + + final Sampler sampler = rateLimitSamplersByString.computeIfAbsent( + id, $ -> new RateLimitingSampler(rlsResetHandler).start()); + + consumer.accept((RateLimitingSampler) sampler); + + sampleWith(sampler); + } + @SuppressWarnings("unused") public void possibility(final int percentage, @DelegatesTo(PossibilitySampler.class) final Closure<?> cl) { if (BINDING.get().shouldAbort()) { diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SinkSpec.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SinkSpec.java similarity index 81% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SinkSpec.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SinkSpec.java index 82566f9b25d1..f2ae371a21e2 100644 --- a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SinkSpec.java +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/SinkSpec.java @@ -20,6 +20,7 @@ import groovy.lang.Closure; import groovy.lang.DelegatesTo; +import java.util.function.Consumer; import org.apache.skywalking.oap.log.analyzer.dsl.spec.AbstractSpec; import 
org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleConfig; import org.apache.skywalking.oap.server.library.module.ModuleManager; @@ -44,6 +45,13 @@ public void sampler(@DelegatesTo(SamplerSpec.class) final Closure<?> cl) { cl.call(); } + public void sampler(final Consumer<SamplerSpec> consumer) { + if (BINDING.get().shouldAbort()) { + return; + } + consumer.accept(sampler); + } + @SuppressWarnings("unused") public void enforcer(final Closure<?> cl) { if (BINDING.get().shouldAbort()) { @@ -52,6 +60,13 @@ public void enforcer(final Closure<?> cl) { BINDING.get().save(); } + public void enforcer() { + if (BINDING.get().shouldAbort()) { + return; + } + BINDING.get().save(); + } + @SuppressWarnings("unused") public void dropper(final Closure<?> cl) { if (BINDING.get().shouldAbort()) { @@ -59,4 +74,11 @@ public void dropper(final Closure<?> cl) { } BINDING.get().drop(); } + + public void dropper() { + if (BINDING.get().shouldAbort()) { + return; + } + BINDING.get().drop(); + } } diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/PossibilitySampler.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/PossibilitySampler.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/PossibilitySampler.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/PossibilitySampler.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/RateLimitingSampler.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/RateLimitingSampler.java similarity index 100% rename from 
oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/RateLimitingSampler.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/RateLimitingSampler.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/Sampler.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/Sampler.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/Sampler.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/dsl/spec/sink/sampler/Sampler.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/module/LogAnalyzerModule.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/module/LogAnalyzerModule.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/module/LogAnalyzerModule.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/module/LogAnalyzerModule.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfig.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfig.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfig.java rename to 
test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfig.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfigs.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfigs.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfigs.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LALConfigs.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleConfig.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleConfig.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleConfig.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleConfig.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleProvider.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleProvider.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleProvider.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/LogAnalyzerModuleProvider.java diff --git 
a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalysisListenerManager.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalysisListenerManager.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalysisListenerManager.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalysisListenerManager.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalyzerService.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalyzerService.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalyzerService.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/ILogAnalyzerService.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzer.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzer.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzer.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzer.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzerServiceImpl.java 
b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzerServiceImpl.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzerServiceImpl.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/LogAnalyzerServiceImpl.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/analyzer/LogAnalyzerFactory.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/analyzer/LogAnalyzerFactory.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/analyzer/LogAnalyzerFactory.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/analyzer/LogAnalyzerFactory.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListener.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListener.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListener.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListener.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListenerFactory.java 
b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListenerFactory.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListenerFactory.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogAnalysisListenerFactory.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogFilterListener.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogFilterListener.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogFilterListener.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogFilterListener.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListener.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListener.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListener.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListener.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListenerFactory.java 
b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListenerFactory.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListenerFactory.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/LogSinkListenerFactory.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/RecordSinkListener.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/RecordSinkListener.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/RecordSinkListener.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/RecordSinkListener.java diff --git a/oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/TrafficSinkListener.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/TrafficSinkListener.java similarity index 100% rename from oap-server/analyzer/log-analyzer/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/TrafficSinkListener.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/log/analyzer/provider/log/listener/TrafficSinkListener.java diff --git a/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java 
b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java new file mode 100644 index 000000000000..221b488d9d2c --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java @@ -0,0 +1,41 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.dsl.registry; + +/** + * Simplified ProcessRegistry mock for dual-path comparison tests (v1 LAL). + * Prevents hitting K8s/OAP internals during script execution. 
+ */ +public class ProcessRegistry { + + public static final String LOCAL_VIRTUAL_PROCESS = "UNKNOWN_LOCAL"; + public static final String REMOTE_VIRTUAL_PROCESS = "UNKNOWN_REMOTE"; + + public static String generateVirtualLocalProcess(String service, String instance) { + return "mock-process-id"; + } + + public static String generateVirtualRemoteProcess(String service, String instance, String remoteAddress) { + return "mock-process-id"; + } + + public static String generateVirtualProcess(String service, String instance, String processName) { + return "mock-process-id"; + } +} diff --git a/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleDefine b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleDefine new file mode 100644 index 000000000000..54d5a91d08b4 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleDefine @@ -0,0 +1,19 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# + +org.apache.skywalking.oap.log.analyzer.module.LogAnalyzerModule \ No newline at end of file diff --git a/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleProvider b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleProvider new file mode 100644 index 000000000000..8f00b261f68d --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/main/resources/META-INF/services/org.apache.skywalking.oap.server.library.module.ModuleProvider @@ -0,0 +1,18 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider \ No newline at end of file diff --git a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLSecurityTest.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLSecurityTest.java similarity index 93% rename from oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLSecurityTest.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLSecurityTest.java index f4df42034c53..429e15383c25 100644 --- a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLSecurityTest.java +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLSecurityTest.java @@ -30,7 +30,7 @@ import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.params.ParameterizedTest; import org.junit.jupiter.params.provider.MethodSource; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.Arrays; import java.util.Collection; @@ -105,7 +105,7 @@ public static Collection<Object[]> data() { @BeforeEach public void setup() { - Whitebox.setInternalState(manager, "isInPrepareStage", false); + ReflectUtil.setInternalState(manager, "isInPrepareStage", false); when(manager.find(anyString())).thenReturn(mock(ModuleProviderHolder.class)); when(manager.find(CoreModule.NAME).provider()).thenReturn(mock(ModuleServiceHolder.class)); when(manager.find(CoreModule.NAME).provider().getService(SourceReceiver.class)) @@ -124,9 +124,8 @@ public void setup() { public void testSecurity(String name, String script) { assertThrows(MultipleCompilationErrorsException.class, () -> { final DSL dsl = DSL.of(manager, new 
LogAnalyzerModuleConfig(), script); - Whitebox.setInternalState( - Whitebox.getInternalState(dsl, "filterSpec"), "sinkListenerFactories", Collections.emptyList() - ); + final Object filterSpec = ReflectUtil.getInternalState(dsl, "filterSpec"); + ReflectUtil.setInternalState(filterSpec, "sinkListenerFactories", Collections.emptyList()); dsl.bind(new Binding().log(LogData.newBuilder())); dsl.evaluate(); diff --git a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLTest.java b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLTest.java similarity index 97% rename from oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLTest.java rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLTest.java index 66c996697eea..ff83af97bb8c 100644 --- a/oap-server/analyzer/log-analyzer/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLTest.java +++ b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/log/analyzer/dsl/DSLTest.java @@ -32,7 +32,7 @@ import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.params.ParameterizedTest; import org.junit.jupiter.params.provider.MethodSource; -import org.powermock.reflect.Whitebox; +import org.apache.skywalking.oap.server.testing.util.ReflectUtil; import java.util.Arrays; import java.util.Collection; @@ -197,7 +197,7 @@ public static Collection<Object[]> data() { @BeforeEach public void setup() { - Whitebox.setInternalState(manager, "isInPrepareStage", false); + ReflectUtil.setInternalState(manager, "isInPrepareStage", false); when(manager.find(anyString())).thenReturn(mock(ModuleProviderHolder.class)); ModuleProviderHolder logAnalyzerHolder = mock(ModuleProviderHolder.class); LogAnalyzerModuleProvider logAnalyzerProvider = 
mock(LogAnalyzerModuleProvider.class); @@ -220,9 +220,8 @@ public void setup() { @MethodSource("data") public void testDslStaticCompile(String name, String script) throws ModuleStartException { final DSL dsl = DSL.of(manager, new LogAnalyzerModuleConfig(), script); - Whitebox.setInternalState( - Whitebox.getInternalState(dsl, "filterSpec"), "sinkListenerFactories", Collections.emptyList() - ); + final Object filterSpec = ReflectUtil.getInternalState(dsl, "filterSpec"); + ReflectUtil.setInternalState(filterSpec, "sinkListenerFactories", Collections.emptyList()); dsl.bind(new Binding().log(LogData.newBuilder().build())); dsl.evaluate(); diff --git a/oap-server/analyzer/log-analyzer/src/test/resources/log-mal-rules/placeholder.yaml b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/resources/log-mal-rules/placeholder.yaml similarity index 100% rename from oap-server/analyzer/log-analyzer/src/test/resources/log-mal-rules/placeholder.yaml rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/resources/log-mal-rules/placeholder.yaml diff --git a/oap-server/analyzer/log-analyzer/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker b/test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker similarity index 100% rename from oap-server/analyzer/log-analyzer/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker rename to test/script-cases/script-runtime-with-groovy/lal-v1-with-groovy/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/CLAUDE.md b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/CLAUDE.md new file mode 100644 index 000000000000..bced8ed592ce --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/CLAUDE.md @@ -0,0 +1,106 @@ +# MAL/LAL/Hierarchy v1-v2 Comparison 
Checker + +Cross-version comparison tests that validate v2 (ANTLR4+Javassist) DSL compilers produce identical results to v1 (Groovy) compilers. + +## Test Classes + +| Class | Tests | Description | +|-------|-------|-------------| +| `MalComparisonTest` | ~1268 | Compiles and executes all MAL rules from 6 script directories | +| `LalComparisonTest` | 35 | Compiles and executes all LAL rules from script directories | +| `MalFilterComparisonTest` | 31 | Validates MAL filter operations (tagEqual, tagNotEqual, etc.) | +| `MalInputDataGeneratorTest` | 1 | Generates `.data.yaml` companion files for MAL rules | +| `MalExpectedDataGeneratorTest` | 1 | Generates expected sections in `.data.yaml` from v1 output | + +## How It Works + +For each DSL expression: +1. Compile with v1 (Groovy) and v2 (ANTLR4+Javassist) +2. Compare compile-time metadata (sample names, scope type, aggregation labels, etc.) +3. Execute both with identical mock input data +4. Assert output samples match (entities, labels, values) +5. Validate against expected data in `.data.yaml` / `.input.data` + +## Script Directories (MAL) + +All under `test/script-cases/scripts/mal/`: + +| Directory | Source | Rules | +|-----------|--------|-------| +| `test-meter-analyzer-config` | `server-starter/.../meter-analyzer-config/` | ~17 configs | +| `test-otel-rules` | `server-starter/.../otel-rules/` | ~73 service configs | +| `test-envoy-metrics-rules` | `server-starter/.../envoy-metrics-rules/` | 3 configs | +| `test-log-mal-rules` | `server-starter/.../log-mal-rules/` | 2 configs | +| `test-telegraf-rules` | `server-starter/.../telegraf-rules/` | 1 config (vm.yaml) | +| `test-zabbix-rules` | `server-starter/.../zabbix-rules/` | 1 config (agent.yaml) | + +## Input Data Mock Principles + +### MAL (.data.yaml files) + +Each MAL rule YAML has a companion `.data.yaml` with `input` and `expected` sections. 
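A hypothetical sketch of such a companion file — metric, rule, and label names are invented here, and the real schema should be copied from an existing `.data.yaml` under `test/script-cases/scripts/mal/` — only illustrates the two sections:

```yaml
# Hypothetical shape only — names are invented; copy the real schema
# from an existing companion file under test/script-cases/scripts/mal/.
input:
  mysql_threads_connected:               # every metric referenced by the rule
    - labels:
        host_name: mysql-0
        service_instance_id: mysql-0     # entity-function label: must appear in ALL samples
      value: 10
expected:
  instance_threads_connected:
    entities:
      - scope: SERVICE_INSTANCE
        service: mysql-0
        instance: mysql-0
        layer: MYSQL
    samples:
      - labels: {}
        value: 10
```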
+ +**Input section:** +- Every metric referenced in expressions must have samples +- Label variants must cover all filter operations (tagEqual, tagNotEqual, tagMatch) +- Labels from entity function `['label']` args (e.g., `instance(['host_name'], ['service_instance_id'])`) must be present in ALL input samples — these determine scope/service/instance/endpoint entity extraction +- Numeric YAML keys (e.g., zabbix `1`, `2`) → use `String.valueOf()` in Java code + +**Expected section:** +- Auto-generated from v1 (Groovy) execution output — v1 is the trusted baseline +- Rich assertions: entities (scope/service/instance/endpoint/layer), samples (labels/value) +- `error: 'v1 not-success'` means input data is broken — fix input, don't skip +- EMPTY results are hard failures + +**YAML key variants:** +- Standard rules use `metricsRules` key +- Zabbix rules use `metrics` key (both are handled by the collector) + +### LAL (.input.data files) + +Each LAL rule YAML has a companion `.input.data` with per-rule test entries. + +**Entry structure:** service, body-type, body, optional tags/extra-log, expect assertions. + +**Expect assertions:** save, abort, service, instance, endpoint, layer, tag.*, sampledTrace.* + +**Proto-typed rules:** Use `extra-log.proto-class` + `extra-log.proto-json` for protobuf extraLog. + +### Hierarchy + +No data files — Service mock objects are built inline in test code. + +## Generators + +### MalInputDataGenerator + +Extracts metric names and label requirements from compiled AST metadata. Generates `.data.yaml` input sections automatically. Run via `MalInputDataGeneratorTest` — skips files that already exist. 
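The `['label']` extraction from entity function calls can be sketched as a regex pass over the `expSuffix` string — a simplified stand-in, since the real `MalInputDataGenerator` works from compiled AST metadata rather than raw text:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Simplified sketch of entity-function label extraction. The real generator
 * walks compiled AST metadata; this regex pass only illustrates pulling
 * ['label'] arguments out of an expSuffix such as
 * instance(['host_name'], ['service_instance_id'], Layer.MYSQL).
 */
public final class ExpSuffixLabelSketch {
    // Matches the content of every ['label'] argument.
    private static final Pattern LABEL_ARG = Pattern.compile("\\['([^']+)'\\]");

    public static Set<String> extractLabels(final String expSuffix) {
        final Set<String> labels = new LinkedHashSet<>();
        final Matcher m = LABEL_ARG.matcher(expSuffix);
        while (m.find()) {
            labels.add(m.group(1));
        }
        return labels;
    }

    public static void main(String[] args) {
        // Prints [host_name, service_instance_id]
        System.out.println(extractLabels(
            "instance(['host_name'], ['service_instance_id'], Layer.MYSQL)"));
    }
}
```

Every label this pass yields must then be injected into all generated input samples, since missing entity labels break scope/service/instance extraction at execution time.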
+ +**Label sources extracted:** +- Compiled metadata: `aggregationLabels` and `scopeLabels` from `ExpressionMetadata` +- Tag filters: `tagEqual`, `tagNotEqual`, `tagMatch`, `tagNotMatch` arguments +- Closure access: `tags.label` and `tags['label']` property/bracket access +- Entity function arguments: `['label']` in `service()`, `instance()`, `endpoint()`, `process()` calls from `expSuffix` + +**Entity function labels** are critical — `instance(['host_name'], ['service_instance_id'], Layer.MYSQL)` requires `service_instance_id` in every input sample. Without it, the entity extraction produces incorrect scope/service/instance values. The generator parses `expSuffix` to find these `['label']` arguments automatically. + +### MalExpectedDataGenerator + +Runs v1 engine on input data and captures output as expected baseline. Run via `MalExpectedDataGeneratorTest` — updates the `expected:` section in existing `.data.yaml` files. + +## Adding New Rules + +1. Copy the production YAML to the appropriate `test-*` directory +2. Run `MalInputDataGeneratorTest` to generate the `.data.yaml` +3. Review input data — add missing label variants for filters +4. Run `MalExpectedDataGeneratorTest` to generate expected sections +5. Run `MalComparisonTest` to verify all tests pass +6. Check for `error: 'v1 not-success'` in expected — fix input data + +## Duplicate Rule Names + +Some production configs (e.g., apisix.yaml) have duplicate rule names for route-based vs node-based variants. The collector disambiguates with `_2` suffix (e.g., `endpoint_http_status` → `endpoint_http_status_2`). + +## K8s Mocking + +Rules using `retagByK8sMeta` require K8s registry mocks. Both v1 and v2 K8sInfoRegistry are mocked via `Mockito.mockStatic()` in `@BeforeAll`. Mock `findServiceName(ns, pod)` returns `pod.ns`. 
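The "Adding New Rules" loop above can be driven with surefire's test filter; the module path and `-DfailIfNoTests=false` flag mirror the invocation documented in `LalBenchmark`, and `./mvnw` is assumed to be run from the repository root:

```bash
MODULE=test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker

# Step 2: generate .data.yaml input sections (skips files that already exist)
./mvnw test -pl "$MODULE" -Dtest=MalInputDataGeneratorTest -DfailIfNoTests=false

# Step 4: regenerate expected sections from v1 (Groovy) output
./mvnw test -pl "$MODULE" -Dtest=MalExpectedDataGeneratorTest -DfailIfNoTests=false

# Step 5: run the full v1/v2 comparison suite
./mvnw test -pl "$MODULE" -Dtest=MalComparisonTest -DfailIfNoTests=false

# Step 6: hunt for broken input data flagged by the expected-data generator
grep -rn "v1 not-success" test/script-cases/scripts/mal/ || echo "no broken inputs"
```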
diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/pom.xml b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/pom.xml new file mode 100644 index 000000000000..84c2a3bc25d1 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/pom.xml @@ -0,0 +1,120 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Licensed to the Apache Software Foundation (ASF) under one or more + ~ contributor license agreements. See the NOTICE file distributed with + ~ this work for additional information regarding copyright ownership. + ~ The ASF licenses this file to You under the Apache License, Version 2.0 + ~ (the "License"); you may not use this file except in compliance with + ~ the License. You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. 
+ ~ + --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>script-runtime-with-groovy</artifactId> + <groupId>org.apache.skywalking</groupId> + <version>${revision}</version> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>mal-lal-v1-v2-checker</artifactId> + <description>Dual-path comparison tests: Groovy MAL/LAL (v1) vs compiler-generated Javassist MAL/LAL (v2)</description> + + <dependencies> + <!-- V1 Groovy MAL path --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>mal-v1-with-groovy</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <!-- V1 Groovy LAL path --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>lal-v1-with-groovy</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <!-- V2 MAL compiler (ANTLR4 + Javassist, merged into meter-analyzer) --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>meter-analyzer</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <!-- V2 LAL compiler (ANTLR4 + Javassist, merged into log-analyzer) --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>log-analyzer</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-core</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <!-- Envoy proto types needed to compile envoy-als LAL rules with extraLogType --> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>receiver-proto</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + <dependency> + 
<groupId>org.apache.groovy</groupId> + <artifactId>groovy</artifactId> + <scope>test</scope> + </dependency> + <dependency> + <groupId>org.openjdk.jmh</groupId> + <artifactId>jmh-generator-annprocess</artifactId> + <scope>test</scope> + </dependency> + </dependencies> + + <build> + <plugins> + <plugin> + <groupId>org.apache.maven.plugins</groupId> + <artifactId>maven-compiler-plugin</artifactId> + <configuration> + <annotationProcessorPaths> + <path> + <groupId>org.projectlombok</groupId> + <artifactId>lombok</artifactId> + <version>${lombok.version}</version> + </path> + <path> + <groupId>org.openjdk.jmh</groupId> + <artifactId>jmh-generator-annprocess</artifactId> + <version>${jmh.version}</version> + </path> + </annotationProcessorPaths> + </configuration> + </plugin> + <plugin> + <artifactId>maven-clean-plugin</artifactId> + <configuration> + <filesets> + <fileset> + <directory>${project.basedir}/../../scripts</directory> + <includes> + <include>**/*.generated-classes/**</include> + </includes> + </fileset> + </filesets> + </configuration> + </plugin> + </plugins> + </build> +</project> diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java new file mode 100644 index 000000000000..ffc113fd11a9 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java @@ -0,0 +1,36 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.dsl.registry; + +public class ProcessRegistry { + public static final String LOCAL_VIRTUAL_PROCESS = "UNKNOWN_LOCAL"; + public static final String REMOTE_VIRTUAL_PROCESS = "UNKNOWN_REMOTE"; + + public static String generateVirtualLocalProcess(String service, String instance) { + return "mock-process-id"; + } + + public static String generateVirtualRemoteProcess(String service, String instance, String remoteAddress) { + return "mock-process-id"; + } + + public static String generateVirtualProcess(String service, String instance, String processName) { + return "mock-process-id"; + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/registry/ProcessRegistry.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/registry/ProcessRegistry.java new file mode 100644 index 000000000000..dc8b9396123b --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/meter/analyzer/v2/dsl/registry/ProcessRegistry.java @@ -0,0 +1,36 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.apache.skywalking.oap.meter.analyzer.v2.dsl.registry; + +public class ProcessRegistry { + public static final String LOCAL_VIRTUAL_PROCESS = "UNKNOWN_LOCAL"; + public static final String REMOTE_VIRTUAL_PROCESS = "UNKNOWN_REMOTE"; + + public static String generateVirtualLocalProcess(String service, String instance) { + return "mock-process-id"; + } + + public static String generateVirtualRemoteProcess(String service, String instance, String remoteAddress) { + return "mock-process-id"; + } + + public static String generateVirtualProcess(String service, String instance, String processName) { + return "mock-process-id"; + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/InMemoryCompiler.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/InMemoryCompiler.java new file mode 100644 index 000000000000..af2ad5b4d5f5 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/InMemoryCompiler.java @@ -0,0 +1,118 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license 
agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.checker; + +import java.io.File; +import java.io.IOException; +import java.net.URL; +import java.net.URLClassLoader; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.Arrays; +import java.util.List; +import javax.tools.JavaCompiler; +import javax.tools.JavaFileObject; +import javax.tools.StandardJavaFileManager; +import javax.tools.ToolProvider; + +/** + * Compiles generated Java source code in-memory and loads the resulting class. + */ +public final class InMemoryCompiler { + + private final Path tempDir; + private final URLClassLoader classLoader; + + public InMemoryCompiler() throws IOException { + this.tempDir = Files.createTempDirectory("checker-compile-"); + final File srcDir = new File(tempDir.toFile(), "src"); + final File outDir = new File(tempDir.toFile(), "classes"); + srcDir.mkdirs(); + outDir.mkdirs(); + this.classLoader = new URLClassLoader( + new URL[]{outDir.toURI().toURL()}, + Thread.currentThread().getContextClassLoader() + ); + } + + /** + * Compile a single Java source file and return the loaded Class. + * + * @param packageName fully qualified package (e.g. "org.apache...rt.mal") + * @param className simple class name (e.g. 
"MalExpr_test") + * @param sourceCode the full Java source code + * @return the loaded Class + */ + public Class<?> compile(final String packageName, final String className, + final String sourceCode) throws Exception { + final String fqcn = packageName + "." + className; + + final File srcDir = new File(tempDir.toFile(), "src"); + final File outDir = new File(tempDir.toFile(), "classes"); + final File pkgDir = new File(srcDir, packageName.replace('.', File.separatorChar)); + pkgDir.mkdirs(); + + final File javaFile = new File(pkgDir, className + ".java"); + Files.writeString(javaFile.toPath(), sourceCode); + + final JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); + if (compiler == null) { + throw new IllegalStateException("No Java compiler available — requires JDK"); + } + + final String classpath = System.getProperty("java.class.path"); + + try (StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null)) { + final Iterable<? extends JavaFileObject> units = + fm.getJavaFileObjectsFromFiles(List.of(javaFile)); + + final List<String> options = Arrays.asList( + "-d", outDir.getAbsolutePath(), + "-classpath", classpath + ); + + final java.io.StringWriter errors = new java.io.StringWriter(); + final JavaCompiler.CompilationTask task = + compiler.getTask(errors, fm, null, options, null, units); + + if (!task.call()) { + throw new RuntimeException( + "Compilation failed for " + fqcn + ":\n" + errors); + } + } + + return classLoader.loadClass(fqcn); + } + + public void close() throws IOException { + classLoader.close(); + deleteRecursive(tempDir.toFile()); + } + + private static void deleteRecursive(final File file) { + if (file.isDirectory()) { + final File[] children = file.listFiles(); + if (children != null) { + for (final File child : children) { + deleteRecursive(child); + } + } + } + file.delete(); + } +} diff --git 
a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/LalBenchmark.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/LalBenchmark.java new file mode 100644 index 000000000000..30bc52e208f3 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/LalBenchmark.java @@ -0,0 +1,501 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.checker.lal; + +import java.lang.reflect.Field; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.ServiceLoader; +import java.util.concurrent.TimeUnit; +import com.google.protobuf.Message; +import com.google.protobuf.util.JsonFormat; +import org.apache.skywalking.apm.network.common.v3.KeyStringValuePair; +import org.apache.skywalking.apm.network.logging.v3.JSONLog; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.apm.network.logging.v3.LogDataBody; +import org.apache.skywalking.apm.network.logging.v3.LogTags; +import org.apache.skywalking.apm.network.logging.v3.TextLog; +import org.apache.skywalking.apm.network.logging.v3.TraceContext; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALClassGenerator; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider; +import org.apache.skywalking.oap.server.core.CoreModule; +import org.apache.skywalking.oap.server.core.config.ConfigService; +import org.apache.skywalking.oap.server.core.config.NamingControl; +import org.apache.skywalking.oap.server.core.source.SourceReceiver; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.module.ModuleProviderHolder; +import org.apache.skywalking.oap.server.library.module.ModuleServiceHolder; +import org.junit.jupiter.api.Test; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Level; +import org.openjdk.jmh.annotations.Measurement; +import 
org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.OutputTimeUnit; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Warmup; +import org.openjdk.jmh.infra.Blackhole; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.options.Options; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.yaml.snakeyaml.Yaml; + +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +/** + * JMH benchmark comparing LAL v1 (Groovy) vs v2 (ANTLR4 + Javassist) + * compilation and execution performance using envoy-als.yaml (2 rules). + * + * <p>Run: mvn test -pl test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker + * -Dtest=LalBenchmark#runBenchmark -DfailIfNoTests=false + * + * <h2>Reference results (Apple M3 Max, 128 GB RAM, macOS 26.2, JDK 25)</h2> + * <pre> + * Benchmark Mode Cnt Score Error Units + * LalBenchmark.compileV1 avgt 5 34534.987 ± 3811.245 us/op + * LalBenchmark.compileV2 avgt 5 881.997 ± 102.587 us/op + * LalBenchmark.executeV1 avgt 5 36.683 ± 5.223 us/op + * LalBenchmark.executeV2 avgt 5 12.909 ± 2.378 us/op + * </pre> + * + * <p>Compile speedup: v2 is ~39x faster than v1 (Groovy script compilation is expensive). + * Execute speedup: v2 is ~2.8x faster than v1. 
+ */ +@State(Scope.Thread) +@BenchmarkMode(Mode.AverageTime) +@OutputTimeUnit(TimeUnit.MICROSECONDS) +@Warmup(iterations = 3, time = 2) +@Measurement(iterations = 5, time = 5) +@Fork(1) +public class LalBenchmark { + + private List<RuleEntry> rules; + + // Pre-compiled expressions for execute benchmarks + private List<org.apache.skywalking.oap.log.analyzer.dsl.DSL> v1Dsls; + private List<org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression> v2Exprs; + + // Module managers for v1/v2 + private ModuleManager v1Manager; + private ModuleManager v2Manager; + + // Pre-created FilterSpec for v2 execute benchmark (reusable) + private org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec v2FilterSpec; + + // Test log data per rule + private List<LogData> testLogs; + private List<Message> extraLogs; + + // SPI lookup cache + private Map<String, Class<?>> spiTypes; + + @Setup(Level.Trial) + @SuppressWarnings("unchecked") + public void setup() throws Exception { + // Load envoy-als.yaml + final Path lalYaml = findScript("lal", "test-lal/oap-cases/envoy-als.yaml"); + final Yaml yaml = new Yaml(); + final Map<String, Object> config = yaml.load(Files.readString(lalYaml)); + final List<Map<String, String>> ruleConfigs = + (List<Map<String, String>>) config.get("rules"); + + // Load envoy-als.input.data + final Path inputDataPath = lalYaml.getParent().resolve("envoy-als.input.data"); + Map<String, Map<String, Object>> inputData = null; + if (Files.isRegularFile(inputDataPath)) { + inputData = yaml.load(Files.readString(inputDataPath)); + } + + // Parse rules + rules = new ArrayList<>(); + testLogs = new ArrayList<>(); + extraLogs = new ArrayList<>(); + for (final Map<String, String> rule : ruleConfigs) { + final String name = rule.get("name"); + final String dsl = rule.get("dsl"); + final String layer = rule.get("layer"); + if (name == null || dsl == null) { + continue; + } + final Map<String, Object> ruleInput = + inputData != null ? 
inputData.get(name) : null; + + // Resolve extraLogType via SPI + Class<?> extraLogType = null; + if (layer != null) { + extraLogType = spiExtraLogTypes().get(layer); + } + + rules.add(new RuleEntry(name, dsl, layer, extraLogType)); + testLogs.add(buildLogData(ruleInput, dsl)); + extraLogs.add(buildExtraLog(ruleInput)); + } + + // Set up module managers + v1Manager = buildMockModuleManager(true); + v2Manager = buildMockModuleManager(false); + + // Pre-create v2 FilterSpec (reusable across iterations) + v2FilterSpec = + new org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec( + v2Manager, new LogAnalyzerModuleConfig()); + disableSinkListenersOnSpec(v2FilterSpec); + + // Pre-compile for execute benchmarks + v1Dsls = new ArrayList<>(); + v2Exprs = new ArrayList<>(); + for (final RuleEntry rule : rules) { + final org.apache.skywalking.oap.log.analyzer.dsl.DSL v1Dsl = + org.apache.skywalking.oap.log.analyzer.dsl.DSL.of( + v1Manager, + new org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleConfig(), + rule.dsl); + disableSinkListeners(v1Dsl); + v1Dsls.add(v1Dsl); + + final LALClassGenerator gen = new LALClassGenerator(); + if (rule.extraLogType != null) { + gen.setExtraLogType(rule.extraLogType); + } + v2Exprs.add(gen.compile(rule.dsl)); + } + } + + @Benchmark + public void compileV1(final Blackhole bh) { + for (final RuleEntry rule : rules) { + try { + final org.apache.skywalking.oap.log.analyzer.dsl.DSL dsl = + org.apache.skywalking.oap.log.analyzer.dsl.DSL.of( + v1Manager, + new org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleConfig(), + rule.dsl); + bh.consume(dsl); + } catch (Exception ignored) { + } + } + } + + @Benchmark + public void compileV2(final Blackhole bh) { + for (final RuleEntry rule : rules) { + try { + final LALClassGenerator gen = new LALClassGenerator(); + if (rule.extraLogType != null) { + gen.setExtraLogType(rule.extraLogType); + } + bh.consume(gen.compile(rule.dsl)); + } catch (Exception ignored) { + 
} + } + } + + @Benchmark + public void executeV1(final Blackhole bh) { + for (int i = 0; i < v1Dsls.size(); i++) { + try { + final org.apache.skywalking.oap.log.analyzer.dsl.Binding ctx = + new org.apache.skywalking.oap.log.analyzer.dsl.Binding() + .log(testLogs.get(i)); + if (extraLogs.get(i) != null) { + ctx.extraLog(extraLogs.get(i)); + } + v1Dsls.get(i).bind(ctx); + v1Dsls.get(i).evaluate(); + bh.consume(ctx); + } catch (Exception ignored) { + } + } + } + + @Benchmark + public void executeV2(final Blackhole bh) { + for (int i = 0; i < v2Exprs.size(); i++) { + try { + final org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext ctx = + new org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext() + .log(testLogs.get(i)); + if (extraLogs.get(i) != null) { + ctx.extraLog(extraLogs.get(i)); + } + v2Exprs.get(i).execute(v2FilterSpec, ctx); + bh.consume(ctx); + } catch (Exception ignored) { + } + } + } + + // ==================== Mock setup ==================== + + private ModuleManager buildMockModuleManager(final boolean isV1) { + final ModuleManager manager = mock(ModuleManager.class); + setInternalField(manager, "isInPrepareStage", false); + when(manager.find(anyString())).thenReturn(mock(ModuleProviderHolder.class)); + + final ModuleProviderHolder logAnalyzerHolder = mock(ModuleProviderHolder.class); + if (isV1) { + final org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider + provider = mock( + org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider.class); + when(provider.getMetricConverts()).thenReturn(Collections.emptyList()); + when(logAnalyzerHolder.provider()).thenReturn(provider); + } else { + final org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider + provider = mock( + org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider.class); + when(provider.getMetricConverts()).thenReturn(Collections.emptyList()); + 
when(logAnalyzerHolder.provider()).thenReturn(provider); + } + when(manager.find(LogAnalyzerModule.NAME)).thenReturn(logAnalyzerHolder); + + when(manager.find(CoreModule.NAME).provider()).thenReturn(mock(ModuleServiceHolder.class)); + when(manager.find(CoreModule.NAME).provider().getService(SourceReceiver.class)) + .thenReturn(mock(SourceReceiver.class)); + when(manager.find(CoreModule.NAME).provider().getService(ConfigService.class)) + .thenReturn(mock(ConfigService.class)); + when(manager.find(CoreModule.NAME) + .provider() + .getService(ConfigService.class) + .getSearchableLogsTags()) + .thenReturn(""); + final NamingControl namingControl = mock(NamingControl.class); + when(namingControl.formatServiceName(anyString())) + .thenAnswer(inv -> inv.getArgument(0)); + when(namingControl.formatInstanceName(anyString())) + .thenAnswer(inv -> inv.getArgument(0)); + when(namingControl.formatEndpointName(anyString(), anyString())) + .thenAnswer(inv -> inv.getArgument(1)); + when(manager.find(CoreModule.NAME).provider().getService(NamingControl.class)) + .thenReturn(namingControl); + return manager; + } + + // ==================== Log data builders ==================== + + @SuppressWarnings("unchecked") + private LogData buildLogData(final Map<String, Object> inputData, + final String dsl) { + if (inputData == null) { + return buildSyntheticLogData(dsl); + } + + final LogData.Builder builder = LogData.newBuilder(); + final String service = (String) inputData.get("service"); + if (service != null) { + builder.setService(service); + } + final String instance = (String) inputData.get("instance"); + if (instance != null) { + builder.setServiceInstance(instance); + } + final String traceId = (String) inputData.get("trace-id"); + if (traceId != null) { + builder.setTraceContext(TraceContext.newBuilder().setTraceId(traceId)); + } + final Object tsObj = inputData.get("timestamp"); + if (tsObj != null) { + builder.setTimestamp(Long.parseLong(String.valueOf(tsObj))); + } + + final 
String bodyType = (String) inputData.get("body-type"); + final String body = (String) inputData.get("body"); + if ("json".equals(bodyType) && body != null) { + builder.setBody(LogDataBody.newBuilder() + .setJson(JSONLog.newBuilder().setJson(body))); + } else if ("text".equals(bodyType) && body != null) { + builder.setBody(LogDataBody.newBuilder() + .setText(TextLog.newBuilder().setText(body))); + } + + final Map<String, String> tags = + (Map<String, String>) inputData.get("tags"); + if (tags != null && !tags.isEmpty()) { + final LogTags.Builder tagsBuilder = LogTags.newBuilder(); + for (final Map.Entry<String, String> tag : tags.entrySet()) { + tagsBuilder.addData(KeyStringValuePair.newBuilder() + .setKey(tag.getKey()) + .setValue(tag.getValue())); + } + builder.setTags(tagsBuilder); + } + return builder.build(); + } + + private LogData buildSyntheticLogData(final String dsl) { + final LogData.Builder builder = LogData.newBuilder() + .setService("test-service") + .setServiceInstance("test-instance") + .setTimestamp(System.currentTimeMillis()) + .setTraceContext(TraceContext.newBuilder() + .setTraceId("test-trace-id-123") + .setTraceSegmentId("test-segment-id-456") + .setSpanId(1)); + if (dsl.contains("json")) { + builder.setBody(LogDataBody.newBuilder() + .setJson(JSONLog.newBuilder() + .setJson("{\"level\":\"ERROR\",\"msg\":\"test\"}"))); + } + return builder.build(); + } + + @SuppressWarnings("unchecked") + private static Message buildExtraLog( + final Map<String, Object> inputData) throws Exception { + if (inputData == null) { + return null; + } + final Map<String, String> extraLog = + (Map<String, String>) inputData.get("extra-log"); + if (extraLog == null) { + return null; + } + final String protoClass = extraLog.get("proto-class"); + final String protoJson = extraLog.get("proto-json"); + if (protoClass == null || protoJson == null) { + return null; + } + final Class<?> clazz = Class.forName(protoClass); + final Message.Builder builder = (Message.Builder) + 
clazz.getMethod("newBuilder").invoke(null); + JsonFormat.parser().ignoringUnknownFields().merge(protoJson, builder); + return builder.build(); + } + + // ==================== SPI lookup ==================== + + private Map<String, Class<?>> spiExtraLogTypes() { + if (spiTypes == null) { + spiTypes = new HashMap<>(); + for (final LALSourceTypeProvider p : + ServiceLoader.load(LALSourceTypeProvider.class)) { + spiTypes.put(p.layer().name(), p.extraLogType()); + } + } + return spiTypes; + } + + // ==================== Reflection helpers ==================== + + private void disableSinkListeners(final Object dsl) { + try { + final Object filterSpec = getInternalField(dsl, "filterSpec"); + setInternalField(filterSpec, "sinkListenerFactories", Collections.emptyList()); + } catch (Exception ignored) { + } + } + + private void disableSinkListenersOnSpec(final Object filterSpec) { + try { + setInternalField(filterSpec, "sinkListenerFactories", Collections.emptyList()); + } catch (Exception ignored) { + } + } + + private static void setInternalField(final Object target, final String fieldName, + final Object value) { + try { + Field field = null; + Class<?> clazz = target.getClass(); + while (clazz != null && field == null) { + try { + field = clazz.getDeclaredField(fieldName); + } catch (NoSuchFieldException e) { + clazz = clazz.getSuperclass(); + } + } + if (field != null) { + field.setAccessible(true); + field.set(target, value); + } + } catch (Exception ignored) { + } + } + + private static Object getInternalField(final Object target, final String fieldName) { + try { + Field field = null; + Class<?> clazz = target.getClass(); + while (clazz != null && field == null) { + try { + field = clazz.getDeclaredField(fieldName); + } catch (NoSuchFieldException e) { + clazz = clazz.getSuperclass(); + } + } + if (field != null) { + field.setAccessible(true); + return field.get(target); + } + } catch (Exception ignored) { + } + return null; + } + + // ==================== Utilities 
==================== + + private static Path findScript(final String language, final String relative) { + final String[] candidates = { + "test/script-cases/scripts/" + language + "/" + relative, + "../../scripts/" + language + "/" + relative + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isRegularFile(path)) { + return path; + } + } + throw new IllegalStateException("Cannot find " + relative + " in scripts/" + language); + } + + private static class RuleEntry { + final String name; + final String dsl; + final String layer; + final Class<?> extraLogType; + + RuleEntry(final String name, final String dsl, + final String layer, final Class<?> extraLogType) { + this.name = name; + this.dsl = dsl; + this.layer = layer; + this.extraLogType = extraLogType; + } + } + + // ==================== JMH launcher ==================== + + @Test + void runBenchmark() throws Exception { + final Options opt = new OptionsBuilder() + .include(getClass().getSimpleName()) + .build(); + new Runner(opt).run(); + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/LalComparisonTest.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/LalComparisonTest.java new file mode 100644 index 000000000000..3efde00b398d --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/LalComparisonTest.java @@ -0,0 +1,792 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.checker.lal; + +import java.io.File; +import java.lang.reflect.Field; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.ServiceLoader; +import com.google.protobuf.Message; +import com.google.protobuf.util.JsonFormat; +import org.apache.skywalking.apm.network.common.v3.KeyStringValuePair; +import org.apache.skywalking.apm.network.logging.v3.JSONLog; +import org.apache.skywalking.apm.network.logging.v3.LogData; +import org.apache.skywalking.apm.network.logging.v3.LogDataBody; +import org.apache.skywalking.apm.network.logging.v3.LogTags; +import org.apache.skywalking.apm.network.logging.v3.TextLog; +import org.apache.skywalking.apm.network.logging.v3.TraceContext; +import org.apache.skywalking.oap.server.analyzer.provider.trace.parser.listener.SampledTraceBuilder; +import org.apache.skywalking.oap.server.core.analysis.record.Record; +import org.apache.skywalking.oap.log.analyzer.v2.compiler.LALClassGenerator; +import org.apache.skywalking.oap.log.analyzer.v2.module.LogAnalyzerModule; +import org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleConfig; +import org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider; +import 
org.apache.skywalking.oap.server.core.CoreModule; +import org.apache.skywalking.oap.server.core.config.ConfigService; +import org.apache.skywalking.oap.server.core.config.NamingControl; +import org.apache.skywalking.oap.server.core.source.SourceReceiver; +import org.apache.skywalking.oap.server.library.module.ModuleManager; +import org.apache.skywalking.oap.server.library.module.ModuleProviderHolder; +import org.apache.skywalking.oap.server.library.module.ModuleServiceHolder; +import org.junit.jupiter.api.DynamicTest; +import org.junit.jupiter.api.TestFactory; +import org.yaml.snakeyaml.Yaml; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.fail; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.Mockito.atLeastOnce; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +/** + * Dual-path comparison test for LAL (Log Analysis Language) scripts. + * <ul> + * <li>Path A (v1): Groovy via {@code org.apache.skywalking.oap.log.analyzer.dsl.DSL}</li> + * <li>Path B (v2): ANTLR4 + Javassist via {@link LALClassGenerator}</li> + * </ul> + * Both paths are fed the same mock LogData and the resulting Binding state is compared. + * + * <p>v1 classes use original package {@code org.apache.skywalking.oap.log.analyzer.dsl.*}, + * v2 classes use {@code org.apache.skywalking.oap.log.analyzer.v2.dsl.*}. 
+ */ +class LalComparisonTest { + + @TestFactory + Collection<DynamicTest> lalScriptsCompileAndExecute() throws Exception { + final List<DynamicTest> tests = new ArrayList<>(); + final Map<String, List<LalRule>> yamlRules = loadAllLalYamlFiles(); + + for (final Map.Entry<String, List<LalRule>> entry : yamlRules.entrySet()) { + final String yamlFile = entry.getKey(); + for (final LalRule rule : entry.getValue()) { + // Compile v2 once per rule — the expression is stateless + org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression v2Expr = null; + String v2CompileError = null; + try { + v2Expr = compileV2(rule); + } catch (Exception e) { + final Throwable cause = e.getCause() != null ? e.getCause() : e; + v2CompileError = cause.getClass().getSimpleName() + + ": " + cause.getMessage(); + } + + if (rule.inputs.isEmpty()) { + final org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression expr = v2Expr; + final String err = v2CompileError; + tests.add(DynamicTest.dynamicTest( + yamlFile + " | " + rule.name, + () -> compareExecution(rule.name, rule.dsl, null, expr, err) + )); + } else { + for (int i = 0; i < rule.inputs.size(); i++) { + final int idx = i; + final org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression expr = v2Expr; + final String err = v2CompileError; + final String testName = rule.inputs.size() == 1 + ? 
rule.name : rule.name + " [" + i + "]"; + tests.add(DynamicTest.dynamicTest( + yamlFile + " | " + testName, + () -> compareExecution(testName, rule.dsl, + rule.inputs.get(idx), expr, err) + )); + } + } + } + } + + return tests; + } + + private org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression compileV2( + final LalRule rule) throws Exception { + final LALClassGenerator generator = new LALClassGenerator(); + if (rule.extraLogType != null) { + generator.setExtraLogType(Class.forName(rule.extraLogType)); + } else if (rule.layer != null) { + generator.setExtraLogType(spiExtraLogTypes().get(rule.layer)); + } else { + generator.setExtraLogType(null); + } + if (rule.sourceFile != null) { + final String baseName = rule.sourceFile.getName() + .replaceFirst("\\.(yaml|yml)$", ""); + generator.setClassOutputDir(new java.io.File( + rule.sourceFile.getParent(), + baseName + ".generated-classes")); + generator.setClassNameHint(baseName + "_" + rule.name); + } + return generator.compile(rule.dsl); + } + + private void compareExecution( + final String testName, final String dsl, + final Map<String, Object> inputData, + final org.apache.skywalking.oap.log.analyzer.v2.dsl.LalExpression v2Expr, + final String v2CompileError) throws Exception { + + final LogData testLog = buildLogData(inputData, dsl); + + // Build proto extraLog from input data if available + final Message extraLog = buildExtraLog(inputData); + + // ---- V1: Groovy path ---- + // v1 uses original packages: org.apache.skywalking.oap.log.analyzer.dsl.* + final ModuleManager v1Manager = buildMockModuleManager(true); + final org.apache.skywalking.oap.log.analyzer.dsl.Binding v1Ctx; + try { + final org.apache.skywalking.oap.log.analyzer.dsl.DSL v1Dsl = + org.apache.skywalking.oap.log.analyzer.dsl.DSL.of( + v1Manager, + new org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleConfig(), + dsl); + disableSinkListeners(v1Dsl); + + v1Ctx = new 
org.apache.skywalking.oap.log.analyzer.dsl.Binding().log(testLog); + if (extraLog != null) { + v1Ctx.extraLog(extraLog); + } + v1Dsl.bind(v1Ctx); + v1Dsl.evaluate(); + } catch (Exception e) { + final Throwable cause = e.getCause() != null ? e.getCause() : e; + fail(testName + ": v1 (Groovy) failed — " + + cause.getClass().getSimpleName() + ": " + cause.getMessage()); + return; + } + + // ---- V2: ANTLR4 + Javassist path ---- + // v2 expression is pre-compiled (one compile per rule, multiple executions) + final ModuleManager v2Manager = buildMockModuleManager(false); + org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext v2Ctx = null; + String v2Error = v2CompileError; + if (v2Expr != null) { + try { + final org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec v2FilterSpec = + new org.apache.skywalking.oap.log.analyzer.v2.dsl.spec.filter.FilterSpec( + v2Manager, new LogAnalyzerModuleConfig()); + disableSinkListenersOnSpec(v2FilterSpec); + + v2Ctx = new org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext().log(testLog); + if (extraLog != null) { + v2Ctx.extraLog(extraLog); + } + + v2Expr.execute(v2FilterSpec, v2Ctx); + } catch (Exception e) { + final Throwable cause = e.getCause() != null ? 
e.getCause() : e; + v2Error = cause.getClass().getSimpleName() + ": " + cause.getMessage(); + } + } + + // ---- Compare ---- + if (v2Ctx == null) { + fail(testName + ": v2 execution failed but v1 succeeded — " + v2Error); + return; + } + + // Compare binding state + assertEquals(v1Ctx.shouldAbort(), v2Ctx.shouldAbort(), + testName + ": shouldAbort mismatch"); + assertEquals(v1Ctx.shouldSave(), v2Ctx.shouldSave(), + testName + ": shouldSave mismatch"); + + final LogData.Builder v1Log = v1Ctx.log(); + final LogData.Builder v2Log = v2Ctx.log(); + + assertEquals(v1Log.getService(), v2Log.getService(), + testName + ": service mismatch"); + assertEquals(v1Log.getServiceInstance(), v2Log.getServiceInstance(), + testName + ": serviceInstance mismatch"); + assertEquals(v1Log.getEndpoint(), v2Log.getEndpoint(), + testName + ": endpoint mismatch"); + assertEquals(v1Log.getLayer(), v2Log.getLayer(), + testName + ": layer mismatch"); + assertEquals(v1Log.getTimestamp(), v2Log.getTimestamp(), + testName + ": timestamp mismatch"); + assertEquals(v1Log.getTags(), v2Log.getTags(), + testName + ": tags mismatch"); + + // Compare sampledTrace builder state + // v1 Groovy Binding throws MissingPropertyException if sampledTrace was never set + SampledTraceBuilder v1St = null; + try { + v1St = v1Ctx.sampledTraceBuilder(); + } catch (Exception ignored) { + // Not set — rule has no sampledTrace block + } + final SampledTraceBuilder v2St = v2Ctx.sampledTraceBuilder(); + if (v1St != null || v2St != null) { + if (v1St == null) { + fail(testName + ": v1 has no sampledTrace but v2 does"); + } + if (v2St == null) { + fail(testName + ": v2 has no sampledTrace but v1 does"); + } + // Fields set by prepareSampledTrace() from log context + assertEquals(v1St.getTraceId(), v2St.getTraceId(), + testName + ": sampledTrace.traceId mismatch"); + assertEquals(v1St.getServiceName(), v2St.getServiceName(), + testName + ": sampledTrace.serviceName mismatch"); + assertEquals(v1St.getServiceInstanceName(), 
v2St.getServiceInstanceName(), + testName + ": sampledTrace.serviceInstanceName mismatch"); + assertEquals(v1St.getLayer(), v2St.getLayer(), + testName + ": sampledTrace.layer mismatch"); + assertEquals(v1St.getTimestamp(), v2St.getTimestamp(), + testName + ": sampledTrace.timestamp mismatch"); + + // Verify traceId came from the log (not empty/fabricated) + assertEquals(testLog.getTraceContext().getTraceId(), + v2St.getTraceId(), + testName + ": sampledTrace.traceId should match log traceId"); + + // Fields set by DSL closure body + assertEquals(v1St.getLatency(), v2St.getLatency(), + testName + ": sampledTrace.latency mismatch"); + assertEquals(v1St.getUri(), v2St.getUri(), + testName + ": sampledTrace.uri mismatch"); + assertEquals(v1St.getReason(), v2St.getReason(), + testName + ": sampledTrace.reason mismatch"); + assertEquals(v1St.getProcessId(), v2St.getProcessId(), + testName + ": sampledTrace.processId mismatch"); + assertEquals(v1St.getDestProcessId(), v2St.getDestProcessId(), + testName + ": sampledTrace.destProcessId mismatch"); + assertEquals(v1St.getDetectPoint(), v2St.getDetectPoint(), + testName + ": sampledTrace.detectPoint mismatch"); + assertEquals(v1St.getComponentId(), v2St.getComponentId(), + testName + ": sampledTrace.componentId mismatch"); + + // Verify builder.toRecord() produces valid Record for RecordStreamProcessor + // (submitSampledTrace already called validate + toRecord + RecordStreamProcessor.in + // during execution; this explicitly confirms toRecord consistency) + final Record v1Record = v1St.toRecord(); + final Record v2Record = v2St.toRecord(); + assertNotNull(v1Record, testName + ": v1 toRecord() returned null"); + assertNotNull(v2Record, testName + ": v2 toRecord() returned null"); + assertEquals(v1Record.getClass(), v2Record.getClass(), + testName + ": toRecord() type mismatch"); + + // Verify v2 actually dispatched the trace via sourceReceiver.receive() + final SourceReceiver v2Receiver = v2Manager.find(CoreModule.NAME) + 
.provider().getService(SourceReceiver.class); + verify(v2Receiver, atLeastOnce()).receive(any()); + } + + // ---- Validate expected section ---- + if (inputData != null) { + @SuppressWarnings("unchecked") + final Map<String, Object> expect = + (Map<String, Object>) inputData.get("expect"); + if (expect != null) { + validateExpected(testName, v2Ctx, v2Log, expect); + } + } + } + + @SuppressWarnings("unchecked") + private void validateExpected(final String ruleName, + final org.apache.skywalking.oap.log.analyzer.v2.dsl.ExecutionContext ctx, + final LogData.Builder logBuilder, + final Map<String, Object> expect) { + for (final Map.Entry<String, Object> entry : expect.entrySet()) { + final String key = entry.getKey(); + final String expected = String.valueOf(entry.getValue()); + + switch (key) { + case "save": + assertEquals(Boolean.parseBoolean(expected), ctx.shouldSave(), + ruleName + ": expect.save mismatch"); + break; + case "abort": + assertEquals(Boolean.parseBoolean(expected), ctx.shouldAbort(), + ruleName + ": expect.abort mismatch"); + break; + case "service": + assertEquals(expected, logBuilder.getService(), + ruleName + ": expect.service mismatch"); + break; + case "instance": + assertEquals(expected, logBuilder.getServiceInstance(), + ruleName + ": expect.instance mismatch"); + break; + case "endpoint": + assertEquals(expected, logBuilder.getEndpoint(), + ruleName + ": expect.endpoint mismatch"); + break; + case "layer": + assertEquals(expected, logBuilder.getLayer(), + ruleName + ": expect.layer mismatch"); + break; + case "timestamp": + assertEquals(Long.parseLong(expected), logBuilder.getTimestamp(), + ruleName + ": expect.timestamp mismatch"); + break; + default: + if (key.startsWith("tag.")) { + final String tagKey = key.substring(4); + final String actual = logBuilder.getTags().getDataList().stream() + .filter(kv -> kv.getKey().equals(tagKey)) + .map(KeyStringValuePair::getValue) + .findFirst().orElse(""); + assertEquals(expected, actual, + ruleName + 
": expect." + key + " mismatch"); + } else if (key.startsWith("sampledTrace.")) { + final String field = key.substring("sampledTrace.".length()); + final SampledTraceBuilder st = ctx.sampledTraceBuilder(); + assertNotNull(st, ruleName + ": expect sampledTrace but builder is null"); + validateSampledTraceField(ruleName, field, expected, st); + } + break; + } + } + } + + private void validateSampledTraceField(final String ruleName, + final String field, final String expected, + final SampledTraceBuilder st) { + switch (field) { + case "traceId": + assertEquals(expected, st.getTraceId(), + ruleName + ": expect.sampledTrace.traceId mismatch"); + break; + case "serviceName": + assertEquals(expected, st.getServiceName(), + ruleName + ": expect.sampledTrace.serviceName mismatch"); + break; + case "serviceInstanceName": + assertEquals(expected, st.getServiceInstanceName(), + ruleName + ": expect.sampledTrace.serviceInstanceName mismatch"); + break; + case "timestamp": + assertEquals(Long.parseLong(expected), st.getTimestamp(), + ruleName + ": expect.sampledTrace.timestamp mismatch"); + break; + case "latency": + assertEquals(Integer.parseInt(expected), st.getLatency(), + ruleName + ": expect.sampledTrace.latency mismatch"); + break; + case "uri": + assertEquals(expected, st.getUri(), + ruleName + ": expect.sampledTrace.uri mismatch"); + break; + case "reason": + assertNotNull(st.getReason(), + ruleName + ": expect.sampledTrace.reason is null"); + assertEquals(expected, st.getReason().name(), + ruleName + ": expect.sampledTrace.reason mismatch"); + break; + case "processId": + assertEquals(expected, st.getProcessId(), + ruleName + ": expect.sampledTrace.processId mismatch"); + break; + case "destProcessId": + assertEquals(expected, st.getDestProcessId(), + ruleName + ": expect.sampledTrace.destProcessId mismatch"); + break; + case "detectPoint": + assertNotNull(st.getDetectPoint(), + ruleName + ": expect.sampledTrace.detectPoint is null"); + assertEquals(expected, 
st.getDetectPoint().name(), + ruleName + ": expect.sampledTrace.detectPoint mismatch"); + break; + case "componentId": + assertEquals(Integer.parseInt(expected), st.getComponentId(), + ruleName + ": expect.sampledTrace.componentId mismatch"); + break; + default: + break; + } + } + + private ModuleManager buildMockModuleManager(final boolean isV1) { + final ModuleManager manager = mock(ModuleManager.class); + setInternalField(manager, "isInPrepareStage", false); + when(manager.find(anyString())).thenReturn(mock(ModuleProviderHolder.class)); + + // v1 and v2 have different LogAnalyzerModuleProvider classes that ExtractorSpec casts to. + // Each path needs its own manager with the correct provider type. + final ModuleProviderHolder logAnalyzerHolder = mock(ModuleProviderHolder.class); + if (isV1) { + final org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider + provider = mock( + org.apache.skywalking.oap.log.analyzer.provider.LogAnalyzerModuleProvider.class); + when(provider.getMetricConverts()).thenReturn(Collections.emptyList()); + when(logAnalyzerHolder.provider()).thenReturn(provider); + } else { + final org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider + provider = mock( + org.apache.skywalking.oap.log.analyzer.v2.provider.LogAnalyzerModuleProvider.class); + when(provider.getMetricConverts()).thenReturn(Collections.emptyList()); + when(logAnalyzerHolder.provider()).thenReturn(provider); + } + when(manager.find(LogAnalyzerModule.NAME)).thenReturn(logAnalyzerHolder); + + when(manager.find(CoreModule.NAME).provider()).thenReturn(mock(ModuleServiceHolder.class)); + when(manager.find(CoreModule.NAME).provider().getService(SourceReceiver.class)) + .thenReturn(mock(SourceReceiver.class)); + when(manager.find(CoreModule.NAME).provider().getService(ConfigService.class)) + .thenReturn(mock(ConfigService.class)); + when(manager.find(CoreModule.NAME) + .provider() + .getService(ConfigService.class) + .getSearchableLogsTags()) + 
.thenReturn(""); + final NamingControl namingControl = mock(NamingControl.class); + when(namingControl.formatServiceName(anyString())) + .thenAnswer(inv -> inv.getArgument(0)); + when(namingControl.formatInstanceName(anyString())) + .thenAnswer(inv -> inv.getArgument(0)); + when(namingControl.formatEndpointName(anyString(), anyString())) + .thenAnswer(inv -> inv.getArgument(1)); + when(manager.find(CoreModule.NAME).provider().getService(NamingControl.class)) + .thenReturn(namingControl); + return manager; + } + + @SuppressWarnings("unchecked") + private LogData buildLogData(final Map<String, Object> inputData, + final String dsl) { + if (inputData == null) { + return buildSyntheticLogData(dsl); + } + + final LogData.Builder builder = LogData.newBuilder(); + + final String service = (String) inputData.get("service"); + if (service != null) { + builder.setService(service); + } + + final String instance = (String) inputData.get("instance"); + if (instance != null) { + builder.setServiceInstance(instance); + } + + final String traceId = (String) inputData.get("trace-id"); + if (traceId != null) { + builder.setTraceContext(TraceContext.newBuilder().setTraceId(traceId)); + } + + final Object tsObj = inputData.get("timestamp"); + if (tsObj != null) { + builder.setTimestamp(Long.parseLong(String.valueOf(tsObj))); + } + + final String bodyType = (String) inputData.get("body-type"); + final String body = (String) inputData.get("body"); + + if ("json".equals(bodyType) && body != null) { + builder.setBody(LogDataBody.newBuilder() + .setJson(JSONLog.newBuilder().setJson(body))); + } else if ("text".equals(bodyType) && body != null) { + builder.setBody(LogDataBody.newBuilder() + .setText(TextLog.newBuilder().setText(body))); + } + + final Map<String, String> tags = + (Map<String, String>) inputData.get("tags"); + if (tags != null && !tags.isEmpty()) { + final LogTags.Builder tagsBuilder = LogTags.newBuilder(); + for (final Map.Entry<String, String> tag : tags.entrySet()) { + 
tagsBuilder.addData(KeyStringValuePair.newBuilder() + .setKey(tag.getKey()) + .setValue(tag.getValue())); + } + builder.setTags(tagsBuilder); + } + + return builder.build(); + } + + private LogData buildSyntheticLogData(final String dsl) { + final LogData.Builder builder = LogData.newBuilder() + .setService("test-service") + .setServiceInstance("test-instance") + .setTimestamp(System.currentTimeMillis()) + .setTraceContext(TraceContext.newBuilder() + .setTraceId("test-trace-id-123") + .setTraceSegmentId("test-segment-id-456") + .setSpanId(1)); + + if (dsl.contains("json")) { + builder.setBody(LogDataBody.newBuilder() + .setJson(JSONLog.newBuilder() + .setJson("{\"level\":\"ERROR\",\"msg\":\"test\"," + + "\"layer\":\"GENERAL\",\"service\":\"test-svc\"," + + "\"instance\":\"test-inst\",\"endpoint\":\"test-ep\"," + + "\"time\":\"1234567890\"," + + "\"id\":\"slow-1\",\"statement\":\"SELECT 1\"," + + "\"query_time\":500,\"code\":200," + + "\"env\":\"prod\",\"region\":\"us-east\"," + + "\"skip\":\"false\"," + + "\"data\":{\"name\":\"test-value\"}," + + "\"latency\":100,\"uri\":\"/api/test\"," + + "\"reason\":\"SLOW\",\"pid\":\"proc-1\"," + + "\"dpid\":\"proc-2\",\"dp\":\"CLIENT\"}"))); + } + + if (dsl.contains("LOG_KIND")) { + builder.setTags(LogTags.newBuilder() + .addData(KeyStringValuePair.newBuilder() + .setKey("LOG_KIND").setValue("SLOW_SQL"))); + } + + return builder.build(); + } + + private void disableSinkListeners(final Object dsl) { + try { + final Object filterSpec = getInternalField(dsl, "filterSpec"); + setInternalField(filterSpec, "sinkListenerFactories", Collections.emptyList()); + } catch (Exception e) { + // Best effort + } + } + + private void disableSinkListenersOnSpec(final Object filterSpec) { + try { + setInternalField(filterSpec, "sinkListenerFactories", Collections.emptyList()); + } catch (Exception e) { + // Best effort + } + } + + private static void setInternalField(final Object target, final String fieldName, + final Object value) { + try { + 
Field field = null; + Class<?> clazz = target.getClass(); + while (clazz != null && field == null) { + try { + field = clazz.getDeclaredField(fieldName); + } catch (NoSuchFieldException e) { + clazz = clazz.getSuperclass(); + } + } + if (field != null) { + field.setAccessible(true); + field.set(target, value); + } + } catch (Exception e) { + // ignore + } + } + + private static Object getInternalField(final Object target, final String fieldName) { + try { + Field field = null; + Class<?> clazz = target.getClass(); + while (clazz != null && field == null) { + try { + field = clazz.getDeclaredField(fieldName); + } catch (NoSuchFieldException e) { + clazz = clazz.getSuperclass(); + } + } + if (field != null) { + field.setAccessible(true); + return field.get(target); + } + } catch (Exception e) { + // ignore + } + return null; + } + + @SuppressWarnings("unchecked") + private Map<String, List<LalRule>> loadAllLalYamlFiles() throws Exception { + final Map<String, List<LalRule>> result = new HashMap<>(); + final Yaml yaml = new Yaml(); + + final Path scriptsDir = findScriptsDir("lal"); + if (scriptsDir == null) { + return result; + } + final Path lalDir = scriptsDir.resolve("test-lal"); + if (!Files.isDirectory(lalDir)) { + return result; + } + + // Scan top-level and subdirectories (oap-cases/, feature-cases/) + final List<File> yamlFiles = new ArrayList<>(); + collectYamlFiles(lalDir.toFile(), yamlFiles); + + for (final File file : yamlFiles) { + final String content = Files.readString(file.toPath()); + final Map<String, Object> config = yaml.load(content); + if (config == null || !config.containsKey("rules")) { + continue; + } + final List<Map<String, String>> rules = + (List<Map<String, String>>) config.get("rules"); + if (rules == null) { + continue; + } + + // Load matching .input.data file if present + final String baseName = file.getName() + .replaceFirst("\\.(yaml|yml)$", ""); + final File inputDataFile = new File(file.getParent(), + baseName + ".input.data"); + 
Map<String, Map<String, Object>> inputData = null; + if (inputDataFile.exists()) { + inputData = yaml.load( + Files.readString(inputDataFile.toPath())); + } + + final List<LalRule> lalRules = new ArrayList<>(); + for (final Map<String, String> rule : rules) { + final String name = rule.get("name"); + final String dslStr = rule.get("dsl"); + if (name == null || dslStr == null) { + continue; + } + final String extraLogType = rule.get("extraLogType"); + final String layer = rule.get("layer"); + + final Object ruleInput = inputData != null + ? inputData.get(name) : null; + + final List<Map<String, Object>> inputs; + if (ruleInput instanceof List) { + inputs = (List<Map<String, Object>>) ruleInput; + } else if (ruleInput instanceof Map) { + inputs = Collections.singletonList( + (Map<String, Object>) ruleInput); + } else { + inputs = Collections.emptyList(); + } + lalRules.add(new LalRule( + name, dslStr, extraLogType, layer, inputs, file)); + } + + if (!lalRules.isEmpty()) { + final String relative = lalDir.relativize(file.toPath()).toString(); + result.put("lal/" + relative, lalRules); + } + } + return result; + } + + private Path findScriptsDir(final String language) { + final String[] candidates = { + "test/script-cases/scripts/" + language, + "../../scripts/" + language + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isDirectory(path)) { + return path; + } + } + return null; + } + + private static void collectYamlFiles(final File dir, + final List<File> result) { + final File[] children = dir.listFiles(); + if (children == null) { + return; + } + for (final File child : children) { + if (child.isDirectory()) { + collectYamlFiles(child, result); + } else if (child.getName().endsWith(".yaml") + || child.getName().endsWith(".yml")) { + result.add(child); + } + } + } + + // ==================== Proto extraLog builder ==================== + + @SuppressWarnings("unchecked") + private static Message buildExtraLog( + 
final Map<String, Object> inputData) throws Exception { + if (inputData == null) { + return null; + } + final Map<String, String> extraLog = + (Map<String, String>) inputData.get("extra-log"); + if (extraLog == null) { + return null; + } + + final String protoClass = extraLog.get("proto-class"); + final String protoJson = extraLog.get("proto-json"); + if (protoClass == null || protoJson == null) { + return null; + } + + final Class<?> clazz = Class.forName(protoClass); + final Message.Builder builder = (Message.Builder) + clazz.getMethod("newBuilder").invoke(null); + JsonFormat.parser() + .ignoringUnknownFields() + .merge(protoJson, builder); + return builder.build(); + } + + // ==================== SPI lookup ==================== + + private Map<String, Class<?>> spiTypes; + + private Map<String, Class<?>> spiExtraLogTypes() { + if (spiTypes == null) { + spiTypes = new HashMap<>(); + for (final LALSourceTypeProvider p : + ServiceLoader.load(LALSourceTypeProvider.class)) { + spiTypes.put(p.layer().name(), p.extraLogType()); + } + } + return spiTypes; + } + + // ==================== Inner classes ==================== + + private static class LalRule { + final String name; + final String dsl; + final String extraLogType; + final String layer; + final List<Map<String, Object>> inputs; + final File sourceFile; + + LalRule(final String name, final String dsl, + final String extraLogType, final String layer, + final List<Map<String, Object>> inputs, + final File sourceFile) { + this.name = name; + this.dsl = dsl; + this.extraLogType = extraLogType; + this.layer = layer; + this.inputs = inputs; + this.sourceFile = sourceFile; + } + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/TestMeshLALSourceTypeProvider.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/TestMeshLALSourceTypeProvider.java new 
file mode 100644 index 000000000000..bcaf35f6547f --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/lal/TestMeshLALSourceTypeProvider.java @@ -0,0 +1,34 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.checker.lal; + +import io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry; +import org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider; +import org.apache.skywalking.oap.server.core.analysis.Layer; + +public class TestMeshLALSourceTypeProvider implements LALSourceTypeProvider { + @Override + public Layer layer() { + return Layer.MESH; + } + + @Override + public Class<?> extraLogType() { + return HTTPAccessLogEntry.class; + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalBenchmark.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalBenchmark.java new file mode 100644 index 000000000000..94d7e3587125 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalBenchmark.java @@ -0,0 +1,342 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.checker.mal; + +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; +import com.google.common.collect.ImmutableMap; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALClassGenerator; +import org.junit.jupiter.api.Test; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Level; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.OutputTimeUnit; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Warmup; +import org.openjdk.jmh.infra.Blackhole; +import org.openjdk.jmh.runner.Runner; +import org.openjdk.jmh.runner.options.Options; +import org.openjdk.jmh.runner.options.OptionsBuilder; +import org.yaml.snakeyaml.Yaml; + +/** + * JMH benchmark comparing MAL v1 (Groovy) vs v2 (ANTLR4 + Javassist) + * compilation and execution performance using oap.yaml (56 rules). + * + * <p>Run: mvn test -pl test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker + * -Dtest=MalBenchmark#runBenchmark -DfailIfNoTests=false + * + * <h2>Reference results (Apple M3 Max, 128 GB RAM, macOS 26.2, JDK 25)</h2> + * <pre> + * Benchmark Mode Cnt Score Error Units + * MalBenchmark.compileV1 avgt 5 58580.003 ± 5959.853 us/op + * MalBenchmark.compileV2 avgt 5 62741.101 ± 12124.545 us/op + * MalBenchmark.executeV1 avgt 5 1838.774 ± 143.389 us/op + * MalBenchmark.executeV2 avgt 5 376.037 ± 15.169 us/op + * </pre> + * + * <p>Execute speedup: v2 is ~4.9x faster than v1. + * Compile times are comparable (56 rules, dominated by class generation overhead). 
+ */ +@State(Scope.Thread) +@BenchmarkMode(Mode.AverageTime) +@OutputTimeUnit(TimeUnit.MICROSECONDS) +@Warmup(iterations = 3, time = 2) +@Measurement(iterations = 5, time = 5) +@Fork(1) +public class MalBenchmark { + + private List<RuleEntry> rules; + + // Pre-compiled expressions for execute benchmarks + private List<org.apache.skywalking.oap.meter.analyzer.dsl.Expression> v1Exprs; + private List<org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression> v2Exprs; + + // Shared input data + private Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> v1Data; + private Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> v2Data; + private List<String> ruleNames; + + @Setup(Level.Trial) + @SuppressWarnings("unchecked") + public void setup() throws Exception { + // Load oap.yaml + final Path oapYaml = findScript("mal", "test-otel-rules/oap.yaml"); + final Yaml yaml = new Yaml(); + final Map<String, Object> config = yaml.load(Files.readString(oapYaml)); + + final String expSuffix = config.containsKey("expSuffix") + ? (String) config.get("expSuffix") : ""; + final String expPrefix = config.containsKey("expPrefix") + ? 
(String) config.get("expPrefix") : ""; + final List<Map<String, String>> metricsRules = + (List<Map<String, String>>) config.get("metricsRules"); + + rules = new ArrayList<>(); + for (final Map<String, String> rule : metricsRules) { + final String name = rule.get("name"); + final String exp = rule.get("exp"); + if (name != null && exp != null) { + rules.add(new RuleEntry(name, formatExp(expPrefix, expSuffix, exp))); + } + } + + // Load oap.data.yaml + final Path dataYaml = oapYaml.getParent().resolve("oap.data.yaml"); + final Map<String, Object> dataConfig = yaml.load(Files.readString(dataYaml)); + final Map<String, Object> inputSection = + (Map<String, Object>) dataConfig.get("input"); + + v1Data = buildV1Data(inputSection); + v2Data = buildV2Data(inputSection); + + // Pre-compile all rules for execute benchmarks + v1Exprs = new ArrayList<>(); + v2Exprs = new ArrayList<>(); + ruleNames = new ArrayList<>(); + for (final RuleEntry rule : rules) { + try { + final org.apache.skywalking.oap.meter.analyzer.dsl.Expression v1 = + org.apache.skywalking.oap.meter.analyzer.dsl.DSL.parse( + rule.name, rule.expression); + v1.parse(); + + final MALClassGenerator gen = new MALClassGenerator(); + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression v2 = + gen.compile(rule.name, rule.expression); + // Add to the parallel lists only after both paths compile, so + // v1Exprs/v2Exprs/ruleNames stay index-aligned when one path fails + v1Exprs.add(v1); + v2Exprs.add(v2); + ruleNames.add(rule.name); + } catch (Exception e) { + // Skip rules that fail to compile in either path (same as comparison test) + } + } + + // Prime CounterWindows for increase/rate expressions + for (int i = 0; i < v1Exprs.size(); i++) { + final String name = ruleNames.get(i); + try { + v1Exprs.get(i).run(v1Data); + } catch (Exception ignored) { + } + try { + setMetricName(v2Data, name); + v2Exprs.get(i).run(v2Data); + } catch (Exception ignored) { + } + } + } + + @Benchmark + public void compileV1(final Blackhole bh) { + for (final RuleEntry rule : rules) { + try { + final org.apache.skywalking.oap.meter.analyzer.dsl.Expression expr = + 
org.apache.skywalking.oap.meter.analyzer.dsl.DSL.parse( + rule.name, rule.expression); + bh.consume(expr.parse()); + } catch (Exception ignored) { + } + } + } + + @Benchmark + public void compileV2(final Blackhole bh) { + for (final RuleEntry rule : rules) { + try { + final MALClassGenerator gen = new MALClassGenerator(); + bh.consume(gen.compile(rule.name, rule.expression)); + } catch (Exception ignored) { + } + } + } + + @Benchmark + public void executeV1(final Blackhole bh) { + for (final org.apache.skywalking.oap.meter.analyzer.dsl.Expression v1 : v1Exprs) { + try { + bh.consume(v1.run(v1Data)); + } catch (Exception ignored) { + } + } + } + + @Benchmark + public void executeV2(final Blackhole bh) { + for (int i = 0; i < v2Exprs.size(); i++) { + try { + setMetricName(v2Data, ruleNames.get(i)); + bh.consume(v2Exprs.get(i).run(v2Data)); + } catch (Exception ignored) { + } + } + } + + // ==================== Data builders ==================== + + @SuppressWarnings("unchecked") + private Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> buildV1Data( + final Map<String, Object> inputSection) { + final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> data = + new HashMap<>(); + final long now = System.currentTimeMillis(); + for (final Map.Entry<String, Object> entry : inputSection.entrySet()) { + final String sampleName = entry.getKey(); + final List<Map<String, Object>> sampleList = + (List<Map<String, Object>>) entry.getValue(); + final List<org.apache.skywalking.oap.meter.analyzer.dsl.Sample> samples = + new ArrayList<>(); + for (final Map<String, Object> def : sampleList) { + final Map<String, String> labels = parseLabels(def); + samples.add(org.apache.skywalking.oap.meter.analyzer.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(((Number) def.get("value")).doubleValue()) + .timestamp(now) + .build()); + } + data.put(sampleName, + 
org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder + .newBuilder(samples.toArray( + new org.apache.skywalking.oap.meter.analyzer.dsl.Sample[0])) + .build()); + } + return data; + } + + @SuppressWarnings("unchecked") + private Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> buildV2Data( + final Map<String, Object> inputSection) { + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> data = + new HashMap<>(); + final long now = System.currentTimeMillis(); + for (final Map.Entry<String, Object> entry : inputSection.entrySet()) { + final String sampleName = entry.getKey(); + final List<Map<String, Object>> sampleList = + (List<Map<String, Object>>) entry.getValue(); + final List<org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample> samples = + new ArrayList<>(); + for (final Map<String, Object> def : sampleList) { + final Map<String, String> labels = parseLabels(def); + samples.add(org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(((Number) def.get("value")).doubleValue()) + .timestamp(now) + .build()); + } + data.put(sampleName, + org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder + .newBuilder(samples.toArray( + new org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[0])) + .build()); + } + return data; + } + + @SuppressWarnings("unchecked") + private static Map<String, String> parseLabels(final Map<String, Object> def) { + final Map<String, String> labels = new HashMap<>(); + final Object raw = def.get("labels"); + if (raw instanceof Map) { + for (final Map.Entry<String, Object> e : + ((Map<String, Object>) raw).entrySet()) { + labels.put(e.getKey(), String.valueOf(e.getValue())); + } + } + return labels; + } + + private static void setMetricName( + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> data, + final String name) { + for (final 
org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily s : + data.values()) { + if (s != org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily.EMPTY) { + s.context.setMetricName(name); + } + } + } + + // ==================== Utilities ==================== + + private static String formatExp(final String expPrefix, final String expSuffix, + final String exp) { + String ret = exp; + if (!expPrefix.isEmpty()) { + final int dot = exp.indexOf('.'); + if (dot >= 0) { + ret = String.format("(%s.%s)", exp.substring(0, dot), expPrefix); + final String after = exp.substring(dot + 1); + if (!after.isEmpty()) { + ret = String.format("(%s.%s)", ret, after); + } + } else { + ret = String.format("(%s.%s)", exp, expPrefix); + } + } + if (!expSuffix.isEmpty()) { + ret = String.format("(%s).%s", ret, expSuffix); + } + return ret; + } + + private static Path findScript(final String language, final String relative) { + final String[] candidates = { + "test/script-cases/scripts/" + language + "/" + relative, + "../../scripts/" + language + "/" + relative + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isRegularFile(path)) { + return path; + } + } + throw new IllegalStateException("Cannot find " + relative + " in scripts/" + language); + } + + private static class RuleEntry { + final String name; + final String expression; + + RuleEntry(final String name, final String expression) { + this.name = name; + this.expression = expression; + } + } + + // ==================== JMH launcher ==================== + + @Test + void runBenchmark() throws Exception { + final Options opt = new OptionsBuilder() + .include(getClass().getSimpleName()) + .build(); + new Runner(opt).run(); + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalComparisonTest.java 
b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalComparisonTest.java new file mode 100644 index 000000000000..1f84691350b1 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalComparisonTest.java @@ -0,0 +1,1107 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.checker.mal; + +import java.io.File; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.TreeMap; +import java.util.regex.Matcher; +import java.util.regex.Pattern; +import java.util.stream.Collectors; +import com.google.common.collect.ImmutableMap; +import org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALClassGenerator; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.ExpressionMetadata; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.DynamicTest; +import org.junit.jupiter.api.TestFactory; +import org.mockito.MockedStatic; +import org.mockito.Mockito; +import org.yaml.snakeyaml.Yaml; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.junit.jupiter.api.Assertions.fail; + +/** + * Dual-path comparison test for MAL (Meter Analysis Language) expressions. + * <ul> + * <li>Path A (v1): Groovy via {@code org.apache.skywalking.oap.meter.analyzer.dsl.DSL}</li> + * <li>Path B (v2): ANTLR4 + Javassist via {@link MALClassGenerator}</li> + * </ul> + * + * <p>When a companion {@code .data.yaml} file exists alongside a MAL YAML script, + * it provides realistic mock data (sample names, labels, values) for runtime + * execution comparison and expected output validation. + * + * <p>v1 classes use original package {@code org.apache.skywalking.oap.meter.analyzer.dsl.*}, + * v2 classes use {@code org.apache.skywalking.oap.meter.analyzer.v2.dsl.*}. + * Both are called via hard-coded typed references (no reflection). 
+ */ +@Slf4j +class MalComparisonTest { + + private static MockedStatic<org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry> V1_K8S_MOCK; + private static MockedStatic<org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry> V2_K8S_MOCK; + + static { + final org.apache.skywalking.oap.server.core.config.NamingControl namingControl = + Mockito.mock(org.apache.skywalking.oap.server.core.config.NamingControl.class); + Mockito.when(namingControl.formatServiceName(org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(invocation -> invocation.getArgument(0)); + Mockito.when(namingControl.formatInstanceName(org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(invocation -> invocation.getArgument(0)); + Mockito.when(namingControl.formatEndpointName(org.mockito.ArgumentMatchers.anyString(), org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(invocation -> invocation.getArgument(1)); + MeterEntity.setNamingControl(namingControl); + + // Mock K8s metadata for retagByK8sMeta rules (pod→service lookup) + final org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry mockV1K8s = + Mockito.mock(org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry.class); + Mockito.when(mockV1K8s.findServiceName( + org.mockito.ArgumentMatchers.anyString(), + org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(inv -> inv.<String>getArgument(1) + "." 
+ inv.<String>getArgument(0)); + V1_K8S_MOCK = Mockito.mockStatic( + org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry.class); + V1_K8S_MOCK.when( + org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry::getInstance) + .thenReturn(mockV1K8s); + + final org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry mockV2K8s = + Mockito.mock(org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry.class); + Mockito.when(mockV2K8s.findServiceName( + org.mockito.ArgumentMatchers.anyString(), + org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(inv -> inv.<String>getArgument(1) + "." + inv.<String>getArgument(0)); + V2_K8S_MOCK = Mockito.mockStatic( + org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry.class); + V2_K8S_MOCK.when( + org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry::getInstance) + .thenReturn(mockV2K8s); + } + + @AfterAll + static void teardownK8sMocks() { + if (V1_K8S_MOCK != null) { + V1_K8S_MOCK.close(); + } + if (V2_K8S_MOCK != null) { + V2_K8S_MOCK.close(); + } + } + + private static final Pattern TAG_EQUAL_PATTERN = + Pattern.compile("\\.tagEqual\\s*\\(\\s*'([^']+)'\\s*,\\s*'([^']+)'\\s*\\)"); + + private static final String[] HISTOGRAM_LE_VALUES = + {"50", "100", "250", "500", "1000"}; + + /** Advance by 2 s per call — must be >1 s (for timeDiff/1000≥1) and <15 s (smallest rate window). 
*/ + private long timestampCounter = System.currentTimeMillis(); + + @TestFactory + Collection<DynamicTest> malExpressionsMatch() throws Exception { + final List<DynamicTest> tests = new ArrayList<>(); + final Map<String, List<MalRule>> yamlRules = loadAllMalYamlFiles(); + + for (final Map.Entry<String, List<MalRule>> entry : yamlRules.entrySet()) { + final String yamlFile = entry.getKey(); + for (final MalRule rule : entry.getValue()) { + // Compile v2 once per metric — compilation is independent of input data + org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression v2Expr = null; + ExpressionMetadata v2Meta = null; + String v2CompileError = null; + try { + v2Expr = compileV2(rule); + v2Meta = v2Expr.metadata(); + } catch (Exception e) { + // Keep the exception class: getMessage() alone may be null + v2CompileError = e.getClass().getSimpleName() + ": " + e.getMessage(); + } + + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression fExpr = v2Expr; + final ExpressionMetadata fMeta = v2Meta; + final String fErr = v2CompileError; + tests.add(DynamicTest.dynamicTest( + yamlFile + " | " + rule.name, + () -> compareExpression(rule, fExpr, fMeta, fErr) + )); + } + } + + return tests; + } + + private org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression compileV2( + final MalRule rule) throws Exception { + final MALClassGenerator generator = new MALClassGenerator(); + if (rule.sourceFile != null) { + final String baseName = rule.sourceFile.getName() + .replaceFirst("\\.(yaml|yml)$", ""); + generator.setClassOutputDir(new java.io.File( + rule.sourceFile.getParent(), + baseName + ".generated-classes")); + generator.setClassNameHint(rule.name); + } + return generator.compile(rule.name, rule.fullExpression); + } + + @SuppressWarnings("unchecked") + private void compareExpression( + final MalRule rule, + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression v2MalExpr, + final ExpressionMetadata v2Meta, + final String v2CompileError) throws Exception { + final String metricName = rule.name; + final String expression = 
rule.fullExpression; + + // ---- V1: Groovy path (original packages) ---- + final org.apache.skywalking.oap.meter.analyzer.dsl.Expression v1Expr; + final org.apache.skywalking.oap.meter.analyzer.dsl.ExpressionParsingContext v1Ctx; + try { + v1Expr = org.apache.skywalking.oap.meter.analyzer.dsl.DSL.parse( + metricName, expression); + v1Ctx = v1Expr.parse(); + } catch (Exception e) { + final Throwable cause = e.getCause() != null ? e.getCause() : e; + fail(metricName + ": v1 (Groovy) failed — " + + cause.getClass().getSimpleName() + ": " + cause.getMessage()); + return; + } + + // ---- Compare metadata ---- + if (v2Meta == null) { + fail(metricName + ": v2 compile failed but v1 succeeded — " + v2CompileError); + return; + } + + assertEquals(v1Ctx.getSamples(), v2Meta.getSamples(), + metricName + ": samples mismatch"); + assertEquals(v1Ctx.getScopeType(), v2Meta.getScopeType(), + metricName + ": scopeType mismatch"); + assertEquals( + v1Ctx.getDownsampling() == null ? null : v1Ctx.getDownsampling().name(), + v2Meta.getDownsampling() == null ? 
null : v2Meta.getDownsampling().name(), + metricName + ": downsampling mismatch"); + assertEquals(v1Ctx.isHistogram(), v2Meta.isHistogram(), + metricName + ": isHistogram mismatch"); + assertEquals(v1Ctx.getScopeLabels(), v2Meta.getScopeLabels(), + metricName + ": scopeLabels mismatch"); + assertEquals(v1Ctx.getAggregationLabels(), v2Meta.getAggregationLabels(), + metricName + ": aggregationLabels mismatch"); + + // ---- Runtime execution comparison ---- + if (rule.inputConfig != null) { + final Map<String, Object> inputSection = + (Map<String, Object>) rule.inputConfig.get("input"); + final Map<String, Object> expectedSection = + (Map<String, Object>) rule.inputConfig.get("expected"); + if (inputSection != null) { + compareExecutionWithInput( + rule, v1Expr, v2MalExpr, v2Meta, inputSection, expectedSection); + return; + } + } + compareExecution(metricName, expression, v1Expr, v2MalExpr, v2Meta); + } + + // ==================== Input-driven runtime comparison ==================== + + @SuppressWarnings("unchecked") + private void compareExecutionWithInput( + final MalRule rule, + final org.apache.skywalking.oap.meter.analyzer.dsl.Expression v1Expr, + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression v2MalExpr, + final ExpressionMetadata v2Meta, + final Map<String, Object> inputSection, + final Map<String, Object> expectedSection) { + final String metricName = rule.name; + // Unique per file+rule to isolate CounterWindow entries across files + final String cwMetricName = rule.sourceFile.getName() + "/" + metricName; + final String expression = rule.fullExpression; + final boolean hasIncrease = expression.contains(".increase(") + || expression.contains(".rate("); + + // Clear CounterWindow before each rule so previous rules' entries + // cannot contaminate rate()/increase() calculations + org.apache.skywalking.oap.meter.analyzer.dsl.counter.CounterWindow + .INSTANCE.reset(); + + // For increase()/rate(), prime the CounterWindow with half-value data + 
// so that rate = (value - value*0.5) / dt ≠ 0 — avoids 0/0 NaN in + // expressions like A.rate()/B.rate() + // Build v1 prime + v1 real consecutively so their timestamp delta + // matches the generator (2 s gap). + final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> v1Data; + if (hasIncrease) { + try { + final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> v1Prime = + buildV1MockDataFromInput(inputSection, 0.5); + for (final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily s : v1Prime.values()) { + s.context.setMetricName(cwMetricName); + } + v1Expr.run(v1Prime); + } catch (Exception ignored) { + } + } + v1Data = buildV1MockDataFromInput(inputSection, 1.0); + for (final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily s : v1Data.values()) { + s.context.setMetricName(cwMetricName); + } + + // v2 prime + v2 real (also consecutive, same delta) + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> v2Data; + if (hasIncrease) { + try { + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> primeData = + buildV2MockDataFromInput(inputSection, 0.5); + for (final org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily s : primeData.values()) { + if (s != org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily.EMPTY) { + s.context.setMetricName(cwMetricName); + } + } + v2MalExpr.run(primeData); + } catch (Exception ignored) { + } + } + v2Data = buildV2MockDataFromInput(inputSection, 1.0); + for (final org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily s : v2Data.values()) { + if (s != org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily.EMPTY) { + s.context.setMetricName(cwMetricName); + } + } + + // V1 run — v1 is production-verified; if it fails, the input data is wrong + org.apache.skywalking.oap.meter.analyzer.dsl.Result v1Result; + try { + v1Result = v1Expr.run(v1Data); + } catch (Exception e) { + fail(metricName + ": v1 
runtime threw with input data — " + + e.getClass().getSimpleName() + ": " + e.getMessage()); + return; + } + assertTrue(v1Result.isSuccess(), + metricName + ": v1 runtime returned not-success with input data" + + " — fix the .data.yaml input section"); + + // V2 run + org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily v2Sf; + try { + v2Sf = v2MalExpr.run(v2Data); + } catch (Exception e) { + fail(metricName + ": v2 runtime failed but v1 succeeded (with input data) — " + + e.getClass().getSimpleName() + ": " + e.getMessage()); + return; + } + + // Compare results — both must succeed + final boolean v2Success = v2Sf != null + && v2Sf != org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily.EMPTY; + assertTrue(v2Success, + metricName + ": v2 returned EMPTY but v1 succeeded"); + + if (v1Result.isSuccess() && v2Success) { + compareSampleFamilies(metricName, v1Result.getData(), v2Sf); + } + + // Validate expected section + if (expectedSection != null) { + final String qualifiedMetricName = rule.metricPrefix != null + ? 
rule.metricPrefix + "_" + metricName : metricName; + final Map<String, Object> metricExpected = + (Map<String, Object>) expectedSection.get(qualifiedMetricName); + if (metricExpected == null) { + // Try without prefix + final Map<String, Object> directExpected = + (Map<String, Object>) expectedSection.get(metricName); + if (directExpected != null) { + validateExpected(metricName, v2Sf, v2Success, directExpected); + } + } else { + validateExpected(qualifiedMetricName, v2Sf, v2Success, metricExpected); + } + } + } + + @SuppressWarnings("unchecked") + private void validateExpected(final String metricName, + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily v2Sf, + final boolean v2Success, + final Map<String, Object> expected) { + // Rich expected: entities + samples → hard assertions + final List<Map<String, Object>> expectedEntities = + (List<Map<String, Object>>) expected.get("entities"); + final List<Map<String, Object>> expectedSamples = + (List<Map<String, Object>>) expected.get("samples"); + + if (expectedEntities != null || expectedSamples != null) { + // EMPTY is a hard failure when rich expected data exists + assertTrue(v2Success, metricName + ": v2 returned EMPTY but rich expected data exists"); + assertNotNull(v2Sf, metricName + ": v2 SampleFamily is null"); + } + + // Validate entities (MeterEntity from context) + if (expectedEntities != null && !expectedEntities.isEmpty()) { + final Map<MeterEntity, ?> meterSamples = v2Sf.context.getMeterSamples(); + assertNotNull(meterSamples, metricName + ": no MeterEntity output"); + + final List<String> actualEntityDescs = meterSamples.keySet().stream() + .map(MalComparisonTest::describeEntity) + .sorted() + .collect(Collectors.toList()); + + final List<String> expectedEntityDescs = expectedEntities.stream() + .map(MalComparisonTest::describeExpectedEntity) + .sorted() + .collect(Collectors.toList()); + + assertEquals(expectedEntityDescs.size(), actualEntityDescs.size(), + metricName + ": entity count 
mismatch — expected " + + expectedEntityDescs + " but got " + actualEntityDescs); + + for (int i = 0; i < expectedEntityDescs.size(); i++) { + assertEquals(expectedEntityDescs.get(i), actualEntityDescs.get(i), + metricName + ": entity[" + i + "] mismatch"); + } + } + + // Validate samples (labels + values) + if (expectedSamples != null && !expectedSamples.isEmpty()) { + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[] actualSorted = + sortV2Samples(v2Sf.samples); + + assertEquals(expectedSamples.size(), actualSorted.length, + metricName + ": expected " + expectedSamples.size() + + " samples but got " + actualSorted.length); + + // Sort expected by normalized (all-String) labels for consistent comparison + // SnakeYAML may parse label values as Integer/null, so normalize first + final List<Map<String, Object>> sortedExpected = new ArrayList<>(expectedSamples); + sortedExpected.sort((a, b) -> { + final String aLabels = normalizeLabelsForSort(a.get("labels")); + final String bLabels = normalizeLabelsForSort(b.get("labels")); + return aLabels.compareTo(bLabels); + }); + + for (int i = 0; i < sortedExpected.size(); i++) { + final Map<String, Object> expSample = sortedExpected.get(i); + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample actSample = + actualSorted[i]; + + // Compare labels + final Map<?, ?> rawExpLabels = + (Map<?, ?>) expSample.get("labels"); + if (rawExpLabels != null) { + final Map<String, String> expLabels = new LinkedHashMap<>(); + for (final Map.Entry<?, ?> le : rawExpLabels.entrySet()) { + expLabels.put(String.valueOf(le.getKey()), + le.getValue() == null ? 
"" : String.valueOf(le.getValue())); + } + assertEquals(expLabels, actSample.getLabels(), + metricName + ": sample[" + i + "] labels mismatch"); + } + + // Compare values with tolerance + // For time()-dependent expressions (large magnitudes), use relative tolerance + if (expSample.containsKey("value")) { + final double expValue = ((Number) expSample.get("value")).doubleValue(); + final double actValue = actSample.getValue(); + final double tolerance = Math.abs(expValue) > 1e6 + ? Math.abs(expValue) * 0.01 : 0.001; + assertEquals(expValue, actValue, tolerance, + metricName + ": sample[" + i + "] value mismatch" + + " (expected=" + expValue + ", actual=" + actValue + ")"); + } + } + } + + // Legacy min_samples (soft check for backwards compatibility) + if (expected.containsKey("min_samples")) { + final int minSamples = ((Number) expected.get("min_samples")).intValue(); + if (minSamples > 0 && v2Success) { + assertTrue(v2Sf.samples.length >= minSamples, + metricName + ": expected min_samples=" + minSamples + + " but got " + v2Sf.samples.length); + } + } + } + + private static String describeExpectedEntity(final Map<String, Object> entity) { + final StringBuilder sb = new StringBuilder(); + sb.append(entity.getOrDefault("scope", "SERVICE")); + sb.append("|svc=").append(entity.getOrDefault("service", "")); + final Object inst = entity.get("instance"); + if (inst != null && !inst.toString().isEmpty()) { + sb.append("|inst=").append(inst); + } + final Object ep = entity.get("endpoint"); + if (ep != null && !ep.toString().isEmpty()) { + sb.append("|ep=").append(ep); + } + final Object layer = entity.get("layer"); + if (layer != null) { + sb.append("|layer=").append(layer); + } + for (int i = 0; i <= 5; i++) { + final Object attr = entity.get("attr" + i); + if (attr != null) { + sb.append("|attr").append(i).append("=").append(attr); + } + } + return sb.toString(); + } + + /** + * Normalize a YAML labels map to a string with all values converted to String. 
+ * SnakeYAML may parse label values as Integer (e.g. status: 404) or null
+ * (e.g. status: ), which would sort differently from "404" or "".
+ */
+ @SuppressWarnings("unchecked")
+ private static String normalizeLabelsForSort(final Object rawLabels) {
+ if (!(rawLabels instanceof Map)) {
+ return String.valueOf(rawLabels);
+ }
+ final Map<String, String> normalized = new TreeMap<>();
+ for (final Map.Entry<?, ?> e : ((Map<?, ?>) rawLabels).entrySet()) {
+ normalized.put(
+ String.valueOf(e.getKey()),
+ e.getValue() == null ? "" : String.valueOf(e.getValue()));
+ }
+ return normalized.toString();
+ }
+
+ // ==================== Build mock data from .data.yaml ====================
+
+ @SuppressWarnings("unchecked")
+ private Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> buildV1MockDataFromInput(
+ final Map<String, Object> inputSection, final double valueScale) {
+ final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> data =
+ new HashMap<>();
+ final long now = timestampCounter;
+ timestampCounter += 2_000;
+
+ for (final Map.Entry<String, Object> entry : inputSection.entrySet()) {
+ final String sampleName = entry.getKey();
+ final List<Map<String, Object>> sampleList =
+ (List<Map<String, Object>>) entry.getValue();
+ final List<org.apache.skywalking.oap.meter.analyzer.dsl.Sample> samples =
+ new ArrayList<>();
+
+ for (final Map<String, Object> sampleDef : sampleList) {
+ final Map<String, String> labels = new HashMap<>();
+ final Object rawLabels = sampleDef.get("labels");
+ if (rawLabels instanceof Map) {
+ for (final Map.Entry<?, ?> le :
+ ((Map<?, ?>) rawLabels).entrySet()) {
+ labels.put(String.valueOf(le.getKey()),
+ le.getValue() == null ?
"" : String.valueOf(le.getValue())); + } + } + final double value = ((Number) sampleDef.get("value")).doubleValue() + * valueScale; + samples.add(org.apache.skywalking.oap.meter.analyzer.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(value) + .timestamp(now) + .build()); + } + + data.put(sampleName, + org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder + .newBuilder(samples.toArray( + new org.apache.skywalking.oap.meter.analyzer.dsl.Sample[0])) + .build()); + } + return data; + } + + @SuppressWarnings("unchecked") + private Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> buildV2MockDataFromInput( + final Map<String, Object> inputSection, final double valueScale) { + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> data = + new HashMap<>(); + final long now = timestampCounter; + timestampCounter += 2_000; + + for (final Map.Entry<String, Object> entry : inputSection.entrySet()) { + final String sampleName = entry.getKey(); + final List<Map<String, Object>> sampleList = + (List<Map<String, Object>>) entry.getValue(); + final List<org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample> samples = + new ArrayList<>(); + + for (final Map<String, Object> sampleDef : sampleList) { + final Map<String, String> labels = new HashMap<>(); + final Object rawLabels = sampleDef.get("labels"); + if (rawLabels instanceof Map) { + for (final Map.Entry<?, ?> le : + ((Map<?, ?>) rawLabels).entrySet()) { + labels.put(String.valueOf(le.getKey()), + le.getValue() == null ? 
"" : String.valueOf(le.getValue())); + } + } + final double value = ((Number) sampleDef.get("value")).doubleValue() + * valueScale; + samples.add(org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(value) + .timestamp(now) + .build()); + } + + data.put(sampleName, + org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder + .newBuilder(samples.toArray( + new org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[0])) + .build()); + } + return data; + } + + // ==================== Auto-generated mock data (fallback) ==================== + + private void compareExecution( + final String metricName, + final String expression, + final org.apache.skywalking.oap.meter.analyzer.dsl.Expression v1Expr, + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression v2MalExpr, + final ExpressionMetadata v2Meta) { + final boolean hasIncrease = expression.contains(".increase(") + || expression.contains(".rate("); + + // Clear CounterWindow before each rule so previous rules' entries + // cannot contaminate rate()/increase() calculations + org.apache.skywalking.oap.meter.analyzer.dsl.counter.CounterWindow + .INSTANCE.reset(); + + // For increase()/rate(), prime then build real data consecutively per engine + // so that v1 prime→real and v2 prime→real each have a consistent 2 s delta. 
+ final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> v1Data; + if (hasIncrease) { + try { + final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> v1Prime = + buildV1MockData(metricName, expression, v2Meta, 0.5); + for (final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily s : v1Prime.values()) { + s.context.setMetricName(metricName); + } + v1Expr.run(v1Prime); + } catch (Exception ignored) { + } + } + v1Data = buildV1MockData(metricName, expression, v2Meta, 1.0); + for (final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily s : v1Data.values()) { + s.context.setMetricName(metricName); + } + + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> v2Data; + if (hasIncrease) { + try { + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> primeData = + buildV2MockData(metricName, expression, v2Meta, 0.5); + for (final org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily s : primeData.values()) { + if (s != org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily.EMPTY) { + s.context.setMetricName(metricName); + } + } + v2MalExpr.run(primeData); + } catch (Exception ignored) { + } + } + v2Data = buildV2MockData(metricName, expression, v2Meta, 1.0); + + // V1 run — v1 is production-verified; if it fails, the mock data is wrong + org.apache.skywalking.oap.meter.analyzer.dsl.Result v1Result; + try { + v1Result = v1Expr.run(v1Data); + } catch (Exception e) { + fail(metricName + ": v1 runtime threw with auto-generated data — " + + e.getClass().getSimpleName() + ": " + e.getMessage()); + return; + } + assertTrue(v1Result.isSuccess(), + metricName + ": v1 runtime returned not-success with auto-generated data"); + + // V2 run + org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily v2Sf; + try { + for (final org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily s : v2Data.values()) { + if (s != 
org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily.EMPTY) { + s.context.setMetricName(metricName); + } + } + v2Sf = v2MalExpr.run(v2Data); + } catch (Exception e) { + fail(metricName + ": v2 runtime failed but v1 succeeded — " + + e.getClass().getSimpleName() + ": " + e.getMessage()); + return; + } + + // Compare results — both must succeed + final boolean v2Success = v2Sf != null + && v2Sf != org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily.EMPTY; + assertTrue(v2Success, + metricName + ": v2 returned EMPTY but v1 succeeded"); + + compareSampleFamilies(metricName, v1Result.getData(), v2Sf); + } + + // ==================== V1 mock data (original packages) ==================== + + private Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> buildV1MockData( + final String metricName, final String expression, + final ExpressionMetadata meta, final double valueScale) { + final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> data = + new HashMap<>(); + final long now = timestampCounter; + timestampCounter += 2_000; + final Map<String, String> tagEqualLabels = extractTagEqualLabels(expression); + + for (final String sampleName : meta.getSamples()) { + final Map<String, String> labels = new HashMap<>(); + for (final String label : meta.getScopeLabels()) { + labels.put(label, inferLabelValue(label, tagEqualLabels)); + } + for (final String label : meta.getAggregationLabels()) { + labels.put(label, inferLabelValue(label, tagEqualLabels)); + } + labels.putAll(tagEqualLabels); + + if (meta.isHistogram()) { + data.put(sampleName, buildV1HistogramSamples( + sampleName, labels, now, valueScale)); + } else { + final org.apache.skywalking.oap.meter.analyzer.dsl.Sample sample = + org.apache.skywalking.oap.meter.analyzer.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(100.0 * valueScale) + .timestamp(now) + .build(); + data.put(sampleName, + 
org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder + .newBuilder(sample).build()); + } + } + return data; + } + + private org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily buildV1HistogramSamples( + final String sampleName, final Map<String, String> baseLabels, + final long timestamp, final double valueScale) { + final List<org.apache.skywalking.oap.meter.analyzer.dsl.Sample> samples = + new ArrayList<>(); + double cumulativeValue = 0; + for (final String le : HISTOGRAM_LE_VALUES) { + cumulativeValue += 10.0 * valueScale; + final Map<String, String> labels = new HashMap<>(baseLabels); + labels.put("le", le); + samples.add(org.apache.skywalking.oap.meter.analyzer.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(cumulativeValue) + .timestamp(timestamp) + .build()); + } + return org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder.newBuilder( + samples.toArray(new org.apache.skywalking.oap.meter.analyzer.dsl.Sample[0])).build(); + } + + // ==================== V2 mock data (.v2. 
packages) ==================== + + private Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> buildV2MockData( + final String metricName, final String expression, + final ExpressionMetadata meta, final double valueScale) { + final Map<String, org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily> data = + new HashMap<>(); + final long now = timestampCounter; + timestampCounter += 2_000; + final Map<String, String> tagEqualLabels = extractTagEqualLabels(expression); + + for (final String sampleName : meta.getSamples()) { + final Map<String, String> labels = new HashMap<>(); + for (final String label : meta.getScopeLabels()) { + labels.put(label, inferLabelValue(label, tagEqualLabels)); + } + for (final String label : meta.getAggregationLabels()) { + labels.put(label, inferLabelValue(label, tagEqualLabels)); + } + labels.putAll(tagEqualLabels); + + if (meta.isHistogram()) { + data.put(sampleName, buildV2HistogramSamples( + sampleName, labels, now, valueScale)); + } else { + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample sample = + org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(100.0 * valueScale) + .timestamp(now) + .build(); + data.put(sampleName, + org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder + .newBuilder(sample).build()); + } + } + return data; + } + + private org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily buildV2HistogramSamples( + final String sampleName, final Map<String, String> baseLabels, + final long timestamp, final double valueScale) { + final List<org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample> samples = + new ArrayList<>(); + double cumulativeValue = 0; + for (final String le : HISTOGRAM_LE_VALUES) { + cumulativeValue += 10.0 * valueScale; + final Map<String, String> labels = new HashMap<>(baseLabels); + labels.put("le", le); + 
samples.add(org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(cumulativeValue) + .timestamp(timestamp) + .build()); + } + return org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamilyBuilder.newBuilder( + samples.toArray( + new org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[0])).build(); + } + + // ==================== Cross-version comparison ==================== + + private static void compareSampleFamilies( + final String metricName, + final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily v1Sf, + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily v2Sf) { + final org.apache.skywalking.oap.meter.analyzer.dsl.Sample[] v1Sorted = + sortV1Samples(v1Sf.samples); + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[] v2Sorted = + sortV2Samples(v2Sf.samples); + + assertEquals(v1Sorted.length, v2Sorted.length, + metricName + ": output sample count mismatch (v1=" + + v1Sorted.length + ", v2=" + v2Sorted.length + ")"); + + for (int i = 0; i < v1Sorted.length; i++) { + assertEquals(v1Sorted[i].getLabels(), v2Sorted[i].getLabels(), + metricName + ": output sample[" + i + "] labels mismatch"); + assertEquals(v1Sorted[i].getValue(), v2Sorted[i].getValue(), 0.001, + metricName + ": output sample[" + i + "] value mismatch" + + " (v1=" + v1Sorted[i].getValue() + + ", v2=" + v2Sorted[i].getValue() + ")"); + } + + // Compare MeterEntity output (service/instance/endpoint names) + compareMeterEntities(metricName, v1Sf, v2Sf); + } + + private static void compareMeterEntities( + final String metricName, + final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily v1Sf, + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.SampleFamily v2Sf) { + final Map<MeterEntity, ?> v1Entities = v1Sf.context.getMeterSamples(); + final Map<MeterEntity, ?> v2Entities = v2Sf.context.getMeterSamples(); + + if (v1Entities.isEmpty() && v2Entities.isEmpty()) { + 
return;
+ }
+
+ assertEquals(v1Entities.size(), v2Entities.size(),
+ metricName + ": MeterEntity count mismatch (v1="
+ + v1Entities.size() + ", v2=" + v2Entities.size() + ")");
+
+ final List<String> v1EntityDescs = v1Entities.keySet().stream()
+ .map(MalComparisonTest::describeEntity)
+ .sorted()
+ .collect(Collectors.toList());
+ final List<String> v2EntityDescs = v2Entities.keySet().stream()
+ .map(MalComparisonTest::describeEntity)
+ .sorted()
+ .collect(Collectors.toList());
+
+ assertEquals(v1EntityDescs, v2EntityDescs,
+ metricName + ": MeterEntity mismatch");
+ }
+
+ private static String describeEntity(final MeterEntity entity) {
+ final StringBuilder sb = new StringBuilder();
+ sb.append(entity.getScopeType().name());
+ final String svc = entity.getServiceName();
+ sb.append("|svc=").append(svc == null ? "" : svc);
+ if (entity.getInstanceName() != null && !entity.getInstanceName().isEmpty()) {
+ sb.append("|inst=").append(entity.getInstanceName());
+ }
+ if (entity.getEndpointName() != null && !entity.getEndpointName().isEmpty()) {
+ sb.append("|ep=").append(entity.getEndpointName());
+ }
+ if (entity.getLayer() != null) {
+ sb.append("|layer=").append(entity.getLayer().name());
+ }
+ appendAttr(sb, "attr0", entity.getAttr0());
+ appendAttr(sb, "attr1", entity.getAttr1());
+ appendAttr(sb, "attr2", entity.getAttr2());
+ appendAttr(sb, "attr3", entity.getAttr3());
+ appendAttr(sb, "attr4", entity.getAttr4());
+ appendAttr(sb, "attr5", entity.getAttr5());
+ return sb.toString();
+ }
+
+ private static void appendAttr(final StringBuilder sb,
+ final String name, final String value) {
+ if (value != null) {
+ sb.append("|").append(name).append("=").append(value);
+ }
+ }
+
+ private static org.apache.skywalking.oap.meter.analyzer.dsl.Sample[] sortV1Samples(
+ final org.apache.skywalking.oap.meter.analyzer.dsl.Sample[] samples) {
+ final org.apache.skywalking.oap.meter.analyzer.dsl.Sample[] sorted =
+ Arrays.copyOf(samples, samples.length);
+ Arrays.sort(sorted, (a, b) ->
normalizeLabelsForSort(a.getLabels()).compareTo( + normalizeLabelsForSort(b.getLabels()))); + return sorted; + } + + private static org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[] sortV2Samples( + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[] samples) { + final org.apache.skywalking.oap.meter.analyzer.v2.dsl.Sample[] sorted = + Arrays.copyOf(samples, samples.length); + Arrays.sort(sorted, (a, b) -> normalizeLabelsForSort(a.getLabels()).compareTo( + normalizeLabelsForSort(b.getLabels()))); + return sorted; + } + + // ==================== Helpers ==================== + + private static Map<String, String> extractTagEqualLabels(final String expression) { + final Map<String, String> labels = new HashMap<>(); + final Matcher matcher = TAG_EQUAL_PATTERN.matcher(expression); + while (matcher.find()) { + labels.put(matcher.group(1), matcher.group(2)); + } + return labels; + } + + private static String inferLabelValue(final String label, + final Map<String, String> tagEqualLabels) { + if (tagEqualLabels.containsKey(label)) { + return tagEqualLabels.get(label); + } + switch (label) { + case "service": + return "test-service"; + case "instance": + case "service_instance_id": + return "test-instance"; + case "endpoint": + return "/test"; + case "host_name": + return "test-host"; + case "le": + return "100"; + case "job_name": + return "mysql-monitoring"; + case "cluster": + return "test-cluster"; + case "node": + case "node_id": + return "test-node"; + case "topic": + return "test-topic"; + case "queue": + return "test-queue"; + case "broker": + return "test-broker"; + default: + return "test-value"; + } + } + + // ==================== YAML loading ==================== + + @SuppressWarnings("unchecked") + private Map<String, List<MalRule>> loadAllMalYamlFiles() throws Exception { + final Map<String, List<MalRule>> result = new HashMap<>(); + final Yaml yaml = new Yaml(); + + final String[] dirs = { + "test-meter-analyzer-config", + "test-otel-rules", 
+ "test-envoy-metrics-rules", + "test-log-mal-rules", + "test-telegraf-rules", + "test-zabbix-rules" + }; + + final Path scriptsDir = findScriptsDir("mal"); + if (scriptsDir != null) { + for (final String dir : dirs) { + final Path dirPath = scriptsDir.resolve(dir); + if (Files.isDirectory(dirPath)) { + collectYamlFiles(dirPath.toFile(), dir, yaml, result); + } + } + } + + return result; + } + + @SuppressWarnings("unchecked") + private void collectYamlFiles(final File dir, final String prefix, + final Yaml yaml, + final Map<String, List<MalRule>> result) throws Exception { + final File[] files = dir.listFiles(); + if (files == null) { + return; + } + for (final File file : files) { + if (file.isDirectory()) { + collectYamlFiles(file, prefix + "/" + file.getName(), yaml, result); + continue; + } + if (!file.getName().endsWith(".yaml") && !file.getName().endsWith(".yml")) { + continue; + } + // Skip companion .data.yaml files + if (file.getName().endsWith(".data.yaml")) { + continue; + } + final String content = Files.readString(file.toPath()); + final Map<String, Object> config = yaml.load(content); + if (config == null + || (!config.containsKey("metricsRules") && !config.containsKey("metrics"))) { + continue; + } + final Object rawSuffix = config.get("expSuffix"); + final String expSuffix = rawSuffix instanceof String ? (String) rawSuffix : ""; + final Object rawPrefix = config.get("expPrefix"); + final String expPrefix = rawPrefix instanceof String ? (String) rawPrefix : ""; + final Object rawMetricPrefix = config.get("metricPrefix"); + final String metricPrefix = rawMetricPrefix instanceof String + ? 
(String) rawMetricPrefix : null; + // Support both "metricsRules" (standard) and "metrics" (zabbix) + List<Map<String, String>> rules = + (List<Map<String, String>>) config.get("metricsRules"); + if (rules == null) { + rules = (List<Map<String, String>>) config.get("metrics"); + } + if (rules == null) { + continue; + } + + // Load companion .data.yaml if it exists + final String baseName = file.getName().replaceFirst("\\.(yaml|yml)$", ""); + final File inputFile = new File(file.getParent(), baseName + ".data.yaml"); + Map<String, Object> inputConfig = null; + if (inputFile.exists()) { + final String inputContent = Files.readString(inputFile.toPath()); + inputConfig = yaml.load(inputContent); + } + + final String yamlName = prefix + "/" + file.getName(); + final List<MalRule> malRules = new ArrayList<>(); + final Map<String, Integer> nameCount = new HashMap<>(); + for (final Map<String, String> rule : rules) { + final String name = rule.get("name"); + final String exp = rule.get("exp"); + if (name == null || exp == null) { + continue; + } + // Disambiguate duplicate rule names within the same file + final int count = nameCount.merge(name, 1, Integer::sum); + final String uniqueName = count > 1 ? 
name + "_" + count : name; + final String fullExp = formatExp(expPrefix, expSuffix, exp); + malRules.add(new MalRule(uniqueName, fullExp, inputConfig, metricPrefix, file)); + } + if (!malRules.isEmpty()) { + result.put(yamlName, malRules); + } + } + } + + private Path findScriptsDir(final String language) { + final String[] candidates = { + "test/script-cases/scripts/" + language, + "../../scripts/" + language + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isDirectory(path)) { + return path; + } + } + return null; + } + + /** + * Replicates the production {@code MetricConvert.formatExp()} logic: + * inserts {@code expPrefix} after the metric name (first dot-segment), + * and appends {@code expSuffix} after the whole expression. + */ + private static String formatExp(final String expPrefix, final String expSuffix, + final String exp) { + String ret = exp; + if (!expPrefix.isEmpty()) { + final int dot = exp.indexOf('.'); + if (dot >= 0) { + ret = String.format("(%s.%s)", exp.substring(0, dot), expPrefix); + final String after = exp.substring(dot + 1); + if (!after.isEmpty()) { + ret = String.format("(%s.%s)", ret, after); + } + } else { + ret = String.format("(%s.%s)", exp, expPrefix); + } + } + if (!expSuffix.isEmpty()) { + ret = String.format("(%s).%s", ret, expSuffix); + } + return ret; + } + + private static class MalRule { + final String name; + final String fullExpression; + final Map<String, Object> inputConfig; + final String metricPrefix; + final File sourceFile; + + MalRule(final String name, final String fullExpression, + final Map<String, Object> inputConfig, final String metricPrefix, + final File sourceFile) { + this.name = name; + this.fullExpression = fullExpression; + this.inputConfig = inputConfig; + this.metricPrefix = metricPrefix; + this.sourceFile = sourceFile; + } + } +} diff --git 
a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalExpectedDataGenerator.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalExpectedDataGenerator.java new file mode 100644 index 000000000000..8dd58725c40d --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalExpectedDataGenerator.java @@ -0,0 +1,597 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.skywalking.oap.server.checker.mal; + +import java.io.File; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import com.google.common.collect.ImmutableMap; +import lombok.extern.slf4j.Slf4j; +import org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity; +import org.mockito.MockedStatic; +import org.mockito.Mockito; +import org.yaml.snakeyaml.Yaml; + +/** + * Generates rich {@code expected:} sections in companion {@code .data.yaml} files + * by running v1 (Groovy) MAL expressions and capturing their output. + * + * <p>v1 is production-verified and trusted. Its output (entities, samples, values) + * becomes the expected baseline for v1-v2 comparison tests. + * + * <p>Run via {@link MalExpectedDataGeneratorTest}. + */ +@Slf4j +public final class MalExpectedDataGenerator { + + private static final String[] DIRS = { + "test-meter-analyzer-config", + "test-otel-rules", + "test-envoy-metrics-rules", + "test-log-mal-rules", + "test-telegraf-rules", + "test-zabbix-rules" + }; + + /** Advance by 2 s per call — must be >1 s (for timeDiff/1000≥1) and <15 s (smallest rate window). 
*/ + private long timestampCounter = System.currentTimeMillis(); + private MockedStatic<org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry> v1K8sMock; + private MockedStatic<org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry> v2K8sMock; + + static { + final org.apache.skywalking.oap.server.core.config.NamingControl namingControl = + Mockito.mock(org.apache.skywalking.oap.server.core.config.NamingControl.class); + Mockito.when(namingControl.formatServiceName( + org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(invocation -> invocation.getArgument(0)); + Mockito.when(namingControl.formatInstanceName( + org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(invocation -> invocation.getArgument(0)); + Mockito.when(namingControl.formatEndpointName( + org.mockito.ArgumentMatchers.anyString(), + org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(invocation -> invocation.getArgument(1)); + MeterEntity.setNamingControl(namingControl); + } + + public void setupK8sMocks() { + // Mock v1 K8sInfoRegistry + final org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry mockV1 = + Mockito.mock(org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry.class); + Mockito.when(mockV1.findServiceName( + org.mockito.ArgumentMatchers.anyString(), + org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(inv -> { + final String ns = inv.getArgument(0); + final String pod = inv.getArgument(1); + return pod + "." 
+ ns; + }); + v1K8sMock = Mockito.mockStatic( + org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry.class); + v1K8sMock.when( + org.apache.skywalking.oap.meter.analyzer.k8s.K8sInfoRegistry::getInstance) + .thenReturn(mockV1); + + // Mock v2 K8sInfoRegistry + final org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry mockV2 = + Mockito.mock(org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry.class); + Mockito.when(mockV2.findServiceName( + org.mockito.ArgumentMatchers.anyString(), + org.mockito.ArgumentMatchers.anyString())) + .thenAnswer(inv -> { + final String ns = inv.getArgument(0); + final String pod = inv.getArgument(1); + return pod + "." + ns; + }); + v2K8sMock = Mockito.mockStatic( + org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry.class); + v2K8sMock.when( + org.apache.skywalking.oap.meter.analyzer.v2.k8s.K8sInfoRegistry::getInstance) + .thenReturn(mockV2); + } + + public void teardownK8sMocks() { + if (v1K8sMock != null) { + v1K8sMock.close(); + } + if (v2K8sMock != null) { + v2K8sMock.close(); + } + } + + /** + * Process all MAL script directories and update expected sections in .data.yaml files. 
+ * + * @return int[3]: [updated, skipped, errors] + */ + public int[] processAll() throws Exception { + final Path scriptsDir = findScriptsDir(); + if (scriptsDir == null) { + log.warn("Cannot find scripts/mal directory"); + return new int[]{0, 0, 0}; + } + int updated = 0; + int skipped = 0; + int errors = 0; + for (final String dir : DIRS) { + final Path dirPath = scriptsDir.resolve(dir); + if (Files.isDirectory(dirPath)) { + final int[] counts = processDirectory(dirPath); + updated += counts[0]; + skipped += counts[1]; + errors += counts[2]; + } + } + log.info("Expected generation: updated={}, skipped={}, errors={}", updated, skipped, errors); + return new int[]{updated, skipped, errors}; + } + + @SuppressWarnings("unchecked") + int[] processDirectory(final Path dir) throws Exception { + int updated = 0; + int skipped = 0; + int errors = 0; + final File[] files = dir.toFile().listFiles(); + if (files == null) { + return new int[]{0, 0, 0}; + } + for (final File file : files) { + if (file.isDirectory()) { + final int[] sub = processDirectory(file.toPath()); + updated += sub[0]; + skipped += sub[1]; + errors += sub[2]; + continue; + } + if (!file.getName().endsWith(".yaml") && !file.getName().endsWith(".yml")) { + continue; + } + if (file.getName().endsWith(".data.yaml") || file.getName().endsWith(".data.yml")) { + continue; + } + final String baseName = file.getName().replaceAll("\\.(yaml|yml)$", ""); + final File dataFile = new File(file.getParentFile(), baseName + ".data.yaml"); + if (!dataFile.exists()) { + skipped++; + continue; + } + try { + if (generateExpectedForFile(file, dataFile)) { + updated++; + } else { + skipped++; + } + } catch (Exception e) { + log.warn("Error processing {}: {}", file.getName(), e.getMessage()); + errors++; + } + } + return new int[]{updated, skipped, errors}; + } + + @SuppressWarnings("unchecked") + boolean generateExpectedForFile(final File yamlFile, final File dataFile) + throws IOException { + final Yaml yaml = new Yaml(); + 
final Map<String, Object> config = yaml.load(Files.readString(yamlFile.toPath())); + if (config == null + || (!config.containsKey("metricsRules") && !config.containsKey("metrics"))) { + return false; + } + + final Object rawSuffix = config.get("expSuffix"); + final String expSuffix = rawSuffix instanceof String ? (String) rawSuffix : ""; + final Object rawPrefix = config.get("expPrefix"); + final String expPrefix = rawPrefix instanceof String ? (String) rawPrefix : ""; + final Object rawMetricPrefix = config.get("metricPrefix"); + final String metricPrefix = rawMetricPrefix instanceof String + ? (String) rawMetricPrefix : null; + + // Support both "metricsRules" (standard) and "metrics" (zabbix) + List<Map<String, String>> rules = + (List<Map<String, String>>) config.get("metricsRules"); + if (rules == null) { + rules = (List<Map<String, String>>) config.get("metrics"); + } + if (rules == null || rules.isEmpty()) { + return false; + } + + // Load input section from data file + final Map<String, Object> dataConfig = yaml.load(Files.readString(dataFile.toPath())); + if (dataConfig == null) { + return false; + } + final Map<String, Object> inputSection = + (Map<String, Object>) dataConfig.get("input"); + if (inputSection == null) { + return false; + } + + // Run v1 for each rule and collect expected output + final Map<String, ExpectedOutput> expectations = new LinkedHashMap<>(); + boolean anyChanged = false; + final Map<String, Integer> nameCount = new HashMap<>(); + + for (final Map<String, String> rule : rules) { + final String name = rule.get("name"); + final String exp = rule.get("exp"); + if (name == null || exp == null) { + continue; + } + + // Disambiguate duplicate rule names within the same file + final int count = nameCount.merge(name, 1, Integer::sum); + final String uniqueName = count > 1 ? name + "_" + count : name; + final String qualifiedName = metricPrefix != null + ? 
metricPrefix + "_" + uniqueName : uniqueName; + final String fullExp = formatExp(expPrefix, expSuffix, exp); + final boolean hasIncrease = fullExp.contains(".increase(") + || fullExp.contains(".rate(") + || fullExp.contains(".irate("); + + try { + final org.apache.skywalking.oap.meter.analyzer.dsl.Expression v1Expr = + org.apache.skywalking.oap.meter.analyzer.dsl.DSL.parse(name, fullExp); + + // Clear CounterWindow before each rule so previous rules' entries + // cannot contaminate rate()/irate()/increase() calculations + org.apache.skywalking.oap.meter.analyzer.dsl.counter.CounterWindow + .INSTANCE.reset(); + + // Unique metricName per file+rule to isolate CounterWindow entries + final String cwMetricName = yamlFile.getName() + "/" + name; + + // Prime for increase()/rate()/irate() with half-values FIRST (older timestamp) + // so rate = (value - value*0.5) / dt ≠ 0 + if (hasIncrease) { + try { + v1Expr.run(buildV1MockDataFromInput(inputSection, 0.5, cwMetricName)); + } catch (Exception ignored) { + } + } + + // Build v1 mock data from input (full values, newer timestamp) + final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> v1Data = + buildV1MockDataFromInput(inputSection, 1.0, cwMetricName); + + // Run v1 + final org.apache.skywalking.oap.meter.analyzer.dsl.Result v1Result = + v1Expr.run(v1Data); + + if (!v1Result.isSuccess()) { + throw new IllegalStateException( + "v1 returned not-success — fix input data in " + + dataFile.getName()); + } + + final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily sf = + v1Result.getData(); + if (sf == null + || sf == org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily.EMPTY + || sf.samples.length == 0) { + log.warn(" {} [{}]: v1 returned EMPTY", yamlFile.getName(), name); + expectations.put(qualifiedName, ExpectedOutput.empty()); + continue; + } + + // Capture entities + final List<EntityInfo> entities = new ArrayList<>(); + for (final MeterEntity entity : sf.context.getMeterSamples().keySet()) { + entities.add(new EntityInfo( + 
entity.getScopeType().name(), + entity.getServiceName(), + entity.getInstanceName(), + entity.getEndpointName(), + entity.getLayer() != null ? entity.getLayer().name() : null, + new String[] { + entity.getAttr0(), entity.getAttr1(), entity.getAttr2(), + entity.getAttr3(), entity.getAttr4(), entity.getAttr5() + } + )); + } + + // Capture samples (sorted for deterministic output) + final org.apache.skywalking.oap.meter.analyzer.dsl.Sample[] sorted = + Arrays.copyOf(sf.samples, sf.samples.length); + Arrays.sort(sorted, (a, b) -> + a.getLabels().toString().compareTo(b.getLabels().toString())); + + final List<SampleInfo> samples = new ArrayList<>(); + for (final org.apache.skywalking.oap.meter.analyzer.dsl.Sample s : sorted) { + samples.add(new SampleInfo( + new LinkedHashMap<>(s.getLabels()), + s.getValue())); + } + + expectations.put(qualifiedName, new ExpectedOutput(entities, samples)); + anyChanged = true; + + } catch (Exception e) { + log.warn(" {} [{}]: v1 failed — {}: {}", + yamlFile.getName(), name, + e.getClass().getSimpleName(), e.getMessage()); + expectations.put(qualifiedName, ExpectedOutput.error(e.getMessage())); + } + } + + if (!anyChanged && expectations.values().stream().allMatch(e -> e.samples == null)) { + return false; + } + + // Rewrite the data file: keep input section, replace expected section + rewriteDataFile(dataFile, dataConfig, expectations); + log.info(" Updated expected: {}", dataFile.getName()); + return true; + } + + @SuppressWarnings("unchecked") + private void rewriteDataFile(final File dataFile, + final Map<String, Object> dataConfig, + final Map<String, ExpectedOutput> expectations) + throws IOException { + // Read original file to preserve input section exactly as-is + final String original = Files.readString(dataFile.toPath()); + final int expectedIdx = original.indexOf("\nexpected:"); + final String inputPart; + if (expectedIdx >= 0) { + inputPart = original.substring(0, expectedIdx + 1); + } else { + inputPart = original + "\n"; + 
} + + final StringBuilder sb = new StringBuilder(inputPart); + sb.append("expected:\n"); + for (final Map.Entry<String, ExpectedOutput> entry : expectations.entrySet()) { + final String metricName = entry.getKey(); + final ExpectedOutput output = entry.getValue(); + sb.append(" ").append(yamlKey(metricName)).append(":\n"); + + if (output.error != null) { + sb.append(" error: '").append(escapeYaml(output.error)).append("'\n"); + continue; + } + if (output.samples == null || output.samples.isEmpty()) { + sb.append(" empty: true\n"); + continue; + } + + // Entities + if (!output.entities.isEmpty()) { + sb.append(" entities:\n"); + for (final EntityInfo e : output.entities) { + sb.append(" - scope: ").append(e.scope).append("\n"); + if (e.service != null && !e.service.isEmpty()) { + sb.append(" service: ").append(yamlValue(e.service)).append("\n"); + } + if (e.instance != null && !e.instance.isEmpty()) { + sb.append(" instance: ").append(yamlValue(e.instance)).append("\n"); + } + if (e.endpoint != null && !e.endpoint.isEmpty()) { + sb.append(" endpoint: ").append(yamlValue(e.endpoint)).append("\n"); + } + if (e.layer != null) { + sb.append(" layer: ").append(e.layer).append("\n"); + } + for (int ai = 0; ai < e.attrs.length; ai++) { + if (e.attrs[ai] != null) { + sb.append(" attr").append(ai).append(": ") + .append(yamlValue(e.attrs[ai])).append("\n"); + } + } + } + } + + // Samples + sb.append(" samples:\n"); + for (final SampleInfo s : output.samples) { + sb.append(" - labels:\n"); + for (final Map.Entry<String, String> l : s.labels.entrySet()) { + sb.append(" ").append(yamlKey(l.getKey())) + .append(": ").append(yamlValue(l.getValue())).append("\n"); + } + sb.append(" value: ").append(s.value).append("\n"); + } + } + + Files.writeString(dataFile.toPath(), sb.toString()); + } + + @SuppressWarnings("unchecked") + private Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> + buildV1MockDataFromInput(final Map<String, Object> inputSection, + final 
double valueScale, + final String metricName) { + final Map<String, org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily> data = + new HashMap<>(); + final long now = timestampCounter; + timestampCounter += 2_000; + + for (final Map.Entry<String, Object> entry : inputSection.entrySet()) { + final String sampleName = entry.getKey(); + final List<Map<String, Object>> sampleList = + (List<Map<String, Object>>) entry.getValue(); + final List<org.apache.skywalking.oap.meter.analyzer.dsl.Sample> samples = + new ArrayList<>(); + + for (final Map<String, Object> sampleDef : sampleList) { + final Map<String, String> labels = new HashMap<>(); + final Object rawLabels = sampleDef.get("labels"); + if (rawLabels instanceof Map) { + for (final Map.Entry<?, ?> le : + ((Map<?, ?>) rawLabels).entrySet()) { + labels.put(String.valueOf(le.getKey()), + le.getValue() == null ? "" : String.valueOf(le.getValue())); + } + } + final double value = ((Number) sampleDef.get("value")).doubleValue() + * valueScale; + samples.add(org.apache.skywalking.oap.meter.analyzer.dsl.Sample.builder() + .name(sampleName) + .labels(ImmutableMap.copyOf(labels)) + .value(value) + .timestamp(now) + .build()); + } + + final org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamily sf = + org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyBuilder + .newBuilder(samples.toArray( + new org.apache.skywalking.oap.meter.analyzer.dsl.Sample[0])) + .build(); + sf.context.setMetricName(metricName); + data.put(sampleName, sf); + } + return data; + } + + static String formatExp(final String expPrefix, final String expSuffix, + final String exp) { + String ret = exp; + if (!expPrefix.isEmpty()) { + final int dot = exp.indexOf('.'); + if (dot >= 0) { + ret = String.format("(%s.%s)", exp.substring(0, dot), expPrefix); + final String after = exp.substring(dot + 1); + if (!after.isEmpty()) { + ret = String.format("(%s.%s)", ret, after); + } + } else { + ret = String.format("(%s.%s)", exp, expPrefix); + } + } + if 
(!expSuffix.isEmpty()) { + ret = String.format("(%s).%s", ret, expSuffix); + } + return ret; + } + + private static String yamlKey(final String key) { + if (key.contains("-") || key.contains(".") || key.contains(" ")) { + return "'" + key + "'"; + } + return key; + } + + private static String yamlValue(final String value) { + if (value == null) { + return "''"; + } + // Quote special characters and YAML 1.1 boolean/null literals — SnakeYAML + // also resolves on/off and capitalized variants (Yes, TRUE, ...) as booleans + if (value.contains(":") || value.contains("#") || value.contains("{") + || value.contains("}") || value.contains("[") || value.contains("]") + || value.contains("'") || value.contains("\"") || value.contains(",") + || value.contains("&") || value.contains("*") || value.contains("!") + || value.contains("|") || value.contains(">") || value.contains("%") + || value.contains("@") || value.contains("`") + || value.equalsIgnoreCase("true") || value.equalsIgnoreCase("false") + || value.equalsIgnoreCase("null") || value.equalsIgnoreCase("yes") + || value.equalsIgnoreCase("no") || value.equalsIgnoreCase("on") + || value.equalsIgnoreCase("off") + || "-".equals(value) || "~".equals(value) + || value.startsWith("- ") || value.startsWith("? ")) { + return "'" + value.replace("'", "''") + "'"; + } + // Quote numeric strings so SnakeYAML doesn't parse them as Integer/Double + try { + Double.parseDouble(value); + return "'" + value + "'"; + } catch (NumberFormatException ignored) { + } + return value; + } + + private static String escapeYaml(final String s) { + if (s == null) { + return ""; + } + return s.replace("'", "''").replace("\n", " "); + } + + Path findScriptsDir() { + final String[] candidates = { + "test/script-cases/scripts/mal", + "../../scripts/mal" + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isDirectory(path)) { + return path; + } + } + return null; + } + + static final class ExpectedOutput { + final List<EntityInfo> entities; + final List<SampleInfo> samples; + final String error; + + ExpectedOutput(final List<EntityInfo> entities, final List<SampleInfo> samples) { + this.entities = entities; + this.samples = samples; + this.error = null; + } + + private 
ExpectedOutput(final String error, final boolean isEmpty) { + this.entities = null; + this.samples = null; + this.error = isEmpty ? null : error; + } + + static ExpectedOutput error(final String message) { + return new ExpectedOutput(message, false); + } + + static ExpectedOutput empty() { + return new ExpectedOutput(null, true); + } + } + + static final class EntityInfo { + final String scope; + final String service; + final String instance; + final String endpoint; + final String layer; + final String[] attrs; + + EntityInfo(final String scope, final String service, final String instance, + final String endpoint, final String layer, final String[] attrs) { + this.scope = scope; + this.service = service; + this.instance = instance; + this.endpoint = endpoint; + this.layer = layer; + this.attrs = attrs; + } + } + + static final class SampleInfo { + final Map<String, String> labels; + final double value; + + SampleInfo(final Map<String, String> labels, final double value) { + this.labels = labels; + this.value = value; + } + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalExpectedDataGeneratorTest.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalExpectedDataGeneratorTest.java new file mode 100644 index 000000000000..8837a6d654ff --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalExpectedDataGeneratorTest.java @@ -0,0 +1,49 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.checker.mal; + +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.assertEquals; + +/** + * Runs {@link MalExpectedDataGenerator} to generate rich expected sections + * in all .data.yaml files by executing v1 (Groovy) MAL expressions. + */ +class MalExpectedDataGeneratorTest { + + private static final MalExpectedDataGenerator GENERATOR = new MalExpectedDataGenerator(); + + @BeforeAll + static void setup() { + GENERATOR.setupK8sMocks(); + } + + @AfterAll + static void teardown() { + GENERATOR.teardownK8sMocks(); + } + + @Test + void generateAllExpected() throws Exception { + final int[] counts = GENERATOR.processAll(); + assertEquals(0, counts[2], "Expected zero errors during generation"); + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalFilterComparisonTest.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalFilterComparisonTest.java new file mode 100644 index 000000000000..7db7b9473313 --- /dev/null +++ 
b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalFilterComparisonTest.java @@ -0,0 +1,205 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.checker.mal; + +import java.io.File; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import groovy.lang.Closure; +import groovy.lang.GroovyShell; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALClassGenerator; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalFilter; +import org.junit.jupiter.api.DynamicTest; +import org.junit.jupiter.api.TestFactory; +import org.yaml.snakeyaml.Yaml; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.fail; + +/** + * Dual-path comparison test for MAL filter expressions. 
+ * For each unique filter expression across all MAL YAML files: + * <ul> + * <li>Path A (v1): Groovy {@code GroovyShell.evaluate()} -> {@code Closure<Boolean>}</li> + * <li>Path B (v2): ANTLR4 + Javassist compilation via {@link MALClassGenerator}</li> + * </ul> + * Both paths are invoked with representative tag maps and results compared. + */ +class MalFilterComparisonTest { + + @TestFactory + Collection<DynamicTest> filterExpressionsMatch() throws Exception { + final Set<String> filters = collectAllFilterExpressions(); + final List<DynamicTest> tests = new ArrayList<>(); + + for (final String filterExpr : filters) { + tests.add(DynamicTest.dynamicTest( + "filter: " + filterExpr, + () -> compareFilter(filterExpr) + )); + } + + return tests; + } + + @SuppressWarnings("unchecked") + private void compareFilter(final String filterExpr) throws Exception { + final List<Map<String, String>> testTags = buildTestTags(filterExpr); + + // ---- V1: Groovy closure ---- + final Closure<Boolean> v1Closure; + try { + v1Closure = (Closure<Boolean>) new GroovyShell().evaluate(filterExpr); + } catch (Exception e) { + fail("V1 (Groovy) failed to evaluate filter: " + filterExpr + " - " + e.getMessage()); + return; + } + + // ---- V2: ANTLR4 + Javassist compilation ---- + final MalFilter v2Filter; + try { + final MALClassGenerator generator = new MALClassGenerator(); + v2Filter = generator.compileFilter(filterExpr); + } catch (Exception e) { + fail("V2 (Java) failed for filter: " + filterExpr + " - " + e.getMessage()); + return; + } + + // ---- Compare with test data ---- + for (final Map<String, String> tags : testTags) { + boolean v1Result; + try { + v1Result = v1Closure.call(tags); + } catch (Exception e) { + continue; + } + boolean v2Result; + try { + v2Result = v2Filter.test(tags); + } catch (NullPointerException e) { + v2Result = false; + } + assertEquals(v1Result, v2Result, + "Filter diverged for tags=" + tags + ": v1=" + v1Result + ", v2=" + v2Result + + " (filter: " + 
filterExpr + ")"); + } + } + + private List<Map<String, String>> buildTestTags(final String filterExpr) { + final List<Map<String, String>> testTags = new ArrayList<>(); + + testTags.add(new HashMap<>()); + + final java.util.regex.Pattern kvPattern = + java.util.regex.Pattern.compile("tags\\.(\\w+)\\s*==\\s*'([^']+)'"); + final java.util.regex.Matcher matcher = kvPattern.matcher(filterExpr); + + final Map<String, String> matchingTags = new HashMap<>(); + final Map<String, String> mismatchTags = new HashMap<>(); + while (matcher.find()) { + final String key = matcher.group(1); + final String value = matcher.group(2); + matchingTags.put(key, value); + mismatchTags.put(key, value + "_wrong"); + } + + if (!matchingTags.isEmpty()) { + testTags.add(matchingTags); + testTags.add(mismatchTags); + } + + final Map<String, String> unrelatedTags = new HashMap<>(); + unrelatedTags.put("unrelated_key", "some_value"); + testTags.add(unrelatedTags); + + return testTags; + } + + @SuppressWarnings("unchecked") + private Set<String> collectAllFilterExpressions() throws Exception { + final Set<String> filters = new LinkedHashSet<>(); + final Yaml yaml = new Yaml(); + + final String[] dirs = { + "test-meter-analyzer-config", "test-otel-rules", + "test-log-mal-rules", "test-envoy-metrics-rules" + }; + final Path scriptsDir = findScriptsDir("mal"); + if (scriptsDir != null) { + for (final String dir : dirs) { + final Path dirPath = scriptsDir.resolve(dir); + if (Files.isDirectory(dirPath)) { + collectFiltersFromDir(dirPath.toFile(), yaml, filters); + } + } + } + + return filters; + } + + @SuppressWarnings("unchecked") + private void collectFiltersFromDir(final File dir, final Yaml yaml, + final Set<String> filters) throws Exception { + final File[] files = dir.listFiles(); + if (files == null) { + return; + } + for (final File file : files) { + if (file.isDirectory()) { + collectFiltersFromDir(file, yaml, filters); + continue; + } + if (!file.getName().endsWith(".yaml") && 
!file.getName().endsWith(".yml")) { + continue; + } + final String content = Files.readString(file.toPath()); + final Map<String, Object> config = yaml.load(content); + if (config == null) { + continue; + } + final Object filterObj = config.get("filter"); + if (filterObj instanceof String) { + final String filter = ((String) filterObj).trim(); + if (!filter.isEmpty()) { + filters.add(filter); + } + } + } + } + + private Path findScriptsDir(final String language) { + final String[] candidates = { + "test/script-cases/scripts/" + language, + "../../scripts/" + language + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isDirectory(path)) { + return path; + } + } + return null; + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalInputDataGenerator.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalInputDataGenerator.java new file mode 100644 index 000000000000..3d782697c1c3 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalInputDataGenerator.java @@ -0,0 +1,935 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.skywalking.oap.server.checker.mal; + +import java.io.File; +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.LinkedHashMap; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.regex.Matcher; +import java.util.regex.Pattern; +import org.apache.skywalking.oap.meter.analyzer.v2.compiler.MALClassGenerator; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.ExpressionMetadata; +import org.apache.skywalking.oap.meter.analyzer.v2.dsl.MalExpression; +import lombok.extern.slf4j.Slf4j; +import org.yaml.snakeyaml.Yaml; + +/** + * Generates companion {@code .data.yaml} files for all MAL test YAML scripts. + * Each generated file contains realistic mock input data (sample names, labels, values) + * derived from compiling the MAL expression and parsing filter/tag constraints. + * + * <p>Run via: {@code main()} or as a JUnit test via {@link MalInputDataGeneratorTest}. + */ +@Slf4j +public final class MalInputDataGenerator { + + private static final String LICENSE_HEADER = + "# Licensed to the Apache Software Foundation (ASF) under one or more\n" + + "# contributor license agreements. See the NOTICE file distributed with\n" + + "# this work for additional information regarding copyright ownership.\n" + + "# The ASF licenses this file to You under the Apache License, Version 2.0\n" + + "# (the \"License\"); you may not use this file except in compliance with\n" + + "# the License. 
You may obtain a copy of the License at\n" + + "#\n" + + "# http://www.apache.org/licenses/LICENSE-2.0\n" + + "#\n" + + "# Unless required by applicable law or agreed to in writing, software\n" + + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n" + + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n" + + "# See the License for the specific language governing permissions and\n" + + "# limitations under the License.\n\n"; + + private static final String[] HISTOGRAM_LE_VALUES = + {"50", "100", "250", "500", "1000"}; + + // Patterns for extracting constraints from MAL expressions + private static final Pattern TAG_EQUAL_PATTERN = + Pattern.compile("\\.tagEqual\\s*\\(\\s*'([^']+)'\\s*,\\s*'([^']+)'\\s*\\)"); + private static final Pattern TAG_NOT_EQUAL_NULL_PATTERN = + Pattern.compile("\\.tagNotEqual\\s*\\(\\s*'([^']+)'\\s*,\\s*null\\s*\\)"); + private static final Pattern TAG_NOT_EQUAL_PATTERN = + Pattern.compile("\\.tagNotEqual\\s*\\(\\s*'([^']+)'\\s*,\\s*'([^']+)'\\s*\\)"); + private static final Pattern TAG_MATCH_PATTERN = + Pattern.compile("\\.tagMatch\\s*\\(\\s*'([^']+)'\\s*,\\s*'([^']+)'\\s*\\)"); + private static final Pattern TAG_NOT_MATCH_PATTERN = + Pattern.compile("\\.tagNotMatch\\s*\\(\\s*'([^']+)'\\s*,\\s*'([^']+)'\\s*\\)"); + private static final Pattern CLOSURE_TAG_ACCESS_PATTERN = + Pattern.compile("tags\\.([a-zA-Z_][a-zA-Z0-9_]*)"); + private static final Pattern CLOSURE_TAG_BRACKET_PATTERN = + Pattern.compile("tags\\['([^']+)'\\]"); + private static final Pattern VALUE_EQUAL_PATTERN = + Pattern.compile("\\.valueEqual\\s*\\(\\s*([0-9.]+)\\s*\\)"); + // Extracts labels from entity functions: instance(['a'], ['b'], Layer.X) + // Matches each ['label'] argument in service/instance/endpoint/process calls + private static final Pattern ENTITY_FUNC_PATTERN = + Pattern.compile("\\.(service|instance|endpoint|process|serviceRelation|processRelation)\\s*\\("); + private static final Pattern 
STRING_LIST_ARG_PATTERN = + Pattern.compile("\\[\\s*'([^']+)'\\s*\\]"); + + private static final String[] DIRS = { + "test-meter-analyzer-config", + "test-otel-rules", + "test-envoy-metrics-rules", + "test-log-mal-rules", + "test-telegraf-rules", + "test-zabbix-rules" + }; + + private final MALClassGenerator generator = new MALClassGenerator(); + + public static void main(final String[] args) throws Exception { + final MalInputDataGenerator gen = new MalInputDataGenerator(); + final Path scriptsDir = gen.findScriptsDir(); + if (scriptsDir == null) { + log.warn("Cannot find scripts/mal directory"); + return; + } + int generated = 0; + int skipped = 0; + for (final String dir : DIRS) { + final Path dirPath = scriptsDir.resolve(dir); + if (Files.isDirectory(dirPath)) { + final int[] counts = gen.processDirectory(dirPath); + generated += counts[0]; + skipped += counts[1]; + } + } + log.info("Generated: {}, Skipped (already exists): {}", generated, skipped); + } + + /** + * Process a directory (recursively) and generate .data.yaml for each MAL YAML. 
+ * + * @return int[2]: [generated, skipped] + */ + int[] processDirectory(final Path dir) throws Exception { + int generated = 0; + int skipped = 0; + final File[] files = dir.toFile().listFiles(); + if (files == null) { + return new int[]{0, 0}; + } + for (final File file : files) { + if (file.isDirectory()) { + final int[] sub = processDirectory(file.toPath()); + generated += sub[0]; + skipped += sub[1]; + continue; + } + if (!file.getName().endsWith(".yaml") && !file.getName().endsWith(".yml")) { + continue; + } + if (file.getName().endsWith(".data.yaml") || file.getName().endsWith(".data.yml")) { + continue; + } + final String baseName = file.getName().replaceAll("\\.(yaml|yml)$", ""); + final File inputFile = new File(file.getParentFile(), baseName + ".data.yaml"); + if (inputFile.exists()) { + skipped++; + continue; + } + try { + final String content = generateInputYaml(file); + if (content != null) { + Files.writeString(inputFile.toPath(), content); + log.info(" Generated: {}", inputFile.getPath()); + generated++; + } + } catch (Exception e) { + log.warn(" Error processing {}: {}", file.getName(), e.getMessage()); + } + } + return new int[]{generated, skipped}; + } + + @SuppressWarnings("unchecked") + String generateInputYaml(final File yamlFile) throws IOException { + final Yaml yaml = new Yaml(); + final String content = Files.readString(yamlFile.toPath()); + final Map<String, Object> config = yaml.load(content); + if (config == null + || (!config.containsKey("metricsRules") && !config.containsKey("metrics"))) { + return null; + } + + final Object rawPrefix = config.get("expPrefix"); + final String expPrefix = rawPrefix instanceof String ? (String) rawPrefix : ""; + final Object rawSuffix = config.get("expSuffix"); + final String expSuffix = rawSuffix instanceof String ? 
(String) rawSuffix : ""; + + // Support both "metricsRules" (standard) and "metrics" (zabbix) + List<Map<String, String>> rules = + (List<Map<String, String>>) config.get("metricsRules"); + if (rules == null) { + rules = (List<Map<String, String>>) config.get("metrics"); + } + if (rules == null || rules.isEmpty()) { + return null; + } + + // Collect all sample names and labels across all rules in this file + final Map<String, Set<String>> sampleLabels = new LinkedHashMap<>(); + // sampleName -> label -> {all distinct tagEqual values across all rules} + final Map<String, Map<String, Set<String>>> perSampleTagEqual = new LinkedHashMap<>(); + // sampleName -> label -> [all tagMatch patterns across all rules] + final Map<String, Map<String, List<String>>> perSampleTagMatch = new LinkedHashMap<>(); + // Global tagEqual for expSuffix (applies to all samples) + final Map<String, Set<String>> globalTagEqualValues = new LinkedHashMap<>(); + // Global tagMatch from expSuffix + final Map<String, List<String>> globalTagMatch = new LinkedHashMap<>(); + final Set<String> closureAccessedLabels = new LinkedHashSet<>(); + final List<String> metricNames = new ArrayList<>(); + boolean anyHistogram = false; + double valueForEqual = 100.0; + + // Extract constraints from expSuffix (applies to all samples) + if (!expSuffix.isEmpty()) { + extractTagEqualAllValues(expSuffix, globalTagEqualValues); + extractTagMatchAllPatterns(expSuffix, globalTagMatch); + extractClosureAccessedLabels(expSuffix, closureAccessedLabels); + extractEntityFunctionLabels(expSuffix, closureAccessedLabels); + } + + for (final Map<String, String> rule : rules) { + final String name = rule.get("name"); + final String exp = rule.get("exp"); + if (name == null || exp == null) { + continue; + } + metricNames.add(name); + + String fullExp = exp; + if (!expPrefix.isEmpty()) { + fullExp = expPrefix + "." 
+ fullExp; + } + + // Compile to get metadata (sample names, labels) + Set<String> ruleSamples = new LinkedHashSet<>(); + try { + final MalExpression compiled = generator.compile(name, fullExp); + final ExpressionMetadata meta = compiled.metadata(); + + for (final String sample : meta.getSamples()) { + ruleSamples.add(sample); + final Set<String> labels = sampleLabels.computeIfAbsent( + sample, k -> new LinkedHashSet<>()); + labels.addAll(meta.getAggregationLabels()); + labels.addAll(meta.getScopeLabels()); + } + if (meta.isHistogram()) { + anyHistogram = true; + } + } catch (Exception e) { + // Compilation failed — extract sample names from expression text + extractSampleNamesFromText(fullExp, sampleLabels); + } + + // Extract per-rule tagEqual constraints and associate with this rule's samples + final Map<String, Set<String>> ruleTagEqual = new LinkedHashMap<>(); + extractTagEqualAllValues(exp, ruleTagEqual); + for (final String sample : ruleSamples) { + final Map<String, Set<String>> sampleTe = + perSampleTagEqual.computeIfAbsent(sample, k -> new LinkedHashMap<>()); + for (final Map.Entry<String, Set<String>> te : ruleTagEqual.entrySet()) { + sampleTe.computeIfAbsent(te.getKey(), k -> new LinkedHashSet<>()) + .addAll(te.getValue()); + } + } + + // Extract per-rule tagMatch — infer a matching value for each label and + // treat it as a multi-value entry (like tagEqual) so that each rule gets + // a sample variant with the right tagMatch value. 
+ final Map<String, List<String>> ruleTagMatch = new LinkedHashMap<>(); + extractTagMatchAllPatterns(exp, ruleTagMatch); + for (final Map.Entry<String, List<String>> tm : ruleTagMatch.entrySet()) { + final String matchLabel = tm.getKey(); + final String inferredValue = generateMatchingValue( + matchLabel, tm.getValue(), name); + for (final String sample : ruleSamples) { + // Add as multi-value (like tagEqual) + perSampleTagEqual.computeIfAbsent(sample, k -> new LinkedHashMap<>()) + .computeIfAbsent(matchLabel, k -> new LinkedHashSet<>()) + .add(inferredValue); + // Also keep patterns for inferLabelValue fallback + perSampleTagMatch.computeIfAbsent(sample, k -> new LinkedHashMap<>()) + .computeIfAbsent(matchLabel, k -> new ArrayList<>()) + .addAll(tm.getValue()); + } + } + + // Extract tagNotEqual (non-null) and tagNotMatch labels + extractClosureAccessedLabels(exp, closureAccessedLabels); + extractTagNotEqualNullLabels(exp, closureAccessedLabels); + extractTagNotEqualLabels(exp, ruleSamples, sampleLabels); + extractTagNotMatchLabels(exp, ruleSamples, sampleLabels); + + // Check for valueEqual + final Matcher veMatch = VALUE_EQUAL_PATTERN.matcher(exp); + if (veMatch.find()) { + valueForEqual = Double.parseDouble(veMatch.group(1)); + } + } + + if (sampleLabels.isEmpty()) { + return null; + } + + // Merge all constraint labels into each sample + for (final Map.Entry<String, Set<String>> entry : sampleLabels.entrySet()) { + final String sampleName = entry.getKey(); + final Set<String> labels = entry.getValue(); + labels.addAll(closureAccessedLabels); + // Add global tagEqual labels (from expSuffix) + labels.addAll(globalTagEqualValues.keySet()); + // Add global tagMatch labels (from expSuffix) + labels.addAll(globalTagMatch.keySet()); + // Add per-sample tagEqual labels + final Map<String, Set<String>> sampleTe = perSampleTagEqual.get(sampleName); + if (sampleTe != null) { + labels.addAll(sampleTe.keySet()); + } + // Add per-sample tagMatch labels + final Map<String, 
List<String>> sampleTm = perSampleTagMatch.get(sampleName); + if (sampleTm != null) { + labels.addAll(sampleTm.keySet()); + } + } + + // Build the YAML content + final StringBuilder sb = new StringBuilder(); + sb.append(LICENSE_HEADER); + sb.append("input:\n"); + + for (final Map.Entry<String, Set<String>> entry : sampleLabels.entrySet()) { + final String sampleName = entry.getKey(); + final Set<String> labels = entry.getValue(); + + sb.append(" ").append(yamlKey(sampleName)).append(":\n"); + + // Build effective constraints for THIS sample + final Map<String, Set<String>> sampleTe = perSampleTagEqual.get(sampleName); + final Map<String, Set<String>> effectiveTagEqual = new LinkedHashMap<>(globalTagEqualValues); + if (sampleTe != null) { + for (final Map.Entry<String, Set<String>> te : sampleTe.entrySet()) { + effectiveTagEqual.computeIfAbsent(te.getKey(), k -> new LinkedHashSet<>()) + .addAll(te.getValue()); + } + } + final Map<String, List<String>> sampleTm = perSampleTagMatch.get(sampleName); + final Map<String, List<String>> effectiveTagMatch = new LinkedHashMap<>(globalTagMatch); + if (sampleTm != null) { + for (final Map.Entry<String, List<String>> tm : sampleTm.entrySet()) { + effectiveTagMatch.computeIfAbsent(tm.getKey(), k -> new ArrayList<>()) + .addAll(tm.getValue()); + } + } + final Map<String, Set<String>> multiValueLabels = new LinkedHashMap<>(); + for (final String label : labels) { + final Set<String> vals = effectiveTagEqual.get(label); + if (vals != null && vals.size() > 1) { + multiValueLabels.put(label, vals); + } + } + + if (anyHistogram && labels.contains("le")) { + // Generate multiple samples with cumulative le bucket values + double cumulativeValue = 0; + for (final String le : HISTOGRAM_LE_VALUES) { + cumulativeValue += 10.0; + sb.append(" - labels:\n"); + for (final String label : labels) { + if ("le".equals(label)) { + sb.append(" le: '").append(le).append("'\n"); + } else { + final String value = inferLabelValue( + label, sampleName, 
effectiveTagEqual, + effectiveTagMatch); + sb.append(" ").append(yamlKey(label)) + .append(": ").append(yamlValue(value)).append("\n"); + } + } + sb.append(" value: ").append(cumulativeValue).append("\n"); + } + } else if (!multiValueLabels.isEmpty()) { + // Generate one sample per combination of multi-value tagEqual labels + final List<Map<String, String>> variants = + buildLabelVariants(labels, multiValueLabels, + sampleName, effectiveTagEqual, effectiveTagMatch); + for (final Map<String, String> variant : variants) { + sb.append(" - labels:\n"); + for (final Map.Entry<String, String> le : variant.entrySet()) { + sb.append(" ").append(yamlKey(le.getKey())) + .append(": ").append(yamlValue(le.getValue())).append("\n"); + } + sb.append(" value: ").append(valueForEqual).append("\n"); + } + } else { + sb.append(" - labels:\n"); + for (final String label : labels) { + final String value = inferLabelValue( + label, sampleName, effectiveTagEqual, effectiveTagMatch); + sb.append(" ").append(yamlKey(label)) + .append(": ").append(yamlValue(value)).append("\n"); + } + sb.append(" value: ").append(valueForEqual).append("\n"); + } + } + + sb.append("expected:\n"); + for (final String metricName : metricNames) { + sb.append(" ").append(yamlKey(metricName)).append(":\n"); + sb.append(" min_samples: 1\n"); + } + + return sb.toString(); + } + + private void extractSampleNamesFromText(final String expression, + final Map<String, Set<String>> sampleLabels) { + // Heuristic: identifiers at start, after binary ops, or after }). 
+ // Also match after dot when preceded by a closing paren/brace + final Pattern p = Pattern.compile( + "(?:^|[+\\-*/()]\\s*|\\}\\)\\s*\\.\\s*)([a-zA-Z_][a-zA-Z0-9_]*)"); + final Matcher m = p.matcher(expression); + while (m.find()) { + final String name = m.group(1); + if (!isKeyword(name) && name.length() > 3) { + sampleLabels.computeIfAbsent(name, k -> new LinkedHashSet<>()); + } + } + } + + private void extractTagEqualAllValues(final String expression, + final Map<String, Set<String>> allValues) { + final Matcher m = TAG_EQUAL_PATTERN.matcher(expression); + while (m.find()) { + allValues.computeIfAbsent(m.group(1), k -> new LinkedHashSet<>()) + .add(m.group(2)); + } + } + + /** + * Build label variant maps for samples that need multiple tagEqual values. + * Produces one map per distinct value of each multi-value label. + */ + private List<Map<String, String>> buildLabelVariants( + final Set<String> allLabels, + final Map<String, Set<String>> multiValueLabels, + final String sampleName, + final Map<String, Set<String>> tagEqualAllValues, + final Map<String, List<String>> tagMatchPatterns) { + // Start with a single base variant containing all non-multi-value labels + List<Map<String, String>> variants = new ArrayList<>(); + final Map<String, String> base = new LinkedHashMap<>(); + for (final String label : allLabels) { + if (!multiValueLabels.containsKey(label)) { + base.put(label, inferLabelValue( + label, sampleName, tagEqualAllValues, tagMatchPatterns)); + } + } + variants.add(base); + + // For each multi-value label, expand: each existing variant × each value + for (final Map.Entry<String, Set<String>> mvEntry : multiValueLabels.entrySet()) { + final String label = mvEntry.getKey(); + final Set<String> values = mvEntry.getValue(); + final List<Map<String, String>> expanded = new ArrayList<>(); + for (final Map<String, String> existing : variants) { + for (final String val : values) { + final Map<String, String> copy = new LinkedHashMap<>(existing); + 
copy.put(label, val); + expanded.add(copy); + } + } + variants = expanded; + } + return variants; + } + + private void extractTagMatchAllPatterns(final String expression, + final Map<String, List<String>> patterns) { + final Matcher m = TAG_MATCH_PATTERN.matcher(expression); + while (m.find()) { + patterns.computeIfAbsent(m.group(1), k -> new ArrayList<>()) + .add(m.group(2)); + } + } + + private void extractClosureAccessedLabels(final String expression, + final Set<String> labels) { + final Matcher m1 = CLOSURE_TAG_ACCESS_PATTERN.matcher(expression); + while (m1.find()) { + final String label = m1.group(1); + if (!"put".equals(label) && !"get".equals(label) && !"trim".equals(label) + && !"toString".equals(label) && !"size".equals(label) + && !"length".equals(label)) { + labels.add(label); + } + } + final Matcher m2 = CLOSURE_TAG_BRACKET_PATTERN.matcher(expression); + while (m2.find()) { + labels.add(m2.group(1)); + } + } + + /** + * Extract label names from entity function arguments in expSuffix. + * Entity functions like {@code instance(['host_name'], ['service_instance_id'], Layer.MYSQL)} + * use {@code ['label']} arguments to identify which input labels map to entity fields. + * These labels must be present in all input samples. 
+ */ + private void extractEntityFunctionLabels(final String expression, + final Set<String> labels) { + final Matcher funcMatcher = ENTITY_FUNC_PATTERN.matcher(expression); + while (funcMatcher.find()) { + // Find the matching closing paren for this entity function call + final int argsStart = funcMatcher.end(); + int depth = 1; + int argsEnd = argsStart; + for (int i = argsStart; i < expression.length() && depth > 0; i++) { + final char c = expression.charAt(i); + if (c == '(') { + depth++; + } else if (c == ')') { + depth--; + } + argsEnd = i; + } + final String argsStr = expression.substring(argsStart, argsEnd); + // Extract all ['label'] arguments within the function call + final Matcher labelMatcher = STRING_LIST_ARG_PATTERN.matcher(argsStr); + while (labelMatcher.find()) { + labels.add(labelMatcher.group(1)); + } + } + } + + private void extractTagNotEqualNullLabels(final String expression, + final Set<String> labels) { + final Matcher m = TAG_NOT_EQUAL_NULL_PATTERN.matcher(expression); + while (m.find()) { + labels.add(m.group(1)); + } + } + + private void extractTagNotEqualLabels(final String expression, + final Set<String> ruleSamples, + final Map<String, Set<String>> sampleLabels) { + final Matcher m = TAG_NOT_EQUAL_PATTERN.matcher(expression); + while (m.find()) { + final String label = m.group(1); + for (final String sample : ruleSamples) { + sampleLabels.computeIfAbsent(sample, k -> new LinkedHashSet<>()).add(label); + } + } + } + + private void extractTagNotMatchLabels(final String expression, + final Set<String> ruleSamples, + final Map<String, Set<String>> sampleLabels) { + final Matcher m = TAG_NOT_MATCH_PATTERN.matcher(expression); + while (m.find()) { + final String label = m.group(1); + for (final String sample : ruleSamples) { + sampleLabels.computeIfAbsent(sample, k -> new LinkedHashSet<>()).add(label); + } + } + } + + String inferLabelValue(final String label, + final String sampleName, + final Map<String, Set<String>> tagEqualAllValues, + 
final Map<String, List<String>> tagMatchPatterns) { + // Check tagEqual constraints — use first value (for single-value labels) + final Set<String> eqVals = tagEqualAllValues.get(label); + if (eqVals != null && !eqVals.isEmpty()) { + return eqVals.iterator().next(); + } + + // Check tagMatch constraints — generate a value matching ALL patterns + final List<String> patterns = tagMatchPatterns.get(label); + if (patterns != null && !patterns.isEmpty()) { + return generateMatchingValue(label, patterns, sampleName); + } + + // Known label patterns + switch (label) { + case "service": + case "service_name": + return "test-service"; + case "service_namespace": + return "test-ns"; + case "cluster_name": + case "cluster": + return "test-cluster"; + case "instance": + case "service_instance_id": + return "test-instance"; + case "endpoint": + return "/test"; + case "host_name": + case "node_identifier_host_name": + return "test-host"; + case "node": + case "node_id": + return "test-node"; + case "app": + return "test-app"; + case "job_name": + return "test-monitoring"; + case "topic": + return "test-topic"; + case "queue": + return "test-queue"; + case "broker": + case "brokerName": + return "test-broker"; + case "pod": + case "pod_name": + return "test-pod"; + case "namespace": + return "test-namespace"; + case "container": + return "test-container"; + case "mode": + return "user"; + case "mountpoint": + return "/"; + case "device": + return "eth0"; + case "fstype": + return "ext4"; + case "area": + return "heap"; + case "pool": + return "PS_Eden_Space"; + case "gc": + return "PS Scavenge"; + case "le": + return "100"; + case "type": + return "cds"; + case "status": + case "state": + return "active"; + case "code": + return "200"; + case "name": + return "test-name"; + case "level": + return "ERROR"; + case "pipe": + return "test-pipe"; + case "pipeline": + return "test-pipeline"; + case "direction": + return "in"; + case "route": + return "test-route"; + case "protocol": + 
return "http"; + case "is_ssl": + return "false"; + case "component": + return "49"; + case "created_by": + return "test-creator"; + case "source": + return "test-source"; + case "plugin_name": + return "test-plugin"; + case "inter_type": + return "test-type"; + case "metric_type": + case "metricName": + return "test-metric"; + case "kind": + return "test-kind"; + case "operation": + return "test-op"; + case "catalog": + return "test-catalog"; + case "listener": + return "test-listener"; + case "event": + return "test-event"; + case "dimensionality": + return "minute"; + case "shared_dict": + return "test-dict"; + case "key": + return "test-key"; + case "color": + return "green"; + case "process_name": + return "test-process"; + case "layer": + return "GENERAL"; + case "uri": + return "/test-uri"; + case "pid": + return "12345"; + case "side": + return "client"; + case "client_local": + return "false"; + case "server_local": + return "true"; + case "client_address": + return "10.0.0.1:8080"; + case "server_address": + return "10.0.0.2:9090"; + case "client_process_id": + case "server_process_id": + return "test-process-id"; + case "cloud_account_id": + return "123456789"; + case "cloud_region": + return "us-east-1"; + case "cloud_provider": + return "aws"; + case "Namespace": + return "AWS/DynamoDB"; + case "Operation": + return "GetItem"; + case "TableName": + return "test-table"; + case "destinationName": + return "test-destination"; + case "destinationType": + return "Queue"; + case "tag": + return "1.19.0"; + case "skywalking_service": + return "test-sw-service"; + case "oscal_control_bundle": + case "control_bundle": + return "nist-800-53"; + case "oscal_control_name": + case "control_name": + return "AC-1"; + case "secret_name": + return "test-secret"; + case "partition": + return "0"; + case "metrics_name": + return "test-metrics"; + case "cmd": + return "get"; + default: + return "test-value"; + } + } + + /** + * Generate a value that matches ALL given 
tagMatch regex patterns for a label. + */ + private String generateMatchingValue(final String label, final List<String> patterns, + final String sampleName) { + // Handle common multi-pattern label cases + if ("metrics_name".equals(label)) { + return generateMetricsNameValue(patterns); + } + + // For single pattern, use simple matching + final String pattern = patterns.get(0); + if ("gc".equals(label)) { + if (pattern.contains("PS Scavenge")) { + return "PS Scavenge"; + } + if (pattern.contains("PS MarkSweep")) { + return "PS MarkSweep"; + } + return pattern.split("\\|")[0]; + } + if ("Operation".equals(label)) { + return pattern.split("\\|")[0]; + } + // Generic: take first alternative or strip regex syntax + final String stripped = pattern + .replace(".*", "test") + .replace(".+", "test") + .replaceAll("\\[\\^.\\]\\+", "test") + .replace("(", "").replace(")", "") + .replace("^", "").replace("$", ""); + if (stripped.contains("|")) { + return stripped.split("\\|")[0]; + } + return stripped; + } + + private String generateMetricsNameValue(final List<String> patterns) { + // Envoy metrics_name patterns combine prefix matching with suffix matching + // e.g., [".+membership_healthy", "cluster.outbound.+|cluster.inbound.+"] + // Need a value matching ALL patterns simultaneously + + // Collect suffix/content requirements and prefix requirements + String prefix = "cluster.outbound.test-cluster"; + String suffix = ""; + + for (final String p : patterns) { + if (p.contains("ssl") && p.contains("expiration")) { + return "cluster.outbound.test-cluster.ssl.certificate.test-cert.expiration_unix_time_seconds"; + } + if (p.contains("membership_healthy")) { + suffix = ".membership_healthy"; + } else if (p.contains("membership_total")) { + suffix = ".membership_total"; + } else if (p.contains("cx_active")) { + suffix = ".upstream_cx_active"; + } else if (p.contains("cx_connect_fail")) { + suffix = ".upstream_cx_connect_fail"; + } else if (p.contains("rq_active")) { + suffix = 
".upstream_rq_active"; + } else if (p.contains("rq_pending_active")) { + suffix = ".upstream_rq_pending_active"; + } else if (p.contains("lb_healthy_panic")) { + suffix = ".lb_healthy_panic"; + } else if (p.contains("cx_none_healthy")) { + suffix = ".upstream_cx_none_healthy"; + } else if (p.contains("cluster.outbound") || p.contains("cluster.inbound")) { + // Prefix pattern — already handled + prefix = "cluster.outbound.test-cluster"; + } + } + + if (!suffix.isEmpty()) { + return prefix + suffix; + } + + // Fallback: try to satisfy all patterns + for (final String p : patterns) { + final String stripped = p + .replace(".*", "test") + .replace(".+", "test") + .replace("(", "").replace(")", "") + .replace("^", "").replace("$", ""); + if (stripped.contains("|")) { + return stripped.split("\\|")[0]; + } + return stripped; + } + return "test-metric"; + } + + private static boolean isKeyword(final String name) { + switch (name) { + case "def": + case "if": + case "else": + case "return": + case "null": + case "true": + case "false": + case "in": + case "String": + case "tag": + case "sum": + case "avg": + case "min": + case "max": + case "count": + case "rate": + case "increase": + case "irate": + case "histogram": + case "service": + case "instance": + case "endpoint": + case "downsampling": + case "forEach": + case "tagEqual": + case "tagNotEqual": + case "tagMatch": + case "tagNotMatch": + case "valueEqual": + case "multiply": + case "filter": + case "time": + case "Layer": + case "SUM": + case "AVG": + case "MIN": + case "MAX": + case "LATEST": + case "MEAN": + case "ProcessRegistry": + // Closure parameter names (not sample names) + case "tags": + case "me": + case "prefix": + case "key": + case "result": + case "protocol": + case "ssl": + case "matcher": + case "parts": + case "java": + return true; + default: + return false; + } + } + + private static String yamlKey(final String key) { + // Quote keys that contain special chars + if (key.contains("-") || 
key.contains(".") || key.contains(" ")) { + return "'" + key + "'"; + } + return key; + } + + private static String yamlValue(final String value) { + if (value == null) { + return "''"; + } + // Quote values that could be misinterpreted + if (value.contains(":") || value.contains("#") || value.contains("{") + || value.contains("}") || value.contains("[") || value.contains("]") + || value.contains("'") || value.contains("\"") || value.contains(",") + || value.contains("&") || value.contains("*") || value.contains("!") + || value.contains("|") || value.contains(">") || value.contains("%") + || value.contains("@") || value.contains("`") + || "true".equals(value) || "false".equals(value) + || "null".equals(value) || "yes".equals(value) || "no".equals(value)) { + return "'" + value.replace("'", "''") + "'"; + } + return value; + } + + Path findScriptsDir() { + final String[] candidates = { + "test/script-cases/scripts/mal", + "../../scripts/mal" + }; + for (final String candidate : candidates) { + final Path path = Path.of(candidate); + if (Files.isDirectory(path)) { + return path; + } + } + return null; + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalInputDataGeneratorTest.java b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalInputDataGeneratorTest.java new file mode 100644 index 000000000000..19f52cb73bf6 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/java/org/apache/skywalking/oap/server/checker/mal/MalInputDataGeneratorTest.java @@ -0,0 +1,65 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.skywalking.oap.server.checker.mal;
+
+import java.nio.file.Files;
+import java.nio.file.Path;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertTrue;
+
+/**
+ * Runs {@link MalInputDataGenerator} to generate .data.yaml companion files
+ * for all MAL test YAML scripts. This test is idempotent — it skips files
+ * that already have companions.
+ */
+class MalInputDataGeneratorTest {
+
+    @Test
+    void generateAllInputFiles() throws Exception {
+        final MalInputDataGenerator gen = new MalInputDataGenerator();
+        final Path scriptsDir = gen.findScriptsDir();
+        assertTrue(scriptsDir != null && Files.isDirectory(scriptsDir),
+            "Cannot find scripts/mal directory");
+
+        final String[] dirs = {
+            "test-meter-analyzer-config",
+            "test-otel-rules",
+            "test-envoy-metrics-rules",
+            "test-log-mal-rules",
+            "test-telegraf-rules",
+            "test-zabbix-rules"
+        };
+
+        for (final String dir : dirs) {
+            final Path dirPath = scriptsDir.resolve(dir);
+            if (Files.isDirectory(dirPath)) {
+                gen.processDirectory(dirPath);
+            }
+        }
+
+        // Verify at least some files were generated or already existed
+        final long inputFileCount = Files.walk(scriptsDir)
+            .filter(p -> p.toString().endsWith(".data.yaml"))
+            .count();
+        assertTrue(inputFileCount > 0,
+            "Expected at least one .data.yaml
file to exist"); + } +} diff --git a/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider new file mode 100644 index 000000000000..6ac11b017493 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-lal-v1-v2-checker/src/test/resources/META-INF/services/org.apache.skywalking.oap.log.analyzer.v2.spi.LALSourceTypeProvider @@ -0,0 +1,19 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# + +org.apache.skywalking.oap.server.checker.lal.TestMeshLALSourceTypeProvider diff --git a/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/pom.xml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/pom.xml new file mode 100644 index 000000000000..38aba3c4458c --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/pom.xml @@ -0,0 +1,51 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Licensed to the Apache Software Foundation (ASF) under one or more + ~ contributor license agreements. 
See the NOTICE file distributed with + ~ this work for additional information regarding copyright ownership. + ~ The ASF licenses this file to You under the Apache License, Version 2.0 + ~ (the "License"); you may not use this file except in compliance with + ~ the License. You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. + ~ + --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>script-runtime-with-groovy</artifactId> + <groupId>org.apache.skywalking</groupId> + <version>${revision}</version> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>mal-v1-with-groovy</artifactId> + + <dependencies> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>meter-analyzer</artifactId> + <version>${project.version}</version> + </dependency> + <dependency> + <groupId>org.apache.groovy</groupId> + <artifactId>groovy</artifactId> + </dependency> + <dependency> + <groupId>io.vavr</groupId> + <artifactId>vavr</artifactId> + </dependency> + <dependency> + <groupId>org.apache.skywalking</groupId> + <artifactId>server-testing</artifactId> + <version>${project.version}</version> + <scope>test</scope> + </dependency> + </dependencies> +</project> diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/Analyzer.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/Analyzer.java similarity index 
100% rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/Analyzer.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/Analyzer.java diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricConvert.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricConvert.java similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricConvert.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricConvert.java diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricRuleConfig.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricRuleConfig.java similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricRuleConfig.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/MetricRuleConfig.java diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DSL.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DSL.java similarity index 98% rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DSL.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DSL.java index e723b6add348..161894bd01a6 100644 --- 
a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DSL.java +++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DSL.java @@ -75,6 +75,7 @@ public static Expression parse(final String metricName, final String expression) .add(Map.class) .add(List.class) .add(Array.class) + .add(String[].class) .add(K8sRetagType.class) .add(DetectPoint.class) .add(Layer.class) diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DownsamplingType.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DownsamplingType.java similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DownsamplingType.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/DownsamplingType.java diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EndpointEntityDescription.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EndpointEntityDescription.java similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EndpointEntityDescription.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EndpointEntityDescription.java diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EntityDescription.java 
b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EntityDescription.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EntityDescription.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/EntityDescription.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/InstanceEntityDescription.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/InstanceEntityDescription.java
similarity index 93%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/InstanceEntityDescription.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/InstanceEntityDescription.java
index 04c0a2a5dad6..83f1e5f87f23 100644
--- a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/InstanceEntityDescription.java
+++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/InstanceEntityDescription.java
@@ -20,6 +20,7 @@
 import java.util.List;
 import java.util.Map;
+import java.util.function.Function;
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
 import lombok.Getter;
@@ -27,7 +28,6 @@
 import lombok.ToString;
 import org.apache.skywalking.oap.server.core.analysis.Layer;
 import org.apache.skywalking.oap.server.core.analysis.meter.ScopeType;
-import groovy.lang.Closure;

 @Getter
 @RequiredArgsConstructor
@@ -39,7 +39,7 @@ public class InstanceEntityDescription implements EntityDescription {
     private final Layer layer;
     private final String serviceDelimiter;
     private final String instanceDelimiter;
-    private final Closure<Map<String, String>> propertiesExtractor;
+    private final Function<Map<String, String>, Map<String, String>> propertiesExtractor;

     @Override
     public List<String> getLabelKeys() {
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessEntityDescription.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessEntityDescription.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessEntityDescription.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessEntityDescription.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessRelationEntityDescription.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessRelationEntityDescription.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessRelationEntityDescription.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ProcessRelationEntityDescription.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceEntityDescription.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceEntityDescription.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceEntityDescription.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceEntityDescription.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceRelationEntityDescription.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceRelationEntityDescription.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceRelationEntityDescription.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/EntityDescription/ServiceRelationEntityDescription.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Expression.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Expression.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Expression.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Expression.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingContext.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingContext.java
similarity index 96%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingContext.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingContext.java
index a09eaedb1fbc..d7a2844110ab 100644
--- a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingContext.java
+++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingContext.java
@@ -41,7 +41,7 @@
 @Builder
 public class ExpressionParsingContext implements Closeable {

-    static ExpressionParsingContext create() {
+    public static ExpressionParsingContext create() {
         if (CACHE.get() == null) {
             CACHE.set(ExpressionParsingContext.builder()
                                               .samples(Lists.newArrayList())
@@ -52,7 +52,7 @@ static ExpressionParsingContext create() {
         return CACHE.get();
     }

-    static Optional<ExpressionParsingContext> get() {
+    public static Optional<ExpressionParsingContext> get() {
         return Optional.ofNullable(CACHE.get());
     }
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingException.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingException.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingException.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingException.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterExpression.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterExpression.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterExpression.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterExpression.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/NumberClosure.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/NumberClosure.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/NumberClosure.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/NumberClosure.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Result.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Result.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Result.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Result.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Sample.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Sample.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Sample.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/Sample.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamily.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamily.java
similarity index 91%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamily.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamily.java
index f72fed4cef8d..5a979d0a381d 100644
--- a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamily.java
+++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamily.java
@@ -60,6 +60,11 @@
 import groovy.lang.Closure;
 import io.vavr.Function2;
 import io.vavr.Function3;
+import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyFunctions.DecorateFunction;
+import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyFunctions.ForEachFunction;
+import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyFunctions.PropertiesExtractor;
+import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyFunctions.SampleFilter;
+import org.apache.skywalking.oap.meter.analyzer.dsl.SampleFamilyFunctions.TagFunction;
 import lombok.AccessLevel;
 import lombok.Builder;
 import lombok.EqualsAndHashCode;
@@ -400,6 +405,26 @@ public SampleFamily tag(Closure<?> cl) {
         );
     }

+    @SuppressWarnings(value = "unchecked")
+    public SampleFamily tag(TagFunction fn) {
+        if (this == EMPTY) {
+            return EMPTY;
+        }
+        return SampleFamily.build(
+            this.context,
+            Arrays.stream(samples)
+                  .map(sample -> {
+                      Map<String, String> arg = Maps.newHashMap(sample.labels);
+                      Map<String, String> r = fn.apply(arg);
+                      return sample.toBuilder()
+                                   .labels(
+                                       ImmutableMap.copyOf(
+                                           Optional.ofNullable(r).orElse(arg)))
+                                   .build();
+                  }).toArray(Sample[]::new)
+        );
+    }
+
     public SampleFamily filter(Closure<Boolean> filter) {
         if (this == EMPTY) {
             return EMPTY;
@@ -413,6 +438,19 @@ public SampleFamily filter(Closure<Boolean> filter) {
         return SampleFamily.build(context, filtered);
     }

+    public SampleFamily filter(SampleFilter filter) {
+        if (this == EMPTY) {
+            return EMPTY;
+        }
+        final Sample[] filtered = Arrays.stream(samples)
+                                        .filter(it -> filter.test(it.labels))
+                                        .toArray(Sample[]::new);
+        if (filtered.length == 0) {
+            return EMPTY;
+        }
+        return SampleFamily.build(context, filtered);
+    }
+
     /* k8s retags*/
     public SampleFamily retagByK8sMeta(String newLabelName,
                                        K8sRetagType type,
@@ -516,12 +554,30 @@ public SampleFamily instance(List<String> serviceKeys, String serviceDelimiter,
         if (this == EMPTY) {
             return EMPTY;
         }
+        return createMeterSamples(new InstanceEntityDescription(
+            serviceKeys, instanceKeys, layer, serviceDelimiter, instanceDelimiter,
+            propertiesExtractor == null ? null : propertiesExtractor::call));
+    }
+
+    public SampleFamily instance(List<String> serviceKeys, String serviceDelimiter,
+                                 List<String> instanceKeys, String instanceDelimiter,
+                                 Layer layer, PropertiesExtractor propertiesExtractor) {
+        Preconditions.checkArgument(serviceKeys.size() > 0);
+        Preconditions.checkArgument(instanceKeys.size() > 0);
+        ExpressionParsingContext.get().ifPresent(ctx -> {
+            ctx.scopeType = ScopeType.SERVICE_INSTANCE;
+            ctx.scopeLabels.addAll(serviceKeys);
+            ctx.scopeLabels.addAll(instanceKeys);
+        });
+        if (this == EMPTY) {
+            return EMPTY;
+        }
         return createMeterSamples(new InstanceEntityDescription(
             serviceKeys, instanceKeys, layer, serviceDelimiter, instanceDelimiter, propertiesExtractor));
     }

     public SampleFamily instance(List<String> serviceKeys, List<String> instanceKeys, Layer layer) {
-        return instance(serviceKeys, Const.POINT, instanceKeys, Const.POINT, layer, null);
+        return instance(serviceKeys, Const.POINT, instanceKeys, Const.POINT, layer, (Closure<Map<String, String>>) null);
     }

     public SampleFamily endpoint(List<String> serviceKeys, List<String> endpointKeys, String delimiter, Layer layer) {
@@ -601,6 +657,19 @@ public SampleFamily forEach(List<String> array, Closure<Void> each) {
         }).toArray(Sample[]::new));
     }

+    public SampleFamily forEach(List<String> array, ForEachFunction each) {
+        if (this == EMPTY) {
+            return EMPTY;
+        }
+        return SampleFamily.build(this.context, Arrays.stream(this.samples).map(sample -> {
+            Map<String, String> labels = Maps.newHashMap(sample.getLabels());
+            for (String element : array) {
+                each.accept(element, labels);
+            }
+            return sample.toBuilder().labels(ImmutableMap.copyOf(labels)).build();
+        }).toArray(Sample[]::new));
+    }
+
     public SampleFamily processRelation(String detectPointKey, List<String> serviceKeys, List<String> instanceKeys, String sourceProcessIdKey, String destProcessIdKey, String componentKey) {
         Preconditions.checkArgument(serviceKeys.size() > 0);
         Preconditions.checkArgument(instanceKeys.size() > 0);
@@ -717,6 +786,26 @@ public SampleFamily decorate(Closure<Void> c) {
         return this;
     }

+    public SampleFamily decorate(DecorateFunction c) {
+        ExpressionParsingContext.get().ifPresent(ctx -> {
+            if (ctx.getScopeType() != ScopeType.SERVICE) {
+                throw new IllegalStateException("decorate() should be invoked after service()");
+            }
+            if (ctx.isHistogram()) {
+                throw new IllegalStateException("decorate() not supported for histogram metrics");
+            }
+        });
+        if (this == EMPTY) {
+            return EMPTY;
+        }
+        this.context.getMeterSamples().keySet().forEach(meterEntity -> {
+            if (meterEntity.getScopeType().equals(ScopeType.SERVICE)) {
+                c.accept(meterEntity);
+            }
+        });
+        return this;
+    }
+
     /**
      * The parsing context holds key results more than sample collection.
      */
@@ -777,7 +866,7 @@ private static MeterEntity buildMeterEntity(List<Sample> samples,
         InstanceEntityDescription instanceEntityDescription = (InstanceEntityDescription) entityDescription;
         Map<String, String> properties = null;
         if (instanceEntityDescription.getPropertiesExtractor() != null) {
-            properties = instanceEntityDescription.getPropertiesExtractor().call(samples.get(0).labels);
+            properties = instanceEntityDescription.getPropertiesExtractor().apply(samples.get(0).labels);
         }
         return MeterEntity.newServiceInstance(
             InternalOps.dim(samples, instanceEntityDescription.getServiceKeys(), instanceEntityDescription.getServiceDelimiter()),
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamilyBuilder.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamilyBuilder.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamilyBuilder.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamilyBuilder.java
diff --git a/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamilyFunctions.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamilyFunctions.java
new file mode 100644
index 000000000000..1c9747b56a25
--- /dev/null
+++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/SampleFamilyFunctions.java
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.skywalking.oap.meter.analyzer.dsl;
+
+import java.util.Map;
+import java.util.function.Consumer;
+import java.util.function.Function;
+import java.util.function.Predicate;
+import org.apache.skywalking.oap.server.core.analysis.meter.MeterEntity;
+
+/**
+ * Pure Java functional interfaces replacing Groovy Closure parameters in SampleFamily methods.
+ */
+public final class SampleFamilyFunctions {
+
+    private SampleFamilyFunctions() {
+    }
+
+    /**
+     * Replaces {@code Closure<?>} in {@link SampleFamily#tag(groovy.lang.Closure)}.
+     * Receives a mutable label map and returns the (possibly modified) map.
+     */
+    @FunctionalInterface
+    public interface TagFunction extends Function<Map<String, String>, Map<String, String>> {
+    }
+
+    /**
+     * Replaces {@code Closure<Boolean>} in {@link SampleFamily#filter(groovy.lang.Closure)}.
+     * Tests whether a sample's labels match the filter criteria.
+     */
+    @FunctionalInterface
+    public interface SampleFilter extends Predicate<Map<String, String>> {
+    }
+
+    /**
+     * Replaces {@code Closure<Void>} in {@link SampleFamily#forEach(java.util.List, groovy.lang.Closure)}.
+     * Called for each element in the array with the element value and a mutable labels map.
+     */
+    @FunctionalInterface
+    public interface ForEachFunction {
+        void accept(String element, Map<String, String> tags);
+    }
+
+    /**
+     * Replaces {@code Closure<Void>} in {@link SampleFamily#decorate(groovy.lang.Closure)}.
+     * Decorates service meter entities.
+     */
+    @FunctionalInterface
+    public interface DecorateFunction extends Consumer<MeterEntity> {
+    }
+
+    /**
+     * Replaces {@code Closure<Map<String, String>>} in
+     * {@link SampleFamily#instance(java.util.List, String, java.util.List, String,
+     * org.apache.skywalking.oap.server.core.analysis.Layer, groovy.lang.Closure)}.
+     * Extracts instance properties from sample labels.
+     */
+    @FunctionalInterface
+    public interface PropertiesExtractor extends Function<Map<String, String>, Map<String, String>> {
+    }
+}
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindow.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindow.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindow.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindow.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/ID.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/ID.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/ID.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/ID.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/registry/ProcessRegistry.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/K8sRetagType.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/K8sRetagType.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/K8sRetagType.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/K8sRetagType.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/Retag.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/Retag.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/Retag.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/dsl/tagOpt/Retag.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/k8s/K8sInfoRegistry.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/k8s/K8sInfoRegistry.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/k8s/K8sInfoRegistry.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/k8s/K8sInfoRegistry.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/PrometheusMetricConverter.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/PrometheusMetricConverter.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/PrometheusMetricConverter.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/PrometheusMetricConverter.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/MetricsRule.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/MetricsRule.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/MetricsRule.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/MetricsRule.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rule.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rule.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rule.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rule.java
diff --git a/oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rules.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rules.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rules.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/main/java/org/apache/skywalking/oap/meter/analyzer/prometheus/rule/Rules.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/MetricConvertTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/MetricConvertTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/MetricConvertTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/MetricConvertTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AggregationTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AggregationTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AggregationTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AggregationTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AnalyzerTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AnalyzerTest.java
similarity index 98%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AnalyzerTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AnalyzerTest.java
index 52d1467db3ec..b7bb8a910b39 100644
--- a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AnalyzerTest.java
+++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/AnalyzerTest.java
@@ -42,7 +42,7 @@
 import org.mockito.Mock;
 import org.mockito.Mockito;
 import org.mockito.junit.jupiter.MockitoExtension;
-import org.powermock.reflect.Whitebox;
+import org.apache.skywalking.oap.server.testing.util.ReflectUtil;

 import java.util.HashMap;
 import java.util.Map;
@@ -69,7 +69,7 @@ public void setup() throws StorageException {
         // Fix for JDK 25 / Mockito 5: Prevent double-spying on the singleton
         MetricsStreamProcessor instance = MetricsStreamProcessor.getInstance();
         if (!Mockito.mockingDetails(instance).isMock()) {
-            Whitebox.setInternalState(MetricsStreamProcessor.class, "PROCESSOR", Mockito.spy(instance));
+            ReflectUtil.setInternalState(MetricsStreamProcessor.class, "PROCESSOR", Mockito.spy(instance));
         }

         doNothing().when(MetricsStreamProcessor.getInstance()).create(any(), (StreamDefinition) any(), any());
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ArithmeticTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ArithmeticTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ArithmeticTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ArithmeticTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/BasicTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/BasicTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/BasicTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/BasicTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/DecorateTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/DecorateTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/DecorateTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/DecorateTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ExpressionParsingTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FilterTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FunctionTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FunctionTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FunctionTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/FunctionTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/IncreaseTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/IncreaseTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/IncreaseTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/IncreaseTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/K8sTagTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/K8sTagTest.java
similarity index 98%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/K8sTagTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/K8sTagTest.java
index 652d514517e7..0fc3e83d4847 100644
--- a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/K8sTagTest.java
+++ b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/K8sTagTest.java
@@ -43,7 +43,7 @@
 import org.mockito.junit.jupiter.MockitoExtension;
 import org.mockito.junit.jupiter.MockitoSettings;
 import org.mockito.quality.Strictness;
-import org.powermock.reflect.Whitebox;
+import org.apache.skywalking.oap.server.testing.util.ReflectUtil;

 import java.util.Arrays;
 import java.util.Collection;
@@ -237,10 +237,10 @@ public static Collection<Object[]> data() {
     @SneakyThrows
     @BeforeEach
     public void setup() {
-        Whitebox.setInternalState(KubernetesServices.class, "INSTANCE",
+        ReflectUtil.setInternalState(KubernetesServices.class, "INSTANCE",
             Mockito.mock(KubernetesServices.class)
         );
-        Whitebox.setInternalState(KubernetesPods.class, "INSTANCE",
+        ReflectUtil.setInternalState(KubernetesPods.class, "INSTANCE",
             Mockito.mock(KubernetesPods.class)
         );
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ScopeTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ScopeTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ScopeTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ScopeTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/TagFilterTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/TagFilterTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/TagFilterTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/TagFilterTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ValueFilterTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ValueFilterTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ValueFilterTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/ValueFilterTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindowTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindowTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindowTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/counter/CounterWindowTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderFailTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderFailTest.java
similarity index 100%
rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderFailTest.java
rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderFailTest.java
diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderTest.java
b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderTest.java similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderTest.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderTest.java diff --git a/oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderYAMLFailTest.java b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderYAMLFailTest.java similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderYAMLFailTest.java rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/java/org/apache/skywalking/oap/meter/analyzer/dsl/rule/RuleLoaderYAMLFailTest.java diff --git a/oap-server/analyzer/meter-analyzer/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker diff --git a/oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/illegal-yaml/test.yml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/illegal-yaml/test.yml similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/illegal-yaml/test.yml rename to 
test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/illegal-yaml/test.yml diff --git a/oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/single-file-case.yaml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/single-file-case.yaml similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/single-file-case.yaml rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/single-file-case.yaml diff --git a/oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/case1.yaml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/case1.yaml similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/case1.yaml rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/case1.yaml diff --git a/oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/case2.yml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/case2.yml similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/case2.yml rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/case2.yml diff --git a/oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/case3.yaml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/case3.yaml similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/case3.yaml rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/case3.yaml diff --git 
a/oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/deeperFolder/caseUnReach.yaml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/deeperFolder/caseUnReach.yaml similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/deeperFolder/caseUnReach.yaml rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/deeperFolder/caseUnReach.yaml diff --git a/oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/empty.yaml b/test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/empty.yaml similarity index 100% rename from oap-server/analyzer/meter-analyzer/src/test/resources/otel-rules/test-folder/empty.yaml rename to test/script-cases/script-runtime-with-groovy/mal-v1-with-groovy/src/test/resources/otel-rules/test-folder/empty.yaml diff --git a/test/script-cases/script-runtime-with-groovy/pom.xml b/test/script-cases/script-runtime-with-groovy/pom.xml new file mode 100644 index 000000000000..caafca02b679 --- /dev/null +++ b/test/script-cases/script-runtime-with-groovy/pom.xml @@ -0,0 +1,39 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- + ~ Licensed to the Apache Software Foundation (ASF) under one or more + ~ contributor license agreements. See the NOTICE file distributed with + ~ this work for additional information regarding copyright ownership. + ~ The ASF licenses this file to You under the Apache License, Version 2.0 + ~ (the "License"); you may not use this file except in compliance with + ~ the License. 
You may obtain a copy of the License at + ~ + ~ http://www.apache.org/licenses/LICENSE-2.0 + ~ + ~ Unless required by applicable law or agreed to in writing, software + ~ distributed under the License is distributed on an "AS IS" BASIS, + ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + ~ See the License for the specific language governing permissions and + ~ limitations under the License. + ~ + --> + +<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> + <parent> + <artifactId>oap-server</artifactId> + <groupId>org.apache.skywalking</groupId> + <version>${revision}</version> + <relativePath>../../../oap-server/pom.xml</relativePath> + </parent> + <modelVersion>4.0.0</modelVersion> + + <artifactId>script-runtime-with-groovy</artifactId> + <packaging>pom</packaging> + + <modules> + <module>mal-v1-with-groovy</module> + <module>lal-v1-with-groovy</module> + <module>hierarchy-v1-with-groovy</module> + <module>mal-lal-v1-v2-checker</module> + <module>hierarchy-v1-v2-checker</module> + </modules> +</project> diff --git a/test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.data.yaml b/test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.data.yaml new file mode 100644 index 000000000000..738ae0c4ac73 --- /dev/null +++ b/test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.data.yaml @@ -0,0 +1,111 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + name: + - description: "exact match" + upper: { name: "my-service", shortName: "my-service" } + lower: { name: "my-service", shortName: "my-service" } + expected: true + - description: "mismatch" + upper: { name: "svc-a", shortName: "svc-a" } + lower: { name: "svc-b", shortName: "svc-b" } + expected: false + - description: "same shortName different name" + upper: { name: "svc-a", shortName: "same" } + lower: { name: "svc-b", shortName: "same" } + expected: false + - description: "empty names" + upper: { name: "", shortName: "" } + lower: { name: "", shortName: "" } + expected: true + + short-name: + - description: "exact shortName match" + upper: { name: "full-a", shortName: "svc" } + lower: { name: "full-b", shortName: "svc" } + expected: true + - description: "shortName mismatch" + upper: { name: "a", shortName: "svc-1" } + lower: { name: "b", shortName: "svc-2" } + expected: false + - description: "same name different shortName" + upper: { name: "same", shortName: "short-a" } + lower: { name: "same", shortName: "short-b" } + expected: false + - description: "empty shortNames" + upper: { name: "a", shortName: "" } + lower: { name: "b", shortName: "" } + expected: true + + lower-short-name-remove-ns: + - description: "match: svc == svc.namespace" + upper: { name: "a", shortName: "svc" } + lower: { name: "b", shortName: "svc.namespace" } + expected: true + - description: "match: app == app.default" + upper: { name: "a", shortName: "app" } + lower: { name: "b", shortName: "app.default" } + expected: true + - description: "no dot in lower" + upper: { 
name: "a", shortName: "svc" } + lower: { name: "b", shortName: "svc" } + expected: false + - description: "mismatch prefix" + upper: { name: "a", shortName: "other" } + lower: { name: "b", shortName: "svc.namespace" } + expected: false + - description: "dot at position 0" + upper: { name: "a", shortName: "" } + lower: { name: "b", shortName: ".namespace" } + expected: false + - description: "multiple dots - uses last" + upper: { name: "a", shortName: "svc.ns1" } + lower: { name: "b", shortName: "svc.ns1.ns2" } + expected: true + - description: "empty lower" + upper: { name: "a", shortName: "svc" } + lower: { name: "b", shortName: "" } + expected: false + + lower-short-name-with-fqdn: + - description: "match: db.svc.cluster.local:3306 vs db" + upper: { name: "a", shortName: "db.svc.cluster.local:3306" } + lower: { name: "b", shortName: "db" } + expected: true + - description: "match: redis.svc.cluster.local:6379 vs redis" + upper: { name: "a", shortName: "redis.svc.cluster.local:6379" } + lower: { name: "b", shortName: "redis" } + expected: true + - description: "no colon in upper" + upper: { name: "a", shortName: "db" } + lower: { name: "b", shortName: "db" } + expected: false + - description: "wrong fqdn suffix" + upper: { name: "a", shortName: "db:3306" } + lower: { name: "b", shortName: "other" } + expected: false + - description: "upper without fqdn" + upper: { name: "a", shortName: "db:3306" } + lower: { name: "b", shortName: "db" } + expected: false + - description: "empty upper" + upper: { name: "a", shortName: "" } + lower: { name: "b", shortName: "db" } + expected: false + - description: "colon at end" + upper: { name: "a", shortName: "db.svc.cluster.local:" } + lower: { name: "b", shortName: "db" } + expected: true diff --git a/test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.yml b/test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.yml new file mode 100644 index 000000000000..1f44cf5630b3 --- /dev/null +++ 
b/test/script-cases/scripts/hierarchy-rule/test-hierarchy-definition.yml @@ -0,0 +1,123 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Define the hierarchy of service layers; the layers listed under a specific layer are its related lower layers. +# A relation can have a matching rule for auto matching, which is defined in the `auto-matching-rules` section. +# All the layers are defined in the file `org.apache.skywalking.oap.server.core.analysis.Layers.java`. +# Notice: some hierarchy relations and auto matching rules only work in a k8s env.
+ +hierarchy: + MESH: + MESH_DP: name + K8S_SERVICE: short-name + + MESH_DP: + K8S_SERVICE: short-name + + GENERAL: + APISIX: lower-short-name-remove-ns + K8S_SERVICE: lower-short-name-remove-ns + KONG: lower-short-name-remove-ns + + MYSQL: + K8S_SERVICE: short-name + + POSTGRESQL: + K8S_SERVICE: short-name + + APISIX: + K8S_SERVICE: short-name + + NGINX: + K8S_SERVICE: short-name + + SO11Y_OAP: + K8S_SERVICE: short-name + + ROCKETMQ: + K8S_SERVICE: short-name + + RABBITMQ: + K8S_SERVICE: short-name + + KAFKA: + K8S_SERVICE: short-name + + CLICKHOUSE: + K8S_SERVICE: short-name + + PULSAR: + K8S_SERVICE: short-name + + ACTIVEMQ: + K8S_SERVICE: short-name + + KONG: + K8S_SERVICE: short-name + + VIRTUAL_DATABASE: + MYSQL: lower-short-name-with-fqdn + POSTGRESQL: lower-short-name-with-fqdn + CLICKHOUSE: lower-short-name-with-fqdn + + VIRTUAL_MQ: + ROCKETMQ: lower-short-name-with-fqdn + RABBITMQ: lower-short-name-with-fqdn + KAFKA: lower-short-name-with-fqdn + PULSAR: lower-short-name-with-fqdn + + CILIUM_SERVICE: + K8S_SERVICE: short-name + +# Use Groovy scripts to define the matching rules. The input parameters are the upper service (u) and the lower service (l), and the return value is a boolean +# indicating whether the upper service and the lower service on the different layers are related. +auto-matching-rules: + # the name of the upper service is equal to the name of the lower service + name: "{ (u, l) -> u.name == l.name }" + # the short name of the upper service is equal to the short name of the lower service + short-name: "{ (u, l) -> u.shortName == l.shortName }" + # remove the k8s namespace from the lower service short name + # this rule only works in a k8s env.
+ lower-short-name-remove-ns: "{ (u, l) -> { if(l.shortName.lastIndexOf('.') > 0) return u.shortName == l.shortName.substring(0, l.shortName.lastIndexOf('.')); return false; } }" + # the short name of the upper service with its port removed is equal to the short name of the lower service with the FQDN suffix appended + # this rule only works in a k8s env. + lower-short-name-with-fqdn: "{ (u, l) -> { if(u.shortName.lastIndexOf(':') > 0) return u.shortName.substring(0, u.shortName.lastIndexOf(':')) == l.shortName.concat('.svc.cluster.local'); return false; } }" + +# The hierarchy level of the service layer; the level defines the order of the service layers for UI presentation. +# The level of the upper service should be greater than the level of the lower service in the `hierarchy` section. +layer-levels: + MESH: 3 + GENERAL: 3 + SO11Y_OAP: 3 + VIRTUAL_DATABASE: 3 + VIRTUAL_MQ: 3 + + MYSQL: 2 + POSTGRESQL: 2 + APISIX: 2 + NGINX: 2 + ROCKETMQ: 2 + CLICKHOUSE: 2 + RABBITMQ: 2 + KAFKA: 2 + PULSAR: 2 + ACTIVEMQ: 2 + KONG: 2 + + MESH_DP: 1 + CILIUM_SERVICE: 1 + + K8S_SERVICE: 0 + diff --git a/test/script-cases/scripts/lal/test-lal/feature-cases/execution-basic.input.data b/test/script-cases/scripts/lal/test-lal/feature-cases/execution-basic.input.data new file mode 100644 index 000000000000..b16538351316 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/feature-cases/execution-basic.input.data @@ -0,0 +1,161 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License.
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data and expected output for execution-basic.yaml rules. +# Format: YAML keyed by rule name. Each entry describes body-type, body, +# optional tags, and expected assertions after execution. + +json-parse-extract: + body-type: json + body: '{"service":"my-svc","instance":"inst-01","endpoint":"/api","layer":"GENERAL"}' + expect: + service: my-svc + instance: inst-01 + endpoint: /api + layer: GENERAL + save: true + abort: false + +tag-condition-true: + body-type: json + body: '{"service":"db-svc"}' + tags: + LOG_KIND: SLOW_SQL + expect: + service: db-svc + +tag-condition-false: + body-type: json + body: '{"service":"db-svc"}' + tags: + LOG_KIND: NORMAL + expect: + service: "" + +tag-assignment: + body-type: json + body: '{"env":"prod","region":"us-east"}' + expect: + tag.key1: prod + tag.key2: us-east + +safe-nav-missing: + body-type: json + body: '{"other":"val"}' + expect: + service: "" + abort: false + +safe-nav-present: + body-type: json + body: '{"data":{"name":"found"}}' + expect: + service: found + +if-else-if-error: + body-type: json + body: '{"level":"ERROR"}' + expect: + service: error-handler + +if-else-if-warn: + body-type: json + body: '{"level":"WARN"}' + expect: + service: warn-handler + +if-else-if-default: + body-type: json + body: '{"level":"DEBUG"}' + expect: + service: default-handler + +sink-enforcer: + body-type: json + body: '{}' + expect: + save: true + +sink-dropper: + body-type: json + body: '{}' + expect: + save: false + +sampler-rate-limit: + body-type: json + body: '{}' + expect: + save: true + 
+sampler-interpolated-id: + body-type: json + body: '{"code":"200"}' + expect: + save: true + +abort-stops-pipeline: + body-type: json + body: '{"service":"should-not-be-set"}' + expect: + abort: true + service: "" + +conditional-abort-true: + body-type: json + body: '{"skip":"true","service":"my-svc"}' + expect: + abort: true + service: "" + +conditional-abort-false: + body-type: json + body: '{"skip":"false","service":"my-svc"}' + expect: + abort: false + service: my-svc + +timestamp-extraction: + body-type: json + body: '{"time":"1609459200000"}' + expect: + timestamp: 1609459200000 + +text-parser-regexp: + body-type: text + body: "1609459200000 ERROR Something failed" + expect: + service: ERROR + +sampled-trace-basic: + body-type: json + body: '{"latency":150,"uri":"/test","reason":"slow","pid":"proc-a","dpid":"proc-b","dp":"client"}' + service: trace-svc + instance: trace-inst + trace-id: trace-basic-001 + timestamp: 1609459200000 + expect: + save: true + sampledTrace.traceId: trace-basic-001 + sampledTrace.serviceName: trace-svc + sampledTrace.serviceInstanceName: trace-inst + sampledTrace.timestamp: 1609459200000 + sampledTrace.latency: 150 + sampledTrace.uri: /test + sampledTrace.reason: SLOW + sampledTrace.processId: proc-a + sampledTrace.destProcessId: proc-b + sampledTrace.detectPoint: CLIENT + sampledTrace.componentId: 49 diff --git a/test/script-cases/scripts/lal/test-lal/feature-cases/execution-basic.yaml b/test/script-cases/scripts/lal/test-lal/feature-cases/execution-basic.yaml new file mode 100644 index 000000000000..71e3bb7d401b --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/feature-cases/execution-basic.yaml @@ -0,0 +1,272 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Feature-focused execution test rules for LAL v2 compiler. +# Each rule exercises a specific LAL language feature. +# Paired with execution-basic.input.data for mock input and expected output. +rules: + - name: json-parse-extract + layer: GENERAL + dsl: | + filter { + json {} + extractor { + service parsed.service as String + instance parsed.instance as String + endpoint parsed.endpoint as String + layer parsed.layer as String + } + sink {} + } + + - name: tag-condition-true + layer: GENERAL + dsl: | + filter { + json {} + if (tag("LOG_KIND") == "SLOW_SQL") { + extractor { + service parsed.service as String + } + sink {} + } + } + + - name: tag-condition-false + layer: GENERAL + dsl: | + filter { + json {} + if (tag("LOG_KIND") == "SLOW_SQL") { + extractor { + service parsed.service as String + } + sink {} + } + } + + - name: tag-assignment + layer: GENERAL + dsl: | + filter { + json {} + extractor { + tag key1: parsed.env as String, key2: parsed.region as String + } + sink {} + } + + - name: safe-nav-missing + layer: GENERAL + dsl: | + filter { + json {} + extractor { + service parsed?.missing?.deep as String + } + sink {} + } + + - name: safe-nav-present + layer: GENERAL + dsl: | + filter { + json {} + extractor { + service parsed?.data?.name as String + } + sink {} + } + + - name: if-else-if-error + layer: GENERAL + dsl: | + filter { + json {} + if (parsed.level == "ERROR") { + extractor { 
service "error-handler" as String } + sink {} + } else if (parsed.level == "WARN") { + extractor { service "warn-handler" as String } + sink {} + } else { + extractor { service "default-handler" as String } + sink {} + } + } + + - name: if-else-if-warn + layer: GENERAL + dsl: | + filter { + json {} + if (parsed.level == "ERROR") { + extractor { service "error-handler" as String } + sink {} + } else if (parsed.level == "WARN") { + extractor { service "warn-handler" as String } + sink {} + } else { + extractor { service "default-handler" as String } + sink {} + } + } + + - name: if-else-if-default + layer: GENERAL + dsl: | + filter { + json {} + if (parsed.level == "ERROR") { + extractor { service "error-handler" as String } + sink {} + } else if (parsed.level == "WARN") { + extractor { service "warn-handler" as String } + sink {} + } else { + extractor { service "default-handler" as String } + sink {} + } + } + + - name: sink-enforcer + layer: GENERAL + dsl: | + filter { + json {} + sink { + enforcer {} + } + } + + - name: sink-dropper + layer: GENERAL + dsl: | + filter { + json {} + sink { + dropper {} + } + } + + - name: sampler-rate-limit + layer: GENERAL + dsl: | + filter { + json {} + sink { + sampler { + rateLimit('test:svc') { + rpm 6000 + } + } + } + } + + - name: sampler-interpolated-id + layer: GENERAL + dsl: | + filter { + json {} + sink { + sampler { + rateLimit("${parsed.code}") { + rpm 6000 + } + } + } + } + + - name: abort-stops-pipeline + layer: GENERAL + dsl: | + filter { + json {} + abort {} + extractor { + service parsed.service as String + } + sink {} + } + + - name: conditional-abort-true + layer: GENERAL + dsl: | + filter { + json {} + if (parsed.skip == "true") { + abort {} + } + extractor { + service parsed.service as String + } + sink {} + } + + - name: conditional-abort-false + layer: GENERAL + dsl: | + filter { + json {} + if (parsed.skip == "true") { + abort {} + } + extractor { + service parsed.service as String + } + sink {} + } + + - 
name: timestamp-extraction + layer: GENERAL + dsl: | + filter { + json {} + extractor { + timestamp parsed.time as String + } + sink {} + } + + - name: text-parser-regexp + layer: GENERAL + dsl: | + filter { + text { + regexp $/(?<ts>\d+) (?<lvl>\w+) (?<msg>.*)/$ + } + extractor { + service parsed.lvl as String + } + sink {} + } + + - name: sampled-trace-basic + layer: MESH_DP + dsl: | + filter { + json {} + extractor { + sampledTrace { + latency parsed.latency as Long + uri parsed.uri as String + reason parsed.reason as String + processId parsed.pid as String + destProcessId parsed.dpid as String + detectPoint parsed.dp as String + componentId 49 + } + } + } diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/default.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/default.input.data new file mode 100644 index 000000000000..6823207fb044 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/default.input.data @@ -0,0 +1,23 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for default.yaml rules. 
+ +default: + body-type: json + body: '{}' + expect: + save: true + abort: false diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/default.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/default.yaml new file mode 100644 index 000000000000..12317a95bf55 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/default.yaml @@ -0,0 +1,24 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# The default LAL script to save all logs, behaving like the versions before 8.5.0. +rules: + - name: default + layer: GENERAL + dsl: | + filter { + sink { + } + } diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/envoy-als.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/envoy-als.input.data new file mode 100644 index 000000000000..cf400f7661ee --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/envoy-als.input.data @@ -0,0 +1,111 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for envoy-als.yaml rules. +# The envoy-als rule processes protobuf HTTPAccessLogEntry as extraLog, +# not JSON body. The proto-json is parsed via protobuf JsonFormat. + +envoy-als: + - service: test-mesh-svc + body-type: none + extra-log: + proto-class: io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry + proto-json: '{"response":{"responseCode":500},"commonProperties":{"upstreamCluster":"outbound|80||backend.default.svc"}}' + expect: + tag.status.code: "500" + save: true + abort: false + + - service: test-mesh-svc-abort + body-type: none + extra-log: + proto-class: io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry + proto-json: '{"response":{"responseCode":200},"commonProperties":{"upstreamCluster":"outbound|80||backend.default.svc"}}' + expect: + save: true + abort: true + + - service: test-mesh-svc-with-flags + body-type: none + extra-log: + proto-class: io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry + proto-json: '{"response":{"responseCode":200},"commonProperties":{"upstreamCluster":"outbound|80||backend.default.svc","responseFlags":{"upstreamConnectionFailure":true}}}' + expect: + save: true + abort: false + tag.response.flag: "upstream_connection_failure: true\n" + +network-profiling-slow-trace: + - body-type: json + body: 
'{"latency":200,"uri":"/mesh/api","reason":"slow","client_process":{"process_id":"client-proc-1","local":false,"address":"10.0.0.1:8080"},"server_process":{"process_id":"server-proc-2","local":true,"address":"10.0.0.2:9090"},"detect_point":"client","component":"http","ssl":false}' + service: mesh-envoy-svc + instance: mesh-envoy-instance + trace-id: trace-envoy-mesh-001 + timestamp: 1609459200000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-envoy-mesh-001 + sampledTrace.processId: client-proc-1 + sampledTrace.destProcessId: server-proc-2 + sampledTrace.componentId: 49 + + - body-type: json + body: '{"latency":500,"uri":"/mesh/api/ssl","reason":"slow","client_process":{"process_id":"","local":true,"address":"127.0.0.1:8080"},"server_process":{"process_id":"","local":false,"address":"10.0.0.3:9090"},"detect_point":"server","component":"http","ssl":true}' + service: mesh-envoy-svc-ssl + instance: mesh-envoy-instance-ssl + trace-id: trace-envoy-mesh-002 + timestamp: 1609459300000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-envoy-mesh-002 + # processId/destProcessId omitted: virtual process IDs depend on ProcessRegistry + # implementation (mock vs production). Validated via v1-v2 comparison in checker test. 
+ sampledTrace.componentId: 129 + + - body-type: json + body: '{"latency":100,"uri":"/mesh/api/tcp","reason":"slow","client_process":{"process_id":"","local":false,"address":"10.0.0.4:8080"},"server_process":{"process_id":"","local":false,"address":"10.0.0.5:9090"},"detect_point":"client","component":"tcp","ssl":true}' + service: mesh-envoy-svc-tcp-ssl + instance: mesh-envoy-instance-tcp-ssl + trace-id: trace-envoy-mesh-003 + timestamp: 1609459400000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + # processId/destProcessId omitted: virtual process IDs depend on ProcessRegistry + # implementation (mock vs production). Validated via v1-v2 comparison in checker test. + sampledTrace.componentId: 130 + + - body-type: json + body: '{"latency":50,"uri":"/mesh/api/other","reason":"slow","client_process":{"process_id":"","local":false,"address":"10.0.0.6:8080"},"server_process":{"process_id":"","local":false,"address":"10.0.0.7:9090"},"detect_point":"client","component":"other","ssl":false}' + service: mesh-envoy-svc-other + instance: mesh-envoy-instance-other + trace-id: trace-envoy-mesh-004 + timestamp: 1609459500000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + # processId/destProcessId omitted: virtual process IDs depend on ProcessRegistry + # implementation (mock vs production). Validated via v1-v2 comparison in checker test. + sampledTrace.componentId: 110 diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/envoy-als.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/envoy-als.yaml new file mode 100644 index 000000000000..4e2708e0d2cb --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/envoy-als.yaml @@ -0,0 +1,94 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +rules: + # The extra log type (io.envoyproxy.envoy.data.accesslog.v3.HTTPAccessLogEntry) is + # registered via SPI by EnvoyHTTPLALSourceTypeProvider in envoy-metrics-receiver-plugin. + - name: envoy-als + layer: MESH + dsl: | + filter { + // only collect abnormal logs (http status code >= 400, or commonProperties?.responseFlags is not empty) + if (parsed?.response?.responseCode?.value as Integer < 400 && !parsed?.commonProperties?.responseFlags?.toString()?.trim()) { + abort {} + } + extractor { + if (parsed?.response?.responseCode) { + tag 'status.code': parsed?.response?.responseCode?.value + } + tag 'response.flag': parsed?.commonProperties?.responseFlags + } + sink { + sampler { + if (parsed?.commonProperties?.responseFlags?.toString()) { + // use service:errorCode as sampler id so that each service:errorCode has its own sampler, + // e.g. checkoutservice:[upstreamConnectionFailure], checkoutservice:[upstreamRetryLimitExceeded] + rateLimit("${log.service}:${parsed?.commonProperties?.responseFlags?.toString()}") { + rpm 6000 + } + } else { + // use service:responseCode as sampler id so that each service:responseCode has its own sampler, + // e.g. checkoutservice:500, checkoutservice:404.
+ rateLimit("${log.service}:${parsed?.response?.responseCode}") { + rpm 6000 + } + } + } + } + } + - name: network-profiling-slow-trace + layer: MESH + dsl: | + filter { + json{ + } + extractor{ + if (tag("LOG_KIND") == "NET_PROFILING_SAMPLED_TRACE") { + sampledTrace { + latency parsed.latency as Long + uri parsed.uri as String + reason parsed.reason as String + + if (parsed.client_process.process_id as String != "") { + processId parsed.client_process.process_id as String + } else if (parsed.client_process.local as Boolean) { + processId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + processId ProcessRegistry.generateVirtualRemoteProcess(parsed.service as String, parsed.serviceInstance as String, parsed.client_process.address as String) as String + } + + if (parsed.server_process.process_id as String != "") { + destProcessId parsed.server_process.process_id as String + } else if (parsed.server_process.local as Boolean) { + destProcessId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + destProcessId ProcessRegistry.generateVirtualRemoteProcess(parsed.service as String, parsed.serviceInstance as String, parsed.server_process.address as String) as String + } + + detectPoint parsed.detect_point as String + + if (parsed.component as String == "http" && parsed.ssl as Boolean) { + componentId 129 + } else if (parsed.component as String == "http") { + componentId 49 + } else if (parsed.ssl as Boolean) { + componentId 130 + } else { + componentId 110 + } + } + } + } + } diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/k8s-service.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/k8s-service.input.data new file mode 100644 index 000000000000..79c4d96dcd4d --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/k8s-service.input.data @@ -0,0 +1,123 @@ +# Licensed to the Apache 
Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for k8s-service.yaml rules. +# Covers all branches: process_id resolution (direct/virtual-local/virtual-remote), +# component+ssl combinations (http+ssl/http/tcp+ssl/other), and LOG_KIND false path. + +network-profiling-slow-trace: + # [0] Direct process IDs (non-empty), HTTPS (http+ssl=true) → componentId 129 + - body-type: json + body: '{"latency":350,"uri":"/k8s/endpoint","reason":"status_5xx","client_process":{"process_id":"k8s-client-proc","local":true,"address":"10.1.0.1:80"},"server_process":{"process_id":"k8s-server-proc","local":false,"address":"10.1.0.2:443"},"detect_point":"server","component":"http","ssl":true}' + service: k8s-test-svc + instance: k8s-test-instance + trace-id: trace-k8s-001 + timestamp: 1609459300000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-k8s-001 + sampledTrace.serviceName: k8s-test-svc + sampledTrace.serviceInstanceName: k8s-test-instance + sampledTrace.timestamp: 1609459300000 + sampledTrace.latency: 350 + sampledTrace.uri: /k8s/endpoint + sampledTrace.reason: STATUS_5XX + sampledTrace.processId: k8s-client-proc + sampledTrace.destProcessId: k8s-server-proc + sampledTrace.detectPoint: 
SERVER + sampledTrace.componentId: 129 + + # [1] Virtual local process: client process_id empty + local=true → generateVirtualLocalProcess() + # Virtual remote process: server process_id empty + local=false → generateVirtualRemoteProcess() + # HTTP without SSL → componentId 49 + - body-type: json + body: '{"latency":200,"uri":"/k8s/api","reason":"slow","client_process":{"process_id":"","local":true,"address":"10.1.0.3:8080"},"server_process":{"process_id":"","local":false,"address":"10.1.0.4:9090"},"detect_point":"client","component":"http","ssl":false}' + service: k8s-svc-virtual + instance: k8s-inst-virtual + trace-id: trace-k8s-002 + timestamp: 1609459400000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-k8s-002 + sampledTrace.latency: 200 + sampledTrace.uri: /k8s/api + sampledTrace.reason: SLOW + # processId/destProcessId omitted: virtual process IDs depend on ProcessRegistry + # implementation (mock vs production). Validated via v1-v2 comparison in checker test. 
+ sampledTrace.detectPoint: CLIENT + sampledTrace.componentId: 49 + + # [2] Virtual remote process: client process_id empty + local=false → generateVirtualRemoteProcess() + # Virtual local process: server process_id empty + local=true → generateVirtualLocalProcess() + # TCP with SSL → componentId 130 + - body-type: json + body: '{"latency":100,"uri":"/k8s/tcp","reason":"slow","client_process":{"process_id":"","local":false,"address":"10.1.0.5:8080"},"server_process":{"process_id":"","local":true,"address":"10.1.0.6:9090"},"detect_point":"client","component":"tcp","ssl":true}' + service: k8s-svc-tcp-ssl + instance: k8s-inst-tcp-ssl + trace-id: trace-k8s-003 + timestamp: 1609459500000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-k8s-003 + sampledTrace.latency: 100 + sampledTrace.uri: /k8s/tcp + sampledTrace.reason: SLOW + # processId/destProcessId omitted: virtual process IDs depend on ProcessRegistry + # implementation (mock vs production). Validated via v1-v2 comparison in checker test. 
+ sampledTrace.detectPoint: CLIENT + sampledTrace.componentId: 130 + + # [3] Default component: component=other, ssl=false → componentId 110 + - body-type: json + body: '{"latency":50,"uri":"/k8s/other","reason":"slow","client_process":{"process_id":"other-client","local":false,"address":"10.1.0.7:8080"},"server_process":{"process_id":"other-server","local":false,"address":"10.1.0.8:9090"},"detect_point":"server","component":"other","ssl":false}' + service: k8s-svc-other + instance: k8s-inst-other + trace-id: trace-k8s-004 + timestamp: 1609459600000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-k8s-004 + sampledTrace.latency: 50 + sampledTrace.uri: /k8s/other + sampledTrace.reason: SLOW + sampledTrace.processId: other-client + sampledTrace.destProcessId: other-server + sampledTrace.detectPoint: SERVER + sampledTrace.componentId: 110 + + # [4] LOG_KIND false path: tag is not NET_PROFILING_SAMPLED_TRACE → no sampledTrace + - body-type: json + body: '{"latency":300,"uri":"/k8s/normal","reason":"slow","client_process":{"process_id":"proc-a","local":false,"address":"10.1.0.9:80"},"server_process":{"process_id":"proc-b","local":false,"address":"10.1.0.10:443"},"detect_point":"client","component":"http","ssl":true}' + service: k8s-svc-normal + instance: k8s-inst-normal + trace-id: trace-k8s-005 + timestamp: 1609459700000 + tags: + LOG_KIND: NORMAL_LOG + expect: + save: true + abort: false diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/k8s-service.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/k8s-service.yaml new file mode 100644 index 000000000000..2992b39ed7a7 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/k8s-service.yaml @@ -0,0 +1,61 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +rules: + - name: network-profiling-slow-trace + layer: K8S_SERVICE + dsl: | + filter { + json{ + } + extractor{ + if (tag("LOG_KIND") == "NET_PROFILING_SAMPLED_TRACE") { + sampledTrace { + latency parsed.latency as Long + uri parsed.uri as String + reason parsed.reason as String + + if (parsed.client_process.process_id as String != "") { + processId parsed.client_process.process_id as String + } else if (parsed.client_process.local as Boolean) { + processId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + processId ProcessRegistry.generateVirtualRemoteProcess(parsed.service as String, parsed.serviceInstance as String, parsed.client_process.address as String) as String + } + + if (parsed.server_process.process_id as String != "") { + destProcessId parsed.server_process.process_id as String + } else if (parsed.server_process.local as Boolean) { + destProcessId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + destProcessId ProcessRegistry.generateVirtualRemoteProcess(parsed.service as String, parsed.serviceInstance as String, parsed.server_process.address as String) as String + } + + detectPoint parsed.detect_point as String + + if (parsed.component as String ==
"http" && parsed.ssl as Boolean) { + componentId 129 + } else if (parsed.component as String == "http") { + componentId 49 + } else if (parsed.ssl as Boolean) { + componentId 130 + } else { + componentId 110 + } + } + } + } + } \ No newline at end of file diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/mesh-dp.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/mesh-dp.input.data new file mode 100644 index 000000000000..65ac8bf42b01 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/mesh-dp.input.data @@ -0,0 +1,89 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for mesh-dp.yaml rules. +# Covers all conditional branches: process resolution (explicit, local, remote), +# component types (http, tcp, other), and SSL combinations. 
+ +network-profiling-slow-trace: + # Case 1: HTTP + no SSL, explicit process IDs (componentId 49) + - body-type: json + body: '{"latency":200,"uri":"/mesh/api","reason":"slow","client_process":{"process_id":"client-proc-1","local":false,"address":"10.0.0.1:8080"},"server_process":{"process_id":"server-proc-2","local":true,"address":"10.0.0.2:9090"},"detect_point":"client","component":"http","ssl":false}' + service: mesh-test-svc + instance: mesh-test-instance + trace-id: trace-mesh-001 + timestamp: 1609459200000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-mesh-001 + sampledTrace.serviceName: mesh-test-svc + sampledTrace.serviceInstanceName: mesh-test-instance + sampledTrace.timestamp: 1609459200000 + sampledTrace.latency: 200 + sampledTrace.uri: /mesh/api + sampledTrace.reason: SLOW + sampledTrace.processId: client-proc-1 + sampledTrace.destProcessId: server-proc-2 + sampledTrace.detectPoint: CLIENT + sampledTrace.componentId: 49 + + # Case 2: HTTP + SSL, virtual local/remote process IDs (componentId 129) + - body-type: json + body: '{"latency":500,"uri":"/mesh/api/ssl","reason":"slow","client_process":{"process_id":"","local":true,"address":"127.0.0.1:8080"},"server_process":{"process_id":"","local":false,"address":"10.0.0.3:9090"},"detect_point":"server","component":"http","ssl":true}' + service: mesh-test-svc-ssl + instance: mesh-test-instance-ssl + trace-id: trace-mesh-002 + timestamp: 1609459300000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: trace-mesh-002 + # processId/destProcessId: virtual IDs from ProcessRegistry mock. + # Validated via v1-v2 comparison (not asserted here). 
+ sampledTrace.componentId: 129 + sampledTrace.detectPoint: SERVER + + # Case 3: non-HTTP + SSL (componentId 130), both remote + - body-type: json + body: '{"latency":100,"uri":"/mesh/api/tcp","reason":"slow","client_process":{"process_id":"","local":false,"address":"10.0.0.4:8080"},"server_process":{"process_id":"","local":false,"address":"10.0.0.5:9090"},"detect_point":"client","component":"tcp","ssl":true}' + service: mesh-test-svc-tcp-ssl + instance: mesh-test-instance-tcp-ssl + trace-id: trace-mesh-003 + timestamp: 1609459400000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.componentId: 130 + + # Case 4: non-HTTP + no SSL (componentId 110), both remote + - body-type: json + body: '{"latency":50,"uri":"/mesh/api/other","reason":"slow","client_process":{"process_id":"","local":false,"address":"10.0.0.6:8080"},"server_process":{"process_id":"","local":false,"address":"10.0.0.7:9090"},"detect_point":"client","component":"other","ssl":false}' + service: mesh-test-svc-other + instance: mesh-test-instance-other + trace-id: trace-mesh-004 + timestamp: 1609459500000 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.componentId: 110 diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/mesh-dp.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/mesh-dp.yaml new file mode 100644 index 000000000000..e8271ef8a708 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/mesh-dp.yaml @@ -0,0 +1,60 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +rules: + - name: network-profiling-slow-trace + layer: MESH_DP + dsl: | + filter { + json{ + } + extractor{ + if (tag("LOG_KIND") == "NET_PROFILING_SAMPLED_TRACE") { + sampledTrace { + latency parsed.latency as Long + uri parsed.uri as String + reason parsed.reason as String + + if (parsed.client_process.process_id as String != "") { + processId parsed.client_process.process_id as String + } else if (parsed.client_process.local as Boolean) { + processId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + processId ProcessRegistry.generateVirtualRemoteProcess(parsed.service as String, parsed.serviceInstance as String, parsed.client_process.address as String) as String + } + + if (parsed.server_process.process_id as String != "") { + destProcessId parsed.server_process.process_id as String + } else if (parsed.server_process.local as Boolean) { + destProcessId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + destProcessId ProcessRegistry.generateVirtualRemoteProcess(parsed.service as String, parsed.serviceInstance as String, parsed.server_process.address as String) as String + } + + detectPoint parsed.detect_point as String + + if (parsed.component as String == "http" && parsed.ssl as Boolean) { + componentId 129 + } else if (parsed.component as String == "http") { + componentId 49 + } else if (parsed.ssl as Boolean) { + componentId 130 + } else { + componentId 110 + } + } + } + } + } \ No newline at end of 
file diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/mysql-slowsql.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/mysql-slowsql.input.data new file mode 100644 index 000000000000..49d3f2ec27fc --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/mysql-slowsql.input.data @@ -0,0 +1,26 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for mysql-slowsql.yaml rules. + +mysql-slowsql: + body-type: json + body: '{"layer":"MYSQL","service":"db-svc","time":"1609459200000","id":"slow-1","statement":"SELECT 1","query_time":500}' + tags: + LOG_KIND: SLOW_SQL + expect: + service: db-svc + layer: MYSQL + timestamp: 1609459200000 diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/mysql-slowsql.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/mysql-slowsql.yaml new file mode 100644 index 000000000000..774da2955db6 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/mysql-slowsql.yaml @@ -0,0 +1,35 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +rules: + - name: mysql-slowsql + layer: MYSQL + dsl: | + filter { + json{ + } + extractor{ + layer parsed.layer as String + service parsed.service as String + timestamp parsed.time as String + if (tag("LOG_KIND") == "SLOW_SQL") { + slowSql { + id parsed.id as String + statement parsed.statement as String + latency parsed.query_time as Long + } + } + } + } diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/network-profiling-e2e.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/network-profiling-e2e.input.data new file mode 100644 index 000000000000..764e39e55457 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/network-profiling-e2e.input.data @@ -0,0 +1,46 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for network-profiling-e2e.yaml rules. +# This is the e2e override rule with string concatenation for URI +# and .split(":")[index].endsWith() for process resolution. +# +# Mirrors real e2e input: server_process.process_id is empty, triggering +# ProcessRegistry.generateVirtualProcess() fallback. Both v1 and v2 must +# produce the same destProcessId and successfully call submitSampledTrace() +# → builder.toRecord() → RecordStreamProcessor.getInstance().in(record). + +network-profiling-slow-trace: + body-type: json + body: '{"latency":1001,"uri":"/provider","trace_provider":"skywalking","reason":"slow","client_process":{"process_id":"0c106e314ae0dae7cb655c3835ca77c890caedd78f51971ff60e5919805109f9","local":false,"address":""},"server_process":{"process_id":"","local":false,"address":"10.96.79.185:80"},"detect_point":"client","component":"http","ssl":false,"status":200}' + service: service + instance: test-instance + trace-id: ee615aee17b111f195abba5e9985b671 + timestamp: 1772618841553 + tags: + LOG_KIND: NET_PROFILING_SAMPLED_TRACE + expect: + save: true + abort: false + sampledTrace.traceId: ee615aee17b111f195abba5e9985b671 + sampledTrace.serviceName: service + sampledTrace.serviceInstanceName: test-instance + sampledTrace.timestamp: 1772618841553 + sampledTrace.latency: 1001 + sampledTrace.uri: skywalking-/provider + sampledTrace.reason: SLOW + sampledTrace.processId: 0c106e314ae0dae7cb655c3835ca77c890caedd78f51971ff60e5919805109f9 + sampledTrace.detectPoint: CLIENT + sampledTrace.componentId: 49 diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/network-profiling-e2e.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/network-profiling-e2e.yaml new file mode 100644 index 000000000000..a9a78d8ec7c2 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/network-profiling-e2e.yaml @@ -0,0 +1,71 @@ +# Licensed to 
the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# LAL script from the eBPF network-profiling e2e test override +# (test/e2e-v2/cases/profiling/ebpf/network/kubernetes-values.yaml) +# +# This uses Groovy constructs beyond the standard k8s-service.yaml: +# - String concatenation with + operator +# - .split(":")[index] array indexing +# - Method chaining after type cast: (x as String).split(...)[0].endsWith(...) 
+rules: + - name: network-profiling-slow-trace + layer: K8S_SERVICE + dsl: | + filter { + json{ + } + extractor{ + if (tag("LOG_KIND") == "NET_PROFILING_SAMPLED_TRACE") { + sampledTrace { + latency parsed.latency as Long + uri ((parsed.trace_provider as String) + "-" + (parsed.uri as String)) + reason parsed.reason as String + + if (parsed.client_process.process_id as String != "") { + processId parsed.client_process.process_id as String + } else if (parsed.client_process.local as Boolean + || (parsed.client_process.address as String).split(":")[0].endsWith('.1') + || (parsed.client_process.address as String).split(":")[1] == "53") { + processId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + processId ProcessRegistry.generateVirtualProcess(parsed.service as String, parsed.serviceInstance as String, 'UNKNOWN_REMOTE') as String + } + + if (parsed.server_process.process_id as String != "") { + destProcessId parsed.server_process.process_id as String + } else if (parsed.server_process.local as Boolean + || (parsed.server_process.address as String).split(":")[0].endsWith('.1') + || (parsed.server_process.address as String).split(":")[1] == "53") { + destProcessId ProcessRegistry.generateVirtualLocalProcess(parsed.service as String, parsed.serviceInstance as String) as String + } else { + destProcessId ProcessRegistry.generateVirtualProcess(parsed.service as String, parsed.serviceInstance as String, 'UNKNOWN_REMOTE') as String + } + + detectPoint parsed.detect_point as String + + if (parsed.component as String == "http" && parsed.ssl as Boolean) { + componentId 129 + } else if (parsed.component as String == "http") { + componentId 49 + } else if (parsed.ssl as Boolean) { + componentId 130 + } else { + componentId 110 + } + } + } + } + } diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/nginx.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/nginx.input.data new file 
mode 100644 index 000000000000..5e5125eb7a47 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/nginx.input.data @@ -0,0 +1,32 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for nginx.yaml rules. + +nginx-access-log: + body-type: text + body: '10.0.0.1 - - [01/Jan/2021:00:00:00 +0000] "GET /api HTTP/1.1" 200 1234' + tags: + LOG_KIND: NGINX_ACCESS_LOG + expect: + tag.http.status_code: "200" + +nginx-error-log: + body-type: text + body: '2021/01/01 00:00:00 [error] 123#123: test error message' + tags: + LOG_KIND: NGINX_ERROR_LOG + expect: + tag.level: error diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/nginx.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/nginx.yaml new file mode 100644 index 000000000000..d6c50dd4c0fd --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/nginx.yaml @@ -0,0 +1,60 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +rules: + - name: nginx-access-log + layer: NGINX + dsl: | + filter { + if (tag("LOG_KIND") == "NGINX_ACCESS_LOG") { + text { + regexp $/.+ \"(?<request>.+)\" (?<status>\d{3}) .+/$ + } + + extractor { + if (parsed.status) { + tag 'http.status_code': parsed.status + } + } + + sink { + } + } + } + - name: nginx-error-log + layer: NGINX + dsl: | + filter { + if (tag("LOG_KIND") == "NGINX_ERROR_LOG") { + text { + regexp $/(?<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[(?<level>.+)].*/$ + } + + extractor { + tag level: parsed.level + timestamp parsed.time as String, "yyyy/MM/dd HH:mm:ss" + + metrics { + timestamp log.timestamp as Long + labels level: parsed.level, service: log.service, service_instance_id: log.serviceInstance + name "nginx_error_log_count" + value 1 + } + } + + sink { + } + } + } diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/pgsql-slowsql.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/pgsql-slowsql.input.data new file mode 100644 index 000000000000..ffdae9b64bce --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/pgsql-slowsql.input.data @@ -0,0 +1,26 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for pgsql-slowsql.yaml rules. + +pgsql-slowsql: + body-type: json + body: '{"layer":"POSTGRESQL","service":"pg-svc","time":"1609459200000","id":"slow-pg-1","statement":"SELECT 1","query_time":300}' + tags: + LOG_KIND: SLOW_SQL + expect: + service: pg-svc + layer: POSTGRESQL + timestamp: 1609459200000 diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/pgsql-slowsql.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/pgsql-slowsql.yaml new file mode 100644 index 000000000000..be3aeb291d9c --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/pgsql-slowsql.yaml @@ -0,0 +1,35 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +rules: + - name: pgsql-slowsql + layer: POSTGRESQL + dsl: | + filter { + json{ + } + extractor{ + layer parsed.layer as String + service parsed.service as String + timestamp parsed.time as String + if (tag("LOG_KIND") == "SLOW_SQL") { + slowSql { + id parsed.id as String + statement parsed.statement as String + latency parsed.query_time as Long + } + } + } + } diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/redis-slowsql.input.data b/test/script-cases/scripts/lal/test-lal/oap-cases/redis-slowsql.input.data new file mode 100644 index 000000000000..95dc05b4946b --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/redis-slowsql.input.data @@ -0,0 +1,26 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Mock input data for redis-slowsql.yaml rules. 
+ +redis-slowsql: + body-type: json + body: '{"layer":"REDIS","service":"redis-svc","time":"1609459200000","id":"slow-redis-1","statement":"GET key","query_time":200}' + tags: + LOG_KIND: SLOW_SQL + expect: + service: redis-svc + layer: REDIS + timestamp: 1609459200000 diff --git a/test/script-cases/scripts/lal/test-lal/oap-cases/redis-slowsql.yaml b/test/script-cases/scripts/lal/test-lal/oap-cases/redis-slowsql.yaml new file mode 100644 index 000000000000..b4873ad88c73 --- /dev/null +++ b/test/script-cases/scripts/lal/test-lal/oap-cases/redis-slowsql.yaml @@ -0,0 +1,35 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +rules: + - name: redis-slowsql + layer: REDIS + dsl: | + filter { + json{ + } + extractor{ + layer parsed.layer as String + service parsed.service as String + timestamp parsed.time as String + if (tag("LOG_KIND") == "SLOW_SQL") { + slowSql { + id parsed.id as String + statement parsed.statement as String + latency parsed.query_time as Long + } + } + } + } diff --git a/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-ca.data.yaml b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-ca.data.yaml new file mode 100644 index 000000000000..940dbd467400 --- /dev/null +++ b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-ca.data.yaml @@ -0,0 +1,75 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + envoy_cluster_metrics: + - labels: + app: test-app + secret_name: test-secret + instance: test-instance + metrics_name: cluster.outbound.test-cluster.ssl.certificate.test-cert.expiration_unix_time_seconds + value: 100.0 + envoy_listener_metrics: + - labels: + app: test-app + secret_name: test-secret + instance: test-instance + metrics_name: cluster.outbound.test-cluster.ssl.certificate.test-cert.expiration_unix_time_seconds + value: 100.0 +expected: + envoy_service_cluster_ssl_ca_expiration_seconds: + entities: + - scope: SERVICE + service: test-app + layer: MESH_DP + samples: + - labels: + app: test-app + secret_name: test-cert + value: -1.772760608E9 + envoy_service_listener_ssl_ca_expiration_seconds: + entities: + - scope: SERVICE + service: test-app + layer: MESH_DP + samples: + - labels: + app: test-app + secret_name: test-cert + value: -1.772760608E9 + envoy_instance_cluster_ssl_ca_expiration_seconds: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + secret_name: test-cert + value: -1.772760608E9 + envoy_instance_listener_ssl_ca_expiration_seconds: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + secret_name: test-cert + value: -1.772760608E9 diff --git a/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-ca.yaml b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-ca.yaml new file mode 100644 index 000000000000..570dec64dff2 --- /dev/null +++ b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-ca.yaml @@ -0,0 +1,60 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +metricPrefix: envoy +metricsRules: + - name: service_cluster_ssl_ca_expiration_seconds + exp: |- + (envoy_cluster_metrics.tagMatch('metrics_name' , '.*ssl.*expiration_unix_time_seconds').tag({ tags -> + def matcher = (tags.metrics_name =~ /\.ssl.certificate\.([^.]+)\.expiration_unix_time_seconds/) + tags.secret_name = matcher ? 
matcher[0][1] : "unknown" + }).min(['app', 'secret_name']) - time()).downsampling(MIN).service(['app'], Layer.MESH_DP) + + - name: service_listener_ssl_ca_expiration_seconds + exp: |- + (envoy_listener_metrics.tagMatch('metrics_name' , '.*ssl.*expiration_unix_time_seconds').tag({ tags -> + def matcher = (tags.metrics_name =~ /\.ssl.certificate\.([^.]+)\.expiration_unix_time_seconds/) + tags.secret_name = matcher ? matcher[0][1] : "unknown" + }).min(['app', 'secret_name']) - time()).downsampling(MIN).service(['app'], Layer.MESH_DP) + + - name: instance_cluster_ssl_ca_expiration_seconds + exp: |- + (envoy_cluster_metrics.tagMatch('metrics_name' , '.*ssl.*expiration_unix_time_seconds').tag({ tags -> + def matcher = (tags.metrics_name =~ /\.ssl.certificate\.([^.]+)\.expiration_unix_time_seconds/) + tags.secret_name = matcher ? matcher[0][1] : "unknown" + }).min(['app', 'instance', 'secret_name']) - time()).downsampling(MIN).instance(['app'], ['instance'], Layer.MESH_DP) + + - name: instance_listener_ssl_ca_expiration_seconds + exp: |- + (envoy_listener_metrics.tagMatch('metrics_name' , '.*ssl.*expiration_unix_time_seconds').tag({ tags -> + def matcher = (tags.metrics_name =~ /\.ssl.certificate\.([^.]+)\.expiration_unix_time_seconds/) + tags.secret_name = matcher ? matcher[0][1] : "unknown" + }).min(['app', 'instance', 'secret_name']) - time()).downsampling(MIN).instance(['app'], ['instance'], Layer.MESH_DP) diff --git a/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-svc-relation.data.yaml b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-svc-relation.data.yaml new file mode 100644 index 000000000000..b05f6d98572e --- /dev/null +++ b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-svc-relation.data.yaml @@ -0,0 +1,116 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + envoy_cluster_metrics: + - labels: + app: test-app + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_cx_active + value: 100.0 + - labels: + app: test-app + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_cx_total + value: 100.0 + - labels: + app: test-app + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_rq_active + value: 100.0 + - labels: + app: test-app + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_rq_total + value: 100.0 + - labels: + app: test-app + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_rq_pending_active + value: 100.0 + - labels: + app: test-app + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.lb_healthy_panic + value: 100.0 + - labels: + app: test-app + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_cx_none_healthy + value: 100.0 +expected: + envoy_sr_cluster_up_cx_active: + entities: + - scope: SERVICE_RELATION + layer: MESH_DP + samples: + - labels: + app: test-app + cluster_name: test-cluster + value: 100.0 + envoy_sr_cluster_up_cx_incr: + entities: + - scope: SERVICE_RELATION + layer: MESH_DP + samples: + - labels: + app: test-app + cluster_name: test-cluster + value: 50.0 + envoy_sr_cluster_up_rq_active: + entities: + - scope: 
SERVICE_RELATION + layer: MESH_DP + samples: + - labels: + app: test-app + cluster_name: test-cluster + value: 100.0 + envoy_sr_cluster_up_rq_incr: + entities: + - scope: SERVICE_RELATION + layer: MESH_DP + samples: + - labels: + app: test-app + cluster_name: test-cluster + value: 50.0 + envoy_sr_cluster_up_rq_pending_active: + entities: + - scope: SERVICE_RELATION + layer: MESH_DP + samples: + - labels: + app: test-app + cluster_name: test-cluster + value: 100.0 + envoy_sr_cluster_lb_healthy_panic_incr: + entities: + - scope: SERVICE_RELATION + layer: MESH_DP + samples: + - labels: + app: test-app + cluster_name: test-cluster + value: 50.0 + envoy_sr_cluster_up_cx_none_healthy_incr: + entities: + - scope: SERVICE_RELATION + layer: MESH_DP + samples: + - labels: + app: test-app + cluster_name: test-cluster + value: 50.0 diff --git a/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-svc-relation.yaml b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-svc-relation.yaml new file mode 100644 index 000000000000..131c4db1471c --- /dev/null +++ b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy-svc-relation.yaml @@ -0,0 +1,48 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +expSuffix: serviceRelation(DetectPoint.CLIENT, ['app'], ['cluster_name'], Layer.MESH_DP) +metricPrefix: envoy_sr +metricsRules: + - name: cluster_up_cx_active + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_cx_active').tagMatch('metrics_name' , 'cluster.outbound.+').sum(['app' ,'cluster_name']) + - name: cluster_up_cx_incr + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_cx_total').tagMatch('metrics_name' , 'cluster.outbound.+').sum(['app' , 'cluster_name']).increase('PT1M') + - name: cluster_up_rq_active + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_rq_active').tagMatch('metrics_name' , 'cluster.outbound.+').sum(['app', 'cluster_name']) + - name: cluster_up_rq_incr + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_rq_total').tagMatch('metrics_name' , 'cluster.outbound.+').sum(['app' , 'cluster_name']).increase('PT1M') + - name: cluster_up_rq_pending_active + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_rq_pending_active').tagMatch('metrics_name' , 'cluster.outbound.+').sum(['app' , 'cluster_name']) + - name: cluster_lb_healthy_panic_incr + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+lb_healthy_panic').tagMatch('metrics_name' , 'cluster.outbound.+').sum(['app', 
'cluster_name']).increase('PT1M') + - name: cluster_up_cx_none_healthy_incr + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_cx_none_healthy').tagMatch('metrics_name' , 'cluster.outbound.+').sum(['app', 'cluster_name']).increase('PT1M') diff --git a/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy.data.yaml b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy.data.yaml new file mode 100644 index 000000000000..57d22d469211 --- /dev/null +++ b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy.data.yaml @@ -0,0 +1,312 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + server_memory_heap_size: + - labels: + app: test-app + instance: test-instance + value: 100.0 + server_memory_allocated: + - labels: + app: test-app + instance: test-instance + value: 100.0 + server_memory_physical_size: + - labels: + app: test-app + instance: test-instance + value: 100.0 + server_total_connections: + - labels: + app: test-app + instance: test-instance + value: 100.0 + server_parent_connections: + - labels: + app: test-app + instance: test-instance + value: 100.0 + server_concurrency: + - labels: + app: test-app + instance: test-instance + value: 100.0 + server_envoy_bug_failures: + - labels: + value: 100.0 + envoy_cluster_metrics: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.membership_healthy + value: 100.0 + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_cx_active + value: 100.0 + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_cx_total + value: 100.0 + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_rq_active + value: 100.0 + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_rq_total + value: 100.0 + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_rq_pending_active + value: 100.0 + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.lb_healthy_panic + value: 100.0 + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + metrics_name: cluster.outbound.test-cluster.upstream_cx_none_healthy + value: 100.0 +expected: + 
envoy_heap_memory_used: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_heap_memory_max_used: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_memory_allocated: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_memory_allocated_max: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_memory_physical_size: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_memory_physical_size_max: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_total_connections_used: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_parent_connections_used: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_worker_threads: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_worker_threads_max: + entities: + - scope: SERVICE_INSTANCE + service: test-app + 
instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + value: 100.0 + envoy_bug_failures: + entities: + - scope: SERVICE_INSTANCE + layer: MESH_DP + samples: + - labels: + value: 100.0 + envoy_cluster_membership_healthy: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 100.0 + envoy_cluster_up_cx_active: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 100.0 + envoy_cluster_up_cx_incr: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 50.0 + envoy_cluster_up_rq_active: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 100.0 + envoy_cluster_up_rq_incr: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 50.0 + envoy_cluster_up_rq_pending_active: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 100.0 + envoy_cluster_lb_healthy_panic_incr: + entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 50.0 + envoy_cluster_up_cx_none_healthy_incr: + 
entities: + - scope: SERVICE_INSTANCE + service: test-app + instance: test-instance + layer: MESH_DP + samples: + - labels: + app: test-app + instance: test-instance + cluster_name: test-cluster + value: 50.0 diff --git a/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy.yaml b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy.yaml new file mode 100644 index 000000000000..bde5b02f6f1d --- /dev/null +++ b/test/script-cases/scripts/mal/test-envoy-metrics-rules/envoy.yaml @@ -0,0 +1,77 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +expSuffix: instance(['app'], ['instance'], Layer.MESH_DP) +metricPrefix: envoy +metricsRules: + - name: heap_memory_used + exp: server_memory_heap_size + - name: heap_memory_max_used + exp: server_memory_heap_size.max(['app', 'instance']) + - name: memory_allocated + exp: server_memory_allocated + - name: memory_allocated_max + exp: server_memory_allocated.max(['app', 'instance']) + - name: memory_physical_size + exp: server_memory_physical_size + - name: memory_physical_size_max + exp: server_memory_physical_size.max(['app', 'instance']) + + - name: total_connections_used + exp: server_total_connections.max(['app', 'instance']) + - name: parent_connections_used + exp: server_parent_connections.max(['app', 'instance']) + + - name: worker_threads + exp: server_concurrency + - name: worker_threads_max + exp: server_concurrency.max(['app', 'instance']) + + - name: bug_failures + exp: server_envoy_bug_failures + + # envoy_cluster_metrics + - name: cluster_membership_healthy + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+membership_healthy').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').tagNotMatch('cluster_name' , '.+kube-system').sum(['app', 'instance' , 'cluster_name']) + - name: cluster_up_cx_active + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_cx_active').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').sum(['app', 'instance' , 'cluster_name']) + - name: cluster_up_cx_incr + exp: 
envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_cx_total').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').sum(['app', 'instance' , 'cluster_name']).increase('PT1M') + - name: cluster_up_rq_active + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_rq_active').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').sum(['app', 'instance' , 'cluster_name']) + - name: cluster_up_rq_incr + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_rq_total').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').sum(['app', 'instance' , 'cluster_name']).increase('PT1M') + - name: cluster_up_rq_pending_active + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_rq_pending_active').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').sum(['app', 'instance' , 'cluster_name']) + - name: cluster_lb_healthy_panic_incr + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+lb_healthy_panic').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').sum(['app', 'instance' , 'cluster_name']).increase('PT1M') + - name: cluster_up_cx_none_healthy_incr + exp: envoy_cluster_metrics.tagMatch('metrics_name' , '.+upstream_cx_none_healthy').tagMatch('metrics_name' , 'cluster.outbound.+|cluster.inbound.+').sum(['app', 'instance' , 'cluster_name']).increase('PT1M') diff --git a/test/script-cases/scripts/mal/test-log-mal-rules/nginx.data.yaml b/test/script-cases/scripts/mal/test-log-mal-rules/nginx.data.yaml new file mode 100644 index 000000000000..9591a5015c31 --- /dev/null +++ b/test/script-cases/scripts/mal/test-log-mal-rules/nginx.data.yaml @@ -0,0 +1,46 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + nginx_error_log_count: + - labels: + level: ERROR + service: test-service + service_instance_id: test-instance + value: 100.0 +expected: + meter_nginx_service_error_log_count: + entities: + - scope: SERVICE + service: test-service + layer: NGINX + samples: + - labels: + level: ERROR + service: test-service + service_instance_id: test-instance + value: 100.0 + meter_nginx_instance_error_log_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: NGINX + samples: + - labels: + level: ERROR + service: test-service + service_instance_id: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-log-mal-rules/nginx.yaml b/test/script-cases/scripts/mal/test-log-mal-rules/nginx.yaml new file mode 100644 index 000000000000..92f847fd7a4a --- /dev/null +++ b/test/script-cases/scripts/mal/test-log-mal-rules/nginx.yaml @@ -0,0 +1,36 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +metricPrefix: meter_nginx +metricsRules: + - name: service_error_log_count + exp: nginx_error_log_count.sum(['level', 'service', 'service_instance_id']).downsampling(SUM).service(['service'], Layer.NGINX) + - name: instance_error_log_count + exp: nginx_error_log_count.sum(['level', 'service', 'service_instance_id']).downsampling(SUM).instance(['service'], ['service_instance_id'], Layer.NGINX) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/continuous-profiling.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/continuous-profiling.data.yaml new file mode 100644 index 000000000000..0964eac63a4d --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/continuous-profiling.data.yaml @@ -0,0 +1,121 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license 
agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + rover_con_p_process_cpu: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + value: 100.0 + rover_con_p_process_thread_count: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + value: 100.0 + rover_con_p_system_load: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + value: 100.0 + rover_con_p_http_error_rate: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + uri: /test-uri + value: 100.0 + rover_con_p_http_avg_response_time: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + uri: /test-uri + value: 100.0 +expected: + continuous_profiling_process_cpu: + entities: + - scope: PROCESS + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + value: 1000000.0 + continuous_profiling_process_thread_count: + entities: + - scope: PROCESS + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: 
test-instance + process_name: test-process + layer: GENERAL + value: 100.0 + continuous_profiling_system_load: + entities: + - scope: PROCESS + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + value: 10000.0 + continuous_profiling_http_error_rate: + entities: + - scope: PROCESS + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + uri: /test-uri + value: 10000.0 + continuous_profiling_http_avg_response_time: + entities: + - scope: PROCESS + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + process_name: test-process + layer: GENERAL + uri: /test-uri + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/continuous-profiling.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/continuous-profiling.yaml new file mode 100644 index 000000000000..a5c6bdddd76f --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/continuous-profiling.yaml @@ -0,0 +1,28 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +expSuffix: process(['service'], ['instance'], ['process_name'], 'layer') +metricPrefix: continuous_profiling +metricsRules: + - name: process_cpu + exp: rover_con_p_process_cpu.avg(["service", "instance", "process_name", "layer"]).multiply(10000) + - name: process_thread_count + exp: rover_con_p_process_thread_count.avg(["service", "instance", "process_name", "layer"]) + - name: system_load + exp: rover_con_p_system_load.avg(["service", "instance", "process_name", "layer"]).multiply(100) + - name: http_error_rate + exp: rover_con_p_http_error_rate.avg(["service", "instance", "process_name", "layer", "uri"]).multiply(100) + - name: http_avg_response_time + exp: rover_con_p_http_avg_response_time.avg(["service", "instance", "process_name", "layer", "uri"]) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/datasource.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/datasource.data.yaml new file mode 100644 index 000000000000..ae55320b4ad0 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/datasource.data.yaml @@ -0,0 +1,37 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +input: + datasource: + - labels: + service: test-service + instance: test-instance + name: test-name + status: active + value: 100.0 +expected: + meter_datasource: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + name: test-name + status: active + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/datasource.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/datasource.yaml new file mode 100644 index 000000000000..411279a870f8 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/datasource.yaml @@ -0,0 +1,20 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +expSuffix: instance(['service'], ['instance'], Layer.GENERAL) +metricPrefix: meter +metricsRules: + - name: datasource + exp: datasource.sum(['service', 'instance', 'name', 'status']) diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/go-agent.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-agent.data.yaml new file mode 100644 index 000000000000..9b25bcf02da5 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-agent.data.yaml @@ -0,0 +1,179 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + sw_go_created_tracing_context_counter: + - labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 100.0 + sw_go_finished_tracing_context_counter: + - labels: + service: test-service + instance: test-instance + value: 100.0 + sw_go_created_ignored_context_counter: + - labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 100.0 + sw_go_finished_ignored_context_counter: + - labels: + service: test-service + instance: test-instance + value: 100.0 + sw_go_possible_leaked_context_counter: + - labels: + source: test-source + service: test-service + instance: test-instance + value: 100.0 + sw_go_interceptor_error_counter: + - labels: + plugin_name: test-plugin + service: test-service + instance: test-instance + value: 100.0 + sw_go_tracing_context_performance: + - labels: + le: '50' + service: test-service + instance: test-instance + value: 10.0 + - labels: + le: '100' + service: test-service + instance: test-instance + value: 20.0 + - labels: + le: '250' + service: test-service + instance: test-instance + value: 30.0 + - labels: + le: '500' + service: test-service + instance: test-instance + value: 40.0 + - labels: + le: '1000' + service: test-service + instance: test-instance + value: 50.0 +expected: + meter_sw_go_created_tracing_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_GO_AGENT + samples: + - labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 50.0 + meter_sw_go_finished_tracing_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_GO_AGENT + samples: + - labels: + service: test-service + instance: test-instance + value: 50.0 + meter_sw_go_created_ignored_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_GO_AGENT + samples: + - 
labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 50.0 + meter_sw_go_finished_ignored_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_GO_AGENT + samples: + - labels: + service: test-service + instance: test-instance + value: 50.0 + meter_sw_go_possible_leaked_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_GO_AGENT + samples: + - labels: + source: test-source + service: test-service + instance: test-instance + value: 50.0 + meter_sw_go_interceptor_error_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_GO_AGENT + samples: + - labels: + plugin_name: test-plugin + service: test-service + instance: test-instance + value: 50.0 + meter_sw_go_tracing_context_execution_time_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_GO_AGENT + samples: + - labels: + service: test-service + instance: test-instance + le: '1000000' + value: 50.0 + - labels: + service: test-service + instance: test-instance + le: '100000' + value: 20.0 + - labels: + service: test-service + instance: test-instance + le: '250000' + value: 30.0 + - labels: + service: test-service + instance: test-instance + le: '500000' + value: 40.0 + - labels: + service: test-service + instance: test-instance + le: '50000' + value: 10.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/go-agent.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-agent.yaml new file mode 100644 index 000000000000..f962c1b18c3c --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-agent.yaml @@ -0,0 +1,32 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +expSuffix: instance(['service'], ['instance'], Layer.SO11Y_GO_AGENT) +metricPrefix: meter +metricsRules: + - name: sw_go_created_tracing_context_count + exp: sw_go_created_tracing_context_counter.sum(['created_by', 'service', 'instance']).increase('PT1M') + - name: sw_go_finished_tracing_context_count + exp: sw_go_finished_tracing_context_counter.sum(['service', 'instance']).increase('PT1M') + - name: sw_go_created_ignored_context_count + exp: sw_go_created_ignored_context_counter.sum(['created_by', 'service', 'instance']).increase('PT1M') + - name: sw_go_finished_ignored_context_count + exp: sw_go_finished_ignored_context_counter.sum(['service', 'instance']).increase('PT1M') + - name: sw_go_possible_leaked_context_count + exp: sw_go_possible_leaked_context_counter.sum(['source', 'service', 'instance']).increase('PT1M') + - name: sw_go_interceptor_error_count + exp: sw_go_interceptor_error_counter.sum(['plugin_name', 'service', 'instance']).increase('PT1M') + - name: sw_go_tracing_context_execution_time_percentile + exp: sw_go_tracing_context_performance.sum(['le', 'service', 'instance']).histogram().histogram_percentile([50,75,90,95,99]) diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/go-runtime.data.yaml 
b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-runtime.data.yaml new file mode 100644 index 000000000000..a432b18dfec2 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-runtime.data.yaml @@ -0,0 +1,359 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + instance_golang_heap_alloc: + - labels: + value: 100.0 + instance_golang_stack_used: + - labels: + value: 100.0 + instance_golang_gc_pause_time: + - labels: + value: 100.0 + instance_golang_gc_count: + - labels: + value: 100.0 + instance_golang_os_threads_num: + - labels: + value: 100.0 + instance_golang_live_goroutines_num: + - labels: + value: 100.0 + instance_host_cpu_used_rate: + - labels: + value: 100.0 + instance_host_mem_used_rate: + - labels: + value: 100.0 + instance_golang_heap_alloc_size: + - labels: + value: 100.0 + instance_golang_gc_count_labeled: + - labels: + service: test-service + instance: test-instance + type: cds + value: 100.0 + instance_golang_heap_alloc_objects: + - labels: + value: 100.0 + instance_golang_heap_frees: + - labels: + value: 100.0 + instance_golang_heap_frees_objects: + - labels: + value: 100.0 + instance_golang_memory_heap_labeled: + - labels: + service: test-service + instance: test-instance + type: cds + value: 100.0 + instance_golang_metadata_mcache_labeled: + - labels: + service: test-service + instance: test-instance + type: cds + value: 100.0 + instance_golang_metadata_mspan_labeled: + - labels: + service: test-service + instance: test-instance + type: cds + value: 100.0 + instance_golang_cgo_calls: + - labels: + value: 100.0 + instance_golang_gc_heap_goal: + - labels: + value: 100.0 + instance_golang_gc_heap_objects: + - labels: + value: 100.0 + instance_golang_gc_heap_tiny_allocs: + - labels: + value: 100.0 + instance_golang_gc_limiter_last_enabled: + - labels: + value: 100.0 + instance_golang_gc_stack_starting_size: + - labels: + value: 100.0 + instance_golang_memory_metadata_other: + - labels: + value: 100.0 + instance_golang_memory_os_stacks: + - labels: + value: 100.0 + instance_golang_memory_other: + - labels: + value: 100.0 + instance_golang_memory_profiling_buckets: + - labels: + value: 100.0 + instance_golang_memory_total: + - labels: + value: 100.0 + instance_golang_gc_heap_allocs_by_size: + - 
labels: + value: 100.0 + instance_golang_gc_heap_frees_by_size: + - labels: + value: 100.0 + instance_golang_gc_pauses: + - labels: + value: 100.0 + instance_golang_sched_latencies: + - labels: + value: 100.0 +expected: + meter_instance_golang_heap_alloc: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_stack_used: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_pause_time: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 50.0 + meter_instance_golang_gc_count: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 50.0 + meter_instance_golang_os_threads_num: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_live_goroutines_num: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_host_cpu_used_rate: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_host_mem_used_rate: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_heap_alloc_rate: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 50.0 + meter_instance_golang_gc_count_labeled: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + type: cds + value: 50.0 + meter_instance_golang_heap_alloc_objects: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_heap_frees: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 50.0 + meter_instance_golang_heap_frees_objects: + entities: + - 
scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 50.0 + meter_instance_golang_memory_heap_labeled: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + type: cds + value: 100.0 + meter_instance_golang_metadata_mcache_labeled: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + type: cds + value: 100.0 + meter_instance_golang_metadata_mspan_labeled: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + service: test-service + instance: test-instance + type: cds + value: 100.0 + meter_instance_golang_cgo_calls: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 50.0 + meter_instance_golang_gc_heap_goal: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_heap_objects: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_heap_tiny_allocs: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_limiter_last_enabled: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_stack_starting_size: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_memory_metadata_other: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_memory_os_stacks: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_memory_other: + entities: + - 
scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_memory_profiling_buckets: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_memory_total: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_heap_allocs_by_size: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_heap_frees_by_size: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_gc_pauses: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 + meter_instance_golang_sched_latencies: + entities: + - scope: SERVICE_INSTANCE + layer: GENERAL + samples: + - labels: + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/go-runtime.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-runtime.yaml new file mode 100644 index 000000000000..713cb11d1f31 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/go-runtime.yaml @@ -0,0 +1,80 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +expSuffix: instance(['service'], ['instance'], Layer.GENERAL) +metricPrefix: meter +metricsRules: + - name: instance_golang_heap_alloc + exp: instance_golang_heap_alloc + - name: instance_golang_stack_used + exp: instance_golang_stack_used + - name: instance_golang_gc_pause_time + exp: instance_golang_gc_pause_time.increase('PT1M') + - name: instance_golang_gc_count + exp: instance_golang_gc_count.increase('PT1M') + - name: instance_golang_os_threads_num + exp: instance_golang_os_threads_num + - name: instance_golang_live_goroutines_num + exp: instance_golang_live_goroutines_num + - name: instance_host_cpu_used_rate + exp: instance_host_cpu_used_rate + - name: instance_host_mem_used_rate + exp: instance_host_mem_used_rate + - name: instance_golang_heap_alloc_rate + exp: instance_golang_heap_alloc_size.increase('PT1M') + - name: instance_golang_gc_count_labeled + exp: instance_golang_gc_count_labeled.sum(['service', 'instance', 'type']).increase('PT1M') + - name: instance_golang_heap_alloc_objects + exp: instance_golang_heap_alloc_objects + - name: instance_golang_heap_frees + exp: instance_golang_heap_frees.increase('PT1M') + - name: instance_golang_heap_frees_objects + exp: instance_golang_heap_frees_objects.increase('PT1M') + - name: instance_golang_memory_heap_labeled + exp: instance_golang_memory_heap_labeled.sum(['service', 'instance', 'type']) + - name: instance_golang_metadata_mcache_labeled + exp: instance_golang_metadata_mcache_labeled.sum(['service', 'instance', 'type']) + - name: instance_golang_metadata_mspan_labeled + exp: instance_golang_metadata_mspan_labeled.sum(['service', 'instance', 'type']) + - name: instance_golang_cgo_calls + exp: instance_golang_cgo_calls.increase('PT1M') + - name: instance_golang_gc_heap_goal + exp: instance_golang_gc_heap_goal + - name: instance_golang_gc_heap_objects + exp: instance_golang_gc_heap_objects + - name: 
instance_golang_gc_heap_tiny_allocs + exp: instance_golang_gc_heap_tiny_allocs + - name: instance_golang_gc_limiter_last_enabled + exp: instance_golang_gc_limiter_last_enabled + - name: instance_golang_gc_stack_starting_size + exp: instance_golang_gc_stack_starting_size + - name: instance_golang_memory_metadata_other + exp: instance_golang_memory_metadata_other + - name: instance_golang_memory_os_stacks + exp: instance_golang_memory_os_stacks + - name: instance_golang_memory_other + exp: instance_golang_memory_other + - name: instance_golang_memory_profiling_buckets + exp: instance_golang_memory_profiling_buckets + - name: instance_golang_memory_total + exp: instance_golang_memory_total + - name: instance_golang_gc_heap_allocs_by_size + exp: instance_golang_gc_heap_allocs_by_size.histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: instance_golang_gc_heap_frees_by_size + exp: instance_golang_gc_heap_frees_by_size.histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: instance_golang_gc_pauses + exp: instance_golang_gc_pauses.histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: instance_golang_sched_latencies + exp: instance_golang_sched_latencies.histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/java-agent.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/java-agent.data.yaml new file mode 100644 index 000000000000..e62a7a0dc556 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/java-agent.data.yaml @@ -0,0 +1,181 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + created_tracing_context_counter: + - labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 100.0 + finished_tracing_context_counter: + - labels: + service: test-service + instance: test-instance + value: 100.0 + created_ignored_context_counter: + - labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 100.0 + finished_ignored_context_counter: + - labels: + service: test-service + instance: test-instance + value: 100.0 + possible_leaked_context_counter: + - labels: + source: test-source + service: test-service + instance: test-instance + value: 100.0 + interceptor_error_counter: + - labels: + plugin_name: test-plugin + inter_type: test-type + service: test-service + instance: test-instance + value: 100.0 + tracing_context_performance: + - labels: + le: '50' + service: test-service + instance: test-instance + value: 10.0 + - labels: + le: '100' + service: test-service + instance: test-instance + value: 20.0 + - labels: + le: '250' + service: test-service + instance: test-instance + value: 30.0 + - labels: + le: '500' + service: test-service + instance: test-instance + value: 40.0 + - labels: + le: '1000' + service: test-service + instance: test-instance + value: 50.0 +expected: + meter_java_agent_created_tracing_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: 
test-instance + layer: SO11Y_JAVA_AGENT + samples: + - labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 50.0 + meter_java_agent_finished_tracing_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_JAVA_AGENT + samples: + - labels: + service: test-service + instance: test-instance + value: 50.0 + meter_java_agent_created_ignored_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_JAVA_AGENT + samples: + - labels: + created_by: test-creator + service: test-service + instance: test-instance + value: 50.0 + meter_java_agent_finished_ignored_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_JAVA_AGENT + samples: + - labels: + service: test-service + instance: test-instance + value: 50.0 + meter_java_agent_possible_leaked_context_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_JAVA_AGENT + samples: + - labels: + source: test-source + service: test-service + instance: test-instance + value: 50.0 + meter_java_agent_interceptor_error_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_JAVA_AGENT + samples: + - labels: + plugin_name: test-plugin + inter_type: test-type + service: test-service + instance: test-instance + value: 50.0 + meter_java_agent_tracing_context_execution_time_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: SO11Y_JAVA_AGENT + samples: + - labels: + service: test-service + instance: test-instance + le: '1000000' + value: 50.0 + - labels: + service: test-service + instance: test-instance + le: '100000' + value: 20.0 + - labels: + service: test-service + instance: test-instance + le: '250000' + value: 30.0 + - labels: + 
service: test-service + instance: test-instance + le: '500000' + value: 40.0 + - labels: + service: test-service + instance: test-instance + le: '50000' + value: 10.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/java-agent.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/java-agent.yaml new file mode 100644 index 000000000000..fb6cc1d6d77c --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/java-agent.yaml @@ -0,0 +1,32 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +expSuffix: instance(['service'], ['instance'], Layer.SO11Y_JAVA_AGENT) +metricPrefix: meter_java_agent +metricsRules: + - name: created_tracing_context_count + exp: created_tracing_context_counter.sum(['created_by', 'service', 'instance']).increase('PT1M') + - name: finished_tracing_context_count + exp: finished_tracing_context_counter.sum(['service', 'instance']).increase('PT1M') + - name: created_ignored_context_count + exp: created_ignored_context_counter.sum(['created_by', 'service', 'instance']).increase('PT1M') + - name: finished_ignored_context_count + exp: finished_ignored_context_counter.sum(['service', 'instance']).increase('PT1M') + - name: possible_leaked_context_count + exp: possible_leaked_context_counter.sum(['source', 'service', 'instance']).increase('PT1M') + - name: interceptor_error_count + exp: interceptor_error_counter.sum(['plugin_name', 'inter_type', 'service', 'instance']).increase('PT1M') + - name: tracing_context_execution_time_percentile + exp: tracing_context_performance.sum(['le', 'service', 'instance']).histogram().histogram_percentile([50,75,90,95,99]) diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling-ebpf.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling-ebpf.data.yaml new file mode 100644 index 000000000000..aa1b97fc8d25 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling-ebpf.data.yaml @@ -0,0 +1,113 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + rover_net_p_client_write_counts_counter: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + protocol: http + is_ssl: 'false' + value: 100.0 + rover_net_p_client_write_bytes_counter: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + protocol: http + is_ssl: 'false' + value: 100.0 + rover_net_p_client_write_exe_time_counter: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + protocol: http + is_ssl: 'false' + value: 100.0 + rover_net_p_client_read_counts_counter: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + protocol: http + is_ssl: 'false' + value: 100.0 +expected: + process_relation_client_write_cpm: + entities: + - scope: PROCESS_RELATION + service: test-service + instance: test-instance + samples: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + component: '49' + value: 100.0 + process_relation_client_write_total_bytes: + entities: + - scope: PROCESS_RELATION + service: test-service + instance: test-instance + samples: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + component: '49' + value: 100.0 + process_relation_client_write_avg_exe_time: 
+ entities: + - scope: PROCESS_RELATION + service: test-service + instance: test-instance + samples: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + component: '49' + value: 100.0 + process_relation_client_read_cpm: + entities: + - scope: PROCESS_RELATION + service: test-service + instance: test-instance + samples: + - labels: + service: test-service + instance: test-instance + side: client + client_process_id: proc-1 + server_process_id: proc-2 + component: '49' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling-ebpf.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling-ebpf.yaml new file mode 100644 index 000000000000..53fcf4325236 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling-ebpf.yaml @@ -0,0 +1,60 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# eBPF e2e test override of network-profiling.yaml +# (from test/e2e-v2/cases/profiling/ebpf/network/kubernetes-values.yaml) +# This variant uses .split(':') on map access results — exercises +# the "method chaining on bracket-accessed Map values" code path. 
+expSuffix: |- + processRelation('side', ['service'], ['instance'], 'client_process_id', 'server_process_id', 'component') +expPrefix: |- + forEach(['client', 'server'], { prefix, tags -> + if (tags[prefix + '_process_id'] != null) { + return + } + if (tags[prefix + '_local'] == 'true' + || tags[prefix + '_address'].split(':')[0].endsWith('.1') + || tags[prefix + '_address'].split(':')[1] == '11800' + || tags[prefix + '_address'].split(':')[1] == '53') { + tags[prefix + '_process_id'] = ProcessRegistry.generateVirtualLocalProcess(tags.service, tags.instance) + return + } + tags[prefix + '_process_id'] = ProcessRegistry.generateVirtualProcess(tags.service, tags.instance, 'UNKNOWN_REMOTE') + }) + .forEach(['component'], { key, tags -> + String result = "" + String protocol = tags['protocol'] + String ssl = tags['is_ssl'] + if (protocol == 'http' && ssl == 'true') { + result = '129' + } else if (protocol == 'http') { + result = '49' + } else if (ssl == 'true') { + result = '130' + } else { + result = '110' + } + tags[key] = result + }) +metricPrefix: process_relation +metricsRules: + - name: client_write_cpm + exp: rover_net_p_client_write_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_write_total_bytes + exp: rover_net_p_client_write_bytes_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_write_avg_exe_time + exp: rover_net_p_client_write_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: client_read_cpm + exp: rover_net_p_client_read_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling.data.yaml 
b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling.data.yaml new file mode 100644 index 000000000000..da7a81af4660 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling.data.yaml @@ -0,0 +1,1110 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + libraries: + - labels: + value: 100.0 + https: + - labels: + value: 100.0 + http: + - labels: + value: 100.0 + rover_net_p_client_write_counts_counter: + - labels: + value: 100.0 + SUM_PER_MIN: + - labels: + value: 100.0 + rover_net_p_client_write_bytes_counter: + - labels: + value: 100.0 + rover_net_p_client_write_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_client_read_counts_counter: + - labels: + value: 100.0 + rover_net_p_client_read_bytes_counter: + - labels: + value: 100.0 + rover_net_p_client_read_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_client_write_rtt_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_client_connect_counts_counter: + - labels: + value: 100.0 + rover_net_p_client_connect_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_client_close_counts_counter: + - labels: + value: 100.0 + rover_net_p_client_close_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_client_retransmit_counts_counter: + - labels: + value: 100.0 + rover_net_p_client_drop_counts_counter: + - labels: + value: 100.0 + rover_net_p_client_write_rtt_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_client_write_exe_time_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_client_read_exe_time_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_server_write_counts_counter: + - labels: + value: 100.0 + rover_net_p_server_write_bytes_counter: + - labels: + value: 100.0 + rover_net_p_server_write_exe_time_counter: + - labels: + value: 
100.0 + rover_net_p_server_read_counts_counter: + - labels: + value: 100.0 + rover_net_p_server_read_bytes_counter: + - labels: + value: 100.0 + rover_net_p_server_read_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_server_write_rtt_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_server_connect_counts_counter: + - labels: + value: 100.0 + rover_net_p_server_connect_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_server_close_counts_counter: + - labels: + value: 100.0 + rover_net_p_server_close_exe_time_counter: + - labels: + value: 100.0 + rover_net_p_server_retransmit_counts_counter: + - labels: + value: 100.0 + rover_net_p_server_drop_counts_counter: + - labels: + value: 100.0 + rover_net_p_server_write_rtt_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_server_write_exe_time_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_server_read_exe_time_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_http1_request_counter: + - labels: + value: 100.0 + rover_net_p_http1_response_status_counter: + - labels: + value: 100.0 + rover_net_p_http1_request_package_size_avg: + - labels: + value: 100.0 + rover_net_p_http1_response_package_size_avg: + - labels: + value: 100.0 + rover_net_p_http1_request_package_size_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + 
rover_net_p_http1_response_package_size_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_http1_client_duration_avg: + - labels: + value: 100.0 + rover_net_p_http1_server_duration_avg: + - labels: + value: 100.0 + rover_net_p_http1_client_duration_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 + rover_net_p_http1_server_duration_histogram: + - labels: {le: '50'} + value: 10.0 + - labels: {le: '100'} + value: 20.0 + - labels: {le: '250'} + value: 30.0 + - labels: {le: '500'} + value: 40.0 + - labels: {le: '1000'} + value: 50.0 +expected: + process_relation_client_write_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_write_total_bytes: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_write_avg_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_read_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_read_total_bytes: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: 
mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_read_avg_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_write_avg_rtt_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_connect_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_connect_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_close_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_close_avg_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_retransmit_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_client_drop_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: 
mock-process-id + component: '110' + value: 100.0 + process_relation_client_write_rtt_time_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_client_write_exe_time_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_client_read_exe_time_percentile: + entities: + - scope: PROCESS_RELATION 
+ samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_server_write_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_write_total_bytes: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_write_avg_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_read_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_read_total_bytes: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + 
server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_read_avg_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_write_avg_rtt_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_connect_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_connect_avg_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_close_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_close_avg_exe_time: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_retransmit_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_server_drop_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + 
component: '110' + value: 100.0 + process_relation_server_write_rtt_time_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_server_write_exe_time_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_server_read_exe_time_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - 
labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_http1_request_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_http1_response_status_cpm: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + code: + value: 100.0 + process_relation_http1_request_package_size: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_http1_response_package_size: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_http1_request_package_size_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: 
mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_http1_response_package_size_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_http1_client_duration: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_http1_server_duration: + entities: + - 
scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + value: 100.0 + process_relation_http1_client_duration_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '50000' + value: 10.0 + process_relation_http1_server_duration_percentile: + entities: + - scope: PROCESS_RELATION + samples: + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '1000000' + value: 50.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '100000' + value: 20.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '250000' + value: 30.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + component: '110' + le: '500000' + value: 40.0 + - labels: + service: + instance: + side: + client_process_id: mock-process-id + server_process_id: mock-process-id + 
component: '110' + le: '50000' + value: 10.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling.yaml new file mode 100644 index 000000000000..f134866b1c95 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/network-profiling.yaml @@ -0,0 +1,135 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +expSuffix: |- + processRelation('side', ['service'], ['instance'], 'client_process_id', 'server_process_id', 'component') +expPrefix: |- + forEach(['client', 'server'], { prefix, tags -> + if (tags[prefix + '_process_id'] != null) { + return + } + if (tags[prefix + '_local'] == 'true') { + tags[prefix + '_process_id'] = ProcessRegistry.generateVirtualLocalProcess(tags.service, tags.instance) + return + } + tags[prefix + '_process_id'] = ProcessRegistry.generateVirtualRemoteProcess(tags.service, tags.instance, tags[prefix + '_address']) + }) + .forEach(['component'], { key, tags -> + String result = "" + // protocol values are defined in component-libraries.yml + String protocol = tags['protocol'] + String ssl = tags['is_ssl'] + if (protocol == 'http' && ssl == 'true') { + result = '129' // https + } else if (protocol == 'http') { + result = '49' // http + } else if (ssl == 'true') { + result = '130' // tls + } else { + result = '110' // tcp + } + tags[key] = result + }) +metricPrefix: process_relation +metricsRules: + # TCP Metrics: client side + - name: client_write_cpm + exp: rover_net_p_client_write_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_write_total_bytes + exp: rover_net_p_client_write_bytes_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_write_avg_exe_time + exp: rover_net_p_client_write_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: client_read_cpm + exp: rover_net_p_client_read_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_read_total_bytes + exp: rover_net_p_client_read_bytes_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 
'component']).downsampling(SUM_PER_MIN) + - name: client_read_avg_exe_time + exp: rover_net_p_client_read_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: client_write_avg_rtt_time + exp: rover_net_p_client_write_rtt_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: client_connect_cpm + exp: rover_net_p_client_connect_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_connect_exe_time + exp: rover_net_p_client_connect_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: client_close_cpm + exp: rover_net_p_client_close_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_close_avg_exe_time + exp: rover_net_p_client_close_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: client_retransmit_cpm + exp: rover_net_p_client_retransmit_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_drop_cpm + exp: rover_net_p_client_drop_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: client_write_rtt_time_percentile + exp: rover_net_p_client_write_rtt_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: client_write_exe_time_percentile + exp: rover_net_p_client_write_exe_time_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 
'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: client_read_exe_time_percentile + exp: rover_net_p_client_read_exe_time_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + + # TCP Metrics: server side + - name: server_write_cpm + exp: rover_net_p_server_write_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_write_total_bytes + exp: rover_net_p_server_write_bytes_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_write_avg_exe_time + exp: rover_net_p_server_write_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: server_read_cpm + exp: rover_net_p_server_read_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_read_total_bytes + exp: rover_net_p_server_read_bytes_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_read_avg_exe_time + exp: rover_net_p_server_read_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: server_write_avg_rtt_time + exp: rover_net_p_server_write_rtt_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: server_connect_cpm + exp: rover_net_p_server_connect_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_connect_avg_exe_time + exp: rover_net_p_server_connect_exe_time_counter.sum(['service', 'instance', 'side', 
'client_process_id', 'server_process_id', 'component']) + - name: server_close_cpm + exp: rover_net_p_server_close_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_close_avg_exe_time + exp: rover_net_p_server_close_exe_time_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: server_retransmit_cpm + exp: rover_net_p_server_retransmit_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_drop_cpm + exp: rover_net_p_server_drop_counts_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: server_write_rtt_time_percentile + exp: rover_net_p_server_write_rtt_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: server_write_exe_time_percentile + exp: rover_net_p_server_write_exe_time_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: server_read_exe_time_percentile + exp: rover_net_p_server_read_exe_time_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + + # HTTP/1.x Metrics + - name: http1_request_cpm + exp: rover_net_p_http1_request_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']).downsampling(SUM_PER_MIN) + - name: http1_response_status_cpm + exp: rover_net_p_http1_response_status_counter.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 
'code']).downsampling(SUM_PER_MIN) + - name: http1_request_package_size + exp: rover_net_p_http1_request_package_size_avg.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: http1_response_package_size + exp: rover_net_p_http1_response_package_size_avg.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: http1_request_package_size_percentile + exp: rover_net_p_http1_request_package_size_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: http1_response_package_size_percentile + exp: rover_net_p_http1_response_package_size_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: http1_client_duration + exp: rover_net_p_http1_client_duration_avg.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: http1_server_duration + exp: rover_net_p_http1_server_duration_avg.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component']) + - name: http1_client_duration_percentile + exp: rover_net_p_http1_client_duration_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) + - name: http1_server_duration_percentile + exp: rover_net_p_http1_server_duration_histogram.sum(['service', 'instance', 'side', 'client_process_id', 'server_process_id', 'component', 'le']).histogram().histogram_percentile([50,75,90,95,99]).downsampling(SUM) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/python-runtime.data.yaml 
b/test/script-cases/scripts/mal/test-meter-analyzer-config/python-runtime.data.yaml new file mode 100644 index 000000000000..3d573c82a134 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/python-runtime.data.yaml @@ -0,0 +1,134 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + instance_pvm_process_cpu_utilization: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_process_mem_utilization: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_total_cpu_utilization: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_total_mem_utilization: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_gc_g0: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_gc_g1: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_gc_g2: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_gc_time: + - labels: + instance: test-instance + value: 100.0 + instance_pvm_thread_active_count: + - labels: + instance: test-instance + value: 100.0 +expected: + meter_instance_pvm_process_cpu_utilization: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_pvm_process_mem_utilization: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_pvm_total_cpu_utilization: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_pvm_total_mem_utilization: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_pvm_gc_g0: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_pvm_gc_g1: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_pvm_gc_g2: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + 
layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_pvm_gc_time: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_instance_pvm_thread_active_count: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/python-runtime.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/python-runtime.yaml new file mode 100644 index 000000000000..0e902a62056b --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/python-runtime.yaml @@ -0,0 +1,36 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +expSuffix: instance(['service'], ['instance'], Layer.GENERAL) +metricPrefix: meter +metricsRules: + - name: instance_pvm_process_cpu_utilization + exp: instance_pvm_process_cpu_utilization + - name: instance_pvm_process_mem_utilization + exp: instance_pvm_process_mem_utilization + - name: instance_pvm_total_cpu_utilization + exp: instance_pvm_total_cpu_utilization + - name: instance_pvm_total_mem_utilization + exp: instance_pvm_total_mem_utilization + - name: instance_pvm_gc_g0 + exp: instance_pvm_gc_g0 + - name: instance_pvm_gc_g1 + exp: instance_pvm_gc_g1 + - name: instance_pvm_gc_g2 + exp: instance_pvm_gc_g2 + - name: instance_pvm_gc_time + exp: instance_pvm_gc_time.increase("PT1M") + - name: instance_pvm_thread_active_count + exp: instance_pvm_thread_active_count \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/ruby-runtime.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/ruby-runtime.data.yaml new file mode 100644 index 000000000000..b9e66b6ed948 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/ruby-runtime.data.yaml @@ -0,0 +1,186 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + instance_ruby_cpu_usage_percent: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_memory_rss_mb: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_memory_usage_percent: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_gc_count_total: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_gc_minor_count_total: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_gc_major_count_total: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_gc_time_total: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_heap_usage_percent: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_thread_count_active: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_thread_count_running: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_total_allocated_objects: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_heap_live_slots_count: + - labels: + instance: test-instance + value: 100.0 + instance_ruby_heap_available_slots_count: + - labels: + instance: test-instance + value: 100.0 +expected: + meter_instance_ruby_cpu_usage_percent: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_memory_rss_mb: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_memory_usage_percent: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_gc_count_total: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_instance_ruby_gc_minor_count_total: + entities: + - 
scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_instance_ruby_gc_major_count_total: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_instance_ruby_gc_time_total: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_instance_ruby_heap_usage_percent: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_thread_count_active: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_thread_count_running: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_total_allocated_objects: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_heap_live_slots_count: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_instance_ruby_heap_available_slots_count: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/ruby-runtime.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/ruby-runtime.yaml new file mode 100644 index 000000000000..4321b8271a93 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/ruby-runtime.yaml @@ -0,0 +1,53 @@ +# Licensed 
to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +expSuffix: instance(['service'], ['instance'], Layer.GENERAL) +metricPrefix: meter +metricsRules: + # CPU Metrics + - name: instance_ruby_cpu_usage_percent + exp: instance_ruby_cpu_usage_percent + + # Memory Metrics + - name: instance_ruby_memory_rss_mb + exp: instance_ruby_memory_rss_mb + - name: instance_ruby_memory_usage_percent + exp: instance_ruby_memory_usage_percent + + # GC Metrics + - name: instance_ruby_gc_count_total + exp: instance_ruby_gc_count_total.increase("PT1M") + - name: instance_ruby_gc_minor_count_total + exp: instance_ruby_gc_minor_count_total.increase("PT1M") + - name: instance_ruby_gc_major_count_total + exp: instance_ruby_gc_major_count_total.increase("PT1M") + - name: instance_ruby_gc_time_total + exp: instance_ruby_gc_time_total.increase("PT1M") + - name: instance_ruby_heap_usage_percent + exp: instance_ruby_heap_usage_percent + + # Thread Metrics + - name: instance_ruby_thread_count_active + exp: instance_ruby_thread_count_active + - name: instance_ruby_thread_count_running + exp: instance_ruby_thread_count_running + + # Ruby runtime Metrics + - name: instance_ruby_total_allocated_objects + exp: instance_ruby_total_allocated_objects + - name: 
instance_ruby_heap_live_slots_count + exp: instance_ruby_heap_live_slots_count + - name: instance_ruby_heap_available_slots_count + exp: instance_ruby_heap_available_slots_count diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite-tag-prefix.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite-tag-prefix.data.yaml new file mode 100644 index 000000000000..304ca0e3d4d1 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite-tag-prefix.data.yaml @@ -0,0 +1,141 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + sw_stl_gatherer_receive_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_gatherer_fetch_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_queue_output_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_sender_output_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_pipeline_queue_total_capacity: + - labels: + pipeline: test-pipeline + service: test-service + value: 100.0 + sw_stl_pipeline_queue_partition_size: + - labels: + pipeline: test-pipeline + service: test-service + value: 100.0 + sw_stl_grpc_server_cpu_gauge: + - labels: + service: test-service + value: 100.0 + sw_stl_grpc_server_connection_count: + - labels: + service: test-service + value: 100.0 +expected: + satellite_service_receive_event_count: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + service: 'satellite::test-service' + status: active + value: 50.0 + satellite_service_fetch_event_count: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + service: 'satellite::test-service' + status: active + value: 50.0 + satellite_service_queue_input_count: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + service: 'satellite::test-service' + status: active + value: 50.0 + satellite_service_send_event_count: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + service: 'satellite::test-service' + status: active + value: 50.0 + satellite_service_queue_total_capacity: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + 
samples: + - labels: + pipeline: test-pipeline + service: 'satellite::test-service' + value: 100.0 + satellite_service_queue_used_count: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + samples: + - labels: + pipeline: test-pipeline + service: 'satellite::test-service' + value: 100.0 + satellite_service_server_cpu_utilization: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + samples: + - labels: + service: 'satellite::test-service' + value: 100.0 + satellite_service_grpc_connect_count: + entities: + - scope: SERVICE + service: 'satellite::test-service' + layer: SO11Y_SATELLITE + samples: + - labels: + service: 'satellite::test-service' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite-tag-prefix.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite-tag-prefix.yaml new file mode 100644 index 000000000000..46e0c2976520 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite-tag-prefix.yaml @@ -0,0 +1,36 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +# Variant of satellite.yaml that prepends "satellite::" to the service name +# via a tag() closure in expSuffix before calling service(). +expSuffix: tag({tags -> tags.service = 'satellite::' + tags.service}).service(['service'], Layer.SO11Y_SATELLITE) +metricPrefix: satellite +metricsRules: + - name: service_receive_event_count + exp: sw_stl_gatherer_receive_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_fetch_event_count + exp: sw_stl_gatherer_fetch_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_queue_input_count + exp: sw_stl_queue_output_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_send_event_count + exp: sw_stl_sender_output_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_queue_total_capacity + exp: sw_stl_pipeline_queue_total_capacity.sum(["pipeline", "service"]) + - name: service_queue_used_count + exp: sw_stl_pipeline_queue_partition_size.sum(["pipeline", "service"]) + - name: service_server_cpu_utilization + exp: sw_stl_grpc_server_cpu_gauge + - name: service_grpc_connect_count + exp: sw_stl_grpc_server_connection_count diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite.data.yaml new file mode 100644 index 000000000000..866a362b434b --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite.data.yaml @@ -0,0 +1,135 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + sw_stl_gatherer_receive_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_gatherer_fetch_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_queue_output_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_sender_output_count: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 100.0 + sw_stl_pipeline_queue_total_capacity: + - labels: + pipeline: test-pipeline + service: test-service + value: 100.0 + sw_stl_pipeline_queue_partition_size: + - labels: + pipeline: test-pipeline + service: test-service + value: 100.0 + sw_stl_grpc_server_cpu_gauge: + - labels: + value: 100.0 + sw_stl_grpc_server_connection_count: + - labels: + value: 100.0 +expected: + satellite_service_receive_event_count: + entities: + - scope: SERVICE + service: test-service + layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 50.0 + satellite_service_fetch_event_count: + entities: + - scope: SERVICE + service: test-service + layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 50.0 + satellite_service_queue_input_count: + entities: + - scope: SERVICE + service: test-service + layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 50.0 + satellite_service_send_event_count: + entities: + - scope: SERVICE + service: test-service 
+ layer: SO11Y_SATELLITE + samples: + - labels: + pipe: test-pipe + status: active + service: test-service + value: 50.0 + satellite_service_queue_total_capacity: + entities: + - scope: SERVICE + service: test-service + layer: SO11Y_SATELLITE + samples: + - labels: + pipeline: test-pipeline + service: test-service + value: 100.0 + satellite_service_queue_used_count: + entities: + - scope: SERVICE + service: test-service + layer: SO11Y_SATELLITE + samples: + - labels: + pipeline: test-pipeline + service: test-service + value: 100.0 + satellite_service_server_cpu_utilization: + entities: + - scope: SERVICE + layer: SO11Y_SATELLITE + samples: + - labels: + value: 100.0 + satellite_service_grpc_connect_count: + entities: + - scope: SERVICE + layer: SO11Y_SATELLITE + samples: + - labels: + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite.yaml new file mode 100644 index 000000000000..15f84ad82d64 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/satellite.yaml @@ -0,0 +1,34 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +expSuffix: service(['service'], Layer.SO11Y_SATELLITE) +metricPrefix: satellite +metricsRules: + - name: service_receive_event_count + exp: sw_stl_gatherer_receive_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_fetch_event_count + exp: sw_stl_gatherer_fetch_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_queue_input_count + exp: sw_stl_queue_output_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_send_event_count + exp: sw_stl_sender_output_count.sum(["pipe", "status", "service"]).increase("PT1M") + - name: service_queue_total_capacity + exp: sw_stl_pipeline_queue_total_capacity.sum(["pipeline", "service"]) + - name: service_queue_used_count + exp: sw_stl_pipeline_queue_partition_size.sum(["pipeline", "service"]) + - name: service_server_cpu_utilization + exp: sw_stl_grpc_server_cpu_gauge + - name: service_grpc_connect_count + exp: sw_stl_grpc_server_connection_count diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/spring-micrometer.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/spring-micrometer.data.yaml new file mode 100644 index 000000000000..6f3bae276742 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/spring-micrometer.data.yaml @@ -0,0 +1,316 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + http_server_requests_count: + - labels: + instance: test-instance + value: 100.0 + http_server_requests_sum: + - labels: + instance: test-instance + value: 100.0 + jdbc_connections_active: + - labels: + instance: test-instance + value: 100.0 + jdbc_connections_idle: + - labels: + instance: test-instance + value: 100.0 + jdbc_connections_max: + - labels: + instance: test-instance + value: 100.0 + jvm_classes_loaded: + - labels: + instance: test-instance + value: 100.0 + jvm_classes_unloaded: + - labels: + instance: test-instance + value: 100.0 + jvm_gc_pause_count: + - labels: + instance: test-instance + value: 100.0 + jvm_gc_pause_sum: + - labels: + instance: test-instance + value: 100.0 + jvm_memory_committed: + - labels: + instance: test-instance + value: 100.0 + jvm_memory_max: + - labels: + instance: test-instance + value: 100.0 + jvm_memory_used: + - labels: + instance: test-instance + value: 100.0 + jvm_threads_daemon: + - labels: + instance: test-instance + value: 100.0 + jvm_threads_live: + - labels: + instance: test-instance + value: 100.0 + jvm_threads_peak: + - labels: + instance: test-instance + value: 100.0 + process_cpu_usage: + - labels: + instance: test-instance + value: 100.0 + system_cpu_usage: + - labels: + instance: test-instance + value: 100.0 + system_load_average_1m: + - labels: + instance: test-instance + value: 100.0 + tomcat_sessions_active_current: + - labels: + instance: test-instance + value: 100.0 + tomcat_sessions_active_max: + - labels: + instance: test-instance + value: 100.0 + tomcat_sessions_rejected: + - labels: 
+ instance: test-instance + value: 100.0 + process_files_max: + - labels: + instance: test-instance + value: 100.0 + process_files_open: + - labels: + instance: test-instance + value: 100.0 +expected: + meter_http_server_requests_count: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_http_server_requests_duration: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_jdbc_connections_active: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jdbc_connections_idle: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jdbc_connections_max: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jvm_classes_loaded: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jvm_classes_unloaded: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_jvm_gc_pause_count: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_jvm_gc_pause_duration: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_jvm_memory_committed: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + 
meter_jvm_memory_max: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jvm_memory_used: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jvm_threads_daemon: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jvm_threads_live: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_jvm_threads_peak: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_process_cpu_usage: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 10000.0 + meter_system_cpu_usage: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 10000.0 + meter_system_load_average_1m: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_tomcat_sessions_active_current: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_tomcat_sessions_active_max: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_tomcat_sessions_rejected: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 50.0 + meter_process_files_max: + entities: + - scope: 
SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 + meter_process_files_open: + entities: + - scope: SERVICE_INSTANCE + instance: test-instance + layer: GENERAL + samples: + - labels: + instance: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/spring-micrometer.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/spring-micrometer.yaml new file mode 100644 index 000000000000..6ba95683449a --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/spring-micrometer.yaml @@ -0,0 +1,64 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +expSuffix: instance(['service'], ['instance'], Layer.GENERAL) +metricPrefix: meter +metricsRules: + - name: http_server_requests_count + exp: http_server_requests_count.increase("PT1M") + - name: http_server_requests_duration + exp: http_server_requests_sum.increase("PT1M") + - name: jdbc_connections_active + exp: jdbc_connections_active + - name: jdbc_connections_idle + exp: jdbc_connections_idle + - name: jdbc_connections_max + exp: jdbc_connections_max + - name: jvm_classes_loaded + exp: jvm_classes_loaded + - name: jvm_classes_unloaded + exp: jvm_classes_unloaded.increase("PT1M") + - name: jvm_gc_pause_count + exp: jvm_gc_pause_count.increase("PT1M") + - name: jvm_gc_pause_duration + exp: jvm_gc_pause_sum.increase("PT1M") + - name: jvm_memory_committed + exp: jvm_memory_committed + - name: jvm_memory_max + exp: jvm_memory_max + - name: jvm_memory_used + exp: jvm_memory_used + - name: jvm_threads_daemon + exp: jvm_threads_daemon + - name: jvm_threads_live + exp: jvm_threads_live + - name: jvm_threads_peak + exp: jvm_threads_peak + - name: process_cpu_usage + exp: process_cpu_usage.multiply(100) + - name: system_cpu_usage + exp: system_cpu_usage.multiply(100) + - name: system_load_average_1m + exp: system_load_average_1m + - name: tomcat_sessions_active_current + exp: tomcat_sessions_active_current + - name: tomcat_sessions_active_max + exp: tomcat_sessions_active_max + - name: tomcat_sessions_rejected + exp: tomcat_sessions_rejected.increase("PT1M") + - name: process_files_max + exp: process_files_max + - name: process_files_open + exp: process_files_open diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/threadpool.data.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/threadpool.data.yaml new file mode 100644 index 000000000000..303ec90905fc --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/threadpool.data.yaml @@ -0,0 +1,37 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more 
+# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + thread_pool: + - labels: + metric_type: test-metric + pool_name: test-value + instance: test-instance + service: test-service + value: 100.0 +expected: + meter_thread_pool: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-instance + layer: GENERAL + samples: + - labels: + metric_type: test-metric + pool_name: test-value + instance: test-instance + service: test-service + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-meter-analyzer-config/threadpool.yaml b/test/script-cases/scripts/mal/test-meter-analyzer-config/threadpool.yaml new file mode 100644 index 000000000000..861c48817717 --- /dev/null +++ b/test/script-cases/scripts/mal/test-meter-analyzer-config/threadpool.yaml @@ -0,0 +1,20 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +expSuffix: instance(['service'], ['instance'], Layer.GENERAL) +metricPrefix: meter +metricsRules: + - name: thread_pool + exp: thread_pool.avg(['metric_type', 'pool_name', 'instance', 'service']) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-broker.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-broker.data.yaml new file mode 100644 index 000000000000..55164c6bb292 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-broker.data.yaml @@ -0,0 +1,383 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + org_apache_activemq_Broker_UptimeMillis: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_Slave: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_CurrentConnectionsCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_ProducerCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_ConsumerCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_TotalProducerCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_TotalConsumerCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_TotalEnqueueCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_TotalDequeueCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_MemoryPercentUsage: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_MemoryUsageByteCount: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_MemoryLimit: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_StorePercentUsage: + - labels: + 
cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_StoreLimit: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_TempPercentUsage: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_TempLimit: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_AverageMessageSize: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_MaxMessageSize: + - labels: + cluster: test-cluster + brokerName: test-broker + service_instance_id: test-instance + value: 100.0 + org_apache_activemq_Broker_QueueSize: + - labels: + cluster: test-cluster + brokerName: test-broker + destinationName: test-destination + value: 100.0 +expected: + meter_activemq_broker_uptime: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_state: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_current_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_current_producer_count: + entities: + - scope: 
SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_current_consumer_count: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_producer_count: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_broker_consumer_count: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_broker_enqueue_count: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_broker_dequeue_count: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_broker_enqueue_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 
25.0 + meter_activemq_broker_dequeue_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 25.0 + meter_activemq_broker_memory_percent_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_memory_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_memory_limit: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_store_percent_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_store_limit: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_temp_percent_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 
'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_temp_limit: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_average_message_size: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_max_message_size: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_broker_queue_size: + entities: + - scope: SERVICE_INSTANCE + service: 'activemq::test-cluster' + instance: test-broker + layer: ACTIVEMQ + samples: + - labels: + brokerName: test-broker + cluster: 'activemq::test-cluster' + destinationName: test-destination + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-broker.yaml b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-broker.yaml new file mode 100644 index 000000000000..8a1ea6509503 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-broker.yaml @@ -0,0 +1,97 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'activemq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'activemq::' + tags.cluster}).instance(['cluster'], ['brokerName'], Layer.ACTIVEMQ) +metricPrefix: meter_activemq_broker +metricsRules: + # Uptime of the broker in days. + - name: uptime + exp: org_apache_activemq_Broker_UptimeMillis.max(['cluster','brokerName','service_instance_id']) + # 1 if this broker is a slave, 0 otherwise. + - name: state + exp: org_apache_activemq_Broker_Slave.sum(['cluster','brokerName','service_instance_id']) + # The number of clients currently connected to the broker. + - name: current_connections + exp: org_apache_activemq_Broker_CurrentConnectionsCount.sum(['cluster','brokerName','service_instance_id']) + # The number of producers currently attached to the broker.
+ - name: current_producer_count + exp: org_apache_activemq_Broker_ProducerCount.sum(['cluster','brokerName','service_instance_id']) + # The number of consumers consuming messages from the broker. + - name: current_consumer_count + exp: org_apache_activemq_Broker_ConsumerCount.sum(['cluster','brokerName','service_instance_id']) + # Number of message producers active on destinations. + - name: producer_count + exp: org_apache_activemq_Broker_TotalProducerCount.sum(['cluster','brokerName','service_instance_id']).increase("PT1M") + # Number of message consumers subscribed to destinations. + - name: consumer_count + exp: org_apache_activemq_Broker_TotalConsumerCount.sum(['cluster','brokerName','service_instance_id']).increase("PT1M") + # The total number of messages sent to the broker. + - name: enqueue_count + exp: org_apache_activemq_Broker_TotalEnqueueCount.sum(['cluster','brokerName','service_instance_id']).increase("PT1M") + # The total number of messages the broker has delivered to consumers. + - name: dequeue_count + exp: org_apache_activemq_Broker_TotalDequeueCount.sum(['cluster','brokerName','service_instance_id']).increase("PT1M") + # The total number of messages sent to the broker per second. + - name: enqueue_rate + exp: org_apache_activemq_Broker_TotalEnqueueCount.sum(['cluster','brokerName','service_instance_id']).rate("PT1M") + # The total number of messages the broker has delivered to consumers per second. + - name: dequeue_rate + exp: org_apache_activemq_Broker_TotalDequeueCount.sum(['cluster','brokerName','service_instance_id']).rate("PT1M") + # Percentage of configured memory used by the broker. + - name: memory_percent_usage + exp: org_apache_activemq_Broker_MemoryPercentUsage.sum(['cluster','brokerName','service_instance_id']) + # Memory used by undelivered messages in bytes. 
+ - name: memory_usage + exp: org_apache_activemq_Broker_MemoryUsageByteCount.sum(['cluster','brokerName','service_instance_id']) + # Memory limit for holding undelivered messages before paging to temporary storage. + - name: memory_limit + exp: org_apache_activemq_Broker_MemoryLimit.sum(['cluster','brokerName','service_instance_id']) + # Percentage of available disk space used for persistent message storage. + - name: store_percent_usage + exp: org_apache_activemq_Broker_StorePercentUsage.sum(['cluster','brokerName','service_instance_id']) + # Disk limit for persistent messages before producers are blocked. + - name: store_limit + exp: org_apache_activemq_Broker_StoreLimit.sum(['cluster','brokerName','service_instance_id']) + # Percentage of available disk space used for non-persistent message storage. + - name: temp_percent_usage + exp: org_apache_activemq_Broker_TempPercentUsage.sum(['cluster','brokerName','service_instance_id']) + # Disk limit for non-persistent messages and temporary data before producers are blocked. + - name: temp_limit + exp: org_apache_activemq_Broker_TempLimit.sum(['cluster','brokerName','service_instance_id']) + # Average message size on this broker. + - name: average_message_size + exp: org_apache_activemq_Broker_AverageMessageSize.avg(['cluster','brokerName','service_instance_id']) + # Max message size on this broker. + - name: max_message_size + exp: org_apache_activemq_Broker_MaxMessageSize.max(['cluster','brokerName','service_instance_id']) + # Number of messages on this broker that have been dispatched but not acknowledged.
+ - name: queue_size + exp: org_apache_activemq_Broker_QueueSize.sum(['cluster','brokerName','destinationName']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-cluster.data.yaml new file mode 100644 index 000000000000..3e651d6ba3a0 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-cluster.data.yaml @@ -0,0 +1,311 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
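The broker rules above transform monotonic counters with `increase("PT1M")` and `rate("PT1M")`. As a rough sketch of the assumed semantics (illustrative only, not the OAP implementation): `increase` is the counter's growth over the window, and `rate` divides that growth by the window length in seconds.

```python
# Sketch of the counter-window semantics assumed by increase("PT1M") and
# rate("PT1M"); illustrative only, not the SkyWalking OAP code.

def increase(earlier: float, latest: float) -> float:
    """Growth of a monotonic counter over the window."""
    return latest - earlier

def rate(earlier: float, latest: float, window_seconds: float) -> float:
    """Per-second rate of a monotonic counter over the window."""
    return (latest - earlier) / window_seconds

# e.g. TotalEnqueueCount sampled at t and t+60s:
print(increase(100.0, 160.0))  # 60.0 messages enqueued in the window
print(rate(100.0, 160.0, 60))  # 1.0 message per second
```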
+ +input: + java_lang_OperatingSystem_SystemLoadAverage: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + java_lang_Threading_ThreadCount: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + java_lang_Memory_HeapMemoryUsage_init: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + java_lang_Memory_HeapMemoryUsage_committed: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + java_lang_Memory_HeapMemoryUsage_used: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + java_lang_Memory_HeapMemoryUsage_max: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + java_lang_G1_Old_Generation_CollectionCount: + - labels: + cluster: test-cluster + service_instance_id: test-instance + type: GarbageCollector + value: 100.0 + java_lang_G1_Young_Generation_CollectionCount: + - labels: + cluster: test-cluster + service_instance_id: test-instance + type: GarbageCollector + value: 100.0 + java_lang_G1_Old_Generation_CollectionTime: + - labels: + cluster: test-cluster + service_instance_id: test-instance + type: GarbageCollector + value: 100.0 + java_lang_G1_Young_Generation_CollectionTime: + - labels: + cluster: test-cluster + service_instance_id: test-instance + type: GarbageCollector + value: 100.0 + java_lang_GarbageCollector_CollectionCount: + - labels: + cluster: test-cluster + service_instance_id: test-instance + name: PS MarkSweep + value: 100.0 + - labels: + cluster: test-cluster + service_instance_id: test-instance + name: PS Scavenge + value: 100.0 + java_lang_GarbageCollector_CollectionTime: + - labels: + cluster: test-cluster + service_instance_id: test-instance + name: PS MarkSweep + value: 100.0 + - labels: + cluster: test-cluster + service_instance_id: test-instance + name: PS Scavenge + value: 100.0 + 
org_apache_activemq_Broker_TotalEnqueueCount: + - labels: + cluster: test-cluster + value: 100.0 + org_apache_activemq_Broker_TotalDequeueCount: + - labels: + cluster: test-cluster + value: 100.0 + org_apache_activemq_Broker_DispatchCount: + - labels: + cluster: test-cluster + value: 100.0 + org_apache_activemq_Broker_ExpiredCount: + - labels: + cluster: test-cluster + value: 100.0 + org_apache_activemq_Broker_AverageEnqueueTime: + - labels: + cluster: test-cluster + value: 100.0 + org_apache_activemq_Broker_MaxEnqueueTime: + - labels: + cluster: test-cluster + value: 100.0 +expected: + meter_activemq_cluster_system_load_average: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 1000000.0 + meter_activemq_cluster_thread_count: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_cluster_heap_memory_usage_init: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_cluster_heap_memory_usage_committed: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_cluster_heap_memory_usage_used: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_cluster_heap_memory_usage_max: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 
'activemq::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_activemq_cluster_gc_g1_old_collection_count: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_gc_g1_young_collection_count: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_gc_g1_old_collection_time: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_gc_g1_young_collection_time: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_gc_parallel_old_collection_count: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_gc_parallel_young_collection_count: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_gc_parallel_old_collection_time: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_gc_parallel_young_collection_time: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - 
labels: + cluster: 'activemq::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_activemq_cluster_enqueue_rate: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + value: 25.0 + meter_activemq_cluster_dequeue_rate: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + value: 25.0 + meter_activemq_cluster_dispatch_rate: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + value: 25.0 + meter_activemq_cluster_expired_rate: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + value: 25.0 + meter_activemq_cluster_average_enqueue_time: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + value: 100.0 + meter_activemq_cluster_max_enqueue_time: + entities: + - scope: SERVICE + service: 'activemq::test-cluster' + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-cluster.yaml new file mode 100644 index 000000000000..a3c2e393f496 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-cluster.yaml @@ -0,0 +1,95 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> tags.job_name == 'activemq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'activemq::' + tags.cluster}).service(['cluster'], Layer.ACTIVEMQ) +metricPrefix: meter_activemq_cluster +metricsRules: + # The average system load, range:[0,10000]. + - name: system_load_average + exp: java_lang_OperatingSystem_SystemLoadAverage.avg(['cluster','service_instance_id'])*10000 + # Threads currently used by the JVM. + - name: thread_count + exp: java_lang_Threading_ThreadCount.sum(['cluster','service_instance_id']) + # The initial amount of heap memory available. 
+ - name: heap_memory_usage_init + exp: java_lang_Memory_HeapMemoryUsage_init.sum(['cluster','service_instance_id']) + # The amount of memory guaranteed to be available for the JVM to use. + - name: heap_memory_usage_committed + exp: java_lang_Memory_HeapMemoryUsage_committed.sum(['cluster','service_instance_id']) + # The amount of JVM heap memory currently in use. + - name: heap_memory_usage_used + exp: java_lang_Memory_HeapMemoryUsage_used.sum(['cluster','service_instance_id']) + # The maximum possible size of the heap memory. + - name: heap_memory_usage_max + exp: java_lang_Memory_HeapMemoryUsage_max.sum(['cluster','service_instance_id']) + # The gc count of G1 Old Generation(JDK[9,17]). + - name: gc_g1_old_collection_count + exp: java_lang_G1_Old_Generation_CollectionCount.tagEqual('type','GarbageCollector').sum(['cluster','service_instance_id']).increase("PT1M") + # The gc count of G1 Young Generation(JDK[9,17]). + - name: gc_g1_young_collection_count + exp: java_lang_G1_Young_Generation_CollectionCount.tagEqual('type','GarbageCollector').sum(['cluster','service_instance_id']).increase("PT1M") + # The gc time spent in G1 Old Generation in milliseconds(JDK[9,17]). + - name: gc_g1_old_collection_time + exp: java_lang_G1_Old_Generation_CollectionTime.tagEqual('type','GarbageCollector').sum(['cluster','service_instance_id']).increase("PT1M") + # The gc time spent in G1 Young Generation in milliseconds(JDK[9,17]). + - name: gc_g1_young_collection_time + exp: java_lang_G1_Young_Generation_CollectionTime.tagEqual('type','GarbageCollector').sum(['cluster','service_instance_id']).increase("PT1M") + # The gc count of PS MarkSweep(JDK[6,8]). + - name: gc_parallel_old_collection_count + exp: java_lang_GarbageCollector_CollectionCount.tagEqual('name','PS MarkSweep').sum(['cluster','service_instance_id']).increase("PT1M") + # The gc count of PS Scavenge(JDK[6,8]).
+ - name: gc_parallel_young_collection_count + exp: java_lang_GarbageCollector_CollectionCount.tagEqual('name','PS Scavenge').sum(['cluster','service_instance_id']).increase("PT1M") + # The gc time spent in PS MarkSweep in milliseconds(JDK[6,8]). + - name: gc_parallel_old_collection_time + exp: java_lang_GarbageCollector_CollectionTime.tagEqual('name','PS MarkSweep').sum(['cluster','service_instance_id']).increase("PT1M") + # The gc time spent in PS Scavenge in milliseconds(JDK[6,8]). + - name: gc_parallel_young_collection_time + exp: java_lang_GarbageCollector_CollectionTime.tagEqual('name','PS Scavenge').sum(['cluster','service_instance_id']).increase("PT1M") + # Number of messages that have been sent to the broker per second. + - name: enqueue_rate + exp: org_apache_activemq_Broker_TotalEnqueueCount.sum(['cluster']).rate("PT1M") + # Number of messages that have been acknowledged or discarded on the broker per second. + - name: dequeue_rate + exp: org_apache_activemq_Broker_TotalDequeueCount.sum(['cluster']).rate("PT1M") + # Number of messages that have been delivered to consumers per second. + - name: dispatch_rate + exp: org_apache_activemq_Broker_DispatchCount.sum(['cluster']).rate("PT1M") + # Number of messages that have expired per second. + - name: expired_rate + exp: org_apache_activemq_Broker_ExpiredCount.sum(['cluster']).rate("PT1M") + # The average time a message was held on this cluster. + - name: average_enqueue_time + exp: org_apache_activemq_Broker_AverageEnqueueTime.avg(['cluster']) + # The max time a message was held on this cluster.
+ - name: max_enqueue_time + exp: org_apache_activemq_Broker_MaxEnqueueTime.max(['cluster']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-destination.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-destination.data.yaml new file mode 100644 index 000000000000..c8dd12483e95 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-destination.data.yaml @@ -0,0 +1,280 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
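The window arguments in the rules above (e.g. `"PT1M"`) follow the ISO-8601 duration format described in the comment block of these rule files. A minimal parser for the positive subset used here can sketch that format; this is an illustration only (the signed variants like `"P-6H3M"` listed in the comment, which `java.time.Duration.parse` accepts, are not handled):

```python
import re

# Minimal parser for the positive subset of the ISO-8601 duration format
# PnDTnHnMn.nS used by these rules (e.g. "PT1M"); a sketch, not
# java.time.Duration.parse -- signed forms like "P-6H3M" are not handled.
_DURATION = re.compile(
    r"^P(?:(?P<d>\d+)D)?"
    r"(?:T(?:(?P<h>\d+)H)?(?:(?P<m>\d+)M)?(?:(?P<s>\d+(?:\.\d+)?)S)?)?$"
)

def duration_seconds(text: str) -> float:
    match = _DURATION.match(text)
    if match is None or text in ("P", "PT"):
        raise ValueError(f"invalid duration: {text}")
    days = int(match.group("d") or 0)      # a day is exactly 24 hours
    hours = int(match.group("h") or 0)
    minutes = int(match.group("m") or 0)
    seconds = float(match.group("s") or 0.0)
    return days * 86400 + hours * 3600 + minutes * 60 + seconds

print(duration_seconds("PT1M"))       # 60.0
print(duration_seconds("PT20.345S"))  # 20.345
print(duration_seconds("P2DT3H4M"))   # 183840.0
```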
+ +input: + org_apache_activemq_Broker_ProducerCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_ConsumerCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Topic + value: 100.0 + org_apache_activemq_Broker_QueueSize: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_MemoryUsageByteCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_MemoryPercentUsage: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_EnqueueCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_DequeueCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_AverageEnqueueTime: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_MaxEnqueueTime: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_DispatchCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_ExpiredCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_InFlightCount: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_AverageMessageSize: + - labels: + cluster: test-cluster + destinationName: 
test-destination + destinationType: Queue + value: 100.0 + org_apache_activemq_Broker_MaxMessageSize: + - labels: + cluster: test-cluster + destinationName: test-destination + destinationType: Queue + value: 100.0 +expected: + meter_activemq_destination_producer_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_consumer_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Topic + destinationName: test-destination + value: 100.0 + meter_activemq_destination_topic_consumer_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationName: test-destination + value: 100.0 + meter_activemq_destination_queue_size: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_memory_usage: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_memory_percent_usage: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + 
meter_activemq_destination_enqueue_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_dequeue_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_average_enqueue_time: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_max_enqueue_time: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_dispatch_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_expired_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_inflight_count: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + 
destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_average_message_size: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 + meter_activemq_destination_max_message_size: + entities: + - scope: ENDPOINT + service: 'activemq::test-cluster' + endpoint: test-destination + layer: ACTIVEMQ + samples: + - labels: + cluster: 'activemq::test-cluster' + destinationType: Queue + destinationName: test-destination + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-destination.yaml b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-destination.yaml new file mode 100644 index 000000000000..c69e6381940e --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/activemq/activemq-destination.yaml @@ -0,0 +1,79 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. 
The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'activemq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'activemq::' + tags.cluster}).endpoint(['cluster'], ['destinationName'], Layer.ACTIVEMQ) +metricPrefix: meter_activemq_destination +metricsRules: + # Number of producers attached to this destination. + - name: producer_count + exp: org_apache_activemq_Broker_ProducerCount.sum(['cluster','destinationName','destinationType']) + # Number of consumers subscribed to this destination. + - name: consumer_count + exp: org_apache_activemq_Broker_ConsumerCount.sum(['cluster','destinationName','destinationType']) + # Number of consumers subscribed to the topics. + - name: topic_consumer_count + exp: org_apache_activemq_Broker_ConsumerCount.tagEqual('destinationType','Topic').sum(['cluster','destinationName']) + # The number of messages that have not been acknowledged by a consumer. + - name: queue_size + exp: org_apache_activemq_Broker_QueueSize.sum(['cluster','destinationName','destinationType']) + # Memory used by undelivered messages on this destination in bytes. + - name: memory_usage + exp: org_apache_activemq_Broker_MemoryUsageByteCount.sum(['cluster','destinationName','destinationType']) + # Percentage of configured memory used by the destination.
+ - name: memory_percent_usage + exp: org_apache_activemq_Broker_MemoryPercentUsage.sum(['cluster','destinationName','destinationType']) + # The number of messages sent to the destination. + - name: enqueue_count + exp: org_apache_activemq_Broker_EnqueueCount.sum(['cluster','destinationName','destinationType']) + # The number of messages the destination has delivered to consumers. + - name: dequeue_count + exp: org_apache_activemq_Broker_DequeueCount.sum(['cluster','destinationName','destinationType']) + # The average time a message was held on this destination. + - name: average_enqueue_time + exp: org_apache_activemq_Broker_AverageEnqueueTime.sum(['cluster','destinationName','destinationType']) + # The max time a message was held on this destination. + - name: max_enqueue_time + exp: org_apache_activemq_Broker_MaxEnqueueTime.sum(['cluster','destinationName','destinationType']) + # Number of messages that have been delivered to consumers. + - name: dispatch_count + exp: org_apache_activemq_Broker_DispatchCount.sum(['cluster','destinationName','destinationType']) + # Number of messages that have expired. + - name: expired_count + exp: org_apache_activemq_Broker_ExpiredCount.sum(['cluster','destinationName','destinationType']) + # Number of messages that have been dispatched to but not acknowledged by consumers. + - name: inflight_count + exp: org_apache_activemq_Broker_InFlightCount.sum(['cluster','destinationName','destinationType']) + # Average message size on this destination. + - name: average_message_size + exp: org_apache_activemq_Broker_AverageMessageSize.avg(['cluster','destinationName','destinationType']) + # Max message size on this destination.
+ - name: max_message_size + exp: org_apache_activemq_Broker_MaxMessageSize.max(['cluster','destinationName','destinationType']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/apisix.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/apisix.data.yaml new file mode 100644 index 000000000000..d2f7eef4769a --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/apisix.data.yaml @@ -0,0 +1,549 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
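The destination rules above combine label filtering and label grouping, e.g. `tagEqual('destinationType','Topic').sum(['cluster','destinationName'])`. A rough sketch of the assumed semantics (simplified, not the MAL implementation): `tagEqual` keeps samples whose label matches, and `sum([...])` adds up values grouped by the listed labels.

```python
from collections import defaultdict

# Sketch of the label semantics assumed by tagEqual(...) and sum([...]);
# illustrative only, not the SkyWalking MAL implementation.

def tag_equal(samples, key, value):
    """Keep only samples whose label `key` equals `value`."""
    return [s for s in samples if s["labels"].get(key) == value]

def sum_by(samples, keys):
    """Sum sample values grouped by the listed label keys."""
    grouped = defaultdict(float)
    for s in samples:
        grouped[tuple(s["labels"].get(k) for k in keys)] += s["value"]
    return dict(grouped)

samples = [
    {"labels": {"cluster": "c1", "destinationType": "Topic"}, "value": 3.0},
    {"labels": {"cluster": "c1", "destinationType": "Queue"}, "value": 4.0},
    {"labels": {"cluster": "c1", "destinationType": "Topic"}, "value": 5.0},
]
# tagEqual('destinationType','Topic').sum(['cluster'])
print(sum_by(tag_equal(samples, "destinationType", "Topic"), ["cluster"]))
# {('c1',): 8.0}
```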
+ +input: + apisix_nginx_http_current_connections: + - labels: + route: test-route + node: test-node + state: active + value: 100.0 + apisix_http_requests_total: + - labels: + route: test-route + node: test-node + service_instance_id: test-inst + value: 100.0 + apisix_bandwidth: + # matched: non-empty route and node + - labels: + route: test-route + node: test-node + type: ingress + value: 100.0 + # unmatched: empty route and node (for tagEqual('route','','node','')) + - labels: + route: '' + node: '' + type: ingress + value: 50.0 + apisix_http_status: + # matched + - labels: + route: test-route + node: test-node + code: '200' + value: 100.0 + # unmatched + - labels: + route: '' + node: '' + code: '200' + value: 50.0 + apisix_http_latency: + # matched histogram buckets (non-empty route and node) + - labels: {route: test-route, node: test-node, type: request, le: '50'} + value: 10.0 + - labels: {route: test-route, node: test-node, type: request, le: '100'} + value: 20.0 + - labels: {route: test-route, node: test-node, type: request, le: '250'} + value: 30.0 + - labels: {route: test-route, node: test-node, type: request, le: '500'} + value: 40.0 + - labels: {route: test-route, node: test-node, type: request, le: '1000'} + value: 50.0 + # unmatched histogram buckets (empty route and node) + - labels: {route: '', node: '', type: request, le: '50'} + value: 5.0 + - labels: {route: '', node: '', type: request, le: '100'} + value: 10.0 + - labels: {route: '', node: '', type: request, le: '250'} + value: 15.0 + - labels: {route: '', node: '', type: request, le: '500'} + value: 20.0 + - labels: {route: '', node: '', type: request, le: '1000'} + value: 25.0 + apisix_shared_dict_capacity_bytes: + - labels: + route: test-route + node: test-node + value: 100.0 + apisix_shared_dict_free_space_bytes: + - labels: + route: test-route + node: test-node + value: 100.0 + apisix_etcd_modify_indexes: + - labels: + route: test-route + node: test-node + value: 100.0 + 
apisix_etcd_reachable: + - labels: + route: test-route + node: test-node + value: 100.0 +expected: + meter_apisix_sv_http_connections: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + state: active + service_name: 'APISIX::APISIX' + node: test-node + value: 100.0 + meter_apisix_sv_http_requests: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + service_instance_id: test-inst + service_name: 'APISIX::APISIX' + node: test-node + value: 25.0 + meter_apisix_sv_bandwidth_unmatched: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: ingress + service_name: 'APISIX::APISIX' + node: + value: 12.5 + meter_apisix_sv_http_status_unmatched: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + code: '200' + service_name: 'APISIX::APISIX' + node: + value: 12.5 + meter_apisix_sv_http_latency_unmatched: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: request + service_name: 'APISIX::APISIX' + node: + le: '1000000' + value: 25.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: + le: '100000' + value: 10.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: + le: '250000' + value: 15.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: + le: '500000' + value: 20.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: + le: '50000' + value: 5.0 + meter_apisix_sv_bandwidth_matched: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: ingress + service_name: 'APISIX::APISIX' + node: test-node + value: 25.0 + meter_apisix_sv_http_status_matched: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + code: '200' + service_name: 'APISIX::APISIX' + node: test-node + 
value: 25.0 + meter_apisix_sv_http_latency_matched: + entities: + - scope: SERVICE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: request + service_name: 'APISIX::APISIX' + node: test-node + le: '1000000' + value: 50.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: test-node + le: '100000' + value: 20.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: test-node + le: '250000' + value: 30.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: test-node + le: '500000' + value: 40.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: test-node + le: '50000' + value: 10.0 + meter_apisix_instance_shared_dict_capacity_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + name: + service_name: 'APISIX::APISIX' + service_instance_id: + value: 100.0 + meter_apisix_instance_shared_dict_free_space_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + name: + service_name: 'APISIX::APISIX' + service_instance_id: + value: 100.0 + meter_apisix_instance_etcd_indexes: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + key: + service_name: 'APISIX::APISIX' + service_instance_id: + value: 100.0 + meter_apisix_instance_etcd_reachable: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + node: test-node + service_name: 'APISIX::APISIX' + route: test-route + value: 100.0 + meter_apisix_instance_http_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + state: active + service_name: 'APISIX::APISIX' + service_instance_id: + value: 100.0 + meter_apisix_instance_http_requests: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + instance: test-inst + layer: APISIX + 
samples: + - labels: + node: test-node + route: test-route + service_instance_id: test-inst + service_name: 'APISIX::APISIX' + value: 25.0 + meter_apisix_instance_bandwidth_matched: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: ingress + service_name: 'APISIX::APISIX' + service_instance_id: + value: 25.0 + meter_apisix_instance_http_status_matched: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + code: '200' + service_name: 'APISIX::APISIX' + service_instance_id: + value: 25.0 + meter_apisix_instance_http_latency_matched: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '1000000' + value: 50.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '100000' + value: 20.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '250000' + value: 30.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '500000' + value: 40.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '50000' + value: 10.0 + meter_apisix_instance_bandwidth_unmatched: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: ingress + service_name: 'APISIX::APISIX' + service_instance_id: + value: 12.5 + meter_apisix_instance_http_status_unmatched: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + code: '200' + service_name: 'APISIX::APISIX' + service_instance_id: + value: 12.5 + meter_apisix_instance_http_latency_unmatched: + entities: + - scope: SERVICE_INSTANCE + service: 'APISIX::APISIX' + layer: APISIX + samples: + - labels: + type: request + service_name: 'APISIX::APISIX' + 
service_instance_id: + le: '1000000' + value: 25.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '100000' + value: 10.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '250000' + value: 15.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '500000' + value: 20.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + service_instance_id: + le: '50000' + value: 5.0 + meter_apisix_endpoint_http_status: + entities: + - scope: ENDPOINT + service: 'APISIX::APISIX' + endpoint: route/test-route + layer: APISIX + samples: + - labels: + code: '200' + service_name: 'APISIX::APISIX' + route: route/test-route + node: test-node + value: 25.0 + meter_apisix_endpoint_bandwidth: + entities: + - scope: ENDPOINT + service: 'APISIX::APISIX' + endpoint: route/test-route + layer: APISIX + samples: + - labels: + type: ingress + service_name: 'APISIX::APISIX' + route: route/test-route + node: test-node + value: 25.0 + meter_apisix_endpoint_http_latency: + entities: + - scope: ENDPOINT + service: 'APISIX::APISIX' + endpoint: route/test-route + layer: APISIX + samples: + - labels: + type: request + service_name: 'APISIX::APISIX' + route: route/test-route + node: test-node + le: '1000000' + value: 50.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + route: route/test-route + node: test-node + le: '100000' + value: 20.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + route: route/test-route + node: test-node + le: '250000' + value: 30.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + route: route/test-route + node: test-node + le: '500000' + value: 40.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + route: route/test-route + node: test-node + le: '50000' + value: 10.0 + meter_apisix_endpoint_http_status_2: + entities: + - scope: ENDPOINT + service: 'APISIX::APISIX' + endpoint: upstream/test-node + 
layer: APISIX + samples: + - labels: + code: '200' + service_name: 'APISIX::APISIX' + node: upstream/test-node + value: 25.0 + meter_apisix_endpoint_bandwidth_2: + entities: + - scope: ENDPOINT + service: 'APISIX::APISIX' + endpoint: upstream/test-node + layer: APISIX + samples: + - labels: + type: ingress + service_name: 'APISIX::APISIX' + node: upstream/test-node + value: 25.0 + meter_apisix_endpoint_http_latency_2: + entities: + - scope: ENDPOINT + service: 'APISIX::APISIX' + endpoint: upstream/test-node + layer: APISIX + samples: + - labels: + type: request + service_name: 'APISIX::APISIX' + node: upstream/test-node + le: '1000000' + value: 50.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: upstream/test-node + le: '100000' + value: 20.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: upstream/test-node + le: '250000' + value: 30.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: upstream/test-node + le: '500000' + value: 40.0 + - labels: + type: request + service_name: 'APISIX::APISIX' + node: upstream/test-node + le: '50000' + value: 10.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/apisix.yaml b/test/script-cases/scripts/mal/test-otel-rules/apisix.yaml new file mode 100644 index 000000000000..2ed1f8fd95f4 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/apisix.yaml @@ -0,0 +1,102 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This will parse a textual representation of a duration. The formats
+# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS}
+# with days considered to be exactly 24 hours.
+# <p>
+# Examples:
+# <pre>
+#   "PT20.345S" -- parses as "20.345 seconds"
+#   "PT15M"     -- parses as "15 minutes" (where a minute is 60 seconds)
+#   "PT10H"     -- parses as "10 hours" (where an hour is 3600 seconds)
+#   "P2D"       -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+#   "P2DT3H4M"  -- parses as "2 days, 3 hours and 4 minutes"
+#   "P-6H3M"    -- parses as "-6 hours and +3 minutes"
+#   "-P6H3M"    -- parses as "-6 hours and -3 minutes"
+#   "-P-6H+3M"  -- parses as "+6 hours and -3 minutes"
+# </pre>
+filter: "{ tags -> tags.job_name == 'apisix-monitoring' }" # The OpenTelemetry job name
+expPrefix: tag({tags -> tags.service_name = 'APISIX::'+(tags['skywalking_service']?.trim()?:'APISIX')})
+expSuffix:
+metricPrefix: meter_apisix
+metricsRules:
+  # Service
+  # Ignore http_connections metrics in the `accepted` and `handled` states, as their actual type is counter
+  - name: sv_http_connections
+    exp: apisix_nginx_http_current_connections.tagNotMatch('state','accepted|handled').sum(['state','service_name','node']).service(['service_name'], Layer.APISIX)
+  - name: sv_http_requests
+    exp: apisix_http_requests_total.sum(['service_instance_id','service_name','node']).rate('PT1M').service(['service_name'], Layer.APISIX)
+  # Did not match any route
+  # Refer to https://apisix.apache.org/docs/apisix/plugins/prometheus/
+  - name: sv_bandwidth_unmatched
+    exp: 
apisix_bandwidth.tagEqual('route', '', 'node', '').sum(['type','service_name','node']).rate('PT1M').service(['service_name'], Layer.APISIX)
+  - name: sv_http_status_unmatched
+    exp: apisix_http_status.tagEqual('route', '', 'node', '').sum(['code','service_name','node']).rate('PT1M').service(['service_name'], Layer.APISIX)
+  - name: sv_http_latency_unmatched
+    exp: apisix_http_latency.tagEqual('route', '', 'node', '').sum(['type','le','service_name','node']).histogram().histogram_percentile([50,75,90,95,99]).service(['service_name'], Layer.APISIX)
+  # Matched a route
+  - name: sv_bandwidth_matched
+    exp: apisix_bandwidth.tagNotEqual('route', '', 'node', '').sum(['type','service_name','node']).rate('PT1M').service(['service_name'], Layer.APISIX)
+  - name: sv_http_status_matched
+    exp: apisix_http_status.tagNotEqual('route', '', 'node', '').sum(['code','service_name','node']).rate('PT1M').service(['service_name'], Layer.APISIX)
+  - name: sv_http_latency_matched
+    exp: apisix_http_latency.tagNotEqual('route', '', 'node', '').sum(['type','le','service_name','node']).histogram().histogram_percentile([50,75,90,95,99]).service(['service_name'], Layer.APISIX)
+
+  # Instance
+  - name: instance_shared_dict_capacity_bytes
+    exp: apisix_shared_dict_capacity_bytes.sum(['name','service_name','service_instance_id']).instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_shared_dict_free_space_bytes
+    exp: apisix_shared_dict_free_space_bytes.sum(['name','service_name','service_instance_id']).instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_etcd_indexes
+    exp: apisix_etcd_modify_indexes.sum(['key','service_name','service_instance_id']).instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_etcd_reachable
+    exp: apisix_etcd_reachable.downsampling(LATEST).instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  # Ignore http_connections metrics in the `accepted` and `handled` states, as their actual type is counter
+  - name: instance_http_connections
+    exp: apisix_nginx_http_current_connections.tagNotMatch('state','accepted|handled').sum(['state','service_name','service_instance_id']).instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_http_requests
+    exp: apisix_http_requests_total.rate('PT1M').instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  # Matched a route
+  - name: instance_bandwidth_matched
+    exp: apisix_bandwidth.tagNotEqual('route', '', 'node', '').sum(['type','service_name','service_instance_id']).rate('PT1M').instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_http_status_matched
+    exp: apisix_http_status.tagNotEqual('route', '', 'node', '').sum(['code','service_name','service_instance_id']).rate('PT1M').instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_http_latency_matched
+    exp: apisix_http_latency.tagNotEqual('route', '', 'node', '').sum(['type','le','service_name','service_instance_id']).histogram().histogram_percentile([50,75,90,95,99]).instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  # Did not match any route
+  # Refer to https://apisix.apache.org/docs/apisix/plugins/prometheus/
+  - name: instance_bandwidth_unmatched
+    exp: apisix_bandwidth.tagEqual('route', '', 'node', '').sum(['type','service_name','service_instance_id']).rate('PT1M').instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_http_status_unmatched
+    exp: apisix_http_status.tagEqual('route', '', 'node', '').sum(['code','service_name','service_instance_id']).rate('PT1M').instance(['service_name'],['service_instance_id'], Layer.APISIX)
+  - name: instance_http_latency_unmatched
+    exp: apisix_http_latency.tagEqual('route', '', 'node', 
'').sum(['type','le','service_name','service_instance_id']).histogram().histogram_percentile([50,75,90,95,99]).instance(['service_name'],['service_instance_id'], Layer.APISIX)
+
+  # Endpoint
+  # Reorganize the metrics that carry a `route` label into endpoints, formatted as `route/{route}`
+  - name: endpoint_http_status
+    exp: apisix_http_status.tagNotEqual('route','').tag({tags->tags.route = 'route/'+tags['route']}).sum(['code','service_name','route','node']).rate('PT1M').endpoint(['service_name'],['route'], Layer.APISIX)
+  - name: endpoint_bandwidth
+    exp: apisix_bandwidth.tagNotEqual('route','').tag({tags->tags.route = 'route/'+tags['route']}).sum(['type','service_name','route','node']).rate('PT1M').endpoint(['service_name'],['route'], Layer.APISIX)
+  - name: endpoint_http_latency
+    exp: apisix_http_latency.tagNotEqual('route','').tag({tags->tags.route = 'route/'+tags['route']}).sum(['type','le','service_name','route','node']).histogram().histogram_percentile([50,75,90,95,99]).endpoint(['service_name'],['route'], Layer.APISIX)
+  # Reorganize the metrics that carry a `node` label into endpoints, formatted as `upstream/{node}`
+  - name: endpoint_http_status_2
+    exp: apisix_http_status.tagNotEqual('node','').tag({tags->tags.node = 'upstream/'+tags['node']}).sum(['code','service_name','node']).rate('PT1M').endpoint(['service_name'],['node'], Layer.APISIX)
+  - name: endpoint_bandwidth_2
+    exp: apisix_bandwidth.tagNotEqual('node','').tag({tags->tags.node = 'upstream/'+tags['node']}).sum(['type','service_name','node']).rate('PT1M').endpoint(['service_name'],['node'], Layer.APISIX)
+  - name: endpoint_http_latency_2
+    exp: apisix_http_latency.tagNotEqual('node','').tag({tags->tags.node = 'upstream/'+tags['node']}).sum(['type','le','service_name','node']).histogram().histogram_percentile([50,75,90,95,99]).endpoint(['service_name'],['node'], Layer.APISIX)
diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-endpoint.data.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-endpoint.data.yaml new file mode 100644 index 000000000000..9253c7efa492 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-endpoint.data.yaml @@ -0,0 +1,359 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + dynamodb: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_ConsumedWriteCapacityUnits: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_ConsumedReadCapacityUnits: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_ProvisionedReadCapacityUnits: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_ProvisionedWriteCapacityUnits: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: GetItem + value: 100.0 + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: PutItem + value: 100.0 + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Query + value: 100.0 + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Scan + value: 100.0 + amazonaws_com_AWS_DynamoDB_ReturnedItemCount: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Scan + value: 100.0 + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Query + value: 100.0 + amazonaws_com_AWS_DynamoDB_TimeToLiveDeletedItemCount: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_ThrottledRequests: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: GetItem + value: 100.0 + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: PutItem + value: 100.0 + amazonaws_com_AWS_DynamoDB_ReadThrottleEvents: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_WriteThrottleEvents: + - labels: + 
cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_SystemErrors: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: GetItem + value: 100.0 + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: PutItem + value: 100.0 + amazonaws_com_AWS_DynamoDB_ConditionalCheckFailedRequests: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 + amazonaws_com_AWS_DynamoDB_TransactionConflict: + - labels: + cloud_account_id: test-account + TableName: test-table + value: 100.0 +expected: + aws_dynamodb_endpoint_consumed_write_capacity_units: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_consumed_read_capacity_units: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_provisioned_read_capacity_units: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_provisioned_write_capacity_units: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_get_successful_request_latency: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: 
test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: GetItem + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_put_successful_request_latency: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: PutItem + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_query_successful_request_latency: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Query + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_scan_successful_request_latency: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Scan + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_scan_returned_item_count: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Scan + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_query_returned_item_count: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: Query + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_time_to_live_deleted_item_count: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: 
AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_read_throttled_requests: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: GetItem + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_write_throttled_requests: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: PutItem + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_read_throttle_events: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_write_throttle_events: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_read_system_errors: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: GetItem + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_write_system_errors: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + Operation: PutItem + host_name: 
'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_conditional_check_failed_requests: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_endpoint_transaction_conflict: + entities: + - scope: ENDPOINT + service: 'aws-dynamodb::test-account' + endpoint: test-table + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + TableName: test-table + host_name: 'aws-dynamodb::test-account' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-endpoint.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-endpoint.yaml new file mode 100644 index 000000000000..55dab289504f --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-endpoint.yaml @@ -0,0 +1,85 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p>
+# Examples:
+# <pre>
+#   "PT20.345S" -- parses as "20.345 seconds"
+#   "PT15M"     -- parses as "15 minutes" (where a minute is 60 seconds)
+#   "PT10H"     -- parses as "10 hours" (where an hour is 3600 seconds)
+#   "P2D"       -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+#   "P2DT3H4M"  -- parses as "2 days, 3 hours and 4 minutes"
+#   "P-6H3M"    -- parses as "-6 hours and +3 minutes"
+#   "-P6H3M"    -- parses as "-6 hours and -3 minutes"
+#   "-P-6H+3M"  -- parses as "+6 hours and -3 minutes"
+# </pre>
+
+filter: "{ tags -> tags.Namespace == 'AWS/DynamoDB' }" # The CloudWatch metric namespace
+expPrefix: tag({tags -> tags.host_name = 'aws-dynamodb::' + tags.cloud_account_id})
+expSuffix: service(['host_name'], Layer.AWS_DYNAMODB).endpoint(['host_name'], ['TableName'], Layer.AWS_DYNAMODB)
+metricPrefix: aws_dynamodb
+metricsRules:
+  # table metrics
+  - name: endpoint_consumed_write_capacity_units
+    exp: amazonaws_com_AWS_DynamoDB_ConsumedWriteCapacityUnits
+
+  - name: endpoint_consumed_read_capacity_units
+    exp: amazonaws_com_AWS_DynamoDB_ConsumedReadCapacityUnits
+
+  - name: endpoint_provisioned_read_capacity_units
+    exp: amazonaws_com_AWS_DynamoDB_ProvisionedReadCapacityUnits
+
+  - name: endpoint_provisioned_write_capacity_units
+    exp: amazonaws_com_AWS_DynamoDB_ProvisionedWriteCapacityUnits
+
+  # table operation metrics
+  - name: endpoint_get_successful_request_latency
+    exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagMatch('Operation','GetItem|BatchGetItem')
+  - name: endpoint_put_successful_request_latency
+    exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagMatch('Operation','PutItem|BatchWriteItem')
+  - name: endpoint_query_successful_request_latency
+    exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagEqual('Operation','Query')
+  - name: endpoint_scan_successful_request_latency
+    exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagEqual('Operation','Scan')
+
+  - name: endpoint_scan_returned_item_count
+    exp: 
amazonaws_com_AWS_DynamoDB_ReturnedItemCount.tagEqual('Operation','Scan') + - name: endpoint_query_returned_item_count + exp: amazonaws_com_AWS_DynamoDB_ReturnedItemCount.tagEqual('Operation','Query') + + - name: endpoint_time_to_live_deleted_item_count + exp: amazonaws_com_AWS_DynamoDB_TimeToLiveDeletedItemCount + + - name: endpoint_read_throttled_requests + exp: amazonaws_com_AWS_DynamoDB_ThrottledRequests.tagMatch('Operation','GetItem|Scan|Query|BatchGetItem') + - name: endpoint_write_throttled_requests + exp: amazonaws_com_AWS_DynamoDB_ThrottledRequests.tagMatch('Operation','PutItem|UpdateItem|DeleteItem|BatchWriteItem') + + - name: endpoint_read_throttle_events + exp: amazonaws_com_AWS_DynamoDB_ReadThrottleEvents + - name: endpoint_write_throttle_events + exp: amazonaws_com_AWS_DynamoDB_WriteThrottleEvents + - name: endpoint_read_system_errors + exp: amazonaws_com_AWS_DynamoDB_SystemErrors.tagMatch('Operation','GetItem|Scan|Query|BatchGetItem|TransactGetItems') + - name: endpoint_write_system_errors + exp: amazonaws_com_AWS_DynamoDB_SystemErrors.tagMatch('Operation','PutItem|UpdateItem|DeleteItem|BatchWriteItem|TransactWriteItems') + + - name: endpoint_conditional_check_failed_requests + exp: amazonaws_com_AWS_DynamoDB_ConditionalCheckFailedRequests + - name: endpoint_transaction_conflict + exp: amazonaws_com_AWS_DynamoDB_TransactionConflict \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-service.data.yaml new file mode 100644 index 000000000000..b3ea7bff7b72 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-service.data.yaml @@ -0,0 +1,427 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + dynamodb: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_AccountMaxWrites: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_AccountMaxReads: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_AccountMaxTableLevelWrites: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_AccountMaxTableLevelReads: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_MaxProvisionedTableWriteCapacityUtilization: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_MaxProvisionedTableReadCapacityUtilization: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_AccountProvisionedReadCapacityUtilization: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_AccountProvisionedWriteCapacityUtilization: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_ConsumedWriteCapacityUnits: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_ConsumedReadCapacityUnits: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_ProvisionedReadCapacityUnits: + - labels: + cloud_account_id: test-account + value: 100.0 + 
amazonaws_com_AWS_DynamoDB_ProvisionedWriteCapacityUnits: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency: + - labels: + cloud_account_id: test-account + Operation: GetItem + value: 100.0 + - labels: + cloud_account_id: test-account + Operation: PutItem + value: 100.0 + - labels: + cloud_account_id: test-account + Operation: Query + value: 100.0 + - labels: + cloud_account_id: test-account + Operation: Scan + value: 100.0 + amazonaws_com_AWS_DynamoDB_ReturnedItemCount: + - labels: + cloud_account_id: test-account + Operation: Scan + value: 100.0 + - labels: + cloud_account_id: test-account + Operation: Query + value: 100.0 + amazonaws_com_AWS_DynamoDB_TimeToLiveDeletedItemCount: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_ThrottledRequests: + - labels: + cloud_account_id: test-account + Operation: GetItem + value: 100.0 + - labels: + cloud_account_id: test-account + Operation: PutItem + value: 100.0 + amazonaws_com_AWS_DynamoDB_ReadThrottleEvents: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_WriteThrottleEvents: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_SystemErrors: + - labels: + cloud_account_id: test-account + Operation: GetItem + value: 100.0 + - labels: + cloud_account_id: test-account + Operation: PutItem + value: 100.0 + amazonaws_com_AWS_DynamoDB_UserErrors: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_ConditionalCheckFailedRequests: + - labels: + cloud_account_id: test-account + value: 100.0 + amazonaws_com_AWS_DynamoDB_TransactionConflict: + - labels: + cloud_account_id: test-account + value: 100.0 +expected: + aws_dynamodb_account_max_writes: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 
'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_account_max_reads: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_account_max_table_level_writes: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_account_max_table_level_reads: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_max_provisioned_write_capacity_utilization: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_max_provisioned_read_capacity_utilization: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_account_provisioned_read_capacity_utilization: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_account_provisioned_write_capacity_utilization: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_consumed_write_capacity_units: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + 
samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_consumed_read_capacity_units: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_provisioned_read_capacity_units: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_provisioned_write_capacity_units: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_get_successful_request_latency: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: GetItem + value: 100.0 + aws_dynamodb_put_successful_request_latency: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: PutItem + value: 100.0 + aws_dynamodb_query_successful_request_latency: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: Query + value: 100.0 + aws_dynamodb_scan_successful_request_latency: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: Scan + value: 100.0 + 
aws_dynamodb_scan_returned_item_count: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: Scan + value: 100.0 + aws_dynamodb_query_returned_item_count: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: Query + value: 100.0 + aws_dynamodb_time_to_live_deleted_item_count: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_read_throttled_requests: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: GetItem + value: 100.0 + aws_dynamodb_write_throttled_requests: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: PutItem + value: 100.0 + aws_dynamodb_read_throttle_events: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_write_throttle_events: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_read_system_errors: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + 
host_name: 'aws-dynamodb::test-account' + Operation: GetItem + value: 100.0 + aws_dynamodb_write_system_errors: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + Operation: PutItem + value: 100.0 + aws_dynamodb_user_errors: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_conditional_check_failed_requests: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 + aws_dynamodb_transaction_conflict: + entities: + - scope: SERVICE + service: 'aws-dynamodb::test-account' + layer: AWS_DYNAMODB + samples: + - labels: + cloud_account_id: test-account + host_name: 'aws-dynamodb::test-account' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-service.yaml new file mode 100644 index 000000000000..7a66838fa898 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-dynamodb/dynamodb-service.yaml @@ -0,0 +1,108 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> tags.Namespace == 'AWS/DynamoDB' }" # The OpenTelemetry job name +expPrefix: tag({tags -> tags.host_name = 'aws-dynamodb::' + tags.cloud_account_id}) +expSuffix: service(['host_name'], Layer.AWS_DYNAMODB) +metricPrefix: aws_dynamodb +metricsRules: + # account metrics + - name: account_max_writes + exp: amazonaws_com_AWS_DynamoDB_AccountMaxWrites + - name: account_max_reads + exp: amazonaws_com_AWS_DynamoDB_AccountMaxReads + + - name: account_max_table_level_writes + exp: amazonaws_com_AWS_DynamoDB_AccountMaxTableLevelWrites + - name: account_max_table_level_reads + exp: amazonaws_com_AWS_DynamoDB_AccountMaxTableLevelReads + + - name: max_provisioned_write_capacity_utilization + exp: amazonaws_com_AWS_DynamoDB_MaxProvisionedTableWriteCapacityUtilization + - name: max_provisioned_read_capacity_utilization + exp: 
amazonaws_com_AWS_DynamoDB_MaxProvisionedTableReadCapacityUtilization + + - name: account_provisioned_read_capacity_utilization + exp: amazonaws_com_AWS_DynamoDB_AccountProvisionedReadCapacityUtilization + - name: account_provisioned_write_capacity_utilization + exp: amazonaws_com_AWS_DynamoDB_AccountProvisionedWriteCapacityUtilization + + # table metrics + - name: consumed_write_capacity_units + exp: amazonaws_com_AWS_DynamoDB_ConsumedWriteCapacityUnits + + - name: consumed_read_capacity_units + exp: amazonaws_com_AWS_DynamoDB_ConsumedReadCapacityUnits + + - name: provisioned_read_capacity_units + exp: amazonaws_com_AWS_DynamoDB_ProvisionedReadCapacityUnits + + - name: provisioned_write_capacity_units + exp: amazonaws_com_AWS_DynamoDB_ProvisionedWriteCapacityUnits + + # table operation metrics + - name: get_successful_request_latency + exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagMatch('Operation','GetItem|BatchGetItem') + - name: put_successful_request_latency + exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagMatch('Operation','PutItem|BatchWriteItem') + - name: query_successful_request_latency + exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagEqual('Operation','Query') + - name: scan_successful_request_latency + exp: amazonaws_com_AWS_DynamoDB_SuccessfulRequestLatency.tagEqual('Operation','Scan') + - name: scan_returned_item_count + exp: amazonaws_com_AWS_DynamoDB_ReturnedItemCount.tagEqual('Operation','Scan') + - name: query_returned_item_count + exp: amazonaws_com_AWS_DynamoDB_ReturnedItemCount.tagEqual('Operation','Query') + + - name: time_to_live_deleted_item_count + exp: amazonaws_com_AWS_DynamoDB_TimeToLiveDeletedItemCount + + - name: read_throttled_requests + exp: amazonaws_com_AWS_DynamoDB_ThrottledRequests.tagMatch('Operation','GetItem|Scan|Query|BatchGetItem') + - name: write_throttled_requests + exp: amazonaws_com_AWS_DynamoDB_ThrottledRequests.tagMatch('Operation','PutItem|UpdateItem|DeleteItem|BatchWriteItem') 
+ + - name: read_throttle_events + exp: amazonaws_com_AWS_DynamoDB_ReadThrottleEvents + - name: write_throttle_events + exp: amazonaws_com_AWS_DynamoDB_WriteThrottleEvents + - name: read_system_errors + exp: amazonaws_com_AWS_DynamoDB_SystemErrors.tagMatch('Operation','GetItem|Scan|Query|BatchGetItem|TransactGetItems') + - name: write_system_errors + exp: amazonaws_com_AWS_DynamoDB_SystemErrors.tagMatch('Operation','PutItem|UpdateItem|DeleteItem|BatchWriteItem|TransactWriteItems') + - name: user_errors + exp: amazonaws_com_AWS_DynamoDB_UserErrors + - name: conditional_check_failed_requests + exp: amazonaws_com_AWS_DynamoDB_ConditionalCheckFailedRequests + - name: transaction_conflict + exp: amazonaws_com_AWS_DynamoDB_TransactionConflict + + diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-cluster.data.yaml new file mode 100644 index 000000000000..2e8662d2e4df --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-cluster.data.yaml @@ -0,0 +1,133 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + cluster: + - labels: + ClusterName: test-cluster + value: 100.0 + cluster_node_count: + - labels: + ClusterName: test-cluster + value: 100.0 + cluster_failed_node_count: + - labels: + ClusterName: test-cluster + value: 100.0 + namespace_number_of_running_pods: + - labels: + ClusterName: test-cluster + value: 100.0 + service_number_of_running_pods: + - labels: + ClusterName: test-cluster + value: 100.0 + node_network_rx_dropped: + - labels: + ClusterName: test-cluster + value: 100.0 + node_network_rx_errors: + - labels: + ClusterName: test-cluster + value: 100.0 + node_network_tx_dropped: + - labels: + ClusterName: test-cluster + value: 100.0 + node_network_tx_errors: + - labels: + ClusterName: test-cluster + value: 100.0 +expected: + eks_cluster_node_count: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_failed_node_count: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_namespace_count: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + samples: + - labels: + Namespace: + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_service_count: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + samples: + - labels: + Service: + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_net_rx_dropped: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_net_rx_error: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + 
samples: + - labels: + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_net_tx_dropped: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_net_tx_error: + entities: + - scope: SERVICE + service: 'aws-eks-cluster::test-cluster' + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-cluster.yaml new file mode 100644 index 000000000000..a0cd0fcfc839 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-cluster.yaml @@ -0,0 +1,36 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +
+filter: "{ tags -> tags.job_name == 'aws-cloud-eks-monitoring' }" # The OpenTelemetry job name +expPrefix: tag({tags -> tags.cluster = 'aws-eks-cluster::' + tags.ClusterName}) +expSuffix: service(['cluster'], Layer.AWS_EKS) +metricPrefix: eks_cluster +metricsRules: + - name: node_count + exp: cluster_node_count.downsampling(LATEST) + - name: failed_node_count + exp: cluster_failed_node_count.downsampling(LATEST) + - name: namespace_count + exp: namespace_number_of_running_pods.sum(['Namespace','cluster']) + - name: service_count + exp: service_number_of_running_pods.sum(['Service','cluster']) + - name: net_rx_dropped + exp: node_network_rx_dropped + - name: net_rx_error + exp: node_network_rx_errors + - name: net_tx_dropped + exp: node_network_tx_dropped + - name: net_tx_error + exp: node_network_tx_errors diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-node.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-node.data.yaml new file mode 100644 index 000000000000..c6cdbc8d95eb --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-node.data.yaml @@ -0,0 +1,294 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + cluster: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + pod_number_of_containers: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_cpu_utilization: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_memory_utilization: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_network_rx_bytes: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_network_rx_errors: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_network_tx_bytes: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_network_tx_errors: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_diskio_io_service_bytes_write: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_diskio_io_service_bytes_read: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + node_filesystem_utilization: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + pod_cpu_utilization: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + pod_memory_utilization: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + pod_network_rx_bytes: + - labels: + ClusterName: test-cluster + 
NodeName: test-node + value: 100.0 + pod_network_rx_errors: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + pod_network_tx_bytes: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 + pod_network_tx_errors: + - labels: + ClusterName: test-cluster + NodeName: test-node + value: 100.0 +expected: + eks_cluster_node_pod_number: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + NodeName: test-node + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_node_cpu_utilization: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + NodeName: test-node + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_node_memory_utilization: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + NodeName: test-node + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_node_net_rx_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + NodeName: test-node + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_node_net_rx_error: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + NodeName: test-node + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_node_net_tx_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + NodeName: test-node + 
ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_node_net_tx_error: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + NodeName: test-node + ClusterName: test-cluster + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_node_disk_io_write: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + device: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 + eks_cluster_node_disk_io_read: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + device: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 + eks_cluster_node_fs_utilization: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + device: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 + eks_cluster_node_pod_cpu_utilization: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + PodName: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 + eks_cluster_node_pod_memory_utilization: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + PodName: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 + eks_cluster_node_pod_net_rx_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + PodName: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 
100.0 + eks_cluster_node_pod_net_rx_error: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + PodName: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 + eks_cluster_node_pod_net_tx_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + PodName: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 + eks_cluster_node_pod_net_tx_error: + entities: + - scope: SERVICE_INSTANCE + service: 'aws-eks-cluster::test-cluster' + instance: test-node + layer: AWS_EKS + samples: + - labels: + PodName: + cluster: 'aws-eks-cluster::test-cluster' + NodeName: test-node + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-node.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-node.yaml new file mode 100644 index 000000000000..7d27617ca80c --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-node.yaml @@ -0,0 +1,53 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +
+filter: "{ tags -> tags.job_name == 'aws-cloud-eks-monitoring' }" # The OpenTelemetry job name +expPrefix: tag({tags -> tags.cluster = 'aws-eks-cluster::' + tags.ClusterName}) +expSuffix: instance(['cluster'],['NodeName'], Layer.AWS_EKS) +metricPrefix: eks_cluster_node +metricsRules: + - name: pod_number + exp: pod_number_of_containers.downsampling(SUM) + - name: cpu_utilization + exp: node_cpu_utilization + - name: memory_utilization + exp: node_memory_utilization + - name: net_rx_bytes + exp: node_network_rx_bytes + - name: net_rx_error + exp: node_network_rx_errors + - name: net_tx_bytes + exp: node_network_tx_bytes + - name: net_tx_error + exp: node_network_tx_errors + - name: disk_io_write + exp: node_diskio_io_service_bytes_write.sum(['device','cluster','NodeName']) + - name: disk_io_read + exp: node_diskio_io_service_bytes_read.sum(['device','cluster','NodeName']) + - name: fs_utilization + exp: node_filesystem_utilization.sum(['device','cluster','NodeName']) + # Pod + - name: pod_cpu_utilization + exp: pod_cpu_utilization.sum(['PodName','cluster','NodeName']) + - name: pod_memory_utilization + exp: pod_memory_utilization.sum(['PodName','cluster','NodeName']) + - name: pod_net_rx_bytes + exp: pod_network_rx_bytes.sum(['PodName','cluster','NodeName']) + - name: pod_net_rx_error + exp:
pod_network_rx_errors.sum(['PodName','cluster','NodeName']) + - name: pod_net_tx_bytes + exp: pod_network_tx_bytes.sum(['PodName','cluster','NodeName']) + - name: pod_net_tx_error + exp: pod_network_tx_errors.sum(['PodName','cluster','NodeName']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-service.data.yaml new file mode 100644 index 000000000000..4d124a5227b6 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-service.data.yaml @@ -0,0 +1,124 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + cluster: + - labels: + ClusterName: test-cluster + Service: test-service + value: 100.0 + pod_cpu_utilization: + - labels: + ClusterName: test-cluster + Service: test-service + value: 100.0 + pod_memory_utilization: + - labels: + ClusterName: test-cluster + Service: test-service + value: 100.0 + pod_network_rx_bytes: + - labels: + ClusterName: test-cluster + Service: test-service + value: 100.0 + pod_network_rx_errors: + - labels: + ClusterName: test-cluster + Service: test-service + value: 100.0 + pod_network_tx_bytes: + - labels: + ClusterName: test-cluster + Service: test-service + value: 100.0 + pod_network_tx_errors: + - labels: + ClusterName: test-cluster + Service: test-service + value: 100.0 +expected: + eks_cluster_service_pod_cpu_utilization: + entities: + - scope: ENDPOINT + service: 'aws-eks-cluster::test-cluster' + endpoint: test-service + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + Service: test-service + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_service_pod_memory_utilization: + entities: + - scope: ENDPOINT + service: 'aws-eks-cluster::test-cluster' + endpoint: test-service + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + Service: test-service + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_service_pod_net_rx_bytes: + entities: + - scope: ENDPOINT + service: 'aws-eks-cluster::test-cluster' + endpoint: test-service + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + Service: test-service + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_service_pod_net_rx_error: + entities: + - scope: ENDPOINT + service: 'aws-eks-cluster::test-cluster' + endpoint: test-service + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + Service: test-service + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_service_pod_net_tx_bytes: + entities: + - scope: ENDPOINT + service: 
'aws-eks-cluster::test-cluster' + endpoint: test-service + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + Service: test-service + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 + eks_cluster_service_pod_net_tx_error: + entities: + - scope: ENDPOINT + service: 'aws-eks-cluster::test-cluster' + endpoint: test-service + layer: AWS_EKS + samples: + - labels: + ClusterName: test-cluster + Service: test-service + cluster: 'aws-eks-cluster::test-cluster' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-service.yaml new file mode 100644 index 000000000000..f9727b9445cc --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-eks/eks-service.yaml @@ -0,0 +1,48 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> tags.job_name == 'aws-cloud-eks-monitoring' && tags.Service?.trim() }" # The OpenTelemetry job name +expPrefix: tag({tags -> tags.cluster = 'aws-eks-cluster::' + tags.ClusterName}) +expSuffix: endpoint(['cluster'],['Service'], Layer.AWS_EKS) +metricPrefix: eks_cluster_service +metricsRules: + - name: pod_cpu_utilization + exp: pod_cpu_utilization + - name: pod_memory_utilization + exp: pod_memory_utilization + - name: pod_net_rx_bytes + exp: pod_network_rx_bytes + - name: pod_net_rx_error + exp: pod_network_rx_errors + - name: pod_net_tx_bytes + exp: pod_network_tx_bytes + - name: pod_net_tx_error + exp: pod_network_tx_errors diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-endpoint.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-endpoint.data.yaml new file mode 100644 index 000000000000..3a313682b0a5 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-endpoint.data.yaml @@ -0,0 +1,293 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + amazonaws_com_AWS_ApiGateway_4xx_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_5xx_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_DataProcessed_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_4XXError_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_5XXError_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheHitCount_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheHitCount_count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheMissCount_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + 
ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheMissCount_count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_Count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_IntegrationLatency_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_IntegrationLatency_count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_Latency_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 + amazonaws_com_AWS_ApiGateway_Latency_count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + Method: GET + Resource: /test-resource + value: 100.0 +expected: + aws_gateway_endpoint_4xx: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_endpoint_5xx: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + 
service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_endpoint_DataProcessed: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_endpoint_4xx_2: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_endpoint_5xx_2: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_endpoint_cache_hit_rate: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 1.0 + aws_gateway_endpoint_cache_miss_rate: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 1.0 + aws_gateway_endpoint_count: + entities: + - scope: ENDPOINT + service: 
'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_endpoint_integration_latency: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 1.0 + aws_gateway_endpoint_latency: + entities: + - scope: ENDPOINT + service: 'aws-api-gateway::test-value:test-value' + endpoint: 'GET:/test-resource' + layer: AWS_GATEWAY + samples: + - labels: + Resource: /test-resource + Stage: test-value + Method: GET + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 1.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-endpoint.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-endpoint.yaml new file mode 100644 index 000000000000..840adde706af --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-endpoint.yaml @@ -0,0 +1,63 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +# ApiId for HTTP API, ApiName for REST API +filter: "{ tags -> {tags.cloud_provider == 'aws' && tags.Namespace == 'AWS/ApiGateway' && (tags.ApiId || tags.ApiName) && tags.Stage && tags.Method && tags.Resource } }" +expSuffix: tag({tags -> tags.service_name= tags.ApiId ? 
'aws-api-gateway::'+tags.Stage+':'+tags.ApiId:'aws-api-gateway::'+tags.Stage+':'+tags.ApiName }).endpoint(['service_name'],['Method','Resource'], ':', Layer.AWS_GATEWAY) +metricPrefix: aws_gateway_endpoint +metricsRules: + # Only for HTTP API + - name: 4xx + exp: amazonaws_com_AWS_ApiGateway_4xx_sum.downsampling(SUM) + - name: 5xx + exp: amazonaws_com_AWS_ApiGateway_5xx_sum.downsampling(SUM) + - name: DataProcessed + exp: amazonaws_com_AWS_ApiGateway_DataProcessed_sum.downsampling(SUM) + + # Only for REST API + - name: 4xx + exp: amazonaws_com_AWS_ApiGateway_4XXError_sum.downsampling(SUM) + - name: 5xx + exp: amazonaws_com_AWS_ApiGateway_5XXError_sum.downsampling(SUM) + + - name: cache_hit_rate + exp: amazonaws_com_AWS_ApiGateway_CacheHitCount_sum.div(amazonaws_com_AWS_ApiGateway_CacheHitCount_count) + - name: cache_miss_rate + exp: amazonaws_com_AWS_ApiGateway_CacheMissCount_sum.div(amazonaws_com_AWS_ApiGateway_CacheMissCount_count) + + # Common metrics for HTTP, REST API + - name: count + exp: amazonaws_com_AWS_ApiGateway_Count.downsampling(SUM) + - name: integration_latency + exp: amazonaws_com_AWS_ApiGateway_IntegrationLatency_sum.div(amazonaws_com_AWS_ApiGateway_IntegrationLatency_count) + - name: latency + exp: amazonaws_com_AWS_ApiGateway_Latency_sum.div(amazonaws_com_AWS_ApiGateway_Latency_count) + diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-service.data.yaml new file mode 100644 index 000000000000..85fb2cc27db8 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-service.data.yaml @@ -0,0 +1,235 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + amazonaws_com_AWS_ApiGateway_4xx_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_5xx_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_DataProcessed_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_4XXError_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_5XXError_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheHitCount_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheHitCount_count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheMissCount_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_CacheMissCount_count: + - labels: + service_name: test-service + 
ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_Count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_IntegrationLatency_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_IntegrationLatency_count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_Latency_sum: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 + amazonaws_com_AWS_ApiGateway_Latency_count: + - labels: + service_name: test-service + ApiId: test-value + Stage: test-value + ApiName: test-value + value: 100.0 +expected: + aws_gateway_service_4xx: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_5xx: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_data_processed: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_4xx_2: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: 
test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_5xx_2: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_cache_hit_rate: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_cache_miss_rate: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_count: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 100.0 + aws_gateway_service_integration_latency: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 1.0 + aws_gateway_service_latency: + entities: + - scope: SERVICE + service: 'aws-api-gateway::test-value:test-value' + layer: AWS_GATEWAY + samples: + - labels: + Stage: test-value + ApiName: test-value + ApiId: test-value + service_name: 'aws-api-gateway::test-value:test-value' + value: 1.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-service.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-service.yaml new file mode 100644 index 000000000000..c1fc82db1f7c --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-gateway/gateway-service.yaml @@ -0,0 +1,63 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +# ApiId for HTTP API, ApiName for REST API +filter: "{ tags -> {tags.cloud_provider == 'aws' && tags.Namespace == 'AWS/ApiGateway' && (tags.ApiId || tags.ApiName) && tags.Stage && !tags.Method && !tags.Resource } }" +expSuffix: tag({tags -> tags.service_name= tags.ApiId ? 'aws-api-gateway::'+tags.Stage+':'+tags.ApiId:'aws-api-gateway::'+tags.Stage+':'+tags.ApiName }).service(['service_name'], Layer.AWS_GATEWAY) +metricPrefix: aws_gateway_service +metricsRules: + # Only for HTTP API + - name: 4xx + exp: amazonaws_com_AWS_ApiGateway_4xx_sum.downsampling(SUM) + - name: 5xx + exp: amazonaws_com_AWS_ApiGateway_5xx_sum.downsampling(SUM) + - name: data_processed + exp: amazonaws_com_AWS_ApiGateway_DataProcessed_sum.downsampling(SUM) + + # Only for REST API + - name: 4xx + exp: amazonaws_com_AWS_ApiGateway_4XXError_sum.downsampling(SUM) + - name: 5xx + exp: amazonaws_com_AWS_ApiGateway_5XXError_sum.downsampling(SUM) + + - name: cache_hit_rate + exp: amazonaws_com_AWS_ApiGateway_CacheHitCount_sum.div(amazonaws_com_AWS_ApiGateway_CacheHitCount_count).multiply(100) + - name: cache_miss_rate + exp: amazonaws_com_AWS_ApiGateway_CacheMissCount_sum.div(amazonaws_com_AWS_ApiGateway_CacheMissCount_count).multiply(100) + + # Common metrics for HTTP, REST API + - name: count + exp: amazonaws_com_AWS_ApiGateway_Count.downsampling(SUM) + - name: integration_latency + exp: amazonaws_com_AWS_ApiGateway_IntegrationLatency_sum.div(amazonaws_com_AWS_ApiGateway_IntegrationLatency_count) + - 
name: latency + exp: amazonaws_com_AWS_ApiGateway_Latency_sum.div(amazonaws_com_AWS_ApiGateway_Latency_count) + diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-s3/s3-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-s3/s3-service.data.yaml new file mode 100644 index 000000000000..875c7e1f86c9 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-s3/s3-service.data.yaml @@ -0,0 +1,177 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + amazonaws_com_AWS_S3_4xxErrors_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_5xxErrors_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_BytesDownloaded_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_BytesUploaded_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_TotalRequestLatency_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_TotalRequestLatency_count: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_FirstByteLatency_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_FirstByteLatency_count: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_AllRequests_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_PutRequests_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_GetRequests_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 + amazonaws_com_AWS_S3_DeleteRequests_sum: + - labels: + bucket: test-value + BucketName: test-value + value: 100.0 +expected: + aws_s3_4xx: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 + aws_s3_5xx: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 + aws_s3_downloaded_bytes: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 + 
aws_s3_uploaded_bytes: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 + aws_s3_request_latency: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 1.0 + aws_s3_first_latency_bytes: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 1.0 + aws_s3_all_requests: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 + aws_s3_put_requests: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 + aws_s3_get_requests: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 + aws_s3_delete_requests: + entities: + - scope: SERVICE + service: 'aws-s3::test-value' + layer: AWS_S3 + samples: + - labels: + bucket: 'aws-s3::test-value' + BucketName: test-value + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/aws-s3/s3-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/aws-s3/s3-service.yaml new file mode 100644 index 000000000000..6cfbf1718c7f --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/aws-s3/s3-service.yaml @@ -0,0 +1,55 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> {tags.cloud_provider == 'aws' && tags.Namespace == 'AWS/S3' } }" +expSuffix: tag({tags -> tags.bucket = 'aws-s3::' + tags.BucketName}).service(['bucket'], Layer.AWS_S3) +metricPrefix: aws_s3 +metricsRules: + - name: 4xx + exp: amazonaws_com_AWS_S3_4xxErrors_sum.downsampling(SUM) + - name: 5xx + exp: amazonaws_com_AWS_S3_5xxErrors_sum.downsampling(SUM) + - name: downloaded_bytes + exp: amazonaws_com_AWS_S3_BytesDownloaded_sum.downsampling(SUM) + - name: uploaded_bytes + exp: amazonaws_com_AWS_S3_BytesUploaded_sum.downsampling(SUM) + - name: request_latency + exp: amazonaws_com_AWS_S3_TotalRequestLatency_sum.div(amazonaws_com_AWS_S3_TotalRequestLatency_count) + - name: first_latency_bytes + 
exp: amazonaws_com_AWS_S3_FirstByteLatency_sum.div(amazonaws_com_AWS_S3_FirstByteLatency_count) + - name: all_requests + exp: amazonaws_com_AWS_S3_AllRequests_sum.downsampling(SUM) + - name: put_requests + exp: amazonaws_com_AWS_S3_PutRequests_sum.downsampling(SUM) + - name: get_requests + exp: amazonaws_com_AWS_S3_GetRequests_sum.downsampling(SUM) + - name: delete_requests + exp: amazonaws_com_AWS_S3_DeleteRequests_sum.downsampling(SUM) diff --git a/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-instance.data.yaml new file mode 100644 index 000000000000..9bd7586983bf --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-instance.data.yaml @@ -0,0 +1,479 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
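+# Fixture layout (assumed from the MAL test harness, not stated in this file):
+# each metric under `input:` below is fed as a scraped sample to the rules in
+# banyandb-instance.yaml, and the resulting meters are asserted against the
+# `expected:` section.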
+ +input: + banyandb_measure_total_written: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_stream_tst_total_written: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_system_memory_state: + - labels: + host_name: test-host + service_instance_id: test-instance + kind: total + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + kind: used + value: 100.0 + banyandb_system_disk: + - labels: + host_name: test-host + service_instance_id: test-instance + kind: used + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + kind: total + value: 100.0 + banyandb_liaison_grpc_total_started: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + group: test-value + value: 100.0 + banyandb_system_cpu_num: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_stream_msg_sent_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_stream_msg_received_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_queue_sub_total_msg_sent_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_registry_started: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + up: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + process_cpu_seconds_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + process_resident_memory_bytes: 
+ - labels: + host_name: test-host + service_instance_id: test-instance + kind: total + value: 100.0 + banyandb_system_net_state: + - labels: + host_name: test-host + service_instance_id: test-instance + kind: bytes_recv + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + kind: bytes_sent + value: 100.0 + banyandb_liaison_grpc_total_latency: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + method: query + value: 100.0 + banyandb_measure_total_file_elements: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_measure_total_merge_loop_started: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + type: file + value: 100.0 + banyandb_measure_total_merge_latency: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + type: file + value: 100.0 + banyandb_measure_total_merged_parts: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + type: file + value: 100.0 + banyandb_measure_inverted_index_total_updates: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_stream_storage_inverted_index_total_term_searchers_started: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_measure_inverted_index_total_doc_count: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_stream_tst_inverted_index_total_updates: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_stream_tst_inverted_index_total_term_searchers_started: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + 
banyandb_stream_tst_inverted_index_total_doc_count: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 +expected: + meter_banyandb_instance_write_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + group: test-value + value: 50.0 + meter_banyandb_instance_total_memory: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + kind: total + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_instance_disk_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_instance_query_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + method: query + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_instance_total_cpu: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_instance_write_and_query_errors_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 3000.0 + meter_banyandb_instance_etcd_operation_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + 
samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 50.0 + meter_banyandb_instance_active_instance: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_instance_cpu_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 250.0 + meter_banyandb_instance_rss_memory_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_instance_disk_usage_all: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_instance_network_usage_recv: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 25.0 + meter_banyandb_instance_network_usage_sent: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 25.0 + meter_banyandb_instance_storage_write_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: 
test-instance + value: 25000.0 + meter_banyandb_instance_query_latency: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_instance_total_data: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 100.0 + meter_banyandb_instance_merge_file_data: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1500000.0 + meter_banyandb_instance_merge_file_latency: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_instance_merge_file_partitions: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_instance_series_write_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25000.0 + meter_banyandb_instance_series_term_search_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 
'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25.0 + meter_banyandb_instance_total_series: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 100.0 + meter_banyandb_instance_stream_write_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25.0 + meter_banyandb_instance_term_search_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25000.0 + meter_banyandb_instance_total_document: + entities: + - scope: SERVICE_INSTANCE + service: 'banyandb::test-host' + instance: test-instance + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-instance.yaml b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-instance.yaml new file mode 100644 index 000000000000..21955331f317 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-instance.yaml @@ -0,0 +1,86 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'banyandb-monitoring' }" +expSuffix: tag({tags -> tags.host_name = 'banyandb::' + tags.host_name}).service(['host_name'] , Layer.BANYANDB).instance(['host_name'], ['service_instance_id'], Layer.BANYANDB) +metricPrefix: meter_banyandb +metricsRules: + - name: instance_write_rate + exp: banyandb_measure_total_written.rate('PT15S')+banyandb_stream_tst_total_written.rate('PT15S') + - name: instance_total_memory + exp: banyandb_system_memory_state.tagEqual('kind','total') + - name: instance_disk_usage + exp: banyandb_system_disk.tagEqual('kind','used').sum(['host_name','service_instance_id']) + - name: instance_query_rate + exp: banyandb_liaison_grpc_total_started.sum(['method','host_name','service_instance_id']) + - name: instance_total_cpu + exp: banyandb_system_cpu_num + - name: instance_write_and_query_errors_rate + exp: 
banyandb_liaison_grpc_total_err.tagEqual('method','query').sum(['method','host_name','service_instance_id']).rate('PT15S')*60 + banyandb_liaison_grpc_total_stream_msg_sent_err.sum(['host_name','service_instance_id']).rate('PT15S')*60 + banyandb_liaison_grpc_total_stream_msg_received_err.sum(['host_name','service_instance_id']).rate('PT15S')*60 + banyandb_queue_sub_total_msg_sent_err.sum(['host_name','service_instance_id']).rate('PT15S')*60 + - name: instance_etcd_operation_rate + exp: banyandb_liaison_grpc_total_registry_started.sum(['host_name','service_instance_id']).rate('PT15S') + banyandb_liaison_grpc_total_started.sum(['host_name','service_instance_id']).rate('PT15S') + - name: instance_active_instance + exp: up.sum(['host_name','service_instance_id']).downsampling(MIN) + - name: instance_cpu_usage + exp: (((process_cpu_seconds_total.sum(['host_name','service_instance_id']).rate('PT15S') / banyandb_system_cpu_num.sum(['host_name','service_instance_id']))).max(['host_name','service_instance_id']))*1000 + - name: instance_rss_memory_usage + exp: ((process_resident_memory_bytes.sum(['host_name','service_instance_id']).downsampling(MAX) / banyandb_system_memory_state.tagEqual('kind','total').sum(['host_name','service_instance_id'])).max(['host_name','service_instance_id']))*1000 + - name: instance_disk_usage_all + exp: ((banyandb_system_disk.tagEqual('kind','used').sum(['host_name','service_instance_id']) / banyandb_system_memory_state.tagEqual('kind','total').sum(['host_name','service_instance_id'])).max(['host_name','service_instance_id']))*1000 + - name: instance_network_usage_recv + exp: banyandb_system_net_state.tagEqual('kind','bytes_recv').sum(['host_name','service_instance_id']).rate('PT15S') + - name: instance_network_usage_sent + exp: banyandb_system_net_state.tagEqual('kind','bytes_sent').sum(['host_name','service_instance_id']).rate('PT15S') + - name: instance_storage_write_rate + exp: 
banyandb_measure_total_written.sum(['group','host_name','service_instance_id']).rate('PT15S')*1000 + - name: instance_query_latency + exp: (banyandb_liaison_grpc_total_latency.tagEqual('method','query').sum(['group','host_name','service_instance_id']).rate('PT15S') / banyandb_liaison_grpc_total_started.tagEqual('method','query').sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: instance_total_data + exp: banyandb_measure_total_file_elements.sum(['group','host_name','service_instance_id']) + - name: instance_merge_file_data + exp: banyandb_measure_total_merge_loop_started.sum(['group','host_name','service_instance_id']).rate('PT15S') * 60 *1000 + - name: instance_merge_file_latency + exp: (banyandb_measure_total_merge_latency.tagEqual('type','file').sum(['group','host_name','service_instance_id']).rate('PT15S') / banyandb_measure_total_merge_loop_started.sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: instance_merge_file_partitions + exp: (banyandb_measure_total_merged_parts.tagEqual('type','file').sum(['group','host_name','service_instance_id']).rate('PT15S') / banyandb_measure_total_merge_loop_started.sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: instance_series_write_rate + exp: (banyandb_measure_inverted_index_total_updates.sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: instance_series_term_search_rate + exp: banyandb_stream_storage_inverted_index_total_term_searchers_started.sum(['group','host_name','service_instance_id']).rate('PT15S') + - name: instance_total_series + exp: banyandb_measure_inverted_index_total_doc_count.sum(['group','host_name','service_instance_id']) + - name: instance_stream_write_rate + exp: banyandb_stream_tst_inverted_index_total_updates.sum(['group','host_name','service_instance_id']).rate('PT15S') + - name: instance_term_search_rate + exp: 
banyandb_stream_tst_inverted_index_total_term_searchers_started.sum(['group','host_name','service_instance_id']).rate('PT15S')* 1000 + - name: instance_total_document + exp: banyandb_stream_tst_inverted_index_total_doc_count.sum(['group','host_name','service_instance_id']) + + diff --git a/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-service.data.yaml new file mode 100644 index 000000000000..77a186afd5fe --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-service.data.yaml @@ -0,0 +1,453 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
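+# Unlike the instance fixture, banyandb_system_cpu_num below carries a
+# `method` label, presumably to exercise the sum(['method', ...]) grouping
+# used by the total_cpu rule in banyandb-service.yaml.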
+ +input: + banyandb_measure_total_written: + - labels: + host_name: test-host + service_instance_id: test-instance + group: test-value + value: 100.0 + banyandb_stream_tst_total_written: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_system_memory_state: + - labels: + host_name: test-host + service_instance_id: test-instance + kind: total + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + kind: used + value: 100.0 + banyandb_system_disk: + - labels: + host_name: test-host + service_instance_id: test-instance + kind: used + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + kind: total + value: 100.0 + banyandb_liaison_grpc_total_started: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + group: test-value + value: 100.0 + banyandb_system_cpu_num: + - labels: + method: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_stream_msg_sent_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_stream_msg_received_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_queue_sub_total_msg_sent_err: + - labels: + method: query + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_liaison_grpc_total_registry_started: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + up: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + process_cpu_seconds_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + 
process_resident_memory_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + kind: total + value: 100.0 + banyandb_system_net_state: + - labels: + host_name: test-host + service_instance_id: test-instance + kind: bytes_recv + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + kind: bytes_sent + value: 100.0 + banyandb_liaison_grpc_total_latency: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + method: query + value: 100.0 + banyandb_measure_total_file_elements: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_measure_total_merge_loop_started: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + type: file + value: 100.0 + banyandb_measure_total_merge_latency: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + type: file + value: 100.0 + banyandb_measure_total_merged_parts: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + type: file + value: 100.0 + banyandb_measure_inverted_index_total_updates: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_stream_storage_inverted_index_total_term_searchers_started: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_measure_inverted_index_total_doc_count: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_stream_tst_inverted_index_total_updates: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + banyandb_stream_tst_inverted_index_total_term_searchers_started: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 + 
banyandb_stream_tst_inverted_index_total_doc_count: + - labels: + group: test-value + host_name: test-host + service_instance_id: test-instance + value: 100.0 +expected: + meter_banyandb_write_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 50.0 + meter_banyandb_total_memory: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + value: 100.0 + meter_banyandb_disk_usage: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_query_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + method: query + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_total_cpu: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + method: test-value + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_write_and_query_errors_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 3000.0 + meter_banyandb_etcd_operation_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 50.0 + meter_banyandb_active_instance: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 100.0 + meter_banyandb_cpu_usage: + 
entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 250.0 + meter_banyandb_rss_memory_usage: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_disk_usage_all: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_network_usage_recv: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 25.0 + meter_banyandb_network_usage_sent: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + service_instance_id: test-instance + value: 25.0 + meter_banyandb_storage_write_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25000.0 + meter_banyandb_query_latency: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_total_data: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 100.0 + meter_banyandb_merge_file_data: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 
'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1500000.0 + meter_banyandb_merge_file_latency: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_merge_file_partitions: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 1000.0 + meter_banyandb_series_write_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25000.0 + meter_banyandb_series_term_search_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25.0 + meter_banyandb_total_series: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 100.0 + meter_banyandb_stream_write_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25.0 + meter_banyandb_term_search_rate: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 25000.0 + meter_banyandb_total_document: + entities: + - scope: SERVICE + service: 'banyandb::test-host' + layer: BANYANDB + samples: + - labels: + host_name: 
'banyandb::test-host' + group: test-value + service_instance_id: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-service.yaml new file mode 100644 index 000000000000..566f893cc4a5 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/banyandb/banyandb-service.yaml @@ -0,0 +1,86 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'banyandb-monitoring' }" +expSuffix: tag({tags -> tags.host_name = 'banyandb::' + tags.host_name}).service(['host_name'], Layer.BANYANDB) +metricPrefix: meter_banyandb +metricsRules: + - name: write_rate + exp: (banyandb_measure_total_written.sum(['host_name','service_instance_id']).rate('PT15S') + banyandb_stream_tst_total_written.sum(['host_name','service_instance_id']).rate('PT15S')) + - name: total_memory + exp: banyandb_system_memory_state.tagEqual('kind','total').sum(['host_name']) + - name: disk_usage + exp: banyandb_system_disk.tagEqual('kind','used').sum(['host_name','service_instance_id']) + - name: query_rate + exp: banyandb_liaison_grpc_total_started.sum(['method','host_name','service_instance_id']) + - name: total_cpu + exp: banyandb_system_cpu_num.sum(['method','host_name','service_instance_id']) + - name: write_and_query_errors_rate + exp: banyandb_liaison_grpc_total_err.tagEqual('method','query').sum(['method','host_name','service_instance_id']).rate('PT15S')*60 + banyandb_liaison_grpc_total_stream_msg_sent_err.sum(['host_name','service_instance_id']).rate('PT15S')*60 + banyandb_liaison_grpc_total_stream_msg_received_err.sum(['host_name','service_instance_id']).rate('PT15S')*60 + banyandb_queue_sub_total_msg_sent_err.sum(['host_name','service_instance_id']).rate('PT15S')*60 + - name: etcd_operation_rate + exp: banyandb_liaison_grpc_total_registry_started.sum(['host_name','service_instance_id']).rate('PT15S') + 
banyandb_liaison_grpc_total_started.sum(['host_name','service_instance_id']).rate('PT15S') + - name: active_instance + exp: up.sum(['host_name','service_instance_id']).downsampling(MIN) + - name: cpu_usage + exp: (((process_cpu_seconds_total.sum(['host_name','service_instance_id']).rate('PT15S') / banyandb_system_cpu_num.sum(['host_name','service_instance_id']))).max(['host_name','service_instance_id']))*1000 + - name: rss_memory_usage + exp: ((process_resident_memory_bytes.sum(['host_name','service_instance_id']).downsampling(MAX) / banyandb_system_memory_state.tagEqual('kind','total').sum(['host_name','service_instance_id'])).max(['host_name','service_instance_id']))*1000 + - name: disk_usage_all + exp: ((banyandb_system_disk.tagEqual('kind','used').sum(['host_name','service_instance_id']) / banyandb_system_memory_state.tagEqual('kind','total').sum(['host_name','service_instance_id'])).max(['host_name','service_instance_id']))*1000 + - name: network_usage_recv + exp: banyandb_system_net_state.tagEqual('kind','bytes_recv').sum(['host_name','service_instance_id']).rate('PT15S') + - name: network_usage_sent + exp: banyandb_system_net_state.tagEqual('kind','bytes_sent').sum(['host_name','service_instance_id']).rate('PT15S') + - name: storage_write_rate + exp: banyandb_measure_total_written.sum(['group','host_name','service_instance_id']).rate('PT15S')*1000 + - name: query_latency + exp: (banyandb_liaison_grpc_total_latency.tagEqual('method','query').sum(['group','host_name','service_instance_id']).rate('PT15S') / banyandb_liaison_grpc_total_started.tagEqual('method','query').sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: total_data + exp: banyandb_measure_total_file_elements.sum(['group','host_name','service_instance_id']) + - name: merge_file_data + exp: banyandb_measure_total_merge_loop_started.sum(['group','host_name','service_instance_id']).rate('PT15S') * 60 *1000 + - name: merge_file_latency + exp: 
(banyandb_measure_total_merge_latency.tagEqual('type','file').sum(['group','host_name','service_instance_id']).rate('PT15S') / banyandb_measure_total_merge_loop_started.sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: merge_file_partitions + exp: (banyandb_measure_total_merged_parts.tagEqual('type','file').sum(['group','host_name','service_instance_id']).rate('PT15S') / banyandb_measure_total_merge_loop_started.sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: series_write_rate + exp: (banyandb_measure_inverted_index_total_updates.sum(['group','host_name','service_instance_id']).rate('PT15S'))*1000 + - name: series_term_search_rate + exp: banyandb_stream_storage_inverted_index_total_term_searchers_started.sum(['group','host_name','service_instance_id']).rate('PT15S') + - name: total_series + exp: banyandb_measure_inverted_index_total_doc_count.sum(['group','host_name','service_instance_id']) + - name: stream_write_rate + exp: banyandb_stream_tst_inverted_index_total_updates.sum(['group','host_name','service_instance_id']).rate('PT15S') + - name: term_search_rate + exp: banyandb_stream_tst_inverted_index_total_term_searchers_started.sum(['group','host_name','service_instance_id']).rate('PT15S')* 1000 + - name: total_document + exp: banyandb_stream_tst_inverted_index_total_doc_count.sum(['group','host_name','service_instance_id']) + + diff --git a/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-cluster.data.yaml new file mode 100644 index 000000000000..b7a7455df880 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-cluster.data.yaml @@ -0,0 +1,167 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + bookie_ledgers_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_ledger_writable_dirs: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_ledger_dir_data_bookkeeper_ledgers_usage: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_entries_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_write_cache_size: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_write_cache_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_read_cache_size: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_read_cache_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_WRITE_BYTES: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookie_READ_BYTES: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 +expected: + meter_bookkeeper_bookie_ledgers_count: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_bookie_ledger_writable_dirs: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: 
test-node + value: 100.0 + meter_bookkeeper_bookie_ledger_dir_data_bookkeeper_ledgers_usage: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_bookie_entries_count: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_bookie_write_cache_size: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_bookie_write_cache_count: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_bookie_read_cache_size: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_bookie_read_cache_count: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_bookie_write_rate: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 25.0 + meter_bookkeeper_bookie_read_rate: + entities: + - scope: SERVICE + service: 'bookkeeper::test-cluster' + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-cluster.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-cluster.yaml new file mode 100644 index 000000000000..6345773c2051 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-cluster.yaml @@ -0,0 +1,62 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> tags.job_name == 'bookkeeper-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'bookkeeper::' + tags.cluster}).service(['cluster'], Layer.BOOKKEEPER) +metricPrefix: meter_bookkeeper + +# Metrics Rules +metricsRules: + + # storage metrics + - name: bookie_ledgers_count + exp: bookie_ledgers_count.sum(['cluster', 'node']) + - name: bookie_ledger_writable_dirs + exp: bookie_ledger_writable_dirs.sum(['cluster', 'node']) + - name: bookie_ledger_dir_data_bookkeeper_ledgers_usage + exp: bookie_ledger_dir_data_bookkeeper_ledgers_usage.sum(['cluster', 'node']) + - name: bookie_entries_count + exp: bookie_entries_count.sum(['cluster', 'node']) + + - name: bookie_write_cache_size + exp: bookie_write_cache_size.sum(['cluster', 'node']) + - name: bookie_write_cache_count + exp: bookie_write_cache_count.sum(['cluster', 'node']) + + - name: bookie_read_cache_size + exp: bookie_read_cache_size.sum(['cluster', 'node']) + - name: bookie_read_cache_count + exp: bookie_read_cache_count.sum(['cluster', 'node']) + + - name: bookie_write_rate + exp: bookie_WRITE_BYTES.sum(['cluster', 'node']).rate('PT1M') + - name: bookie_read_rate + exp: bookie_READ_BYTES.sum(['cluster', 'node']).rate('PT1M') diff --git a/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-node.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-node.data.yaml new file mode 100644 index 000000000000..9cefebf6f3cf --- 
/dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-node.data.yaml @@ -0,0 +1,401 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + bookkeeper_server_thread_executor_completed: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookkeeper_server_thread_executor_tasks_completed: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookkeeper_server_thread_executor_tasks_rejected: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookkeeper_server_thread_executor_tasks_failed: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookkeeper_server_BookieHighPriorityThread_threads: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookkeeper_server_BookieReadThreadPool_threads: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookkeeper_server_BookieHighPriorityThread_max_queue_size: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + bookkeeper_server_BookieReadThreadPool_max_queue_size: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_memory_bytes_used: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + 
jvm_memory_bytes_committed: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_memory_bytes_init: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_memory_pool_bytes_used: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_memory_pool_bytes_committed: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_memory_pool_bytes_init: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_buffer_pool_used_bytes: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_buffer_pool_capacity_bytes: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_buffer_pool_used_buffers: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_gc_collection_seconds_count: + - labels: + cluster: test-cluster + node: test-node + gc: PS Scavenge + value: 100.0 + jvm_gc_collection_seconds_sum: + - labels: + cluster: test-cluster + node: test-node + gc: PS Scavenge + value: 100.0 + jvm_threads_current: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_threads_daemon: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_threads_peak: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_threads_deadlocked: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 +expected: + meter_bookkeeper_node_thread_executor_completed: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_thread_executor_tasks_completed: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + 
samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_thread_executor_tasks_rejected: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_thread_executor_tasks_failed: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_high_priority_threads: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_read_thread_pool_threads: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_high_priority_thread_max_queue_size: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_read_thread_pool_max_queue_size: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_memory_used: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + 
meter_bookkeeper_node_jvm_memory_committed: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_memory_init: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_memory_pool_used: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + pool: PS_Eden_Space + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_memory_pool_committed: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + pool: PS_Eden_Space + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_memory_pool_init: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + pool: PS_Eden_Space + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_buffer_pool_used_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + pool: PS_Eden_Space + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_buffer_pool_capacity_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + pool: PS_Eden_Space + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_buffer_pool_used_buffers: + 
entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + pool: PS_Eden_Space + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_gc_collection_seconds_count: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + gc: PS Scavenge + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_gc_collection_seconds_sum: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + gc: PS Scavenge + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_threads_current: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_threads_daemon: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_threads_peak: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 + meter_bookkeeper_node_jvm_threads_deadlocked: + entities: + - scope: SERVICE_INSTANCE + service: 'bookkeeper::test-cluster' + instance: test-node + layer: BOOKKEEPER + samples: + - labels: + cluster: 'bookkeeper::test-cluster' + node: test-node + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-node.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-node.yaml new file mode 100644 index 000000000000..03d3931479e5 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/bookkeeper/bookkeeper-node.yaml @@ -0,0 +1,90 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'bookkeeper-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'bookkeeper::' + tags.cluster}).instance(['cluster'], ['node'], Layer.BOOKKEEPER) +metricPrefix: meter_bookkeeper_node + +# Metrics Rules +metricsRules: + # thread executor metrics + - name: thread_executor_completed + exp: bookkeeper_server_thread_executor_completed.sum(['cluster', 'node']) + - name: thread_executor_tasks_completed + exp: bookkeeper_server_thread_executor_tasks_completed.sum(['cluster', 'node']) + - name: thread_executor_tasks_rejected + exp: bookkeeper_server_thread_executor_tasks_rejected.sum(['cluster', 'node']) + - name: thread_executor_tasks_failed + exp: bookkeeper_server_thread_executor_tasks_failed.sum(['cluster', 'node']) + + - name: high_priority_threads + exp: bookkeeper_server_BookieHighPriorityThread_threads.sum(['cluster', 'node']) + - name: read_thread_pool_threads + exp: bookkeeper_server_BookieReadThreadPool_threads.sum(['cluster', 'node']) + - name: high_priority_thread_max_queue_size + exp: bookkeeper_server_BookieHighPriorityThread_max_queue_size.sum(['cluster', 'node']) + - name: read_thread_pool_max_queue_size + exp: bookkeeper_server_BookieReadThreadPool_max_queue_size.sum(['cluster', 'node']) + + # JVM Metrics + - name: jvm_memory_used + exp: jvm_memory_bytes_used.sum(['cluster', 'node']) + - name: jvm_memory_committed + exp: jvm_memory_bytes_committed.sum(['cluster', 'node']) + - name: 
jvm_memory_init + exp: jvm_memory_bytes_init.sum(['cluster', 'node']) + + - name: jvm_memory_pool_used + exp: jvm_memory_pool_bytes_used.sum(['cluster', 'node', 'pool']) + - name: jvm_memory_pool_committed + exp: jvm_memory_pool_bytes_committed.sum(['cluster', 'node', 'pool']) + - name: jvm_memory_pool_init + exp: jvm_memory_pool_bytes_init.sum(['cluster', 'node', 'pool']) + + - name: jvm_buffer_pool_used_bytes + exp: jvm_buffer_pool_used_bytes.sum(['cluster', 'node', 'pool']) + - name: jvm_buffer_pool_capacity_bytes + exp: jvm_buffer_pool_capacity_bytes.sum(['cluster', 'node', 'pool']) + - name: jvm_buffer_pool_used_buffers + exp: jvm_buffer_pool_used_buffers.sum(['cluster', 'node', 'pool']) + + - name: jvm_gc_collection_seconds_count + exp: jvm_gc_collection_seconds_count.sum(['cluster', 'node', 'gc']) + - name: jvm_gc_collection_seconds_sum + exp: jvm_gc_collection_seconds_sum.sum(['cluster', 'node', 'gc']) + + - name: jvm_threads_current + exp: jvm_threads_current.sum(['cluster', 'node']) + - name: jvm_threads_daemon + exp: jvm_threads_daemon.sum(['cluster', 'node']) + - name: jvm_threads_peak + exp: jvm_threads_peak.sum(['cluster', 'node']) + - name: jvm_threads_deadlocked + exp: jvm_threads_deadlocked.sum(['cluster', 'node']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-instance.data.yaml new file mode 100644 index 000000000000..86a33f3c9232 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-instance.data.yaml @@ -0,0 +1,732 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + ClickHouseMetrics_VersionInteger: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_OSCPUVirtualTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_MemoryTracking: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseAsyncMetrics_OSMemoryTotal: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseAsyncMetrics_OSMemoryAvailable: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseAsyncMetrics_Uptime: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_FileOpen: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_TCPConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_MySQLConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_HTTPConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_InterserverConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_PostgreSQLConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_NetworkReceiveBytes: + - labels: 
+ host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_NetworkSendBytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_Query: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_SelectQuery: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertQuery: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_QueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_SelectQueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertQueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_OtherQueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_SlowRead: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertedRows: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertedBytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_DelayedInserts: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ReplicatedChecks: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ReplicatedFetch: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ReplicatedSend: + - labels: + host_name: test-host + service_instance_id: 
test-instance + value: 100.0 + ClickHouseMetrics_Merge: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_MergedRows: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_MergedUncompressedBytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_Move: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_PartsActive: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_PartMutation: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_KafkaMessagesRead: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_KafkaWrites: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KafkaConsumers: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KafkaProducers: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ZooKeeperSession: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ZooKeeperWatch: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_ZooKeeperBytesSent: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_ZooKeeperBytesReceived: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KeeperAliveConnections: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KeeperOutstandingRequets: + - labels: + host_name: test-host + 
service_instance_id: test-instance + value: 100.0 +expected: + meter_clickhouse_instance_version: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_cpu_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 0.8333333333333334 + meter_clickhouse_instance_memory_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_memory_available: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_uptime: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_file_open: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_tcp_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + 
meter_clickhouse_instance_mysql_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_http_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_interserver_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_postgresql_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_network_receive_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_network_send_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_query: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_query_select: 
+ entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_query_insert: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_query_select_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_query_insert_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_querytime_microseconds: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_querytime_select_microseconds: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_querytime_insert_microseconds: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_querytime_other_microseconds: + entities: 
+ - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_query_slow: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_inserted_rows: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_inserted_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_delayed_inserts: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_replicated_checks: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_replicated_fetch: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_replicated_send: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' 
+ instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_background_merge: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_merge_rows: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_merge_uncompressed_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_instance_move: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_parts_active: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_mutations: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_kafka_messages_read: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - 
labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_kafka_writes: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_kafka_consumers: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_kafka_producers: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_zookeeper_session: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_zookeeper_watch: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_zookeeper_bytes_sent: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_zookeeper_bytes_received: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 
'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_instance_keeper_connections_alive: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_instance_keeper_outstanding_requests: + entities: + - scope: SERVICE_INSTANCE + service: 'clickhouse::test-host' + instance: test-instance + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-instance.yaml b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-instance.yaml new file mode 100644 index 000000000000..cd1317622f73 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-instance.yaml @@ -0,0 +1,178 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p>
+# Examples:
+# <pre>
+# "PT20.345S" -- parses as "20.345 seconds"
+# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds)
+# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds)
+# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes"
+# "P-6H3M" -- parses as "-6 hours and +3 minutes"
+# "-P6H3M" -- parses as "-6 hours and -3 minutes"
+# "-P-6H+3M" -- parses as "+6 hours and -3 minutes"
+# </pre>
+filter: "{ tags -> tags.job_name == 'clickhouse-monitoring' }" # The OpenTelemetry job name
+expSuffix: tag({tags -> tags.host_name = 'clickhouse::' + tags.host_name}).instance(['host_name'], ['service_instance_id'], Layer.CLICKHOUSE)
+metricPrefix: meter_clickhouse
+metricsRules:
+  # Version of the server in a single integer number in base-1000.
+  - name: instance_version
+    exp: ClickHouseMetrics_VersionInteger
+  # CPU time spent seen by OS.
+  - name: instance_cpu_usage
+    exp: ClickHouseProfileEvents_OSCPUVirtualTimeMicroseconds.increase('PT1M')/60
+  # The percentage of memory (bytes) allocated by the server.
+  - name: instance_memory_usage
+    exp: ClickHouseMetrics_MemoryTracking / ClickHouseAsyncMetrics_OSMemoryTotal * 100
+  # The percentage of memory available to be used by programs.
+  - name: instance_memory_available
+    exp: ClickHouseAsyncMetrics_OSMemoryAvailable / ClickHouseAsyncMetrics_OSMemoryTotal * 100
+  # The server uptime in seconds. It includes the time spent for server initialization before accepting connections.
+  - name: instance_uptime
+    exp: ClickHouseAsyncMetrics_Uptime
+  # Number of files opened per minute.
+  - name: instance_file_open
+    exp: ClickHouseProfileEvents_FileOpen.increase('PT1M')
+  # Network
+  # Number of connections to TCP server.
+  - name: instance_tcp_connections
+    exp: ClickHouseMetrics_TCPConnection
+  # Number of client connections using MySQL protocol.
+  - name: instance_mysql_connections
+    exp: ClickHouseMetrics_MySQLConnection
+  # Number of connections to HTTP server.
+  - name: instance_http_connections
+    exp: ClickHouseMetrics_HTTPConnection
+  # Number of connections from other replicas to fetch parts.
+  - name: instance_interserver_connections
+    exp: ClickHouseMetrics_InterserverConnection
+  # Number of client connections using PostgreSQL protocol.
+  - name: instance_postgresql_connections
+    exp: ClickHouseMetrics_PostgreSQLConnection
+  # Total number of bytes received from network.
+  - name: instance_network_receive_bytes
+    exp: ClickHouseProfileEvents_NetworkReceiveBytes.increase('PT1M')
+  # Total number of bytes sent to network.
+  - name: instance_network_send_bytes
+    exp: ClickHouseProfileEvents_NetworkSendBytes.increase('PT1M')
+  # Query
+  # Number of executing queries.
+  - name: instance_query
+    exp: ClickHouseProfileEvents_Query.increase('PT1M')
+  # Number of executing queries, but only for SELECT queries.
+  - name: instance_query_select
+    exp: ClickHouseProfileEvents_SelectQuery.increase('PT1M')
+  # Number of executing queries, but only for INSERT queries.
+  - name: instance_query_insert
+    exp: ClickHouseProfileEvents_InsertQuery.increase('PT1M')
+  # Number of SELECT queries per second.
+  - name: instance_query_select_rate
+    exp: ClickHouseProfileEvents_SelectQuery.rate('PT1M')
+  # Number of INSERT queries per second.
+  - name: instance_query_insert_rate
+    exp: ClickHouseProfileEvents_InsertQuery.rate('PT1M')
+  # Total time of all queries.
+  - name: instance_querytime_microseconds
+    exp: ClickHouseProfileEvents_QueryTimeMicroseconds.increase('PT1M')
+  # Total time of SELECT queries.
+  - name: instance_querytime_select_microseconds
+    exp: ClickHouseProfileEvents_SelectQueryTimeMicroseconds.increase('PT1M')
+  # Total time of INSERT queries.
+  - name: instance_querytime_insert_microseconds
+    exp: ClickHouseProfileEvents_InsertQueryTimeMicroseconds.increase('PT1M')
+  # Total time of queries that are not SELECT or INSERT.
+  - name: instance_querytime_other_microseconds
+    exp: ClickHouseProfileEvents_OtherQueryTimeMicroseconds.increase('PT1M')
+  # Number of reads from a file that were slow.
+  - name: instance_query_slow
+    exp: ClickHouseProfileEvents_SlowRead.rate('PT1M')
+  # Insertion
+  # Number of rows INSERTed to all tables.
+  - name: instance_inserted_rows
+    exp: ClickHouseProfileEvents_InsertedRows.rate('PT1M')
+  # Number of bytes INSERTed to all tables.
+  - name: instance_inserted_bytes
+    exp: ClickHouseProfileEvents_InsertedBytes.rate('PT1M')
+  # Number of times the INSERT of a block to a MergeTree table was throttled due to a high number of active data parts for a partition.
+  - name: instance_delayed_inserts
+    exp: ClickHouseProfileEvents_DelayedInserts.rate('PT1M')
+  # Replicas
+  # Number of data parts checking for consistency.
+  - name: instance_replicated_checks
+    exp: ClickHouseMetrics_ReplicatedChecks
+  # Number of data parts being fetched from replica.
+  - name: instance_replicated_fetch
+    exp: ClickHouseMetrics_ReplicatedFetch
+  # Number of data parts being sent to replicas.
+  - name: instance_replicated_send
+    exp: ClickHouseMetrics_ReplicatedSend
+  # MergeTree
+  # Number of executing background merges.
+  - name: instance_background_merge
+    exp: ClickHouseMetrics_Merge
+  # Rows read for background merges. This is the number of rows before merge.
+  - name: instance_merge_rows
+    exp: ClickHouseProfileEvents_MergedRows.increase('PT1M')
+  # Uncompressed bytes (for columns as they are stored in memory) that were read for background merges. This is the number before merge.
+  - name: instance_merge_uncompressed_bytes
+    exp: ClickHouseProfileEvents_MergedUncompressedBytes.increase('PT1M')
+  # Number of currently executing moves.
+  - name: instance_move
+    exp: ClickHouseMetrics_Move
+  # Active data parts, used by current and upcoming SELECTs.
+  - name: instance_parts_active
+    exp: ClickHouseMetrics_PartsActive
+  # Number of mutations (ALTER DELETE/UPDATE).
+  - name: instance_mutations
+    exp: ClickHouseMetrics_PartMutation
+  # Kafka Table Engine
+  # Number of Kafka messages already processed by ClickHouse.
+  - name: instance_kafka_messages_read
+    exp: ClickHouseProfileEvents_KafkaMessagesRead.rate('PT1M')
+  # Number of writes (inserts) to Kafka tables.
+  - name: instance_kafka_writes
+    exp: ClickHouseProfileEvents_KafkaWrites.rate('PT1M')
+  # Number of active Kafka consumers.
+  - name: instance_kafka_consumers
+    exp: ClickHouseMetrics_KafkaConsumers
+  # Number of active Kafka producers created.
+  - name: instance_kafka_producers
+    exp: ClickHouseMetrics_KafkaProducers
+  # Zookeeper
+  # Number of sessions (connections) to ZooKeeper. Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to the lack of linearizability (stale reads) that the ZooKeeper consistency model allows.
+  - name: instance_zookeeper_session
+    exp: ClickHouseMetrics_ZooKeeperSession
+  # Number of watches (event subscriptions) in ZooKeeper.
+  - name: instance_zookeeper_watch
+    exp: ClickHouseMetrics_ZooKeeperWatch
+  # Number of bytes sent over network while communicating with ZooKeeper.
+  - name: instance_zookeeper_bytes_sent
+    exp: ClickHouseProfileEvents_ZooKeeperBytesSent.rate('PT1M')
+  # Number of bytes received over network while communicating with ZooKeeper.
+  - name: instance_zookeeper_bytes_received
+    exp: ClickHouseProfileEvents_ZooKeeperBytesReceived.rate('PT1M')
+  # ClickHouse Keeper
+  # Number of alive connections for embedded ClickHouse Keeper.
+  - name: instance_keeper_connections_alive
+    exp: ClickHouseMetrics_KeeperAliveConnections
+  # Number of outstanding requests for embedded ClickHouse Keeper.
+  - name: instance_keeper_outstanding_requests
+    exp: ClickHouseMetrics_KeeperOutstandingRequets
+
diff --git a/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-service.data.yaml
new file mode 100644
index 000000000000..0745601b151f
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-service.data.yaml
@@ -0,0 +1,607 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +input: + ClickHouseProfileEvents_FileOpen: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_TCPConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_MySQLConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_HTTPConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_InterserverConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_PostgreSQLConnection: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_NetworkReceiveBytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_NetworkSendBytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_Query: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_SelectQuery: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertQuery: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_QueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_SelectQueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertQueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_OtherQueryTimeMicroseconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_SlowRead: + - 
labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertedRows: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_InsertedBytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_DelayedInserts: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ReplicatedChecks: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ReplicatedFetch: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ReplicatedSend: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_Merge: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_MergedRows: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_MergedUncompressedBytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_Move: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_PartsActive: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_PartMutation: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_KafkaMessagesRead: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_KafkaWrites: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KafkaConsumers: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KafkaProducers: + - labels: 
+ host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ZooKeeperSession: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_ZooKeeperWatch: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_ZooKeeperBytesSent: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseProfileEvents_ZooKeeperBytesReceived: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KeeperAliveConnections: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + ClickHouseMetrics_KeeperOutstandingRequets: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 +expected: + meter_clickhouse_file_open: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_tcp_connections: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_mysql_connections: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_http_connections: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_interserver_connections: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + 
service_instance_id: test-instance + value: 100.0 + meter_clickhouse_postgresql_connections: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_network_receive_bytes: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_network_send_bytes: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_query: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_query_select: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_query_insert: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_query_select_rate: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_query_insert_rate: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_querytime_microseconds: + entities: + - scope: SERVICE + 
service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_querytime_select_microseconds: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_querytime_insert_microseconds: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_querytime_other_microseconds: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_query_slow: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_inserted_rows: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_inserted_bytes: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_delayed_inserts: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_replicated_checks: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 
'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_replicated_fetch: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_replicated_send: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_background_merge: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_merge_rows: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_merge_uncompressed_bytes: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 50.0 + meter_clickhouse_move: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_parts_active: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_mutations: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_kafka_messages_read: + entities: + - scope: 
SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_kafka_writes: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_kafka_consumers: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_kafka_producers: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_zookeeper_session: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_zookeeper_watch: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_zookeeper_bytes_sent: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_zookeeper_bytes_received: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 25.0 + meter_clickhouse_keeper_connections_alive: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 
'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 + meter_clickhouse_keeper_outstanding_requests: + entities: + - scope: SERVICE + service: 'clickhouse::test-host' + layer: CLICKHOUSE + samples: + - labels: + host_name: 'clickhouse::test-host' + service_instance_id: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-service.yaml new file mode 100644 index 000000000000..940d891f643d --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/clickhouse/clickhouse-service.yaml @@ -0,0 +1,162 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'clickhouse-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.host_name = 'clickhouse::' + tags.host_name}).service(['host_name'], Layer.CLICKHOUSE) +metricPrefix: meter_clickhouse +metricsRules: + # Number of files opened per minute. + - name: file_open + exp: ClickHouseProfileEvents_FileOpen.sum(['host_name','service_instance_id']).increase('PT1M') + # Network + # Number of connections to TCP server. + - name: tcp_connections + exp: ClickHouseMetrics_TCPConnection.sum(['host_name','service_instance_id']) + # Number of client connections using MySQL protocol. + - name: mysql_connections + exp: ClickHouseMetrics_MySQLConnection.sum(['host_name','service_instance_id']) + # Number of connections to HTTP server. + - name: http_connections + exp: ClickHouseMetrics_HTTPConnection.sum(['host_name','service_instance_id']) + # Number of connections from other replicas to fetch parts. + - name: interserver_connections + exp: ClickHouseMetrics_InterserverConnection.sum(['host_name','service_instance_id']) + # Number of client connections using PostgreSQL protocol. + - name: postgresql_connections + exp: ClickHouseMetrics_PostgreSQLConnection.sum(['host_name','service_instance_id']) + # Total number of bytes received from network. + - name: network_receive_bytes + exp: ClickHouseProfileEvents_NetworkReceiveBytes.sum(['host_name','service_instance_id']).increase('PT1M') + # Total number of bytes sent to network. 
+ - name: network_send_bytes + exp: ClickHouseProfileEvents_NetworkSendBytes.sum(['host_name','service_instance_id']).increase('PT1M') + # Query + # Number of executing queries. + - name: query + exp: ClickHouseProfileEvents_Query.sum(['host_name','service_instance_id']).increase('PT1M') + # Number of executing queries, but only for SELECT queries. + - name: query_select + exp: ClickHouseProfileEvents_SelectQuery.sum(['host_name','service_instance_id']).increase('PT1M') + # Number of executing queries, but only for INSERT queries. + - name: query_insert + exp: ClickHouseProfileEvents_InsertQuery.sum(['host_name','service_instance_id']).increase('PT1M') + # Number of SELECT queries per second. + - name: query_select_rate + exp: ClickHouseProfileEvents_SelectQuery.sum(['host_name','service_instance_id']).rate('PT1M') + # Number of INSERT queries per second. + - name: query_insert_rate + exp: ClickHouseProfileEvents_InsertQuery.sum(['host_name','service_instance_id']).rate('PT1M') + # Total time of all queries. + - name: querytime_microseconds + exp: ClickHouseProfileEvents_QueryTimeMicroseconds.sum(['host_name','service_instance_id']).increase('PT1M') + # Total time of SELECT queries. + - name: querytime_select_microseconds + exp: ClickHouseProfileEvents_SelectQueryTimeMicroseconds.sum(['host_name','service_instance_id']).increase('PT1M') + # Total time of INSERT queries. + - name: querytime_insert_microseconds + exp: ClickHouseProfileEvents_InsertQueryTimeMicroseconds.sum(['host_name','service_instance_id']).increase('PT1M') + # Total time of queries that are not SELECT or INSERT. + - name: querytime_other_microseconds + exp: ClickHouseProfileEvents_OtherQueryTimeMicroseconds.sum(['host_name','service_instance_id']).increase('PT1M') + # Number of reads from a file that were slow. + - name: query_slow + exp: ClickHouseProfileEvents_SlowRead.sum(['host_name','service_instance_id']).rate('PT1M') + # Insertion + # Number of rows INSERTed to all tables. 
+ - name: inserted_rows + exp: ClickHouseProfileEvents_InsertedRows.sum(['host_name','service_instance_id']).rate('PT1M') + # Number of bytes INSERTed to all tables. + - name: inserted_bytes + exp: ClickHouseProfileEvents_InsertedBytes.sum(['host_name','service_instance_id']).rate('PT1M') + # Number of times the INSERT of a block to a MergeTree table was throttled due to a high number of active data parts for the partition. + - name: delayed_inserts + exp: ClickHouseProfileEvents_DelayedInserts.sum(['host_name','service_instance_id']).rate('PT1M') + # Replicas + # Number of data parts checking for consistency. + - name: replicated_checks + exp: ClickHouseMetrics_ReplicatedChecks.sum(['host_name','service_instance_id']) + # Number of data parts being fetched from replica. + - name: replicated_fetch + exp: ClickHouseMetrics_ReplicatedFetch.sum(['host_name','service_instance_id']) + # Number of data parts being sent to replicas. + - name: replicated_send + exp: ClickHouseMetrics_ReplicatedSend.sum(['host_name','service_instance_id']) + # MergeTree + # Number of executing background merges. + - name: background_merge + exp: ClickHouseMetrics_Merge.sum(['host_name','service_instance_id']) + # Rows read for background merges. This is the number of rows before merge. + - name: merge_rows + exp: ClickHouseProfileEvents_MergedRows.sum(['host_name','service_instance_id']).increase('PT1M') + # Uncompressed bytes (for columns as they are stored in memory) that were read for background merges. This is the number before merge. + - name: merge_uncompressed_bytes + exp: ClickHouseProfileEvents_MergedUncompressedBytes.sum(['host_name','service_instance_id']).increase('PT1M') + # Number of currently executing moves. + - name: move + exp: ClickHouseMetrics_Move.sum(['host_name','service_instance_id']) + # Active data parts, used by current and upcoming SELECTs. 
+ - name: parts_active + exp: ClickHouseMetrics_PartsActive.sum(['host_name','service_instance_id']) + # Number of mutations (ALTER DELETE/UPDATE). + - name: mutations + exp: ClickHouseMetrics_PartMutation.sum(['host_name','service_instance_id']) + # Kafka Table Engine + # Number of Kafka messages already processed by ClickHouse. + - name: kafka_messages_read + exp: ClickHouseProfileEvents_KafkaMessagesRead.sum(['host_name','service_instance_id']).rate('PT1M') + # Number of writes (inserts) to Kafka tables. + - name: kafka_writes + exp: ClickHouseProfileEvents_KafkaWrites.sum(['host_name','service_instance_id']).rate('PT1M') + # Number of active Kafka consumers. + - name: kafka_consumers + exp: ClickHouseMetrics_KafkaConsumers.sum(['host_name','service_instance_id']) + # Number of active Kafka producers created. + - name: kafka_producers + exp: ClickHouseMetrics_KafkaProducers.sum(['host_name','service_instance_id']) + # Zookeeper + # Number of sessions (connections) to ZooKeeper. Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to the lack of linearizability (stale reads) that the ZooKeeper consistency model allows. + - name: zookeeper_session + exp: ClickHouseMetrics_ZooKeeperSession.sum(['host_name','service_instance_id']) + # Number of watches (event subscriptions) in ZooKeeper. + - name: zookeeper_watch + exp: ClickHouseMetrics_ZooKeeperWatch.sum(['host_name','service_instance_id']) + # Number of bytes sent over network while communicating with ZooKeeper. + - name: zookeeper_bytes_sent + exp: ClickHouseProfileEvents_ZooKeeperBytesSent.sum(['host_name','service_instance_id']).rate('PT1M') + # Number of bytes received over network while communicating with ZooKeeper. + - name: zookeeper_bytes_received + exp: ClickHouseProfileEvents_ZooKeeperBytesReceived.sum(['host_name','service_instance_id']).rate('PT1M') + # ClickHouse Keeper + # Number of alive connections for embedded ClickHouse Keeper. 
+ - name: keeper_connections_alive + exp: ClickHouseMetrics_KeeperAliveConnections.sum(['host_name','service_instance_id']) + # Number of outstanding requests for embedded ClickHouse Keeper. + - name: keeper_outstanding_requests + exp: ClickHouseMetrics_KeeperOutstandingRequets.sum(['host_name','service_instance_id']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-cluster.data.yaml new file mode 100644 index 000000000000..37d8c0d8652f --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-cluster.data.yaml @@ -0,0 +1,205 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + elasticsearch_cluster_health_status: + - labels: + cluster: test-cluster + color: green + value: 1.0 + elasticsearch_breakers_tripped: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_number_of_nodes: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_number_of_data_nodes: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_number_of_pending_tasks: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_process_cpu_percent: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_jvm_memory_used_bytes: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_jvm_memory_max_bytes: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_process_open_files_count: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_active_primary_shards: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_active_shards: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_initializing_shards: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_delayed_unassigned_shards: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_relocating_shards: + - labels: + cluster: test-cluster + value: 1.0 + elasticsearch_cluster_health_unassigned_shards: + - labels: + cluster: test-cluster + value: 1.0 +expected: + meter_elasticsearch_cluster_health_status: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + color: green + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_breakers_tripped: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 0.5 + meter_elasticsearch_cluster_nodes: + entities: + - scope: SERVICE + service: 
'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_data_nodes: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_pending_tasks_total: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_cpu_usage_avg: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_jvm_memory_used_avg: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_cluster_open_file_count: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_primary_shards_total: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_shards_total: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_initializing_shards_total: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_delayed_unassigned_shards_total: + entities: + - scope: SERVICE + 
service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_relocating_shards_total: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_cluster_unassigned_shards_total: + entities: + - scope: SERVICE + service: 'elasticsearch::test-cluster' + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + value: 1.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-cluster.yaml new file mode 100644 index 000000000000..50b1e8b092a0 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-cluster.yaml @@ -0,0 +1,72 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'elasticsearch-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'elasticsearch::' + tags.cluster}).service(['cluster'], Layer.ELASTICSEARCH) +metricPrefix: meter_elasticsearch_cluster +metricsRules: + # cluster health + - name: health_status + exp: elasticsearch_cluster_health_status.tagNotEqual('cluster','unknown_cluster').valueEqual(1).sum(['cluster' , 'color']) + # elasticsearch_breakers_tripped + - name: breakers_tripped + exp: elasticsearch_breakers_tripped.tagNotEqual('cluster','unknown_cluster').sum(['cluster']).increase('PT1M') + # cluster nodes + - name: nodes + exp: elasticsearch_cluster_health_number_of_nodes.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + # cluster data nodes + - name: data_nodes + exp: elasticsearch_cluster_health_number_of_data_nodes.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + # pending tasks total + - name: pending_tasks_total + exp: elasticsearch_cluster_health_number_of_pending_tasks.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + # cpu usage avg + - name: cpu_usage_avg + exp: elasticsearch_process_cpu_percent.tagNotEqual('cluster','unknown_cluster').avg(['cluster']) + # jvm used memory avg + - name: jvm_memory_used_avg + exp: elasticsearch_jvm_memory_used_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) / elasticsearch_jvm_memory_max_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) * 100 + # 
open file count + - name: open_file_count + exp: elasticsearch_process_open_files_count.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + + # shards + - name: primary_shards_total + exp: elasticsearch_cluster_health_active_primary_shards.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + - name: shards_total + exp: elasticsearch_cluster_health_active_shards.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + - name: initializing_shards_total + exp: elasticsearch_cluster_health_initializing_shards.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + - name: delayed_unassigned_shards_total + exp: elasticsearch_cluster_health_delayed_unassigned_shards.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + - name: relocating_shards_total + exp: elasticsearch_cluster_health_relocating_shards.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) + - name: unassigned_shards_total + exp: elasticsearch_cluster_health_unassigned_shards.tagNotEqual('cluster','unknown_cluster').sum(['cluster']) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-index.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-index.data.yaml new file mode 100644 index 000000000000..855f340bae6e --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-index.data.yaml @@ -0,0 +1,703 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + elasticsearch_index_stats_indexing_index_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_indexing_index_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_query_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_query_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_fetch_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_scroll_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_suggest_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_merge_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_flush_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_refresh_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_warmer_total: + - labels: + cluster: test-cluster + index: test-value + primary: 
test-value + value: 100.0 + elasticsearch_index_stats_indexing_delete_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_fetch_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_scroll_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_search_suggest_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_get_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_merge_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_flush_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_refresh_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_warmer_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_indexing_delete_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_get_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_merge_stopped_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_merge_throttle_time_seconds_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_index_stats_indexing_throttle_time_seconds_total: + 
- labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_docs_primary: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_store_size_bytes_primary: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_docs_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_store_size_bytes_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_deleted_docs_primary: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_segment_count_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_segment_memory_bytes_total: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_segment_count_primary: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_segment_memory_bytes_primary: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + value: 100.0 + elasticsearch_indices_shards_docs: + - labels: + cluster: test-cluster + index: test-value + primary: test-value + shard: test-value + value: 100.0 +expected: + meter_elasticsearch_index_stats_indexing_index_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_indexing_index_total_proc_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + 
cluster: 'elasticsearch::test-cluster' + index: test-value + value: 1.0 + meter_elasticsearch_index_stats_search_query_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_search_query_total_proc_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 0.25 + meter_elasticsearch_index_stats_merge_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_flush_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_refresh_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_warmer_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_indexing_delete_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + 
value: 25.0 + meter_elasticsearch_index_stats_search_fetch_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_search_scroll_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_search_suggest_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_get_total_req_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_merge_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_flush_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_refresh_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + 
meter_elasticsearch_index_stats_warmer_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_indexing_delete_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_search_fetch_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_search_query_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_search_scroll_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_search_suggest_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_indexing_index_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + 
meter_elasticsearch_index_stats_get_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_merge_stopped_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_merge_throttle_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_stats_indexing_throttle_time_seconds_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_search_fetch_avg_time: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 1.0 + meter_elasticsearch_index_search_query_avg_time: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 1.0 + meter_elasticsearch_index_search_scroll_avg_time: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 1.0 + meter_elasticsearch_index_search_suggest_avg_time: + entities: 
+ - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 1.0 + meter_elasticsearch_index_indices_docs_primary: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_docs_primary_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_indices_store_size_bytes_primary: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_docs_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_docs_total_rate: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 25.0 + meter_elasticsearch_index_indices_store_size_bytes_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_deleted_docs_primary: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + 
layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_segment_count_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_segment_memory_bytes_total: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_segment_count_primary: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_segment_memory_bytes_primary: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + value: 100.0 + meter_elasticsearch_index_indices_shards_docs: + entities: + - scope: ENDPOINT + service: 'elasticsearch::test-cluster' + endpoint: test-value + layer: ELASTICSEARCH + samples: + - labels: + cluster: 'elasticsearch::test-cluster' + index: test-value + shard: test-value + primary: replica + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-index.yaml b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-index.yaml new file mode 100644 index 000000000000..dc0af6695342 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-index.yaml @@ -0,0 +1,127 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license 
agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# The rate() windows below take a textual representation of a duration. +# The formats accepted are based on the ISO-8601 duration format +# PnDTnHnMn.nS, with days considered to be exactly 24 hours. +# +# Examples: +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +filter: "{ tags -> tags.job_name == 'elasticsearch-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'elasticsearch::' + tags.cluster}).endpoint(['cluster'], ['index'], Layer.ELASTICSEARCH) +metricPrefix: meter_elasticsearch_index +metricsRules: + - name: stats_indexing_index_total_req_rate + exp: elasticsearch_index_stats_indexing_index_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_indexing_index_total_proc_rate + exp: 1 / 
(elasticsearch_index_stats_indexing_index_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') / elasticsearch_index_stats_indexing_index_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M')) + + - name: stats_search_query_total_req_rate + exp: elasticsearch_index_stats_search_query_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_query_total_proc_rate + exp: 1 / ((elasticsearch_index_stats_search_query_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + elasticsearch_index_stats_search_fetch_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + elasticsearch_index_stats_search_scroll_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + elasticsearch_index_stats_search_suggest_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M')) / elasticsearch_index_stats_search_query_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M')) + + - name: stats_merge_total_req_rate + exp: elasticsearch_index_stats_merge_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_flush_total_req_rate + exp: elasticsearch_index_stats_flush_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_refresh_total_req_rate + exp: elasticsearch_index_stats_refresh_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_warmer_total_req_rate + exp: elasticsearch_index_stats_warmer_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_indexing_delete_total_req_rate + exp: 
elasticsearch_index_stats_indexing_delete_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_fetch_total_req_rate + exp: elasticsearch_index_stats_search_fetch_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_scroll_total_req_rate + exp: elasticsearch_index_stats_search_scroll_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_suggest_total_req_rate + exp: elasticsearch_index_stats_search_suggest_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_get_total_req_rate + exp: elasticsearch_index_stats_get_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + + - name: stats_merge_time_seconds_total + exp: elasticsearch_index_stats_merge_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_flush_time_seconds_total + exp: elasticsearch_index_stats_flush_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_refresh_time_seconds_total + exp: elasticsearch_index_stats_refresh_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_warmer_time_seconds_total + exp: elasticsearch_index_stats_warmer_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_indexing_delete_time_seconds_total + exp: elasticsearch_index_stats_indexing_delete_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_fetch_time_seconds_total + exp: elasticsearch_index_stats_search_fetch_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_query_time_seconds_total + 
exp: elasticsearch_index_stats_search_query_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_scroll_time_seconds_total + exp: elasticsearch_index_stats_search_scroll_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_search_suggest_time_seconds_total + exp: elasticsearch_index_stats_search_suggest_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_indexing_index_time_seconds_total + exp: elasticsearch_index_stats_indexing_index_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_get_time_seconds_total + exp: elasticsearch_index_stats_get_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_merge_stopped_time_seconds_total + exp: elasticsearch_index_stats_merge_stopped_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_merge_throttle_time_seconds_total + exp: elasticsearch_index_stats_merge_throttle_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: stats_indexing_throttle_time_seconds_total + exp: elasticsearch_index_stats_indexing_throttle_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + + - name: search_fetch_avg_time + exp: elasticsearch_index_stats_search_fetch_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') / elasticsearch_index_stats_search_fetch_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: search_query_avg_time + exp: elasticsearch_index_stats_search_query_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') / 
elasticsearch_index_stats_search_query_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: search_scroll_avg_time + exp: elasticsearch_index_stats_search_scroll_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') / elasticsearch_index_stats_search_scroll_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: search_suggest_avg_time + exp: elasticsearch_index_stats_search_suggest_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') / elasticsearch_index_stats_search_suggest_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + + - name: indices_docs_primary + exp: elasticsearch_indices_docs_primary.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + - name: indices_docs_primary_rate + exp: elasticsearch_indices_docs_primary.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: indices_store_size_bytes_primary + exp: elasticsearch_indices_store_size_bytes_primary.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + - name: indices_docs_total + exp: elasticsearch_indices_docs_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + - name: indices_docs_total_rate + exp: elasticsearch_indices_docs_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']).rate('PT1M') + - name: indices_store_size_bytes_total + exp: elasticsearch_indices_store_size_bytes_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + - name: indices_deleted_docs_primary + exp: elasticsearch_indices_deleted_docs_primary.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + + - name: indices_segment_count_total + exp: elasticsearch_indices_segment_count_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + - name: 
indices_segment_memory_bytes_total + exp: elasticsearch_indices_segment_memory_bytes_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + - name: indices_segment_count_primary + exp: elasticsearch_indices_segment_count_primary.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + - name: indices_segment_memory_bytes_primary + exp: elasticsearch_indices_segment_memory_bytes_primary.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index']) + + - name: indices_shards_docs + exp: elasticsearch_indices_shards_docs.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'index' , 'primary' , 'shard']).tag({tags -> if (tags['primary'] == 'true') {tags.primary = 'primary'} else {tags.primary = 'replica'} }) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-node.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-node.data.yaml new file mode 100644 index 000000000000..3cd3406d414e --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-node.data.yaml @@ -0,0 +1,820 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + elasticsearch_process_cpu_percent: + - labels: + cluster: test-cluster + name: test-name + es_client_node: test-value + es_data_node: test-value + es_ingest_node: test-value + es_master_node: test-value + value: 100.0 + elasticsearch_process_open_files_count: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_filesystem_data_available_bytes: + - labels: + cluster: test-cluster + name: test-name + mount: test-value + value: 100.0 + elasticsearch_jvm_memory_used_bytes: + - labels: + cluster: test-cluster + name: test-name + area: non-heap + value: 100.0 + - labels: + cluster: test-cluster + name: test-name + area: heap + value: 100.0 + elasticsearch_jvm_memory_max_bytes: + - labels: + cluster: test-cluster + name: test-name + area: heap + value: 100.0 + elasticsearch_jvm_memory_committed_bytes: + - labels: + cluster: test-cluster + name: test-name + area: non-heap + value: 100.0 + - labels: + cluster: test-cluster + name: test-name + area: heap + value: 100.0 + elasticsearch_jvm_memory_pool_peak_used_bytes: + - labels: + cluster: test-cluster + name: test-name + pool: PS_Eden_Space + value: 100.0 + elasticsearch_jvm_gc_collection_seconds_count: + - labels: + cluster: test-cluster + name: test-name + gc: PS Scavenge + value: 100.0 + elasticsearch_jvm_gc_collection_seconds_sum: + - labels: + cluster: test-cluster + name: test-name + gc: PS Scavenge + value: 100.0 + elasticsearch_os_cpu_percent: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_os_load1: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_os_load5: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_os_load15: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_translog_operations: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_translog_size_in_bytes: + - labels: + cluster: 
test-cluster + name: test-name + value: 100.0 + elasticsearch_breakers_tripped: + - labels: + cluster: test-cluster + name: test-name + breaker: test-value + value: 100.0 + elasticsearch_breakers_estimated_size_bytes: + - labels: + cluster: test-cluster + name: test-name + breaker: test-value + value: 100.0 + elasticsearch_filesystem_data_size_bytes: + - labels: + cluster: test-cluster + name: test-name + mount: test-value + value: 100.0 + elasticsearch_filesystem_io_stats_device_read_size_kilobytes_sum: + - labels: + cluster: test-cluster + name: test-name + mount: test-value + value: 100.0 + elasticsearch_filesystem_io_stats_device_write_size_kilobytes_sum: + - labels: + cluster: test-cluster + name: test-name + mount: test-value + value: 100.0 + elasticsearch_transport_tx_size_bytes_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_transport_rx_size_bytes_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_query_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_query_time_seconds: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_fetch_time_seconds: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_scroll_time_seconds: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_suggest_time_seconds: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_fetch_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_indexing_index_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_indexing_index_time_seconds_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_merges_total: + - labels: + 
cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_refresh_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_flush_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_get_exists_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_get_missing_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_get_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_indexing_delete_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_scroll_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_search_suggest_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_docs: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_docs_deleted: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_merges_docs_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_merges_total_size_bytes_total: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_segments_count: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 + elasticsearch_indices_segments_memory_bytes: + - labels: + cluster: test-cluster + name: test-name + value: 100.0 +expected: + meter_elasticsearch_node_rules: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + es_master_node: test-value + es_ingest_node: test-value + es_client_node: test-value + es_data_node: test-value + value: 100.0 + 
meter_elasticsearch_node_open_file_count: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_all_disk_free_space: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_jvm_memory_used: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 200.0 + meter_elasticsearch_node_jvm_memory_nonheap_used: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_jvm_memory_heap_used: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_jvm_memory_heap_max: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_jvm_memory_nonheap_committed: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_jvm_memory_heap_committed: + entities: + - scope: SERVICE_INSTANCE + 
service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_jvm_memory_pool_peak_used: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + pool: PS_Eden_Space + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_jvm_gc_count: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + gc: PS Scavenge + cluster: 'elasticsearch::test-cluster' + value: 50.0 + meter_elasticsearch_node_jvm_gc_time: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + gc: PS Scavenge + cluster: 'elasticsearch::test-cluster' + value: 50000.0 + meter_elasticsearch_node_process_cpu_percent: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_os_cpu_percent: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_os_load1: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 10000.0 + meter_elasticsearch_node_os_load5: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: 
ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 10000.0 + meter_elasticsearch_node_os_load15: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 10000.0 + meter_elasticsearch_node_indices_translog_operations: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 0.0 + meter_elasticsearch_node_indices_translog_size: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 0.0 + meter_elasticsearch_node_breakers_tripped: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + breaker: test-value + cluster: 'elasticsearch::test-cluster' + value: 50.0 + meter_elasticsearch_node_breakers_estimated_size: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + breaker: test-value + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_disk_usage_percent: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + mount: test-value + value: -0.0 + meter_elasticsearch_node_disk_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: 
test-name + cluster: 'elasticsearch::test-cluster' + mount: test-value + value: 0.0 + meter_elasticsearch_node_disk_io_read_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + mount: test-value + value: 0.0 + meter_elasticsearch_node_disk_io_write_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + mount: test-value + value: 0.0 + meter_elasticsearch_node_network_send_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 0.0 + meter_elasticsearch_node_network_receive_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 0.0 + meter_elasticsearch_node_indices_search_query_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_search_query_time_seconds_proc_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 0.25 + meter_elasticsearch_node_indices_search_fetch_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + 
name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_search_fetch_time_seconds: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 50.0 + meter_elasticsearch_node_indices_indexing_index_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_indexing_index_total_proc_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 1.0 + meter_elasticsearch_node_indices_merges_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_refresh_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_flush_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_get_exists_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name 
+ cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_get_missing_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_get_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_indexing_delete_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_search_scroll_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_search_suggest_total_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_docs: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_indices_docs_deleted_total: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 
'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_indices_docs_deleted: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_merges_docs_total: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_indices_merges_total_size_bytes_total: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 25.0 + meter_elasticsearch_node_segment_count: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 + meter_elasticsearch_node_segment_memory: + entities: + - scope: SERVICE_INSTANCE + service: 'elasticsearch::test-cluster' + instance: test-name + layer: ELASTICSEARCH + samples: + - labels: + name: test-name + cluster: 'elasticsearch::test-cluster' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-node.yaml b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-node.yaml new file mode 100644 index 000000000000..1650b94403ac --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/elasticsearch/elasticsearch-node.yaml @@ -0,0 +1,149 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This will parse a textual representation of a duration. The formats
+# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS}
+# with days considered to be exactly 24 hours.
+# <p>
+# Examples:
+# <pre>
+#    "PT20.345S" -- parses as "20.345 seconds"
+#    "PT15M"     -- parses as "15 minutes" (where a minute is 60 seconds)
+#    "PT10H"     -- parses as "10 hours" (where an hour is 3600 seconds)
+#    "P2D"       -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+#    "P2DT3H4M"  -- parses as "2 days, 3 hours and 4 minutes"
+#    "P-6H3M"    -- parses as "-6 hours and +3 minutes"
+#    "-P6H3M"    -- parses as "-6 hours and -3 minutes"
+#    "-P-6H+3M"  -- parses as "+6 hours and -3 minutes"
+# </pre>
+filter: "{ tags -> tags.job_name == 'elasticsearch-monitoring' }" # The OpenTelemetry job name
+expSuffix: tag({tags -> tags.cluster = 'elasticsearch::' + tags.cluster}).instance(['cluster'], ['name'], Layer.ELASTICSEARCH)
+metricPrefix: meter_elasticsearch_node
+metricsRules:
+  # node rules
+  - name: rules
+    exp: elasticsearch_process_cpu_percent.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'es_client_node' , 'es_data_node' , 'es_ingest_node' , 'es_master_node'])
+  - name: open_file_count
+    exp: elasticsearch_process_open_files_count.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+  - name: all_disk_free_space
+    exp: elasticsearch_filesystem_data_available_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+  - name: jvm_memory_used
+    exp: elasticsearch_jvm_memory_used_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+
+  # jvm
+  - name: jvm_memory_nonheap_used
+    exp: elasticsearch_jvm_memory_used_bytes.tagNotEqual('cluster','unknown_cluster').tagEqual('area' , 'non-heap').sum(['cluster' , 'name'])
+  - name: jvm_memory_heap_used
+    exp: elasticsearch_jvm_memory_used_bytes.tagNotEqual('cluster','unknown_cluster').tagEqual('area' , 'heap').sum(['cluster' , 'name'])
+  - name: jvm_memory_heap_max
+    exp: elasticsearch_jvm_memory_max_bytes.tagNotEqual('cluster','unknown_cluster').tagEqual('area' , 'heap').sum(['cluster' , 'name'])
+  - name: jvm_memory_nonheap_committed
+    exp: elasticsearch_jvm_memory_committed_bytes.tagNotEqual('cluster','unknown_cluster').tagEqual('area' , 'non-heap').sum(['cluster' , 'name'])
+  - name: jvm_memory_heap_committed
+    exp: elasticsearch_jvm_memory_committed_bytes.tagNotEqual('cluster','unknown_cluster').tagEqual('area' , 'heap').sum(['cluster' , 'name'])
+  - name: jvm_memory_pool_peak_used
+    exp: elasticsearch_jvm_memory_pool_peak_used_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'pool'])
+  - name: jvm_gc_count
+    exp: elasticsearch_jvm_gc_collection_seconds_count.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'gc']).increase('PT1M')
+  - name: jvm_gc_time
+    exp: (elasticsearch_jvm_gc_collection_seconds_sum * 1000).tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'gc']).increase('PT1M')
+
+  # cpu
+  - name: process_cpu_percent
+    exp: elasticsearch_process_cpu_percent.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+  - name: os_cpu_percent
+    exp: elasticsearch_os_cpu_percent.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+  - name: os_load1
+    exp: elasticsearch_os_load1.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']) * 100
+  - name: os_load5
+    exp: elasticsearch_os_load5.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']) * 100
+  - name: os_load15
+    exp: elasticsearch_os_load15.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']) * 100
+
+  # translog
+  - name: indices_translog_operations
+    exp: elasticsearch_indices_translog_operations.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).irate()
+  - name: indices_translog_size
+    exp: elasticsearch_indices_translog_size_in_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).irate()
+
+  # breakers tripped
+  - name: breakers_tripped
+    exp: elasticsearch_breakers_tripped.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'breaker']).increase('PT1M')
+  - name: breakers_estimated_size
+    exp: elasticsearch_breakers_estimated_size_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'breaker'])
+
+  # disk
+  - name: disk_usage_percent
+    exp: 100 - (elasticsearch_filesystem_data_available_bytes * 100).tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'mount']) / elasticsearch_filesystem_data_size_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'mount'])
+  - name: disk_usage
+    exp: elasticsearch_filesystem_data_size_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'mount']) - elasticsearch_filesystem_data_available_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'mount'])
+  - name: disk_io_read_bytes
+    exp: elasticsearch_filesystem_io_stats_device_read_size_kilobytes_sum.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'mount']).irate()
+  - name: disk_io_write_bytes
+    exp: elasticsearch_filesystem_io_stats_device_write_size_kilobytes_sum.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name' , 'mount']).irate()
+
+  # network
+  - name: network_send_bytes
+    exp: elasticsearch_transport_tx_size_bytes_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).irate()
+  - name: network_receive_bytes
+    exp: elasticsearch_transport_rx_size_bytes_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).irate()
+
+  # operations
+  - name: indices_search_query_total_req_rate
+    exp: elasticsearch_indices_search_query_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_search_query_time_seconds_proc_rate
+    exp: 1 / ((elasticsearch_indices_search_query_time_seconds.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M') + elasticsearch_indices_search_fetch_time_seconds.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M') + elasticsearch_indices_search_scroll_time_seconds.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M') + elasticsearch_indices_search_suggest_time_seconds.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')) / elasticsearch_indices_search_query_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M'))
+  - name: indices_search_fetch_total_req_rate
+    exp: elasticsearch_indices_search_fetch_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_search_fetch_time_seconds
+    exp: elasticsearch_indices_search_fetch_time_seconds.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).increase('PT1M')
+  - name: indices_indexing_index_total_req_rate
+    exp: elasticsearch_indices_indexing_index_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_indexing_index_total_proc_rate
+    exp: 1 / (elasticsearch_indices_indexing_index_time_seconds_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M') / elasticsearch_indices_indexing_index_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M'))
+  - name: indices_merges_total_req_rate
+    exp: elasticsearch_indices_merges_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_refresh_total_req_rate
+    exp: elasticsearch_indices_refresh_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_flush_total_req_rate
+    exp: elasticsearch_indices_flush_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_get_exists_total_req_rate
+    exp: elasticsearch_indices_get_exists_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_get_missing_total_req_rate
+    exp: elasticsearch_indices_get_missing_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_get_total_req_rate
+    exp: elasticsearch_indices_get_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_indexing_delete_total_req_rate
+    exp: elasticsearch_indices_indexing_delete_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_search_scroll_total_req_rate
+    exp: elasticsearch_indices_search_scroll_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_search_suggest_total_req_rate
+    exp: elasticsearch_indices_search_suggest_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+
+  - name: indices_docs
+    exp: elasticsearch_indices_docs.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+  - name: indices_docs_deleted_total
+    exp: elasticsearch_indices_docs_deleted.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+  - name: indices_docs_deleted
+    exp: elasticsearch_indices_docs_deleted.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_merges_docs_total
+    exp: elasticsearch_indices_merges_docs_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+  - name: indices_merges_total_size_bytes_total
+    exp: elasticsearch_indices_merges_total_size_bytes_total.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name']).rate('PT1M')
+
+  - name: segment_count
+    exp: elasticsearch_indices_segments_count.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
+  - name: segment_memory
+    exp: elasticsearch_indices_segments_memory_bytes.tagNotEqual('cluster','unknown_cluster').sum(['cluster' , 'name'])
\ No newline at end of file
diff --git a/test/script-cases/scripts/mal/test-otel-rules/flink/flink-job.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-job.data.yaml
new file mode 100644
index 000000000000..ed4e945d7379
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-job.data.yaml
@@ -0,0 +1,267 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
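Several of the elasticsearch rules above post-process counters with `rate('PT1M')`, `increase('PT1M')`, or `irate()`. As a rough sketch of the documented MAL window semantics (illustrative Python only, not the MAL engine; the helper names `increase`/`rate` mirror the operation names, and the fixed 60-second window corresponds to the ISO-8601 duration `PT1M` described in the comment block above):

```python
# Illustrative sketch of MAL's counter-window operations, assuming the
# documented semantics: increase(window) is the counter delta over the
# window, and rate(window) is that delta divided by the window in seconds.
WINDOW_SECONDS = 60  # "PT1M" parses as 1 minute (60 seconds)

def increase(samples):
    """samples: [(timestamp_seconds, counter_value)], oldest first, within the window."""
    return samples[-1][1] - samples[0][1]

def rate(samples):
    # per-second rate over the window
    return increase(samples) / WINDOW_SECONDS

# e.g. a counter such as elasticsearch_indices_search_query_total
# moving from 1000 to 1060 over one minute:
points = [(0, 1000.0), (60, 1060.0)]
print(increase(points))  # 60.0
print(rate(points))      # 1.0
```

`irate()`, by contrast, uses only the two most recent samples, so it reacts faster to load changes than `rate('PT1M')` at the cost of being noisier.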
+ +input: + flink_jobmanager_job_numRestarts: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_runningTime: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_restartingTime: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_cancellingTime: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_totalNumberOfCheckpoints: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_numberOfFailedCheckpoints: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_numberOfCompletedCheckpoints: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_numberOfInProgressCheckpoints: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_lastCheckpointSize: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_jobmanager_job_lastCheckpointDuration: + - labels: + cluster: test-cluster + flink_job_name: test-value + value: 100.0 + flink_taskmanager_job_task_operator_currentEmitEventTimeLag: + - labels: + cluster: test-cluster + flink_job_name: test-value + operator_name: test-value + value: 100.0 + flink_taskmanager_job_task_operator_numRecordsIn: + - labels: + cluster: test-cluster + flink_job_name: test-value + operator_name: test-value + value: 100.0 + flink_taskmanager_job_task_operator_numRecordsOut: + - labels: + cluster: test-cluster + flink_job_name: test-value + operator_name: test-value + value: 100.0 + flink_taskmanager_job_task_operator_numBytesInPerSecond: + - labels: + cluster: test-cluster + flink_job_name: test-value + operator_name: test-value + value: 100.0 + flink_taskmanager_job_task_operator_numBytesOutPerSecond: + - labels: + cluster: 
test-cluster + flink_job_name: test-value + operator_name: test-value + value: 100.0 +expected: + meter_flink_job_restart_number: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_runningTime: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_restartingTime: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_cancellingTime: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_checkpoints_total: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_checkpoints_failed: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_checkpoints_completed: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_checkpoints_inProgress: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + 
value: 100.0 + meter_flink_job_lastCheckpointSize: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_lastCheckpointDuration: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_job_currentEmitEventTimeLag: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + operator_name: test-value + flink_job_name: test-value + value: 100.0 + meter_flink_job_numRecordsIn: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + operator_name: test-value + flink_job_name: test-value + value: 100.0 + meter_flink_job_numRecordsOut: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + operator_name: test-value + flink_job_name: test-value + value: 100.0 + meter_flink_job_numBytesInPerSecond: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + operator_name: test-value + flink_job_name: test-value + value: 100.0 + meter_flink_job_numBytesOutPerSecond: + entities: + - scope: ENDPOINT + service: 'flink::test-cluster' + endpoint: test-value + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + operator_name: test-value + flink_job_name: test-value + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/flink/flink-job.yaml b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-job.yaml new file 
mode 100644
index 000000000000..b69d321b7a3e
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-job.yaml
@@ -0,0 +1,72 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This will parse a textual representation of a duration. The formats
+# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS}
+# with days considered to be exactly 24 hours.
+# <p>
+# Examples:
+# <pre>
+#    "PT20.345S" -- parses as "20.345 seconds"
+#    "PT15M"     -- parses as "15 minutes" (where a minute is 60 seconds)
+#    "PT10H"     -- parses as "10 hours" (where an hour is 3600 seconds)
+#    "P2D"       -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+#    "P2DT3H4M"  -- parses as "2 days, 3 hours and 4 minutes"
+#    "P-6H3M"    -- parses as "-6 hours and +3 minutes"
+#    "-P6H3M"    -- parses as "-6 hours and -3 minutes"
+#    "-P-6H+3M"  -- parses as "+6 hours and -3 minutes"
+# </pre>
+filter: "{ tags -> tags.job_name == 'flink-jobManager-monitoring' || tags.job_name == 'flink-taskManager-monitoring' }" # The OpenTelemetry job name
+expSuffix: tag({tags -> tags.cluster = 'flink::' + tags.cluster}).endpoint(['cluster'], ['flink_job_name'], Layer.FLINK)
+metricPrefix: meter_flink_job
+
+metricsRules:
+
+  - name: restart_number
+    exp: flink_jobmanager_job_numRestarts.sum(['cluster','flink_job_name'])
+  - name: runningTime
+    exp: flink_jobmanager_job_runningTime.sum(['cluster','flink_job_name'])
+  - name: restartingTime
+    exp: flink_jobmanager_job_restartingTime.sum(['cluster','flink_job_name'])
+  - name: cancellingTime
+    exp: flink_jobmanager_job_cancellingTime.sum(['cluster','flink_job_name'])
+
+# checkpoints
+  - name: checkpoints_total
+    exp: flink_jobmanager_job_totalNumberOfCheckpoints.sum(['cluster','flink_job_name'])
+  - name: checkpoints_failed
+    exp: flink_jobmanager_job_numberOfFailedCheckpoints.sum(['cluster','flink_job_name'])
+  - name: checkpoints_completed
+    exp: flink_jobmanager_job_numberOfCompletedCheckpoints.sum(['cluster','flink_job_name'])
+  - name: checkpoints_inProgress
+    exp: flink_jobmanager_job_numberOfInProgressCheckpoints.sum(['cluster','flink_job_name'])
+  - name: lastCheckpointSize
+    exp: flink_jobmanager_job_lastCheckpointSize.sum(['cluster','flink_job_name'])
+  - name: lastCheckpointDuration
+    exp: flink_jobmanager_job_lastCheckpointDuration.sum(['cluster','flink_job_name'])
+
+  - name: currentEmitEventTimeLag
+    exp: flink_taskmanager_job_task_operator_currentEmitEventTimeLag.sum(['cluster','flink_job_name','operator_name'])
+
+  - name: numRecordsIn
+    exp: flink_taskmanager_job_task_operator_numRecordsIn.sum(['cluster','flink_job_name','operator_name'])
+  - name: numRecordsOut
+    exp: flink_taskmanager_job_task_operator_numRecordsOut.sum(['cluster','flink_job_name','operator_name'])
+  - name: numBytesInPerSecond
+    exp: flink_taskmanager_job_task_operator_numBytesInPerSecond.sum(['cluster','flink_job_name','operator_name'])
+  - name: numBytesOutPerSecond
+    exp: flink_taskmanager_job_task_operator_numBytesOutPerSecond.sum(['cluster','flink_job_name','operator_name'])
+
+
diff --git a/test/script-cases/scripts/mal/test-otel-rules/flink/flink-jobManager.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-jobManager.data.yaml
new file mode 100644
index 000000000000..b6ce3ee0469c
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-jobManager.data.yaml
@@ -0,0 +1,294 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
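The flink `expSuffix` is why the expected fixtures report service `'flink::test-cluster'` and endpoint `test-value` for an input cluster of `test-cluster`: `tag({tags -> tags.cluster = 'flink::' + tags.cluster})` rewrites the cluster label before `endpoint(['cluster'], ['flink_job_name'], Layer.FLINK)` derives the entity from it. A minimal illustrative sketch (the helper `apply_exp_suffix` is hypothetical, not part of MAL):

```python
# Hypothetical sketch of the expSuffix relabeling asserted by the fixtures:
# prefix the cluster label, then derive the ENDPOINT-scoped entity from
# the 'cluster' and 'flink_job_name' labels.
def apply_exp_suffix(labels):
    labels = dict(labels)  # don't mutate the caller's sample
    labels["cluster"] = "flink::" + labels["cluster"]
    return {
        "scope": "ENDPOINT",
        "service": labels["cluster"],
        "endpoint": labels["flink_job_name"],
        "layer": "FLINK",
        "labels": labels,
    }

entity = apply_exp_suffix({"cluster": "test-cluster", "flink_job_name": "test-value"})
print(entity["service"])   # flink::test-cluster
print(entity["endpoint"])  # test-value
```

The same pattern holds for the elasticsearch rules earlier in this diff, except with an `elasticsearch::` prefix and `instance(['cluster'], ['name'], ...)` producing SERVICE_INSTANCE-scoped entities.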
+ +input: + flink_jobmanager_numRunningJobs: + - labels: + cluster: test-cluster + value: 100.0 + flink_jobmanager_numRegisteredTaskManagers: + - labels: + cluster: test-cluster + value: 100.0 + flink_jobmanager_taskSlotsTotal: + - labels: + cluster: test-cluster + value: 100.0 + flink_jobmanager_taskSlotsAvailable: + - labels: + cluster: test-cluster + value: 100.0 + flink_jobmanager_Status_JVM_CPU_Load: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_CPU_Time: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_Memory_Heap_Used: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_Memory_Heap_Max: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_Memory_NonHeap_Used: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_Memory_NonHeap_Max: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_Threads_Count: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_Memory_Metaspace_Used: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_Memory_Metaspace_Max: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_GarbageCollector_G1_Young_Generation_Count: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_GarbageCollector_G1_Old_Generation_Count: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_GarbageCollector_G1_Young_Generation_Time: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + 
flink_jobmanager_Status_JVM_GarbageCollector_G1_Old_Generation_Time: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_GarbageCollector_All_Count: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 + flink_jobmanager_Status_JVM_GarbageCollector_All_Time: + - labels: + cluster: test-cluster + jobManager_node: test-value + value: 100.0 +expected: + meter_flink_jobManager_running_job_number: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_taskManagers_registered_number: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_taskManagers_slots_total: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_taskManagers_slots_available: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_jvm_cpu_load: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 100000.0 + meter_flink_jobManager_jvm_cpu_time: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 50.0 + meter_flink_jobManager_jvm_memory_heap_used: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_jvm_memory_heap_available: + entities: + - scope: SERVICE + 
service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 0.0 + meter_flink_jobManager_jvm_memory_nonHeap_used: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_jvm_memory_nonHeap_available: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 0.0 + meter_flink_jobManager_jvm_thread_count: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_jvm_memory_metaspace_used: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_jobManager_jvm_memory_metaspace_available: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 0.0 + meter_flink_jobManager_jvm_g1_young_generation_count: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 50.0 + meter_flink_jobManager_jvm_g1_old_generation_count: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 50.0 + meter_flink_jobManager_jvm_g1_young_generation_time: + entities: + - scope: SERVICE + service: 'flink::test-cluster' + layer: FLINK + samples: + - labels: + jobManager_node: test-value + cluster: 'flink::test-cluster' + value: 50.0 
+  meter_flink_jobManager_jvm_g1_old_generation_time:
+    entities:
+      - scope: SERVICE
+        service: 'flink::test-cluster'
+        layer: FLINK
+    samples:
+      - labels:
+          jobManager_node: test-value
+          cluster: 'flink::test-cluster'
+        value: 50.0
+  meter_flink_jobManager_jvm_all_garbageCollector_count:
+    entities:
+      - scope: SERVICE
+        service: 'flink::test-cluster'
+        layer: FLINK
+    samples:
+      - labels:
+          jobManager_node: test-value
+          cluster: 'flink::test-cluster'
+        value: 50.0
+  meter_flink_jobManager_jvm_all_garbageCollector_time:
+    entities:
+      - scope: SERVICE
+        service: 'flink::test-cluster'
+        layer: FLINK
+    samples:
+      - labels:
+          jobManager_node: test-value
+          cluster: 'flink::test-cluster'
+        value: 50.0
diff --git a/test/script-cases/scripts/mal/test-otel-rules/flink/flink-jobManager.yaml b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-jobManager.yaml
new file mode 100644
index 000000000000..7a01653ed930
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-jobManager.yaml
@@ -0,0 +1,80 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This will parse a textual representation of a duration. The formats
+# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS}
+# with days considered to be exactly 24 hours.
+# <p>
+# Examples:
+# <pre>
+#    "PT20.345S" -- parses as "20.345 seconds"
+#    "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds)
+#    "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds)
+#    "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+#    "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes"
+#    "P-6H3M" -- parses as "-6 hours and +3 minutes"
+#    "-P6H3M" -- parses as "-6 hours and -3 minutes"
+#    "-P-6H+3M" -- parses as "+6 hours and -3 minutes"
+# </pre>
+filter: "{ tags -> tags.job_name == 'flink-jobManager-monitoring' }" # The OpenTelemetry job name
+expSuffix: tag({tags -> tags.cluster = 'flink::' + tags.cluster}).service(['cluster'], Layer.FLINK)
+metricPrefix: meter_flink_jobManager
+metricsRules:
+
+  # job
+  - name: running_job_number
+    exp: flink_jobmanager_numRunningJobs.sum(['cluster'])
+
+  # task
+  - name: taskManagers_registered_number
+    exp: flink_jobmanager_numRegisteredTaskManagers.sum(['cluster'])
+  - name: taskManagers_slots_total
+    exp: flink_jobmanager_taskSlotsTotal.sum(['cluster'])
+  - name: taskManagers_slots_available
+    exp: flink_jobmanager_taskSlotsAvailable.sum(['cluster'])
+
+  # jvm
+  - name: jvm_cpu_load
+    exp: flink_jobmanager_Status_JVM_CPU_Load.sum(['cluster','jobManager_node'])*1000
+  - name: jvm_cpu_time
+    exp: flink_jobmanager_Status_JVM_CPU_Time.sum(['cluster','jobManager_node']).increase('PT1M')
+  - name: jvm_memory_heap_used
+    exp: flink_jobmanager_Status_JVM_Memory_Heap_Used.sum(['cluster','jobManager_node'])
+  - name: jvm_memory_heap_available
+    exp: flink_jobmanager_Status_JVM_Memory_Heap_Max.sum(['cluster','jobManager_node'])-flink_jobmanager_Status_JVM_Memory_Heap_Used.sum(['cluster','jobManager_node'])
+  - name: jvm_memory_nonHeap_used
+    exp: flink_jobmanager_Status_JVM_Memory_NonHeap_Used.sum(['cluster','jobManager_node'])
+  - name: jvm_memory_nonHeap_available
+    exp: flink_jobmanager_Status_JVM_Memory_NonHeap_Max.sum(['cluster','jobManager_node'])-flink_jobmanager_Status_JVM_Memory_NonHeap_Used.sum(['cluster','jobManager_node'])
+  - name: jvm_thread_count
+    exp: flink_jobmanager_Status_JVM_Threads_Count.sum(['cluster','jobManager_node'])
+  - name: jvm_memory_metaspace_used
+    exp: flink_jobmanager_Status_JVM_Memory_Metaspace_Used.sum(['cluster','jobManager_node'])
+  - name: jvm_memory_metaspace_available
+    exp: flink_jobmanager_Status_JVM_Memory_Metaspace_Max.sum(['cluster','jobManager_node'])-flink_jobmanager_Status_JVM_Memory_Metaspace_Used.sum(['cluster','jobManager_node'])
+
+  - name: jvm_g1_young_generation_count
+    exp: flink_jobmanager_Status_JVM_GarbageCollector_G1_Young_Generation_Count.sum(['cluster','jobManager_node']).increase('PT1M')
+  - name: jvm_g1_old_generation_count
+    exp: flink_jobmanager_Status_JVM_GarbageCollector_G1_Old_Generation_Count.sum(['cluster','jobManager_node']).increase('PT1M')
+  - name: jvm_g1_young_generation_time
+    exp: flink_jobmanager_Status_JVM_GarbageCollector_G1_Young_Generation_Time.sum(['cluster','jobManager_node']).increase('PT1M')
+  - name: jvm_g1_old_generation_time
+    exp: flink_jobmanager_Status_JVM_GarbageCollector_G1_Old_Generation_Time.sum(['cluster','jobManager_node']).increase('PT1M')
+
+  - name: jvm_all_garbageCollector_count
+    exp: flink_jobmanager_Status_JVM_GarbageCollector_All_Count.sum(['cluster','jobManager_node']).increase('PT1M')
+  - name: jvm_all_garbageCollector_time
+    exp: flink_jobmanager_Status_JVM_GarbageCollector_All_Time.sum(['cluster','jobManager_node']).increase('PT1M')
diff --git a/test/script-cases/scripts/mal/test-otel-rules/flink/flink-taskManager.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-taskManager.data.yaml
new file mode 100644
index 000000000000..dbde688d5df8
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-taskManager.data.yaml
@@ -0,0 +1,413 @@
+# Licensed to
the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + flink_taskmanager_Status_JVM_CPU_Load: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_CPU_Time: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_Memory_Heap_Used: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_Memory_Heap_Max: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_Threads_Count: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_Memory_Metaspace_Max: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_Memory_Metaspace_Used: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_Memory_NonHeap_Used: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_JVM_Memory_NonHeap_Max: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_job_task_numRecordsIn: 
+ - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_numRecordsOut: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_numBytesInPerSecond: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_numBytesOutPerSecond: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_Status_Shuffle_Netty_UsedMemory: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_Status_Shuffle_Netty_AvailableMemory: + - labels: + cluster: test-cluster + taskManager_node: test-value + value: 100.0 + flink_taskmanager_job_task_Shuffle_Netty_Input_Buffers_inPoolUsage: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_Shuffle_Netty_Output_Buffers_outPoolUsage: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_isBackPressured: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_idleTimeMsPerSecond: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_busyTimeMsPerSecond: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_softBackPressuredTimeMsPerSecond: + - labels: + 
cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 + flink_taskmanager_job_task_hardBackPressuredTimeMsPerSecond: + - labels: + cluster: test-cluster + taskManager_node: test-value + flink_job_name: test-value + task_name: test-value + value: 100.0 +expected: + meter_flink_taskManager_jvm_cpu_load: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 100000.0 + meter_flink_taskManager_jvm_cpu_time: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 50.0 + meter_flink_taskManager_jvm_memory_heap_used: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_taskManager_jvm_memory_heap_available: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 0.0 + meter_flink_taskManager_jvm_thread_count: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_taskManager_jvm_memory_metaspace_available: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 0.0 + meter_flink_taskManager_jvm_memory_metaspace_used: + entities: + - scope: SERVICE_INSTANCE + service: 
'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_taskManager_jvm_memory_nonHeap_used: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_taskManager_jvm_memory_nonHeap_available: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 0.0 + meter_flink_taskManager_numRecordsIn: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 50.0 + meter_flink_taskManager_numRecordsOut: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 50.0 + meter_flink_taskManager_numBytesInPerSecond: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_taskManager_numBytesOutPerSecond: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_taskManager_netty_usedMemory: + entities: + - scope: 
SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_taskManager_netty_availableMemory: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + cluster: 'flink::test-cluster' + value: 100.0 + meter_flink_taskManager_inPoolUsage: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 10000.0 + meter_flink_taskManager_outPoolUsage: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 10000.0 + meter_flink_taskManager_isBackPressured: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_taskManager_idleTimeMsPerSecond: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_taskManager_busyTimeMsPerSecond: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 
100.0 + meter_flink_taskManager_softBackPressuredTimeMsPerSecond: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 + meter_flink_taskManager_hardBackPressuredTimeMsPerSecond: + entities: + - scope: SERVICE_INSTANCE + service: 'flink::test-cluster' + instance: test-value + layer: FLINK + samples: + - labels: + taskManager_node: test-value + task_name: test-value + cluster: 'flink::test-cluster' + flink_job_name: test-value + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/flink/flink-taskManager.yaml b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-taskManager.yaml new file mode 100644 index 000000000000..900594a2b1a7 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/flink/flink-taskManager.yaml @@ -0,0 +1,89 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p>
+# Examples:
+# <pre>
+#    "PT20.345S" -- parses as "20.345 seconds"
+#    "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds)
+#    "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds)
+#    "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+#    "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes"
+#    "P-6H3M" -- parses as "-6 hours and +3 minutes"
+#    "-P6H3M" -- parses as "-6 hours and -3 minutes"
+#    "-P-6H+3M" -- parses as "+6 hours and -3 minutes"
+# </pre>
+filter: "{ tags -> tags.job_name == 'flink-taskManager-monitoring' }" # The OpenTelemetry job name
+expSuffix: tag({tags -> tags.cluster = 'flink::' + tags.cluster}).instance(['cluster'], ['taskManager_node'], Layer.FLINK)
+metricPrefix: meter_flink_taskManager
+metricsRules:
+
+  # jvm
+  - name: jvm_cpu_load
+    exp: flink_taskmanager_Status_JVM_CPU_Load.sum(['cluster','taskManager_node'])*1000
+  - name: jvm_cpu_time
+    exp: flink_taskmanager_Status_JVM_CPU_Time.sum(['cluster','taskManager_node']).increase('PT1M')
+  - name: jvm_memory_heap_used
+    exp: flink_taskmanager_Status_JVM_Memory_Heap_Used.sum(['cluster','taskManager_node'])
+  - name: jvm_memory_heap_available
+    exp: flink_taskmanager_Status_JVM_Memory_Heap_Max.sum(['cluster','taskManager_node'])-flink_taskmanager_Status_JVM_Memory_Heap_Used.sum(['cluster','taskManager_node'])
+  - name: jvm_thread_count
+    exp: flink_taskmanager_Status_JVM_Threads_Count.sum(['cluster','taskManager_node'])
+  - name: jvm_memory_metaspace_available
+    exp: flink_taskmanager_Status_JVM_Memory_Metaspace_Max.sum(['cluster','taskManager_node'])-flink_taskmanager_Status_JVM_Memory_Metaspace_Used.sum(['cluster','taskManager_node'])
+  - name: jvm_memory_metaspace_used
+    exp: flink_taskmanager_Status_JVM_Memory_Metaspace_Used.sum(['cluster','taskManager_node'])
+
+  - name: jvm_memory_nonHeap_used
+    exp: flink_taskmanager_Status_JVM_Memory_NonHeap_Used.sum(['cluster','taskManager_node'])
+  - name: jvm_memory_nonHeap_available
+    exp: flink_taskmanager_Status_JVM_Memory_NonHeap_Max.sum(['cluster','taskManager_node'])-flink_taskmanager_Status_JVM_Memory_NonHeap_Used.sum(['cluster','taskManager_node'])
+
+  # records
+  - name: numRecordsIn
+    exp: flink_taskmanager_job_task_numRecordsIn.sum(['cluster','taskManager_node','flink_job_name','task_name']).increase('PT1M')
+  - name: numRecordsOut
+    exp: flink_taskmanager_job_task_numRecordsOut.sum(['cluster','taskManager_node','flink_job_name','task_name']).increase('PT1M')
+  - name: numBytesInPerSecond
+    exp: flink_taskmanager_job_task_numBytesInPerSecond.sum(['cluster','taskManager_node','flink_job_name','task_name'])
+  - name: numBytesOutPerSecond
+    exp: flink_taskmanager_job_task_numBytesOutPerSecond.sum(['cluster','taskManager_node','flink_job_name','task_name'])
+
+  # network
+  - name: netty_usedMemory
+    exp: flink_taskmanager_Status_Shuffle_Netty_UsedMemory.sum(['cluster','taskManager_node'])
+  - name: netty_availableMemory
+    exp: flink_taskmanager_Status_Shuffle_Netty_AvailableMemory.sum(['cluster','taskManager_node'])
+  - name: inPoolUsage
+    exp: flink_taskmanager_job_task_Shuffle_Netty_Input_Buffers_inPoolUsage.sum(['cluster','taskManager_node','flink_job_name','task_name'])*100
+  - name: outPoolUsage
+    exp: flink_taskmanager_job_task_Shuffle_Netty_Output_Buffers_outPoolUsage.sum(['cluster','taskManager_node','flink_job_name','task_name'])*100
+
+  # backPressured
+  - name: isBackPressured
+    exp: flink_taskmanager_job_task_isBackPressured.sum(['cluster','taskManager_node','flink_job_name','task_name'])
+  - name: idleTimeMsPerSecond
+    exp: flink_taskmanager_job_task_idleTimeMsPerSecond.sum(['cluster','taskManager_node','flink_job_name','task_name'])
+  - name: busyTimeMsPerSecond
+    exp: flink_taskmanager_job_task_busyTimeMsPerSecond.sum(['cluster','taskManager_node','flink_job_name','task_name'])
+  - name: softBackPressuredTimeMsPerSecond
+    exp: flink_taskmanager_job_task_softBackPressuredTimeMsPerSecond.sum(['cluster','taskManager_node','flink_job_name','task_name'])
+  - name: hardBackPressuredTimeMsPerSecond
+    exp: flink_taskmanager_job_task_hardBackPressuredTimeMsPerSecond.sum(['cluster','taskManager_node','flink_job_name','task_name'])
+
+
diff --git a/test/script-cases/scripts/mal/test-otel-rules/istio-controlplane.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/istio-controlplane.data.yaml
new file mode 100644
index 000000000000..4261d99dbcd9
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-otel-rules/istio-controlplane.data.yaml
@@ -0,0 +1,514 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
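Fixtures where the input counter is 100.0 but the expected value is 50.0 correspond to rules ending in `increase('PT1M')`, which turns a cumulative counter into its growth over an ISO-8601 window — the format the duration comment at the top of each rule file describes. A hedged sketch of the idea (simplified, with illustrative helper names; not the OAP implementation):

```python
# Sketch of counter increase over an ISO-8601 window such as 'PT1M':
# keep timestamped points and subtract the value at the window's start.
from datetime import timedelta
import re

def parse_duration(text):
    """Parse a small subset of ISO-8601 durations, e.g. PT1M, PT10H, PT20.345S."""
    m = re.fullmatch(r'PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?', text)
    h, mins, s = (float(g) if g else 0.0 for g in m.groups())
    return timedelta(hours=h, minutes=mins, seconds=s)

def increase(points, now, window):
    """points: ascending list of (timestamp_seconds, value) for one series."""
    start = now - window.total_seconds()
    inside = [v for t, v in points if t >= start]
    # Delta between the newest and oldest value inside the window.
    return inside[-1] - inside[0] if len(inside) > 1 else 0.0

print(parse_duration('PT1M'))  # 0:01:00
print(increase([(0, 50.0), (30, 100.0)], now=60, window=parse_duration('PT1M')))  # 50.0
```

Rules without `increase` (the `*PerSecond` gauges, for instance) pass the latest sample through, which is why those fixtures expect the raw 100.0.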
+ +input: + istio_build: + - labels: + cluster: test-cluster + app: test-app + tag: 1.19.0 + component: pilot + value: 100.0 + process_virtual_memory_bytes: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + process_resident_memory_bytes: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + go_memstats_alloc_bytes: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + go_memstats_heap_inuse_bytes: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + go_memstats_stack_inuse_bytes: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + process_cpu_seconds_total: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + go_goroutines: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds_pushes: + - labels: + cluster: test-cluster + app: test-app + type: lds + value: 100.0 + pilot_xds_cds_reject: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds_eds_reject: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds_rds_reject: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds_lds_reject: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds_write_timeout: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_total_xds_internal_errors: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_total_xds_rejects: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds_push_context_errors: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds_push_timeout: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_proxy_convergence_time: + - labels: + cluster: test-cluster + app: test-app + le: '50' + value: 10.0 + - labels: + cluster: test-cluster + app: test-app + le: '100' + value: 20.0 + - labels: + cluster: test-cluster + app: test-app + le: '250' + value: 30.0 + - labels: + cluster: 
test-cluster + app: test-app + le: '500' + value: 40.0 + - labels: + cluster: test-cluster + app: test-app + le: '1000' + value: 50.0 + pilot_conflict_inbound_listener: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_conflict_outbound_listener_http_over_current_tcp: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_conflict_outbound_listener_tcp_over_current_tcp: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_conflict_outbound_listener_tcp_over_current_http: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_virt_services: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_services: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + pilot_xds: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + galley_validation_passed: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + galley_validation_failed: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + sidecar_injection_success_total: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 + sidecar_injection_failure_total: + - labels: + cluster: test-cluster + app: istiod + value: 100.0 +expected: + meter_istio_pilot_version: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.test-app' + layer: MESH_CP + samples: + - labels: + app: test-app + cluster: 'istio-ctrl::test-cluster' + tag: 1.19.0 + value: 100.0 + meter_istio_virtual_memory: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_resident_memory: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_go_alloc: + entities: + - scope: SERVICE + service: 
'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_go_heap_inuse: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_go_stack_inuse: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_cpu: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 2500.0 + meter_istio_go_goroutines: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_xds_pushes: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.test-app' + layer: MESH_CP + samples: + - labels: + app: test-app + type: lds + cluster: 'istio-ctrl::test-cluster' + value: 0.0 + meter_istio_pilot_xds_cds_reject: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_xds_eds_reject: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_xds_rds_reject: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_xds_lds_reject: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + 
samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_xds_write_timeout: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_pilot_total_xds_internal_errors: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_pilot_total_xds_rejects: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_pilot_xds_push_context_errors: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_pilot_xds_push_timeout: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_pilot_proxy_push_percentile: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.test-app' + layer: MESH_CP + samples: + - labels: + app: test-app + cluster: 'istio-ctrl::test-cluster' + le: '1000000' + value: 12.5 + - labels: + app: test-app + cluster: 'istio-ctrl::test-cluster' + le: '100000' + value: 5.0 + - labels: + app: test-app + cluster: 'istio-ctrl::test-cluster' + le: '250000' + value: 7.5 + - labels: + app: test-app + cluster: 'istio-ctrl::test-cluster' + le: '500000' + value: 10.0 + - labels: + app: test-app + cluster: 'istio-ctrl::test-cluster' + le: '50000' + value: 2.5 + meter_istio_pilot_conflict_il: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + 
cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_conflict_ol_http_tcp: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_conflict_ol_tcp_tcp: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_conflict_ol_tcp_http: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_virt_services: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_services: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_pilot_xds: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 100.0 + meter_istio_galley_validation_passed: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_galley_validation_failed: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_sidecar_injection_success_total: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 
'istio-ctrl::test-cluster' + value: 25.0 + meter_istio_sidecar_injection_failure_total: + entities: + - scope: SERVICE + service: 'istio-ctrl::test-cluster.istiod' + layer: MESH_CP + samples: + - labels: + app: istiod + cluster: 'istio-ctrl::test-cluster' + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/istio-controlplane.yaml b/test/script-cases/scripts/mal/test-otel-rules/istio-controlplane.yaml new file mode 100644 index 000000000000..4fcf1df9054b --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/istio-controlplane.yaml @@ -0,0 +1,108 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +expSuffix: tag({tags -> tags.cluster = 'istio-ctrl::' + tags.cluster}).service(['cluster', 'app'], Layer.MESH_CP) +metricPrefix: meter_istio +metricsRules: + ## Resource usage + # Pilot Versions + - name: pilot_version + exp: istio_build.tagEqual('component', 'pilot').sum(['cluster', 'app', 'tag']) + # Memory + - name: virtual_memory + exp: process_virtual_memory_bytes.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: resident_memory + exp: process_resident_memory_bytes.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: go_alloc + exp: go_memstats_alloc_bytes.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: go_heap_inuse + exp: go_memstats_heap_inuse_bytes.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: go_stack_inuse + exp: go_memstats_stack_inuse_bytes.tagEqual('app', 'istiod').sum(['cluster', 'app']) + # CPU + - name: cpu + exp: (process_cpu_seconds_total * 100).tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + # Goroutines + - name: go_goroutines + exp: go_goroutines.tagEqual('app', 'istiod').sum(['cluster', 'app']) + ## Pilot push info + # Pilot pushes + - name: pilot_xds_pushes + exp: pilot_xds_pushes.tagMatch('type', 'lds|cds|rds|eds').sum(['cluster', 'app', 'type']).irate() + # Pilot Errors + - name: pilot_xds_cds_reject + exp: pilot_xds_cds_reject.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_xds_eds_reject + exp: pilot_xds_eds_reject.tagEqual('app', 'istiod').sum(['cluster', 
'app']) + - name: pilot_xds_rds_reject + exp: pilot_xds_rds_reject.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_xds_lds_reject + exp: pilot_xds_lds_reject.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_xds_write_timeout + exp: pilot_xds_write_timeout.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + - name: pilot_total_xds_internal_errors + exp: pilot_total_xds_internal_errors.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + - name: pilot_total_xds_rejects + exp: pilot_total_xds_rejects.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + - name: pilot_xds_push_context_errors + exp: pilot_xds_push_context_errors.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + - name: pilot_xds_push_timeout + exp: pilot_xds_push_timeout.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + # Proxy Push Time + - name: pilot_proxy_push_percentile + exp: pilot_proxy_convergence_time.sum(['cluster', 'app', 'le']).rate('PT1M').histogram().histogram_percentile([50,90,99]) + # Conflicts + - name: pilot_conflict_il + exp: pilot_conflict_inbound_listener.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_conflict_ol_http_tcp + exp: pilot_conflict_outbound_listener_http_over_current_tcp.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_conflict_ol_tcp_tcp + exp: pilot_conflict_outbound_listener_tcp_over_current_tcp.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_conflict_ol_tcp_http + exp: pilot_conflict_outbound_listener_tcp_over_current_http.tagEqual('app', 'istiod').sum(['cluster', 'app']) + # ADS Monitoring + - name: pilot_virt_services + exp: pilot_virt_services.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_services + exp: pilot_services.tagEqual('app', 'istiod').sum(['cluster', 'app']) + - name: pilot_xds + exp: pilot_xds.tagEqual('app', 'istiod').sum(['cluster', 'app']) + + ## Webhooks + # Configuration 
Validation + - name: galley_validation_passed + exp: galley_validation_passed.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + - name: galley_validation_failed + exp: galley_validation_failed.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + # Sidecar Injection + - name: sidecar_injection_success_total + exp: sidecar_injection_success_total.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') + - name: sidecar_injection_failure_total + exp: sidecar_injection_failure_total.tagEqual('app', 'istiod').sum(['cluster', 'app']).rate('PT1M') diff --git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-cluster.data.yaml new file mode 100644 index 000000000000..40506df4b0cd --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-cluster.data.yaml @@ -0,0 +1,401 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + kube_node_status_capacity: + - labels: + cluster: test-cluster + remove: test-value + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + remove: test-value + resource: memory + value: 1.0 + - labels: + cluster: test-cluster + remove: test-value + resource: ephemeral_storage + value: 1.0 + kube_node_status_allocatable: + - labels: + cluster: test-cluster + remove: test-value + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + remove: test-value + resource: memory + value: 1.0 + - labels: + cluster: test-cluster + remove: test-value + resource: ephemeral_storage + value: 1.0 + kube_pod_container_resource_requests: + - labels: + cluster: test-cluster + remove: test-value + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + remove: test-value + resource: memory + value: 1.0 + kube_pod_container_resource_limits: + - labels: + cluster: test-cluster + remove: test-value + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + remove: test-value + resource: memory + value: 1.0 + kube_node_info: + - labels: + cluster: test-cluster + remove: test-value + value: 1.0 + kube_node_status_condition: + - labels: + cluster: test-cluster + node: test-node + condition: test-value + remove: test-value + status: 'true' + value: 1.0 + kube_namespace_labels: + - labels: + cluster: test-cluster + remove: test-value + value: 1.0 + kube_deployment_labels: + - labels: + cluster: test-cluster + remove: test-value + value: 1.0 + kube_deployment_status_condition: + - labels: + cluster: test-cluster + deployment: test-value + namespace: test-namespace + condition: Available + status: active + remove: test-value + value: 1.0 + kube_deployment_spec_replicas: + - labels: + cluster: test-cluster + deployment: test-value + namespace: test-namespace + remove: test-value + value: 1.0 + kube_statefulset_labels: + - labels: + cluster: test-cluster + remove: test-value + value: 1.0 + kube_daemonset_labels: + - labels: + cluster: test-cluster 
+ remove: test-value + value: 1.0 + kube_service_info: + - labels: + cluster: test-cluster + remove: test-value + value: 1.0 + kube_pod_status_phase: + - labels: + cluster: test-cluster + service: test-service + phase: test-value + pod: test-pod + remove: test-value + value: 1.0 + kube_pod_info: + - labels: + cluster: test-cluster + remove: test-value + value: 1.0 + kube_pod_container_info: + - labels: + cluster: test-cluster + remove: test-value + value: 1.0 + kube_pod_container_status_waiting_reason: + - labels: + cluster: test-cluster + pod: test-pod + container: test-container + reason: test-value + remove: test-value + value: 1.0 + kube_pod_container_status_terminated_reason: + - labels: + cluster: test-cluster + pod: test-pod + container: test-container + reason: test-value + remove: test-value + value: 1.0 +expected: + k8s_cluster_cpu_cores: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1000.0 + k8s_cluster_cpu_cores_allocatable: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1000.0 + k8s_cluster_cpu_cores_requests: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1000.0 + k8s_cluster_cpu_cores_limits: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1000.0 + k8s_cluster_memory_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_memory_allocatable: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + 
k8s_cluster_memory_requests: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_memory_limits: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_storage_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_storage_allocatable: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_node_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_node_status: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + condition: test-value + value: 1.0 + k8s_cluster_namespace_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_deployment_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_deployment_status: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + namespace: test-namespace + cluster: 'k8s-cluster::test-cluster' + deployment: test-value + status: active + value: 1.0 + k8s_cluster_deployment_spec_replicas: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + namespace: test-namespace + cluster: 
'k8s-cluster::test-cluster' + deployment: test-value + value: 1.0 + k8s_cluster_statefulset_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_daemonset_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_service_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_service_pod_status: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + phase: test-value + cluster: 'k8s-cluster::test-cluster' + service: test-service + value: 1.0 + k8s_cluster_pod_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_pod_status_not_running: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + pod: test-pod + phase: test-value + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_container_total: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + value: 1.0 + k8s_cluster_pod_status_waiting: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + container: test-container + cluster: 'k8s-cluster::test-cluster' + reason: test-value + pod: test-pod + value: 1.0 + k8s_cluster_pod_status_terminated: + entities: + - scope: SERVICE + service: 'k8s-cluster::test-cluster' + layer: K8S + samples: + - labels: + container: test-container + cluster: 'k8s-cluster::test-cluster' + reason: test-value + pod: test-pod + value: 1.0 diff 
--git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-cluster.yaml new file mode 100644 index 000000000000..33bd30f9f60a --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-cluster.yaml @@ -0,0 +1,94 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name in [ 'kubernetes-cadvisor', 'kube-state-metrics' ] }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'k8s-cluster::' + tags.cluster}).service(['cluster'], Layer.K8S) +metricPrefix: k8s_cluster +metricsRules: + - name: cpu_cores + exp: (kube_node_status_capacity * 1000).tagEqual('resource' , 'cpu').sum(['cluster']) + - name: cpu_cores_allocatable + exp: (kube_node_status_allocatable * 1000).tagEqual('resource' , 'cpu').sum(['cluster']) + - name: cpu_cores_requests + exp: (kube_pod_container_resource_requests * 1000).tagEqual('resource' , 'cpu').sum(['cluster']) + - name: cpu_cores_limits + exp: (kube_pod_container_resource_limits * 1000).tagEqual('resource' , 'cpu').sum(['cluster']) + + - name: memory_total + exp: kube_node_status_capacity.tagEqual('resource' , 'memory').sum(['cluster']) + - name: memory_allocatable + exp: kube_node_status_allocatable.tagEqual('resource' , 'memory').sum(['cluster']) + - name: memory_requests + exp: kube_pod_container_resource_requests.tagEqual('resource' , 'memory').sum(['cluster']) + - name: memory_limits + exp: kube_pod_container_resource_limits.tagEqual('resource' , 'memory').sum(['cluster']) + + - name: storage_total + exp: kube_node_status_capacity.tagEqual('resource' , 'ephemeral_storage').sum(['cluster']) + - name: storage_allocatable + exp: kube_node_status_allocatable.tagEqual('resource' , 'ephemeral_storage').sum(['cluster']) + + - name: node_total + exp: 
kube_node_info.sum(['cluster']) + - name: node_status + exp: kube_node_status_condition.valueEqual(1).tagMatch('status' , 'true|unknown').sum(['cluster' , 'node' ,'condition']) + + - name: namespace_total + exp: kube_namespace_labels.sum(['cluster']) + + - name: deployment_total + exp: kube_deployment_labels.sum(['cluster']) + - name: deployment_status + exp: kube_deployment_status_condition.valueEqual(1).tagMatch('condition' , 'Available').sum(['cluster' , 'deployment' , 'namespace' ,'condition' , 'status']).tag({tags -> tags.remove('condition')}) + - name: deployment_spec_replicas + exp: kube_deployment_spec_replicas.sum(['cluster' , 'deployment' , 'namespace']) + + - name: statefulset_total + exp: kube_statefulset_labels.sum(['cluster']) + + - name: daemonset_total + exp: kube_daemonset_labels.sum(['cluster']) + + - name: service_total + exp: kube_service_info.sum(['cluster']) + - name: service_pod_status + exp: kube_pod_status_phase.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').valueEqual(1).sum(['cluster' , 'service' , 'phase']) + + - name: pod_total + exp: kube_pod_info.sum(['cluster']) + - name: pod_status_not_running + exp: kube_pod_status_phase.valueEqual(1).tagNotMatch('phase' , 'Running').sum(['cluster' , 'pod' , 'phase']) + + - name: container_total + exp: kube_pod_container_info.sum(['cluster']) + - name: pod_status_waiting + exp: kube_pod_container_status_waiting_reason.valueEqual(1).sum(['cluster' , 'pod' , 'container' , 'reason']) + - name: pod_status_terminated + exp: kube_pod_container_status_terminated_reason.valueEqual(1).sum(['cluster' , 'pod' , 'container' , 'reason']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-instance.data.yaml new file mode 100644 index 000000000000..1391212f49ab --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-instance.data.yaml @@ -0,0 +1,37 @@ 
+# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + kube_pod_status_phase: + - labels: + cluster: test-cluster + namespace: test-namespace + service: test-service + pod: test-pod + value: 100.0 +expected: + k8s_service_instance_pod_instance_status: + entities: + - scope: SERVICE_INSTANCE + service: 'test-cluster::test-pod.test-namespace' + instance: test-pod + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + namespace: test-namespace + service: test-pod.test-namespace + pod: test-pod + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-instance.yaml b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-instance.yaml new file mode 100644 index 000000000000..8d723fbd4b86 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-instance.yaml @@ -0,0 +1,23 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +filter: "{ tags -> tags.job_name in [ 'kubernetes-cadvisor', 'kube-state-metrics' ] }" # The OpenTelemetry job name +expSuffix: |- + service(['cluster' , 'service'], '::', Layer.K8S_SERVICE) + .instance(['cluster', 'service'], '::', ['pod'], '', Layer.K8S_SERVICE, { tags -> ['pod': tags.pod, 'namespace': tags.namespace] }) +metricPrefix: k8s_service_instance +metricsRules: + - name: pod_instance_status + exp: kube_pod_status_phase.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').sum(['cluster', 'namespace', 'service' , 'pod']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-node.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-node.data.yaml new file mode 100644 index 000000000000..445543ea1640 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-node.data.yaml @@ -0,0 +1,284 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + kube_node_status_capacity: + - labels: + cluster: test-cluster + node: test-node + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + node: test-node + resource: memory + value: 1.0 + - labels: + cluster: test-cluster + node: test-node + resource: ephemeral_storage + value: 1.0 + container_cpu_usage_seconds_total: + - labels: + cluster: test-cluster + node: test-node + id: / + value: 1.0 + kube_node_status_allocatable: + - labels: + cluster: test-cluster + node: test-node + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + node: test-node + resource: memory + value: 1.0 + - labels: + cluster: test-cluster + node: test-node + resource: ephemeral_storage + value: 1.0 + kube_pod_container_resource_requests: + - labels: + cluster: test-cluster + node: test-node + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + node: test-node + resource: memory + value: 1.0 + kube_pod_container_resource_limits: + - labels: + cluster: test-cluster + node: test-node + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + node: test-node + resource: memory + value: 1.0 + container_memory_working_set_bytes: + - labels: + cluster: test-cluster + node: test-node + id: / + value: 1.0 + kube_node_status_condition: + - labels: + cluster: test-cluster + node: test-node + condition: test-value + status: 'true' + value: 1.0 + kube_pod_info: + - labels: + cluster: test-cluster + node: test-node + value: 1.0 + container_network_receive_bytes_total: + - labels: + cluster: test-cluster + node: test-node + id: / + value: 1.0 + 
container_network_transmit_bytes_total: + - labels: + cluster: test-cluster + node: test-node + id: / + value: 1.0 +expected: + k8s_node_cpu_cores: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1000.0 + k8s_node_cpu_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 0.0 + k8s_node_cpu_cores_allocatable: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1000.0 + k8s_node_cpu_cores_requests: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1000.0 + k8s_node_cpu_cores_limits: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1000.0 + k8s_node_memory_total: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1.0 + k8s_node_memory_allocatable: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1.0 + k8s_node_memory_requests: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: 
test-node + value: 1.0 + k8s_node_memory_limits: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1.0 + k8s_node_memory_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1.0 + k8s_node_storage_total: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1.0 + k8s_node_storage_allocatable: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1.0 + k8s_node_node_status: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + condition: test-value + value: 1.0 + k8s_node_pod_total: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 1.0 + k8s_node_network_receive: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 0.0 + k8s_node_network_transmit: + entities: + - scope: SERVICE_INSTANCE + service: 'k8s-cluster::test-cluster' + instance: test-node + layer: K8S + samples: + - labels: + cluster: 'k8s-cluster::test-cluster' + node: test-node + value: 0.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-node.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-node.yaml new file mode 100644 index 000000000000..29c95d798751 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-node.yaml @@ -0,0 +1,74 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name in [ 'kubernetes-cadvisor', 'kube-state-metrics' ] }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'k8s-cluster::' + tags.cluster}).instance(['cluster'] , ['node'], Layer.K8S) +metricPrefix: k8s_node +metricsRules: + + - name: cpu_cores + exp: (kube_node_status_capacity * 1000).tagEqual('resource' , 'cpu').sum(['cluster' , 'node']) + - name: cpu_usage + exp: (container_cpu_usage_seconds_total * 1000).tagEqual('id' , '/').sum(['cluster' , 'node']).irate() + - name: cpu_cores_allocatable + exp: (kube_node_status_allocatable * 1000).tagEqual('resource' , 'cpu').sum(['cluster' , 'node']) + - name: cpu_cores_requests + exp: (kube_pod_container_resource_requests * 1000).tagEqual('resource' , 'cpu').sum(['cluster' , 'node']) + - name: cpu_cores_limits + exp: (kube_pod_container_resource_limits * 1000).tagEqual('resource' , 'cpu').sum(['cluster' , 'node']) + + - name: memory_total + exp: kube_node_status_capacity.tagEqual('resource' , 'memory').sum(['cluster' , 'node']) + - name: memory_allocatable + exp: kube_node_status_allocatable.tagEqual('resource' , 'memory').sum(['cluster' , 'node']) + - name: memory_requests + exp: kube_pod_container_resource_requests.tagEqual('resource' , 'memory').sum(['cluster' , 'node']) + - name: memory_limits + exp: kube_pod_container_resource_limits.tagEqual('resource' , 'memory').sum(['cluster' , 'node']) + + - name: memory_usage + exp: container_memory_working_set_bytes.tagEqual('id' , 
'/').sum(['cluster' , 'node']) + + + - name: storage_total + exp: kube_node_status_capacity.tagEqual('resource' , 'ephemeral_storage').sum(['cluster' , 'node']) + - name: storage_allocatable + exp: kube_node_status_allocatable.tagEqual('resource' , 'ephemeral_storage').sum(['cluster' , 'node']) + + - name: node_status + exp: kube_node_status_condition.valueEqual(1).tagMatch('status' , 'true|unknown').sum(['cluster' , 'node' ,'condition']) + + - name: pod_total + exp: kube_pod_info.sum(['cluster' , 'node']) + + - name: network_receive + exp: container_network_receive_bytes_total.tagEqual('id' , '/').sum(['cluster' , 'node']).irate() + - name: network_transmit + exp: container_network_transmit_bytes_total.tagEqual('id' , '/').sum(['cluster' , 'node']).irate() diff --git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-service.data.yaml new file mode 100644 index 000000000000..486c8d21be37 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-service.data.yaml @@ -0,0 +1,208 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + kube_pod_info: + - labels: + cluster: test-cluster + service: test-service + value: 1.0 + kube_pod_container_resource_requests: + - labels: + cluster: test-cluster + service: test-service + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + service: test-service + resource: memory + value: 1.0 + kube_pod_container_resource_limits: + - labels: + cluster: test-cluster + service: test-service + resource: cpu + value: 1.0 + - labels: + cluster: test-cluster + service: test-service + resource: memory + value: 1.0 + kube_pod_status_phase: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + phase: test-value + value: 1.0 + kube_pod_container_status_waiting_reason: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + container: test-container + reason: test-value + value: 1.0 + kube_pod_container_status_terminated_reason: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + container: test-container + reason: test-value + value: 1.0 + kube_pod_container_status_restarts_total: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + value: 1.0 + container_cpu_usage_seconds_total: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + container: test-container + value: 1.0 + container_memory_working_set_bytes: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + container: test-container + value: 1.0 +expected: + k8s_service_pod_total: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + value: 1.0 + k8s_service_cpu_cores_requests: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + value: 1000.0 + k8s_service_cpu_cores_limits: + entities: + - scope: SERVICE + service: 
'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + value: 1000.0 + k8s_service_memory_requests: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + value: 1.0 + k8s_service_memory_limits: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + value: 1.0 + k8s_service_pod_status: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + phase: test-value + value: 1.0 + k8s_service_pod_status_waiting: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + container: test-container + reason: test-value + value: 1.0 + k8s_service_pod_status_terminated: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + container: test-container + reason: test-value + value: 1.0 + k8s_service_pod_status_restarts_total: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + value: 1.0 + k8s_service_pod_cpu_usage: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + pod: test-pod + value: 0.0 + k8s_service_pod_memory_usage: + entities: + - scope: SERVICE + service: 'test-cluster::test-service' + layer: K8S_SERVICE + samples: + - labels: + cluster: test-cluster + service: test-service + pod: 
test-pod + value: 1.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-service.yaml new file mode 100644 index 000000000000..0bdcd5e40055 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/k8s/k8s-service.yaml @@ -0,0 +1,59 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name in [ 'kubernetes-cadvisor', 'kube-state-metrics' ] }" # The OpenTelemetry job name +expSuffix: service(['cluster' , 'service'], '::', Layer.K8S_SERVICE) +metricPrefix: k8s_service +metricsRules: + - name: pod_total + exp: kube_pod_info.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').sum(['cluster' , 'service']) + + - name: cpu_cores_requests + exp: (kube_pod_container_resource_requests * 1000).retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').tagEqual('resource' , 'cpu').sum(['cluster' , 'service']) + - name: cpu_cores_limits + exp: (kube_pod_container_resource_limits * 1000).retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').tagEqual('resource' , 'cpu').sum(['cluster' , 'service']) + - name: memory_requests + exp: kube_pod_container_resource_requests.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').tagEqual('resource' , 'memory').sum(['cluster' , 'service']) + - name: memory_limits + exp: kube_pod_container_resource_limits.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').tagEqual('resource' , 'memory').sum(['cluster' , 'service']) + + - name: pod_status + exp: kube_pod_status_phase.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 
'namespace').tagNotEqual('service' , '').valueEqual(1).sum(['cluster' , 'service' , 'pod' , 'phase']) + - name: pod_status_waiting + exp: kube_pod_container_status_waiting_reason.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').valueEqual(1).sum(['cluster' , 'service' , 'pod' , 'container' , 'reason']) + - name: pod_status_terminated + exp: kube_pod_container_status_terminated_reason.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').valueEqual(1).sum(['cluster' , 'service' , 'pod' , 'container' , 'reason']) + - name: pod_status_restarts_total + exp: kube_pod_container_status_restarts_total.retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').sum(['cluster' , 'service' , 'pod']) + + - name: pod_cpu_usage + exp: (container_cpu_usage_seconds_total * 1000).tagNotEqual('container' , '').tagNotEqual('pod' , '').retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').sum(['cluster' , 'service' , 'pod']).irate() + - name: pod_memory_usage + exp: container_memory_working_set_bytes.tagNotEqual('container' , '').tagNotEqual('pod' , '').retagByK8sMeta('service' , K8sRetagType.Pod2Service , 'pod' , 'namespace').tagNotEqual('service' , '').sum(['cluster' , 'service' , 'pod']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-broker.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-broker.data.yaml new file mode 100644 index 000000000000..10c3764702b6 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-broker.data.yaml @@ -0,0 +1,465 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + process_cpu_seconds_total: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + jvm_memory_bytes_used: + - labels: + cluster: test-cluster + broker: test-broker + area: heap + value: 100.0 + jvm_memory_bytes_max: + - labels: + cluster: test-cluster + broker: test-broker + area: heap + value: 100.0 + kafka_server_brokertopicmetrics_messagesin_total: + - labels: + cluster: test-cluster + broker: test-broker + topic: test-topic + value: 100.0 + kafka_server_brokertopicmetrics_bytesin_total: + - labels: + cluster: test-cluster + broker: test-broker + topic: test-topic + value: 100.0 + kafka_server_brokertopicmetrics_bytesout_total: + - labels: + cluster: test-cluster + broker: test-broker + topic: test-topic + value: 100.0 + kafka_server_brokertopicmetrics_replicationbytesin_total: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_brokertopicmetrics_replicationbytesout_total: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicamanager_underreplicatedpartitions: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicamanager_underminisrpartitioncount: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicamanager_partitioncount: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + 
kafka_server_replicamanager_leadercount: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicamanager_isrshrinks_total: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicamanager_isrexpands_total: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicafetchermanager_maxlag: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_delayedoperationpurgatory_purgatorysize: + - labels: + cluster: test-cluster + broker: test-broker + delayedOperation: Produce + value: 100.0 + jvm_gc_collection_seconds_count: + - labels: + cluster: test-cluster + broker: test-broker + gc: G1 Young Generation + value: 100.0 + kafka_network_requestmetrics_requests_total: + - labels: + cluster: test-cluster + broker: test-broker + request: Produce + value: 100.0 + kafka_network_requestmetrics_requestqueuetimems_count: + - labels: + cluster: test-cluster + broker: test-broker + request: Produce + value: 100.0 + kafka_network_requestmetrics_remotetimems_count: + - labels: + cluster: test-cluster + broker: test-broker + request: Produce + value: 100.0 + kafka_network_requestmetrics_responsequeuetimems_count: + - labels: + cluster: test-cluster + broker: test-broker + request: Produce + value: 100.0 + kafka_network_requestmetrics_responsesendtimems_count: + - labels: + cluster: test-cluster + broker: test-broker + request: Produce + value: 100.0 + kafka_network_socketserver_networkprocessoravgidlepercent: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_brokertopicmetrics_totalfetchrequests_total: + - labels: + cluster: test-cluster + broker: test-broker + topic: test-topic + value: 100.0 + kafka_server_brokertopicmetrics_totalproducerequests_total: + - labels: + cluster: test-cluster + broker: test-broker + topic: test-topic + value: 100.0 +expected: + meter_kafka_broker_cpu_time_total: + entities: + 
- scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 2500.0 + meter_kafka_broker_memory_usage_percentage: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + area: heap + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_broker_messages_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_bytes_in_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_bytes_out_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_replication_bytes_in_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_replication_bytes_out_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_under_replicated_partitions: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + 
meter_kafka_broker_under_min_isr_partition_count: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_broker_partition_count: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_broker_leader_count: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_broker_isr_shrinks_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_isr_expands_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_max_lag: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_broker_purgatory_size: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + delayedOperation: Produce + value: 100.0 + meter_kafka_broker_garbage_collector_count: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + gc: G1 Young Generation + cluster: 
'kafka::test-cluster' + value: 25.0 + meter_kafka_broker_requests_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + request: Produce + value: 25.0 + meter_kafka_broker_request_queue_time_ms: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + request: Produce + value: 25.0 + meter_kafka_broker_remote_time_ms: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + request: Produce + value: 25.0 + meter_kafka_broker_response_queue_time_ms: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + request: Produce + value: 25.0 + meter_kafka_broker_response_send_time_ms: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + request: Produce + value: 25.0 + meter_kafka_broker_network_processor_avg_idle_percent: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 2500.0 + meter_kafka_broker_topic_messages_in_total: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + topic: test-topic + value: 100.0 + meter_kafka_broker_topic_bytesout_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 
'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + topic: test-topic + value: 25.0 + meter_kafka_broker_topic_bytesin_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + topic: test-topic + value: 25.0 + meter_kafka_broker_topic_fetch_requests_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + topic: test-topic + value: 25.0 + meter_kafka_broker_topic_produce_requests_per_second: + entities: + - scope: SERVICE_INSTANCE + service: 'kafka::test-cluster' + instance: test-broker + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + topic: test-topic + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-broker.yaml b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-broker.yaml new file mode 100644 index 000000000000..d27436691850 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-broker.yaml @@ -0,0 +1,118 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'kafka-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'kafka::' + tags.cluster}).instance(['cluster'], ['broker'], Layer.KAFKA) +metricPrefix: meter_kafka_broker +metricsRules: + + - name: cpu_time_total + exp: (process_cpu_seconds_total * 100).sum(['cluster', 'broker']).rate('PT1M') + + - name: memory_usage_percentage + exp: (jvm_memory_bytes_used * 100).tagMatch("area", "heap").sum(['cluster', 'broker', 'area']) / (jvm_memory_bytes_max).tagMatch("area", "heap").sum(['cluster', 'broker', 'area']) + + - name: messages_per_second + exp: kafka_server_brokertopicmetrics_messagesin_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: bytes_in_per_second + exp: kafka_server_brokertopicmetrics_bytesin_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: bytes_out_per_second + exp: 
kafka_server_brokertopicmetrics_bytesout_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: replication_bytes_in_per_second + exp: kafka_server_brokertopicmetrics_replicationbytesin_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: replication_bytes_out_per_second + exp: kafka_server_brokertopicmetrics_replicationbytesout_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: under_replicated_partitions + exp: kafka_server_replicamanager_underreplicatedpartitions.sum(['cluster', 'broker']) + + - name: under_min_isr_partition_count + exp: kafka_server_replicamanager_underminisrpartitioncount.sum(['cluster', 'broker']) + + - name: partition_count + exp: kafka_server_replicamanager_partitioncount.sum(['cluster', 'broker']) + + - name: leader_count + exp: kafka_server_replicamanager_leadercount.sum(['cluster', 'broker']) + + - name: isr_shrinks_per_second + exp: kafka_server_replicamanager_isrshrinks_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: isr_expands_per_second + exp: kafka_server_replicamanager_isrexpands_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: max_lag + exp: kafka_server_replicafetchermanager_maxlag.sum(['cluster', 'broker']) + + - name: purgatory_size + exp: kafka_server_delayedoperationpurgatory_purgatorysize.tagMatch("delayedOperation", "Produce|Fetch").sum(['cluster', 'broker','delayedOperation']) + + - name: garbage_collector_count + exp: jvm_gc_collection_seconds_count.tagMatch("gc", "G1 Young Generation|G1 Old Generation").sum(['cluster', 'broker','gc']).rate('PT1M') + + - name: requests_per_second + exp: kafka_network_requestmetrics_requests_total.tagMatch("request", "FetchConsumer|Produce|Fetch").sum(['cluster','broker','request']).rate('PT1M') + + - name: request_queue_time_ms + exp: kafka_network_requestmetrics_requestqueuetimems_count.tagMatch("request", "Produce|FetchConsumer|FetchFollower").sum(['cluster','broker','request']).rate('PT1M') + + - name: remote_time_ms + exp: 
kafka_network_requestmetrics_remotetimems_count.tagMatch("request", "Produce|FetchConsumer|FetchFollower").sum(['cluster','broker','request']).rate('PT1M') + + - name: response_queue_time_ms + exp: kafka_network_requestmetrics_responsequeuetimems_count.tagMatch("request", "Produce|FetchConsumer|FetchFollower").sum(['cluster','broker','request']).rate('PT1M') + + - name: response_send_time_ms + exp: kafka_network_requestmetrics_responsesendtimems_count.tagMatch("request", "Produce|FetchConsumer|FetchFollower").sum(['cluster','broker','request']).rate('PT1M') + + - name: network_processor_avg_idle_percent + exp: (kafka_network_socketserver_networkprocessoravgidlepercent * 100).sum(['cluster','broker']).rate('PT1M') + + - name: topic_messages_in_total + exp: kafka_server_brokertopicmetrics_messagesin_total.sum(['cluster','broker','topic']) + + - name: topic_bytesout_per_second + exp: kafka_server_brokertopicmetrics_bytesout_total.sum(['cluster','broker','topic']).rate('PT1M') + + - name: topic_bytesin_per_second + exp: kafka_server_brokertopicmetrics_bytesin_total.sum(['cluster','broker','topic']).rate('PT1M') + + - name: topic_fetch_requests_per_second + exp: kafka_server_brokertopicmetrics_totalfetchrequests_total.sum(['cluster','broker','topic']).rate('PT1M') + + - name: topic_produce_requests_per_second + exp: kafka_server_brokertopicmetrics_totalproducerequests_total.sum(['cluster','broker','topic']).rate('PT1M') + + + diff --git a/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-cluster.data.yaml new file mode 100644 index 000000000000..58724dedb5ac --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-cluster.data.yaml @@ -0,0 +1,137 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + kafka_server_replicamanager_underreplicatedpartitions: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_controller_kafkacontroller_offlinepartitionscount: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicamanager_partitioncount: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicamanager_leadercount: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_controller_kafkacontroller_activecontrollercount: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_controller_controllerstats_leaderelectionrateandtimems_count: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_controller_controllerstats_uncleanleaderelections_total: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + kafka_server_replicafetchermanager_maxlag: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 +expected: + meter_kafka_under_replicated_partitions: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_offline_partitions_count: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: 
test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_partition_count: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_leader_count: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_active_controller_count: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 + meter_kafka_leader_election_rate: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_unclean_leader_elections_per_second: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 25.0 + meter_kafka_max_lag: + entities: + - scope: SERVICE + service: 'kafka::test-cluster' + layer: KAFKA + samples: + - labels: + broker: test-broker + cluster: 'kafka::test-cluster' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-cluster.yaml new file mode 100644 index 000000000000..7a9bac0b24df --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kafka/kafka-cluster.yaml @@ -0,0 +1,58 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'kafka-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'kafka::' + tags.cluster}).service(['cluster'], Layer.KAFKA) +metricPrefix: meter_kafka +metricsRules: + + - name: under_replicated_partitions + exp: kafka_server_replicamanager_underreplicatedpartitions.sum(['cluster','broker']) + + - name: offline_partitions_count + exp: kafka_controller_kafkacontroller_offlinepartitionscount.sum(['cluster','broker']) + + - name: partition_count + exp: kafka_server_replicamanager_partitioncount.sum(['cluster', 'broker']) + + - name: leader_count + exp: kafka_server_replicamanager_leadercount.sum(['cluster', 'broker']) + + - name: active_controller_count + exp: kafka_controller_kafkacontroller_activecontrollercount.sum(['cluster', 'broker']) + + - name: leader_election_rate + exp: 
kafka_controller_controllerstats_leaderelectionrateandtimems_count.sum(['cluster', 'broker']).rate('PT1M') + + - name: unclean_leader_elections_per_second + exp: kafka_controller_controllerstats_uncleanleaderelections_total.sum(['cluster', 'broker']).rate('PT1M') + + - name: max_lag + exp: kafka_server_replicafetchermanager_maxlag.sum(['cluster', 'broker']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/kong/kong-endpoint.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-endpoint.data.yaml new file mode 100644 index 000000000000..badf1c904c40 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-endpoint.data.yaml @@ -0,0 +1,227 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + kong_bandwidth_bytes: + - labels: + host_name: test-host + direction: in + route: test-route + value: 100.0 + kong_http_requests_total: + - labels: + host_name: test-host + route: test-route + code: 200 + value: 100.0 + kong_kong_latency_ms: + - labels: + host_name: test-host + route: test-route + le: '50' + value: 10.0 + - labels: + host_name: test-host + route: test-route + le: '100' + value: 20.0 + - labels: + host_name: test-host + route: test-route + le: '250' + value: 30.0 + - labels: + host_name: test-host + route: test-route + le: '500' + value: 40.0 + - labels: + host_name: test-host + route: test-route + le: '1000' + value: 50.0 + kong_request_latency_ms: + - labels: + host_name: test-host + route: test-route + le: '50' + value: 10.0 + - labels: + host_name: test-host + route: test-route + le: '100' + value: 20.0 + - labels: + host_name: test-host + route: test-route + le: '250' + value: 30.0 + - labels: + host_name: test-host + route: test-route + le: '500' + value: 40.0 + - labels: + host_name: test-host + route: test-route + le: '1000' + value: 50.0 + kong_upstream_latency_ms: + - labels: + host_name: test-host + route: test-route + le: '50' + value: 10.0 + - labels: + host_name: test-host + route: test-route + le: '100' + value: 20.0 + - labels: + host_name: test-host + route: test-route + le: '250' + value: 30.0 + - labels: + host_name: test-host + route: test-route + le: '500' + value: 40.0 + - labels: + host_name: test-host + route: test-route + le: '1000' + value: 50.0 +expected: + meter_kong_endpoint_http_bandwidth: + entities: + - scope: ENDPOINT + service: 'kong::test-host' + endpoint: test-route + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + direction: in + route: test-route + value: 25.0 + meter_kong_endpoint_http_status: + entities: + - scope: ENDPOINT + service: 'kong::test-host' + endpoint: test-route + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + route: test-route + code: '200' + 
value: 25.0 + meter_kong_endpoint_kong_latency: + entities: + - scope: ENDPOINT + service: 'kong::test-host' + endpoint: test-route + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + route: test-route + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + route: test-route + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + route: test-route + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + route: test-route + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + route: test-route + value: 40.0 + meter_kong_endpoint_request_latency: + entities: + - scope: ENDPOINT + service: 'kong::test-host' + endpoint: test-route + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + route: test-route + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + route: test-route + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + route: test-route + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + route: test-route + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + route: test-route + value: 40.0 + meter_kong_endpoint_upstream_latency: + entities: + - scope: ENDPOINT + service: 'kong::test-host' + endpoint: test-route + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + route: test-route + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + route: test-route + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + route: test-route + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + route: test-route + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + route: test-route + value: 40.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/kong/kong-endpoint.yaml b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-endpoint.yaml new file 
mode 100644 index 000000000000..9abefaa463d6 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-endpoint.yaml @@ -0,0 +1,40 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +filter: "{ tags -> tags.job_name == 'kong-monitoring' }" +expSuffix: tag({tags -> tags.host_name = 'kong::' + tags.host_name}).endpoint(['host_name'], ['route'], Layer.KONG) +metricPrefix: meter_kong_endpoint +metricsRules: + # counter + # Total bandwidth (ingress/egress) throughput in bytes + - name: http_bandwidth + exp: kong_bandwidth_bytes.sum(['host_name','direction','route']).rate('PT1M') + # HTTP status codes per consumer/service/route in Kong + - name: http_status + exp: kong_http_requests_total.sum(['host_name','route','code']).rate('PT1M') + + # histogram + # Latency added by Kong and enabled plugins for each service/route in Kong + - name: kong_latency + exp: kong_kong_latency_ms.tagNotEqual('route','').sum(['host_name','route','le']).histogram().histogram_percentile([50,75,90,95,99]) + # Total latency incurred during requests for each service/route in Kong + - name: request_latency + exp: kong_request_latency_ms.tagNotEqual('route','').sum(['host_name','route','le']).histogram().histogram_percentile([50,75,90,95,99]) 
+ # Latency added by upstream response for each service/route in Kong + - name: upstream_latency + exp: kong_upstream_latency_ms.tagNotEqual('route','').sum(['host_name','route','le']).histogram().histogram_percentile([50,75,90,95,99]) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/kong/kong-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-instance.data.yaml new file mode 100644 index 000000000000..28910e39bb5d --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-instance.data.yaml @@ -0,0 +1,351 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + kong_bandwidth_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + direction: in + route: test-route + value: 100.0 + kong_http_requests_total: + - labels: + host_name: test-host + service_instance_id: test-instance + code: 200 + value: 100.0 + kong_datastore_reachable: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + kong_nginx_requests_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + kong_memory_lua_shared_dict_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + shared_dict: test-dict + value: 100.0 + kong_memory_lua_shared_dict_total_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + shared_dict: test-dict + value: 100.0 + kong_memory_workers_lua_vms_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + pid: 12345 + value: 100.0 + kong_nginx_connections_total: + - labels: + host_name: test-host + service_instance_id: test-instance + state: active + value: 100.0 + kong_nginx_timers: + - labels: + host_name: test-host + service_instance_id: test-instance + state: active + value: 100.0 + kong_kong_latency_ms: + - labels: + host_name: test-host + service_instance_id: test-instance + le: '50' + value: 10.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '100' + value: 20.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '250' + value: 30.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '500' + value: 40.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '1000' + value: 50.0 + kong_request_latency_ms: + - labels: + host_name: test-host + service_instance_id: test-instance + le: '50' + value: 10.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '100' + value: 20.0 + - labels: + host_name: test-host + 
service_instance_id: test-instance + le: '250' + value: 30.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '500' + value: 40.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '1000' + value: 50.0 + kong_upstream_latency_ms: + - labels: + host_name: test-host + service_instance_id: test-instance + le: '50' + value: 10.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '100' + value: 20.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '250' + value: 30.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '500' + value: 40.0 + - labels: + host_name: test-host + service_instance_id: test-instance + le: '1000' + value: 50.0 +expected: + meter_kong_instance_http_bandwidth: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + service_instance_id: test-instance + route: test-route + host_name: 'kong::test-host' + direction: in + value: 25.0 + meter_kong_instance_http_status: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + code: '200' + value: 25.0 + meter_kong_instance_datastore_reachable: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 100.0 + meter_kong_instance_http_requests: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 25.0 + meter_kong_instance_shared_dict_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + 
samples: + - labels: + host_name: 'kong::test-host' + shared_dict: test-dict + service_instance_id: test-instance + value: 100.0 + meter_kong_instance_shared_dict_total_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + shared_dict: test-dict + service_instance_id: test-instance + value: 100.0 + meter_kong_instance_memory_workers_lua_vms_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + pid: '12345' + service_instance_id: test-instance + value: 100.0 + meter_kong_instance_nginx_connections_total: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + state: active + value: 25.0 + meter_kong_instance_nginx_timers: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + state: active + value: 100.0 + meter_kong_instance_kong_latency: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 40.0 + meter_kong_instance_request_latency: + entities: + - 
scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 40.0 + meter_kong_instance_upstream_latency: + entities: + - scope: SERVICE_INSTANCE + service: 'kong::test-host' + instance: test-instance + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 40.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/kong/kong-instance.yaml b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-instance.yaml new file mode 100644 index 000000000000..427d0414aafd --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-instance.yaml @@ -0,0 +1,105 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'kong-monitoring' }" +expSuffix: tag({tags -> tags.host_name = 'kong::' + tags.host_name}).instance(['host_name'], ['service_instance_id'], Layer.KONG) +metricPrefix: meter_kong_instance +metricsRules: + # counter + # Total bandwidth (ingress/egress) throughput in bytes + - name: http_bandwidth + exp: kong_bandwidth_bytes.sum(['host_name','service_instance_id','direction','route']).rate('PT1M') + # HTTP status codes per consumer/service/route in Kong + - name: http_status + exp: kong_http_requests_total.sum(['host_name','service_instance_id','code']).rate('PT1M') + + # gauge + # Datastore reachable from Kong + - name: datastore_reachable + exp: kong_datastore_reachable.sum(['host_name','service_instance_id']) + # Total number of requests + - name: http_requests + exp: kong_nginx_requests_total.sum(['host_name','service_instance_id']).rate('PT1M') + # Allocated slabs in bytes in a shared_dict + - name: shared_dict_bytes + exp: kong_memory_lua_shared_dict_bytes.sum(['host_name','service_instance_id','shared_dict']) + # Total capacity in bytes of a shared_dict + - name: shared_dict_total_bytes + exp:
kong_memory_lua_shared_dict_total_bytes.sum(['host_name','service_instance_id','shared_dict']) + # Allocated bytes in worker Lua VM + - name: memory_workers_lua_vms_bytes + exp: kong_memory_workers_lua_vms_bytes.tagNotEqual('pid','').sum(['host_name','service_instance_id','pid']) + # Number of connections by subsystem + - name: nginx_connections_total + exp: kong_nginx_connections_total.tagNotEqual('state','').sum(['host_name','service_instance_id','state']).rate('PT1M') + # Number of Nginx timers + - name: nginx_timers + exp: kong_nginx_timers.sum(['host_name','service_instance_id','state']) + + # histogram + # Latency added by Kong and enabled plugins for each service/route in Kong + - name: kong_latency + exp: kong_kong_latency_ms.sum(['host_name','service_instance_id','le']).histogram().histogram_percentile([50,75,90,95,99]) + # Total latency incurred during requests for each service/route in Kong + - name: request_latency + exp: kong_request_latency_ms.sum(['host_name','service_instance_id','le']).histogram().histogram_percentile([50,75,90,95,99]) + # Latency added by upstream response for each service/route in Kong + - name: upstream_latency + exp: kong_upstream_latency_ms.sum(['host_name','service_instance_id','le']).histogram().histogram_percentile([50,75,90,95,99]) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/kong/kong-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-service.data.yaml new file mode 100644 index 000000000000..c9006a86d4b9 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-service.data.yaml @@ -0,0 +1,303 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + kong_bandwidth_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + direction: in + route: test-route + value: 100.0 + kong_http_requests_total: + - labels: + host_name: test-host + service_instance_id: test-instance + code: 200 + value: 100.0 + kong_nginx_metric_errors_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + kong_datastore_reachable: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + kong_nginx_requests_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + kong_nginx_connections_total: + - labels: + host_name: test-host + service_instance_id: test-instance + state: active + value: 100.0 + kong_nginx_timers: + - labels: + host_name: test-host + service_instance_id: test-instance + state: active + value: 100.0 + kong_kong_latency_ms: + - labels: + le: '50' + host_name: test-host + service_instance_id: test-instance + value: 10.0 + - labels: + le: '100' + host_name: test-host + service_instance_id: test-instance + value: 20.0 + - labels: + le: '250' + host_name: test-host + service_instance_id: test-instance + value: 30.0 + - labels: + le: '500' + host_name: test-host + service_instance_id: test-instance + value: 40.0 + - labels: + le: '1000' + host_name: test-host + service_instance_id: test-instance + value: 50.0 + 
kong_request_latency_ms: + - labels: + le: '50' + host_name: test-host + service_instance_id: test-instance + value: 10.0 + - labels: + le: '100' + host_name: test-host + service_instance_id: test-instance + value: 20.0 + - labels: + le: '250' + host_name: test-host + service_instance_id: test-instance + value: 30.0 + - labels: + le: '500' + host_name: test-host + service_instance_id: test-instance + value: 40.0 + - labels: + le: '1000' + host_name: test-host + service_instance_id: test-instance + value: 50.0 + kong_upstream_latency_ms: + - labels: + le: '50' + host_name: test-host + service_instance_id: test-instance + value: 10.0 + - labels: + le: '100' + host_name: test-host + service_instance_id: test-instance + value: 20.0 + - labels: + le: '250' + host_name: test-host + service_instance_id: test-instance + value: 30.0 + - labels: + le: '500' + host_name: test-host + service_instance_id: test-instance + value: 40.0 + - labels: + le: '1000' + host_name: test-host + service_instance_id: test-instance + value: 50.0 +expected: + meter_kong_service_http_bandwidth: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + service_instance_id: test-instance + route: test-route + host_name: 'kong::test-host' + direction: in + value: 25.0 + meter_kong_service_http_status: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + code: '200' + value: 25.0 + meter_kong_service_nginx_metric_errors_total: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 100.0 + meter_kong_service_datastore_reachable: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 100.0 + 
meter_kong_service_http_requests: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 25.0 + meter_kong_service_nginx_connections_total: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + state: active + value: 25.0 + meter_kong_service_nginx_timers: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + host_name: 'kong::test-host' + service_instance_id: test-instance + state: active + value: 100.0 + meter_kong_service_kong_latency: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 40.0 + meter_kong_service_request_latency: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + service_instance_id: 
test-instance + value: 40.0 + meter_kong_service_upstream_latency: + entities: + - scope: SERVICE + service: 'kong::test-host' + layer: KONG + samples: + - labels: + le: '100000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 20.0 + - labels: + le: '1000000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 50.0 + - labels: + le: '250000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 30.0 + - labels: + le: '50000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 10.0 + - labels: + le: '500000' + host_name: 'kong::test-host' + service_instance_id: test-instance + value: 40.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/kong/kong-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-service.yaml new file mode 100644 index 000000000000..de623dafa314 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/kong/kong-service.yaml @@ -0,0 +1,69 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'kong-monitoring' }" +expSuffix: tag({tags -> tags.host_name = 'kong::' + tags.host_name}).service(['host_name'], Layer.KONG) +metricPrefix: meter_kong_service +metricsRules: + # counter + # Total bandwidth (ingress/egress) throughput in bytes + - name: http_bandwidth + exp: kong_bandwidth_bytes.sum(['host_name','service_instance_id','direction','route']).rate('PT1M') + # HTTP status codes per consumer/service/route in Kong + - name: http_status + exp: kong_http_requests_total.sum(['host_name','service_instance_id','code']).rate('PT1M') + # Number of nginx-lua-prometheus errors + - name: nginx_metric_errors_total + exp: kong_nginx_metric_errors_total.sum(['host_name','service_instance_id']) + + # gauge + # Datastore reachable from Kong + - name: datastore_reachable + exp: kong_datastore_reachable.sum(['host_name','service_instance_id']) + # Total number of requests + - name: http_requests + exp: kong_nginx_requests_total.sum(['host_name','service_instance_id']).rate('PT1M') + # Number of connections by subsystem + - name: nginx_connections_total + exp: kong_nginx_connections_total.sum(['host_name','service_instance_id','state']).rate('PT1M') + # Number of Nginx timers + - name: nginx_timers + exp: kong_nginx_timers.sum(['host_name','service_instance_id','state']) + + # histogram + # Latency added by Kong and enabled plugins for each service/route in Kong + - name: kong_latency + exp: 
kong_kong_latency_ms.sum(['le','host_name','service_instance_id']).histogram().histogram_percentile([50,75,90,95,99]) + # Total latency incurred during requests for each service/route in Kong + - name: request_latency + exp: kong_request_latency_ms.sum(['le','host_name','service_instance_id']).histogram().histogram_percentile([50,75,90,95,99]) + # Latency added by upstream response for each service/route in Kong + - name: upstream_latency + exp: kong_upstream_latency_ms.sum(['le','host_name','service_instance_id']).histogram().histogram_percentile([50,75,90,95,99]) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-cluster.data.yaml new file mode 100644 index 000000000000..8754ef78e001 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-cluster.data.yaml @@ -0,0 +1,248 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + mongodb_ss_uptime: + - labels: + cluster: test-cluster + service_instance_id: test-instance + rs_nm: test-value + set: test-value + value: 100.0 + mongodb_dbstats_dataSize: + - labels: + cluster: test-cluster + rs_nm: test-value + cl_role: test-value + database: test-value + service_instance_id: test-instance + set: test-value + rs_state: 1 + value: 100.0 + mongodb_dbstats_collections: + - labels: + cluster: test-cluster + rs_nm: test-value + cl_role: test-value + database: test-value + service_instance_id: test-instance + set: test-value + rs_state: 1 + value: 100.0 + mongodb_dbstats_objects: + - labels: + cluster: test-cluster + rs_nm: test-value + cl_role: test-value + database: test-value + set: test-value + value: 100.0 + mongodb_ss_metrics_document: + - labels: + cluster: test-cluster + doc_op_type: test-value + service_instance_id: test-instance + rs_nm: test-value + set: test-value + value: 100.0 + mongodb_ss_opcounters: + - labels: + cluster: test-cluster + legacy_op_type: test-value + service_instance_id: test-instance + rs_nm: test-value + set: test-value + value: 100.0 + mongodb_ss_connections: + - labels: + cluster: test-cluster + service_instance_id: test-instance + rs_nm: test-value + set: test-value + conn_type: current + value: 100.0 + mongodb_ss_metrics_cursor_open: + - labels: + cluster: test-cluster + csr_type: test-value + service_instance_id: test-instance + rs_nm: test-value + set: test-value + value: 100.0 + mongodb_mongod_replset_member_replication_lag: + - labels: + cluster: test-cluster + rs_nm: test-value + state: active + set: test-value + value: 100.0 + mongodb_dbstats_indexSize: + - labels: + cluster: test-cluster + database: test-value + service_instance_id: test-instance + cl_role: test-value + rs_nm: test-value + set: test-value + rs_state: 1 + value: 100.0 + mongodb_dbstats_indexes: + - labels: + cluster: test-cluster + database: test-value + service_instance_id: test-instance + cl_role: test-value + rs_nm: test-value 
+ set: test-value + rs_state: 1 + value: 100.0 +expected: + meter_mongodb_cluster_uptime: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_mongodb_cluster_data_size: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + rs_nm: test-value + value: 100.0 + meter_mongodb_cluster_collection_count: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + rs_nm: test-value + value: 100.0 + meter_mongodb_cluster_object_count: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + rs_nm: test-value + value: 100.0 + meter_mongodb_cluster_document_avg_qps: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + doc_op_type: test-value + service_instance_id: test-instance + value: 25.0 + meter_mongodb_cluster_operation_avg_qps: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + legacy_op_type: test-value + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 25.0 + meter_mongodb_cluster_connections: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_mongodb_cluster_cursor_avg: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + csr_type: test-value + service_instance_id: test-instance + value: 100.0 + meter_mongodb_cluster_repl_lag: + entities: + - scope: SERVICE + 
service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + rs_nm: test-value + value: 100.0 + meter_mongodb_cluster_db_data_size: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + database: test-value + service_instance_id: test-instance + value: 100.0 + meter_mongodb_cluster_db_index_size: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + database: test-value + service_instance_id: test-instance + value: 100.0 + meter_mongodb_cluster_db_collection_count: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + database: test-value + service_instance_id: test-instance + value: 100.0 + meter_mongodb_cluster_db_index_count: + entities: + - scope: SERVICE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + database: test-value + service_instance_id: test-instance + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-cluster.yaml new file mode 100644 index 000000000000..711f46469bdf --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-cluster.yaml @@ -0,0 +1,63 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'mongodb-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'mongodb::' + tags.cluster}).service(['cluster'], Layer.MONGODB) +metricPrefix: meter_mongodb_cluster +metricsRules: + - name: uptime + exp: mongodb_ss_uptime.max(['cluster','service_instance_id']) + - name: data_size + exp: mongodb_dbstats_dataSize.tagNotEqual('cl_role','mongos').tagNotEqual('database','local').sum(['cluster', 'rs_nm']) + - name: collection_count + exp: mongodb_dbstats_collections.tagNotEqual('cl_role','mongos').tagNotEqual('database','local').sum(['cluster', 'rs_nm']) + - name: object_count + exp: mongodb_dbstats_objects.tagNotEqual('cl_role','mongos').tagNotEqual('database','local').sum(['cluster', 'rs_nm']) + + - name: document_avg_qps + exp: 
mongodb_ss_metrics_document.max(['cluster','doc_op_type','service_instance_id']).rate('PT1M') + - name: operation_avg_qps + exp: mongodb_ss_opcounters.max(['cluster','legacy_op_type','service_instance_id']).rate('PT1M') + + - name: connections + exp: mongodb_ss_connections.tagEqual('conn_type','current').max(['cluster','service_instance_id']) + - name: cursor_avg + exp: mongodb_ss_metrics_cursor_open.max(['cluster','csr_type','service_instance_id']) + - name: repl_lag + exp: mongodb_mongod_replset_member_replication_lag.tag({tags -> tags.rs_nm = tags.set}).tagNotEqual('state','ARBITER').avg(['cluster','rs_nm']) + + - name: db_data_size + exp: mongodb_dbstats_dataSize.tagEqual('rs_state', '1').tagNotEqual('cl_role','mongos').tagNotEqual('database','local').sum(['cluster', 'database','service_instance_id']) + - name: db_index_size + exp: mongodb_dbstats_indexSize.tagEqual('rs_state', '1').tagNotEqual('cl_role','mongos').tagNotEqual('database','local').sum(['cluster', 'database','service_instance_id']) + - name: db_collection_count + exp: mongodb_dbstats_collections.tagEqual('rs_state', '1').tagNotEqual('cl_role','mongos').tagNotEqual('database','local').sum(['cluster', 'database','service_instance_id']) + - name: db_index_count + exp: mongodb_dbstats_indexes.tagEqual('rs_state', '1').tagNotEqual('cl_role','mongos').tagNotEqual('database','local').sum(['cluster', 'database','service_instance_id']) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-node.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-node.data.yaml new file mode 100644 index 000000000000..ad60e6efe3d2 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-node.data.yaml @@ -0,0 +1,547 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + mongodb_ss_uptime: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_opcounters: + - labels: + cluster: test-cluster + service_instance_id: test-instance + legacy_op_type: test-value + value: 100.0 + mongodb_ss_opLatencies_ops: + - labels: + cluster: test-cluster + service_instance_id: test-instance + op_type: test-value + value: 100.0 + mongodb_ss_opLatencies_latency: + - labels: + cluster: test-cluster + service_instance_id: test-instance + op_type: test-value + value: 100.0 + mongodb_sys_memory_MemTotal_kb: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_memory_MemAvailable_kb: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_version_info: + - labels: + cluster: test-cluster + service_instance_id: test-instance + edition: test-value + mongodb: test-value + value: 100.0 + mongodb_members_self: + - labels: + cluster: test-cluster + service_instance_id: test-instance + member_state: test-value + value: 100.0 + mongodb_sys_cpu_user_ms: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_cpu_iowait_ms: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_cpu_system_ms: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_cpu_irq_ms: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_cpu_softirq_ms: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_cpu_nice_ms: + - labels: + cluster: 
test-cluster + value: 100.0 + mongodb_sys_cpu_steal_ms: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_network_bytesIn: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_network_bytesOut: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_memory_MemFree_kb: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_sys_memory_SwapFree_kb: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_dbstats_fsUsedSize: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + mongodb_dbstats_fsTotalSize: + - labels: + cluster: test-cluster + service_instance_id: test-instance + value: 100.0 + mongodb_ss_connections: + - labels: + cluster: test-cluster + service_instance_id: test-instance + conn_type: current + value: 100.0 + mongodb_ss_globalLock_activeClients_total: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_globalLock_activeClients_readers: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_globalLock_activeClients_writers: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_transactions_currentActive: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_transactions_currentInactive: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_metrics_document: + - labels: + cluster: test-cluster + service_instance_id: test-instance + doc_op_type: test-value + value: 100.0 + mongodb_ss_opcountersRepl: + - labels: + cluster: test-cluster + service_instance_id: test-instance + legacy_op_type: test-value + value: 100.0 + mongodb_ss_metrics_cursor_open: + - labels: + cluster: test-cluster + service_instance_id: test-instance + csr_type: test-value + value: 100.0 + mongodb_ss_mem_virtual: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_mem_resident: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_asserts: + - labels: + cluster: test-cluster + service_instance_id: test-instance + assert_type: 
test-value + value: 100.0 + mongodb_ss_metrics_repl_buffer_count: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_metrics_repl_buffer_sizeBytes: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_metrics_repl_buffer_maxSizeBytes: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_globalLock_currentQueue: + - labels: + cluster: test-cluster + service_instance_id: test-instance + count_type: test-value + value: 100.0 + mongodb_ss_metrics_getLastError_wtime_num: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_metrics_getLastError_wtimeouts: + - labels: + cluster: test-cluster + value: 100.0 + mongodb_ss_metrics_getLastError_wtime_totalMillis: + - labels: + cluster: test-cluster + value: 100.0 +expected: + meter_mongodb_node_uptime: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_qps: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 25.0 + meter_mongodb_node_op_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + op_type: test-value + value: 25.0 + meter_mongodb_node_latency_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + op_type: test-value + value: 25.0 + meter_mongodb_node_memory_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 0.0 + 
meter_mongodb_node_version: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + edition: test-value + service_instance_id: test-instance + mongodb: test-value + value: 100.0 + meter_mongodb_node_rs_state: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + member_state: test-value + service_instance_id: test-instance + value: 100.0 + meter_mongodb_node_cpu_total_percentage: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 17.5 + meter_mongodb_node_network_bytes_in: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 0.0 + meter_mongodb_node_network_bytes_out: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 0.0 + meter_mongodb_node_memory_free_kb: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_swap_memory_free_kb: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_fs_used_size: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_mongodb_node_fs_total_size: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: 
test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_mongodb_node_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 100.0 + meter_mongodb_node_active_total_num: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_active_reader_num: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_active_writer_num: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_transactions_active: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_transactions_inactive: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_document_qps: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + doc_op_type: test-value + service_instance_id: test-instance + value: 25.0 + meter_mongodb_node_operation_qps: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + legacy_op_type: test-value + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance 
+ value: 25.0 + meter_mongodb_node_repl_operation_qps: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + legacy_op_type: test-value + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 25.0 + meter_mongodb_node_cursor: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + csr_type: test-value + service_instance_id: test-instance + value: 100.0 + meter_mongodb_node_mem_virtual: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_mem_resident: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_asserts: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + assert_type: test-value + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + value: 50.0 + meter_mongodb_node_repl_buffer_count: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_repl_buffer_size: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_repl_buffer_size_max: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 100.0 + meter_mongodb_node_queued_operation: + entities: + - scope: SERVICE_INSTANCE + service: 
'mongodb::test-cluster' + instance: test-instance + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + service_instance_id: test-instance + count_type: test-value + value: 100.0 + meter_mongodb_node_write_wait_num: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 25.0 + meter_mongodb_node_write_wait_timeout_num: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 25.0 + meter_mongodb_node_write_wait_time: + entities: + - scope: SERVICE_INSTANCE + service: 'mongodb::test-cluster' + layer: MONGODB + samples: + - labels: + cluster: 'mongodb::test-cluster' + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-node.yaml b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-node.yaml new file mode 100644 index 000000000000..b670d066ab1d --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mongodb/mongodb-node.yaml @@ -0,0 +1,108 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. 
The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'mongodb-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'mongodb::' + tags.cluster}).service(['cluster'] , Layer.MONGODB).instance(['cluster'], ['service_instance_id'], Layer.MONGODB) +metricPrefix: meter_mongodb_node +metricsRules: + - name: uptime + exp: mongodb_ss_uptime + - name: qps + exp: mongodb_ss_opcounters.sum(['cluster','service_instance_id']).rate('PT1M') + - name: op_rate + exp: mongodb_ss_opLatencies_ops.sum(['cluster','service_instance_id', 'op_type']).rate('PT1M') + - name: latency_rate + exp: mongodb_ss_opLatencies_latency.sum(['cluster','service_instance_id', 'op_type']).rate('PT1M') + - name: memory_usage + exp: (mongodb_sys_memory_MemTotal_kb - mongodb_sys_memory_MemAvailable_kb) / mongodb_sys_memory_MemTotal_kb * 100 + - name: version + exp: mongodb_version_info.max(['cluster','service_instance_id','edition',"mongodb"]) + - name: rs_state + exp: mongodb_members_self.max(['cluster','service_instance_id','member_state']) + + - name: cpu_total_percentage + exp: ((mongodb_sys_cpu_user_ms + mongodb_sys_cpu_iowait_ms + mongodb_sys_cpu_system_ms + mongodb_sys_cpu_irq_ms + mongodb_sys_cpu_softirq_ms + mongodb_sys_cpu_nice_ms + mongodb_sys_cpu_steal_ms) / 10).rate('PT1M') + - name: network_bytes_in + exp: mongodb_ss_network_bytesIn.irate() + - name: 
network_bytes_out + exp: mongodb_ss_network_bytesOut.irate() + - name: memory_free_kb + exp: mongodb_sys_memory_MemFree_kb + - name: swap_memory_free_kb + exp: mongodb_sys_memory_SwapFree_kb + - name: fs_used_size + exp: mongodb_dbstats_fsUsedSize.max(['cluster','service_instance_id']) + - name: fs_total_size + exp: mongodb_dbstats_fsTotalSize.max(['cluster','service_instance_id']) + + - name: connections + exp: mongodb_ss_connections.tagEqual('conn_type','current').max(['cluster','service_instance_id']) + - name: active_total_num + exp: mongodb_ss_globalLock_activeClients_total + - name: active_reader_num + exp: mongodb_ss_globalLock_activeClients_readers + - name: active_writer_num + exp: mongodb_ss_globalLock_activeClients_writers + - name: transactions_active + exp: mongodb_ss_transactions_currentActive + - name: transactions_inactive + exp: mongodb_ss_transactions_currentInactive + + - name: document_qps + exp: mongodb_ss_metrics_document.sum(['cluster','service_instance_id','doc_op_type']).rate('PT1M') + - name: operation_qps + exp: mongodb_ss_opcounters.sum(['cluster','service_instance_id','legacy_op_type']).rate('PT1M') + - name: repl_operation_qps + exp: mongodb_ss_opcountersRepl.sum(['cluster','service_instance_id','legacy_op_type']).rate('PT1M') + + - name: cursor + exp: mongodb_ss_metrics_cursor_open.max(['cluster','service_instance_id',"csr_type"]) + - name: mem_virtual + exp: mongodb_ss_mem_virtual + - name: mem_resident + exp: mongodb_ss_mem_resident + - name: asserts + exp: mongodb_ss_asserts.max(['cluster','service_instance_id',"assert_type"]).increase('PT1M') + + - name: repl_buffer_count + exp: mongodb_ss_metrics_repl_buffer_count + - name: repl_buffer_size + exp: mongodb_ss_metrics_repl_buffer_sizeBytes + - name: repl_buffer_size_max + exp: mongodb_ss_metrics_repl_buffer_maxSizeBytes + - name: queued_operation + exp: mongodb_ss_globalLock_currentQueue.max(['cluster','service_instance_id',"count_type"]) + + - name: write_wait_num + exp: 
mongodb_ss_metrics_getLastError_wtime_num.rate('PT1M') + - name: write_wait_timeout_num + exp: mongodb_ss_metrics_getLastError_wtimeouts.rate('PT1M') + - name: write_wait_time + exp: mongodb_ss_metrics_getLastError_wtime_totalMillis.rate('PT1M') \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-instance.data.yaml new file mode 100644 index 000000000000..59d30dcedfdb --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-instance.data.yaml @@ -0,0 +1,325 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + mysql_global_status_uptime: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_variables_innodb_buffer_pool_size: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_variables_max_connections: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_variables_thread_cache_size: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_commands_total: + - labels: + host_name: test-host + service_instance_id: test-instance + command: insert + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + command: select + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + command: delete + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + command: update + value: 100.0 + - labels: + host_name: test-host + service_instance_id: test-instance + command: rollback + value: 100.0 + mysql_global_status_queries: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_threads_connected: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_threads_created: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_threads_running: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_threads_cached: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_aborted_connects: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_connection_errors_total: + - labels: + host_name: test-host + service_instance_id: test-instance + error: max_connection + value: 100.0 + 
- labels: + host_name: test-host + service_instance_id: test-instance + error: internal + value: 100.0 + mysql_global_status_slow_queries: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 +expected: + meter_mysql_instance_uptime: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_innodb_buffer_pool_size: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_max_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_thread_cache_size: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_commands_insert_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + command: insert + service_instance_id: test-instance + value: 25.0 + meter_mysql_instance_commands_select_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + command: select + service_instance_id: test-instance + value: 25.0 + meter_mysql_instance_commands_delete_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + 
- labels: + host_name: 'mysql::test-host' + command: delete + service_instance_id: test-instance + value: 25.0 + meter_mysql_instance_commands_update_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + command: update + service_instance_id: test-instance + value: 25.0 + meter_mysql_instance_qps: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_mysql_instance_tps: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + command: rollback + service_instance_id: test-instance + value: 25.0 + meter_mysql_instance_threads_connected: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_threads_created: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_threads_running: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_threads_cached: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_connects_aborted: + entities: + - scope: 
SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_connects_available: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 0.0 + meter_mysql_instance_connection_errors_max_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + error: max_connection + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_connection_errors_internal: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + error: internal + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_instance_slow_queries_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'mysql::test-host' + instance: test-instance + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-instance.yaml b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-instance.yaml new file mode 100644 index 000000000000..bbbbde476748 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-instance.yaml @@ -0,0 +1,82 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'mysql-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.host_name = 'mysql::' + tags.host_name}).service(['host_name'] , Layer.MYSQL).instance(['host_name'], ['service_instance_id'], Layer.MYSQL) +metricPrefix: meter_mysql +metricsRules: + # mysql configurations + - name: instance_uptime + exp: mysql_global_status_uptime + - name: instance_innodb_buffer_pool_size + exp: mysql_global_variables_innodb_buffer_pool_size + - name: instance_max_connections + exp: mysql_global_variables_max_connections + - name: instance_thread_cache_size + exp: mysql_global_variables_thread_cache_size + + # database throughput + - name: instance_commands_insert_rate + exp: mysql_global_status_commands_total.tagEqual('command','insert').rate('PT1M') + - name: instance_commands_select_rate + exp: 
mysql_global_status_commands_total.tagEqual('command','select').rate('PT1M') + - name: instance_commands_delete_rate + exp: mysql_global_status_commands_total.tagEqual('command','delete').rate('PT1M') + - name: instance_commands_update_rate + exp: mysql_global_status_commands_total.tagEqual('command','update').rate('PT1M') + - name: instance_qps + exp: mysql_global_status_queries.rate('PT1M') + - name: instance_tps + exp: mysql_global_status_commands_total.tagMatch('command','rollback|commit').rate('PT1M') + + # connections + ## threads + - name: instance_threads_connected + exp: mysql_global_status_threads_connected + - name: instance_threads_created + exp: mysql_global_status_threads_created + - name: instance_threads_running + exp: mysql_global_status_threads_running + - name: instance_threads_cached + exp: mysql_global_status_threads_cached + ## connect + - name: instance_connects_aborted + exp: mysql_global_status_aborted_connects + - name: instance_connects_available + exp: mysql_global_variables_max_connections.sum(['host_name','service_instance_id']) - mysql_global_status_threads_connected.sum(['host_name','service_instance_id']) + - name: instance_connection_errors_max_connections + exp: mysql_global_status_connection_errors_total.tagEqual('error','max_connection') + - name: instance_connection_errors_internal + exp: mysql_global_status_connection_errors_total.tagEqual('error','internal') + + # slow queries + - name: instance_slow_queries_rate + exp: mysql_global_status_slow_queries.rate('PT1M') + diff --git a/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-service.data.yaml new file mode 100644 index 000000000000..30115c164d70 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-service.data.yaml @@ -0,0 +1,254 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + mysql_global_status_commands_total: + - labels: + service_instance_id: test-instance + host_name: test-host + command: insert + value: 100.0 + - labels: + service_instance_id: test-instance + host_name: test-host + command: select + value: 100.0 + - labels: + service_instance_id: test-instance + host_name: test-host + command: delete + value: 100.0 + - labels: + service_instance_id: test-instance + host_name: test-host + command: update + value: 100.0 + - labels: + service_instance_id: test-instance + host_name: test-host + command: rollback + value: 100.0 + mysql_global_status_queries: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + mysql_global_status_threads_connected: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + mysql_global_status_threads_created: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + mysql_global_status_threads_running: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + mysql_global_status_threads_cached: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + mysql_global_status_aborted_connects: + - labels: + service_instance_id: test-instance + host_name: test-host + 
value: 100.0 + mysql_global_variables_max_connections: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + mysql_global_status_connection_errors_total: + - labels: + service_instance_id: test-instance + host_name: test-host + error: max_connection + value: 100.0 + - labels: + service_instance_id: test-instance + host_name: test-host + error: internal + value: 100.0 + mysql_global_status_slow_queries: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 +expected: + meter_mysql_commands_insert_rate: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_mysql_commands_select_rate: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_mysql_commands_delete_rate: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_mysql_commands_update_rate: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_mysql_qps: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_mysql_tps: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_mysql_threads_connected: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: 
test-instance + value: 100.0 + meter_mysql_threads_created: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_threads_running: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_threads_cached: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_connects_aborted: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_max_connections: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_status_thread_connected: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_connection_errors_max_connections: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_connection_errors_internal: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_mysql_slow_queries_rate: + entities: + - scope: SERVICE + service: 'mysql::test-host' + layer: MYSQL + samples: + - labels: + host_name: 'mysql::test-host' + service_instance_id: test-instance + value: 25.0 diff 
--git a/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-service.yaml new file mode 100644 index 000000000000..642a08591871 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/mysql/mysql-service.yaml @@ -0,0 +1,74 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'mysql-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.host_name = 'mysql::' + tags.host_name}).service(['host_name'] , Layer.MYSQL) +metricPrefix: meter_mysql +metricsRules: + # database throughput + - name: commands_insert_rate + exp: mysql_global_status_commands_total.tagEqual('command','insert').sum(['service_instance_id','host_name']).rate('PT1M') + - name: commands_select_rate + exp: mysql_global_status_commands_total.tagEqual('command','select').sum(['service_instance_id','host_name']).rate('PT1M') + - name: commands_delete_rate + exp: mysql_global_status_commands_total.tagEqual('command','delete').sum(['service_instance_id','host_name']).rate('PT1M') + - name: commands_update_rate + exp: mysql_global_status_commands_total.tagEqual('command','update').sum(['service_instance_id','host_name']).rate('PT1M') + - name: qps + exp: mysql_global_status_queries.rate('PT1M').sum(['service_instance_id','host_name']) + - name: tps + exp: mysql_global_status_commands_total.tagMatch('command','rollback|commit').sum(['host_name', 'service_instance_id']).rate('PT1M') + + # connections + ## threads + - name: threads_connected + exp: mysql_global_status_threads_connected.sum(['service_instance_id','host_name']) + - name: threads_created + exp: mysql_global_status_threads_created.sum(['service_instance_id','host_name']) + - name: threads_running + exp: 
mysql_global_status_threads_running.sum(['service_instance_id','host_name']) + - name: threads_cached + exp: mysql_global_status_threads_cached.sum(['service_instance_id','host_name']) + ## connect + - name: connects_aborted + exp: mysql_global_status_aborted_connects.sum(['service_instance_id','host_name']) + - name: max_connections + exp: mysql_global_variables_max_connections.sum(['host_name','service_instance_id']) + - name: status_thread_connected + exp: mysql_global_status_threads_connected.sum(['host_name','service_instance_id']) + - name: connection_errors_max_connections + exp: mysql_global_status_connection_errors_total.tagEqual('error','max_connection').sum(['service_instance_id','host_name']) + - name: connection_errors_internal + exp: mysql_global_status_connection_errors_total.tagEqual('error','internal').sum(['service_instance_id','host_name']) + + # slow queries + - name: slow_queries_rate + exp: mysql_global_status_slow_queries.sum(['service_instance_id','host_name']).rate('PT1M') + diff --git a/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-endpoint.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-endpoint.data.yaml new file mode 100644 index 000000000000..2955a6f73671 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-endpoint.data.yaml @@ -0,0 +1,158 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + nginx_http_requests_total: + - labels: + service: test-service + route: /test-route + value: 100.0 + - labels: + service: test-service + route: /test-route + status: '404' + value: 10.0 + - labels: + service: test-service + route: /test-route + status: '500' + value: 5.0 + nginx_http_latency: + - labels: {service: test-service, route: /test-route, le: '50'} + value: 10.0 + - labels: {service: test-service, route: /test-route, le: '100'} + value: 20.0 + - labels: {service: test-service, route: /test-route, le: '250'} + value: 30.0 + - labels: {service: test-service, route: /test-route, le: '500'} + value: 40.0 + - labels: {service: test-service, route: /test-route, le: '1000'} + value: 50.0 + nginx_http_size_bytes: + - labels: + service: test-service + route: /test-route + value: 100.0 +expected: + meter_nginx_endpoint_http_requests: + entities: + - scope: ENDPOINT + service: 'nginx::test-service' + endpoint: /test-route + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + route: /test-route + value: 28.75 + meter_nginx_endpoint_http_latency: + entities: + - scope: ENDPOINT + service: 'nginx::test-service' + endpoint: /test-route + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + route: /test-route + le: '1000000' + value: 50.0 + - labels: + service: 'nginx::test-service' + route: /test-route + le: '100000' + value: 20.0 + - labels: + service: 'nginx::test-service' + route: /test-route + le: '250000' + value: 30.0 + - labels: + service: 'nginx::test-service' + route: /test-route + le: '500000' + value: 40.0 
+ - labels: + service: 'nginx::test-service' + route: /test-route + le: '50000' + value: 10.0 + meter_nginx_endpoint_http_bandwidth: + entities: + - scope: ENDPOINT + service: 'nginx::test-service' + endpoint: /test-route + layer: NGINX + samples: + - labels: + type: + service: 'nginx::test-service' + route: /test-route + value: 25.0 + meter_nginx_endpoint_http_status: + entities: + - scope: ENDPOINT + service: 'nginx::test-service' + endpoint: /test-route + layer: NGINX + samples: + - labels: + status: + service: 'nginx::test-service' + route: /test-route + value: 25.0 + - labels: + status: '404' + service: 'nginx::test-service' + route: /test-route + value: 2.5 + - labels: + status: '500' + service: 'nginx::test-service' + route: /test-route + value: 1.25 + meter_nginx_endpoint_http_requests_increment: + entities: + - scope: ENDPOINT + service: 'nginx::test-service' + endpoint: /test-route + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + route: /test-route + value: 57.5 + meter_nginx_endpoint_http_4xx_requests_increment: + entities: + - scope: ENDPOINT + service: 'nginx::test-service' + endpoint: /test-route + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + route: /test-route + value: 5.0 + meter_nginx_endpoint_http_5xx_requests_increment: + entities: + - scope: ENDPOINT + service: 'nginx::test-service' + endpoint: /test-route + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + route: /test-route + value: 2.5 diff --git a/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-endpoint.yaml b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-endpoint.yaml new file mode 100644 index 000000000000..f82e3dc14cc3 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-endpoint.yaml @@ -0,0 +1,49 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'nginx-monitoring' }" # The OpenTelemetry job name +expPrefix: tag({tags -> tags.service = 'nginx::' + tags.service}) +expSuffix: endpoint(['service'],['route'], Layer.NGINX) +metricPrefix: meter_nginx_endpoint +metricsRules: + - name: http_requests + exp: nginx_http_requests_total.sum(['service','route']).rate('PT1M') + - name: http_latency + exp: nginx_http_latency.sum(['le','service','route']).histogram().histogram_percentile([50,75,90,95,99]) + - name: http_bandwidth + exp: nginx_http_size_bytes.sum(['type','service','route']).rate('PT1M') + - name: http_status + 
exp: nginx_http_requests_total.sum(['status','service','route']).rate('PT1M') + - name: http_requests_increment + exp: nginx_http_requests_total.sum(['service','route']).increase('PT1M') + - name: http_4xx_requests_increment + exp: nginx_http_requests_total.tagMatch("status", "400|401|403|404|405").sum(['service','route']).increase('PT1M') + - name: http_5xx_requests_increment + exp: nginx_http_requests_total.tagMatch("status", "500|502|503|504").sum(['service','route']).increase('PT1M') \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-instance.data.yaml new file mode 100644 index 000000000000..7065b3394cd6 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-instance.data.yaml @@ -0,0 +1,175 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + nginx_http_requests_total: + - labels: + service: test-service + service_instance_id: test-instance + value: 100.0 + - labels: + service: test-service + service_instance_id: test-instance + status: '404' + value: 10.0 + - labels: + service: test-service + service_instance_id: test-instance + status: '500' + value: 5.0 + nginx_http_latency: + - labels: {service: test-service, service_instance_id: test-instance, le: '50'} + value: 10.0 + - labels: {service: test-service, service_instance_id: test-instance, le: '100'} + value: 20.0 + - labels: {service: test-service, service_instance_id: test-instance, le: '250'} + value: 30.0 + - labels: {service: test-service, service_instance_id: test-instance, le: '500'} + value: 40.0 + - labels: {service: test-service, service_instance_id: test-instance, le: '1000'} + value: 50.0 + nginx_http_size_bytes: + - labels: + service: test-service + service_instance_id: test-instance + value: 100.0 + nginx_http_connections: + - labels: + service: test-service + service_instance_id: test-instance + value: 100.0 +expected: + meter_nginx_instance_http_requests: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + value: 28.75 + meter_nginx_instance_http_latency: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + le: '1000000' + value: 50.0 + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + le: '100000' + value: 20.0 + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + le: '250000' + value: 30.0 + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + le: '500000' + value: 40.0 + - labels: + service: 'nginx::test-service' + 
service_instance_id: test-instance + le: '50000' + value: 10.0 + meter_nginx_instance_http_bandwidth: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + type: + service: 'nginx::test-service' + service_instance_id: test-instance + value: 25.0 + meter_nginx_instance_http_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + state: + service: 'nginx::test-service' + service_instance_id: test-instance + value: 100.0 + meter_nginx_instance_http_status: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + status: + service: 'nginx::test-service' + service_instance_id: test-instance + value: 25.0 + - labels: + status: '404' + service: 'nginx::test-service' + service_instance_id: test-instance + value: 2.5 + - labels: + status: '500' + service: 'nginx::test-service' + service_instance_id: test-instance + value: 1.25 + meter_nginx_instance_http_requests_increment: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + value: 57.5 + meter_nginx_instance_http_4xx_requests_increment: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + value: 5.0 + meter_nginx_instance_http_5xx_requests_increment: + entities: + - scope: SERVICE_INSTANCE + service: 'nginx::test-service' + instance: test-instance + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: test-instance + value: 2.5 diff --git a/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-instance.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-instance.yaml new file mode 100644 index 000000000000..091efdccf4ae --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-instance.yaml @@ -0,0 +1,51 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'nginx-monitoring' }" # The OpenTelemetry job name +expPrefix: tag({tags -> tags.service = 'nginx::' + tags.service}) +expSuffix: instance(['service'],['service_instance_id'], Layer.NGINX) +metricPrefix: meter_nginx_instance +metricsRules: + - name: http_requests + exp: nginx_http_requests_total.sum(['service','service_instance_id']).rate('PT1M') + - name: http_latency + exp: nginx_http_latency.sum(['le','service','service_instance_id']).histogram().histogram_percentile([50,75,90,95,99]) + - name: http_bandwidth + exp: nginx_http_size_bytes.sum(['type','service','service_instance_id']).rate('PT1M') + - name: http_connections + exp: nginx_http_connections.sum(['state','service','service_instance_id']) + - name: http_status + exp: nginx_http_requests_total.sum(['status','service','service_instance_id']).rate('PT1M') + - name: http_requests_increment + exp: nginx_http_requests_total.sum(['service','service_instance_id']).increase('PT1M') + - name: http_4xx_requests_increment + exp: nginx_http_requests_total.tagMatch("status", "400|401|403|404|405").sum(['service','service_instance_id']).increase('PT1M') + - name: http_5xx_requests_increment + exp: nginx_http_requests_total.tagMatch("status", "500|502|503|504").sum(['service','service_instance_id']).increase('PT1M') \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-service.data.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-service.data.yaml new file mode 100644 index 000000000000..6c5cc5c95977 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-service.data.yaml @@ -0,0 +1,162 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + nginx_http_requests_total: + - labels: + service: test-service + value: 100.0 + - labels: + service: test-service + status: '404' + value: 10.0 + - labels: + service: test-service + status: '500' + value: 5.0 + nginx_http_latency: + - labels: {service: test-service, le: '50'} + value: 10.0 + - labels: {service: test-service, le: '100'} + value: 20.0 + - labels: {service: test-service, le: '250'} + value: 30.0 + - labels: {service: test-service, le: '500'} + value: 40.0 + - labels: {service: test-service, le: '1000'} + value: 50.0 + nginx_http_size_bytes: + - labels: + service: test-service + value: 100.0 + nginx_http_connections: + - labels: + service: test-service + value: 100.0 +expected: + meter_nginx_service_http_requests: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: + value: 28.75 + meter_nginx_service_http_latency: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: + le: '1000000' + value: 50.0 + - labels: + service: 'nginx::test-service' + service_instance_id: + le: '100000' + value: 20.0 + - labels: + service: 'nginx::test-service' + service_instance_id: + le: '250000' + value: 30.0 + - labels: + service: 'nginx::test-service' + service_instance_id: + le: '500000' + value: 40.0 + - labels: + service: 'nginx::test-service' + service_instance_id: + le: '50000' + value: 10.0 + meter_nginx_service_http_bandwidth: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + type: + service: 'nginx::test-service' + service_instance_id: + value: 25.0 + meter_nginx_service_http_connections: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + state: + service: 'nginx::test-service' + service_instance_id: + value: 100.0 + 
meter_nginx_service_http_status: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + status: + service: 'nginx::test-service' + service_instance_id: + value: 25.0 + - labels: + status: '404' + service: 'nginx::test-service' + service_instance_id: + value: 2.5 + - labels: + status: '500' + service: 'nginx::test-service' + service_instance_id: + value: 1.25 + meter_nginx_service_http_requests_increment: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: + value: 57.5 + meter_nginx_service_http_4xx_requests_increment: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: + value: 5.0 + meter_nginx_service_http_5xx_requests_increment: + entities: + - scope: SERVICE + service: 'nginx::test-service' + layer: NGINX + samples: + - labels: + service: 'nginx::test-service' + service_instance_id: + value: 2.5 diff --git a/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-service.yaml new file mode 100644 index 000000000000..c5446b95afaa --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/nginx/nginx-service.yaml @@ -0,0 +1,51 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'nginx-monitoring' }" # The OpenTelemetry job name +expPrefix: tag({tags -> tags.service = 'nginx::' + tags.service}) +expSuffix: service(['service'], Layer.NGINX) +metricPrefix: meter_nginx_service +metricsRules: + - name: http_requests + exp: nginx_http_requests_total.sum(['service','service_instance_id']).rate('PT1M') + - name: http_latency + exp: nginx_http_latency.sum(['le','service','service_instance_id']).histogram().histogram_percentile([50,75,90,95,99]) + - name: http_bandwidth + exp: nginx_http_size_bytes.sum(['type','service','service_instance_id']).rate('PT1M') + - name: http_connections + exp: nginx_http_connections.sum(['state','service','service_instance_id']) + - name: http_status + exp: nginx_http_requests_total.sum(['status','service','service_instance_id']).rate('PT1M') + - name: http_requests_increment + exp: 
nginx_http_requests_total.sum(['service','service_instance_id']).increase('PT1M') + - name: http_4xx_requests_increment + exp: nginx_http_requests_total.tagMatch("status", "400|401|403|404|405").sum(['service','service_instance_id']).increase('PT1M') + - name: http_5xx_requests_increment + exp: nginx_http_requests_total.tagMatch("status", "500|502|503|504").sum(['service','service_instance_id']).increase('PT1M') \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/oap.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/oap.data.yaml new file mode 100644 index 000000000000..a0a6b318da53 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/oap.data.yaml @@ -0,0 +1,1539 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + process_cpu_seconds_total: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_memory_bytes_used: + - labels: + service: test-service + host_name: test-host + area: heap + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_buffer_pool_used_bytes: + - labels: + service: test-service + host_name: test-host + pool: PS_Eden_Space + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_gc_collection_seconds_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_gc_collection_seconds_sum: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + trace_in_latency_count: + - labels: + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 100.0 + trace_in_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 50.0 + trace_analysis_error_count: + - labels: + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 100.0 + spans_dropped_count: + - labels: + service: test-service + host_name: test-host + protocol: http + gc: PS Scavenge + level: ERROR + value: 100.0 + mesh_analysis_latency_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: 
ERROR + value: 100.0 + mesh_analysis_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 50.0 + mesh_analysis_error_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + metrics_aggregation: + - labels: + service: test-service + host_name: test-host + level: ERROR + gc: PS Scavenge + dimensionality: minute + value: 100.0 + metrics_aggregation_queue_used_percentage: + - labels: + service: test-service + host_name: test-host + level: ERROR + slot: test-value + gc: PS Scavenge + value: 100.0 + persistence_timer_bulk_execute_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 50.0 + persistence_timer_bulk_prepare_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + 
value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 50.0 + persistence_timer_bulk_error_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + persistence_timer_bulk_execute_latency_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + persistence_timer_bulk_prepare_latency_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + metrics_persistent_cache: + - labels: + service: test-service + host_name: test-host + status: active + gc: PS Scavenge + level: ERROR + value: 100.0 + metrics_persistent_collection_cached_size: + - labels: + service: test-service + host_name: test-host + dimensionality: minute + kind: test-kind + metricName: test-metric + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_threads_current: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_threads_daemon: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_threads_peak: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_threads_state: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + state: RUNNABLE + value: 100.0 + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + state: BLOCKED + value: 100.0 + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + state: WAITING + value: 100.0 + - labels: + service: 
test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + state: TIMED_WAITING + value: 100.0 + jvm_classes_loaded: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_classes_unloaded_total: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + jvm_classes_loaded_total: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + k8s_als_in_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + k8s_als_drop_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + k8s_als_in_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 50.0 + k8s_als_in_latency_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + k8s_als_error_streams: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + otel_metrics_latency_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + otel_logs_latency_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + otel_spans_latency_count: + - labels: + service: test-service + host_name: 
test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + otel_spans_dropped: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + otel_metrics_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 50.0 + otel_logs_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 50.0 + otel_spans_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + 
level: ERROR + value: 50.0 + graphql_query_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 50.0 + graphql_query_latency_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + graphql_query_error_count: + - labels: + service: test-service + host_name: test-host + gc: PS Scavenge + level: ERROR + value: 100.0 + watermark_circuit_breaker_break_count: + - labels: + service: test-service + host_name: test-host + listener: test-listener + event: test-event + gc: PS Scavenge + level: ERROR + value: 100.0 + watermark_circuit_breaker_recover_count: + - labels: + service: test-service + host_name: test-host + listener: test-listener + gc: PS Scavenge + level: ERROR + value: 100.0 + elasticsearch_write_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + operation: test-op + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + operation: test-op + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + operation: test-op + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + operation: test-op + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + operation: test-op + gc: PS Scavenge + level: ERROR + 
value: 50.0 + banyandb_write_latency: + - labels: + le: '50' + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + gc: PS Scavenge + level: ERROR + value: 10.0 + - labels: + le: '100' + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + gc: PS Scavenge + level: ERROR + value: 20.0 + - labels: + le: '250' + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + gc: PS Scavenge + level: ERROR + value: 30.0 + - labels: + le: '500' + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + gc: PS Scavenge + level: ERROR + value: 40.0 + - labels: + le: '1000' + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + gc: PS Scavenge + level: ERROR + value: 50.0 +expected: + meter_oap_instance_cpu_percentage: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 2500.0 + meter_oap_instance_jvm_memory_bytes_used: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + area: heap + value: 100.0 + meter_oap_instance_jvm_buffer_pool_bytes_used: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + pool: PS_Eden_Space + value: 100.0 + meter_oap_instance_jvm_gc_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + gc: young_gc_count + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_jvm_gc_time: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - 
labels: + gc: young_gc_time + service: test-service + host_name: test-host + value: 50000.0 + meter_oap_instance_trace_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + protocol: http + value: 50.0 + meter_oap_instance_trace_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + protocol: http + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + protocol: http + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + protocol: http + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + protocol: http + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + protocol: http + le: '50000' + value: 5.0 + meter_oap_instance_trace_analysis_error_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + protocol: http + value: 50.0 + meter_oap_instance_spans_dropped_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + protocol: http + value: 50.0 + meter_oap_instance_mesh_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_mesh_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - 
labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_instance_mesh_analysis_error_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_metrics_aggregation: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + level: ERROR + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_metrics_aggregation_queue_used_per_ten_thousand: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + level: ERROR + slot: test-value + value: 10000.0 + meter_oap_instance_persistence_execute_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_instance_persistence_prepare_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - 
labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_instance_persistence_error_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_persistence_execute_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_persistence_prepare_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_metrics_persistent_cache: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + status: active + value: 50.0 + meter_oap_instance_metrics_persistent_collection_cached_size: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + dimensionality: minute + kind: test-kind + metricName: test-metric + value: 100.0 + meter_oap_jvm_thread_live_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_thread_daemon_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + 
samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_thread_peak_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_thread_runnable_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_thread_blocked_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_thread_waiting_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_thread_timed_waiting_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_class_loaded_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_class_total_unloaded_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_jvm_class_total_loaded_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 100.0 + meter_oap_instance_k8s_als_count: + entities: + - scope: SERVICE_INSTANCE + 
service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_k8s_als_drop: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_k8s_als_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_instance_k8s_als_streams: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_k8s_als_error_streams: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_otel_metrics_received: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_otel_logs_received: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_otel_spans_received: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host 
+ layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_otel_spans_dropped: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_otel_metrics_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_otel_logs_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_otel_spans_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: 
test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_graphql_query_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + le: '50000' + value: 5.0 + meter_oap_instance_graphql_query_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_graphql_query_error_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + value: 50.0 + meter_oap_instance_watermark_circuit_breaker_break_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + listener: test-listener + event: test-event + value: 100.0 + meter_oap_instance_watermark_circuit_breaker_recover_count: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + listener: test-listener + value: 100.0 + meter_oap_elasticsearch_write_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + 
operation: test-op + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + operation: test-op + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + operation: test-op + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + operation: test-op + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + operation: test-op + le: '50000' + value: 5.0 + meter_oap_banyandb_write_latency_percentile: + entities: + - scope: SERVICE_INSTANCE + service: test-service + instance: test-host + layer: SO11Y_OAP + samples: + - labels: + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + le: '1000000' + value: 25.0 + - labels: + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + le: '100000' + value: 10.0 + - labels: + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + le: '250000' + value: 15.0 + - labels: + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + le: '500000' + value: 20.0 + - labels: + service: test-service + host_name: test-host + catalog: test-catalog + operation: test-op + le: '50000' + value: 5.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/oap.yaml b/test/script-cases/scripts/mal/test-otel-rules/oap.yaml new file mode 100644 index 000000000000..a0c53108358d --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/oap.yaml @@ -0,0 +1,144 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'skywalking-so11y' }" # The OpenTelemetry job name +expSuffix: instance(['service'], ['host_name'], Layer.SO11Y_OAP) +metricPrefix: meter_oap +metricsRules: + - name: instance_cpu_percentage + exp: (process_cpu_seconds_total * 100).sum(['service', 'host_name']).rate('PT1M') + - name: instance_jvm_memory_bytes_used + exp: jvm_memory_bytes_used.sum(['service', 'host_name', 'area']) + - name: instance_jvm_buffer_pool_bytes_used + exp: jvm_buffer_pool_used_bytes.sum(['service', 'host_name', 'pool']) + - name: instance_jvm_gc_count + exp: > + jvm_gc_collection_seconds_count.tagMatch('gc', 'PS Scavenge|Copy|ParNew|G1 Young Generation|PS MarkSweep|MarkSweepCompact|ConcurrentMarkSweep|G1 Old Generation') + .sum(['service', 'host_name', 'gc']).increase('PT1M') + .tag({tags -> if (tags['gc'] == 'PS Scavenge' || tags['gc'] == 'Copy' || 
tags['gc'] == 'ParNew' || tags['gc'] == 'G1 Young Generation') {tags.gc = 'young_gc_count'} }) + .tag({tags -> if (tags['gc'] == 'PS MarkSweep' || tags['gc'] == 'MarkSweepCompact' || tags['gc'] == 'ConcurrentMarkSweep' || tags['gc'] == 'G1 Old Generation') {tags.gc = 'old_gc_count'} }) + - name: instance_jvm_gc_time + exp: > + (jvm_gc_collection_seconds_sum * 1000).tagMatch('gc', 'PS Scavenge|Copy|ParNew|G1 Young Generation|PS MarkSweep|MarkSweepCompact|ConcurrentMarkSweep|G1 Old Generation') + .sum(['service', 'host_name', 'gc']).increase('PT1M') + .tag({tags -> if (tags['gc'] == 'PS Scavenge' || tags['gc'] == 'Copy' || tags['gc'] == 'ParNew' || tags['gc'] == 'G1 Young Generation') {tags.gc = 'young_gc_time'} }) + .tag({tags -> if (tags['gc'] == 'PS MarkSweep' || tags['gc'] == 'MarkSweepCompact' || tags['gc'] == 'ConcurrentMarkSweep' || tags['gc'] == 'G1 Old Generation') {tags.gc = 'old_gc_time'} }) + - name: instance_trace_count + exp: trace_in_latency_count.sum(['service', 'host_name', 'protocol']).increase('PT1M') + - name: instance_trace_latency_percentile + exp: trace_in_latency.sum(['le', 'service', 'host_name', 'protocol']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: instance_trace_analysis_error_count + exp: trace_analysis_error_count.sum(['service', 'host_name', 'protocol']).increase('PT1M') + - name: instance_spans_dropped_count + exp: spans_dropped_count.sum(['service', 'host_name', 'protocol']).increase('PT1M') + - name: instance_mesh_count + exp: mesh_analysis_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_mesh_latency_percentile + exp: mesh_analysis_latency.sum(['le', 'service', 'host_name']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: instance_mesh_analysis_error_count + exp: mesh_analysis_error_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_metrics_aggregation + exp: > + metrics_aggregation.tagEqual('dimensionality', 
'minute').sum(['service', 'host_name', 'level']).increase('PT1M') + .tag({tags -> if (tags['level'] == '1') {tags.level = 'L1 aggregation'} }).tag({tags -> if (tags['level'] == '2') {tags.level = 'L2 aggregation'} }) + - name: instance_metrics_aggregation_queue_used_per_ten_thousand + exp: 100 * metrics_aggregation_queue_used_percentage.sum(['service', 'host_name', 'level', 'slot']) + - name: instance_persistence_execute_percentile + exp: persistence_timer_bulk_execute_latency.sum(['le', 'service', 'host_name']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: instance_persistence_prepare_percentile + exp: persistence_timer_bulk_prepare_latency.sum(['le', 'service', 'host_name']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: instance_persistence_error_count + exp: persistence_timer_bulk_error_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_persistence_execute_count + exp: persistence_timer_bulk_execute_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_persistence_prepare_count + exp: persistence_timer_bulk_prepare_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_metrics_persistent_cache + exp: metrics_persistent_cache.sum(['service', 'host_name', 'status']).increase('PT1M') + - name: instance_metrics_persistent_collection_cached_size + exp: metrics_persistent_collection_cached_size.sum(['service', 'host_name', 'dimensionality', 'kind', 'metricName']) + - name: jvm_thread_live_count + exp: jvm_threads_current.sum(['service', 'host_name']) + - name: jvm_thread_daemon_count + exp: jvm_threads_daemon.sum(['service', 'host_name']) + - name: jvm_thread_peak_count + exp: jvm_threads_peak.sum(['service', 'host_name']) + - name: jvm_thread_runnable_count + exp: jvm_threads_state.tagMatch('state', 'RUNNABLE').sum(['service', 'host_name']) + - name: jvm_thread_blocked_count + exp: jvm_threads_state.tagMatch('state', 
'BLOCKED').sum(['service', 'host_name']) + - name: jvm_thread_waiting_count + exp: jvm_threads_state.tagMatch('state', 'WAITING').sum(['service', 'host_name']) + - name: jvm_thread_timed_waiting_count + exp: jvm_threads_state.tagMatch('state', 'TIMED_WAITING').sum(['service', 'host_name']) + - name: jvm_class_loaded_count + exp: jvm_classes_loaded.sum(['service', 'host_name']) + - name: jvm_class_total_unloaded_count + exp: jvm_classes_unloaded_total.sum(['service', 'host_name']) + - name: jvm_class_total_loaded_count + exp: jvm_classes_loaded_total.sum(['service', 'host_name']) + - name: instance_k8s_als_count + exp: k8s_als_in_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_k8s_als_drop + exp: k8s_als_drop_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_k8s_als_latency_percentile + exp: k8s_als_in_latency.sum(['le', 'service', 'host_name']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: instance_k8s_als_streams + exp: k8s_als_in_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_k8s_als_error_streams + exp: k8s_als_error_streams.sum(['service', 'host_name']).increase('PT1M') + - name: otel_metrics_received + exp: otel_metrics_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: otel_logs_received + exp: otel_logs_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: otel_spans_received + exp: otel_spans_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: otel_spans_dropped + exp: otel_spans_dropped.sum(['service', 'host_name']).increase('PT1M') + - name: otel_metrics_latency_percentile + exp: otel_metrics_latency.sum(['le', 'service', 'host_name']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: otel_logs_latency_percentile + exp: otel_logs_latency.sum(['le', 'service', 'host_name']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: 
otel_spans_latency_percentile + exp: otel_spans_latency.sum(['le', 'service', 'host_name']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: graphql_query_latency_percentile + exp: graphql_query_latency.sum(['le', 'service', 'host_name']).increase('PT5M').histogram().histogram_percentile([50,75,90,95,99]) + - name: instance_graphql_query_count + exp: graphql_query_latency_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_graphql_query_error_count + exp: graphql_query_error_count.sum(['service', 'host_name']).increase('PT1M') + - name: instance_watermark_circuit_breaker_break_count + exp: watermark_circuit_breaker_break_count.sum(['service', 'host_name', 'listener', 'event']) + - name: instance_watermark_circuit_breaker_recover_count + exp: watermark_circuit_breaker_recover_count.sum(['service', 'host_name', 'listener']) + - name: elasticsearch_write_latency_percentile + exp: elasticsearch_write_latency.sum(['le', 'service', 'host_name', 'operation']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) + - name: banyandb_write_latency_percentile + exp: banyandb_write_latency.sum(['le', 'service', 'host_name', 'catalog', 'operation']).increase('PT1M').histogram().histogram_percentile([50,75,90,95,99]) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-instance.data.yaml new file mode 100644 index 000000000000..2940b98e84bb --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-instance.data.yaml @@ -0,0 +1,597 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + pg_settings_shared_buffers_bytes: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_effective_cache_size_bytes: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_maintenance_work_mem_bytes: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_work_mem_bytes: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_seq_page_cost: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_random_page_cost: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_max_wal_size_bytes: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_max_parallel_workers: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_settings_max_worker_processes: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_database_tup_fetched: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_tup_deleted: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + 
pg_stat_database_tup_inserted: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_tup_updated: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_tup_returned: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_locks_count: + - labels: + mode: user + host_name: test-host + service_instance_id: test-instance + datname: test-value + value: 100.0 + pg_stat_activity_count: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + state: active + value: 100.0 + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + state: idle + value: 100.0 + pg_stat_database_xact_commit: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_xact_rollback: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_blks_hit: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_blks_read: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_temp_bytes: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_bgwriter_checkpoint_write_time_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_bgwriter_checkpoint_sync_time_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_bgwriter_checkpoints_req_total: + - labels: + host_name: 
test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_bgwriter_checkpoints_timed_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_database_conflicts: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_database_deadlocks: + - labels: + datname: test-value + host_name: test-host + service_instance_id: test-instance + mode: user + value: 100.0 + pg_stat_bgwriter_buffers_checkpoint_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_bgwriter_buffers_clean_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_bgwriter_buffers_backend_fsync_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_bgwriter_buffers_alloc_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 + pg_stat_bgwriter_buffers_backend_total: + - labels: + host_name: test-host + mode: user + datname: test-value + value: 100.0 +expected: + meter_pg_instance_shared_buffers: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_effective_cache: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_maintenance_work_mem: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_work_mem: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL 
+ samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_seq_page_cost: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_random_page_cost: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_max_wal_size: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_max_parallel_workers: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_max_worker_processes: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 100.0 + meter_pg_instance_fetched_rows_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_deleted_rows_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_inserted_rows_rate: + entities: + - scope: SERVICE_INSTANCE + 
service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_updated_rows_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_returned_rows_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_locks_count: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + mode: 'test-value:user' + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_pg_instance_active_sessions: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_pg_instance_idle_sessions: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_pg_instance_committed_transactions_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + 
meter_pg_instance_rolled_back_transactions_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_cache_hit_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 50.0 + meter_pg_instance_temporary_files_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_checkpoint_write_time_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_checkpoint_sync_time_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_checkpoint_req_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_checkpoints_timed_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_conflicts_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + 
instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_deadlocks_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + instance: test-instance + layer: POSTGRESQL + samples: + - labels: + datname: test-value + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_instance_buffers_checkpoint: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_buffers_clean: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_buffers_backend_fsync: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_buffers_alloc: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 + meter_pg_instance_buffers_backend: + entities: + - scope: SERVICE_INSTANCE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + mode: user + datname: test-value + host_name: 'postgresql::test-host' + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-instance.yaml b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-instance.yaml new file mode 100644 index 000000000000..2005cce99bfd --- /dev/null +++ 
b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-instance.yaml @@ -0,0 +1,115 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'postgresql-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.host_name = 'postgresql::' + tags.host_name}).service(['host_name'] , Layer.POSTGRESQL).instance(['host_name'],['service_instance_id'], Layer.POSTGRESQL) +metricPrefix: meter_pg +metricsRules: + # postgresql configurations + - name: instance_shared_buffers + exp: pg_settings_shared_buffers_bytes + - name: instance_effective_cache + exp: pg_settings_effective_cache_size_bytes + - name: instance_maintenance_work_mem + exp: pg_settings_maintenance_work_mem_bytes + - name: instance_work_mem + exp: pg_settings_work_mem_bytes + - name: instance_seq_page_cost + exp: pg_settings_seq_page_cost + - name: instance_random_page_cost + exp: pg_settings_random_page_cost + - name: instance_max_wal_size + exp: pg_settings_max_wal_size_bytes + - name: instance_max_parallel_workers + exp: pg_settings_max_parallel_workers + - name: instance_max_worker_processes + exp: pg_settings_max_worker_processes + + # dashboards + ## rows + - name: instance_fetched_rows_rate + exp: pg_stat_database_tup_fetched.sum(['datname','host_name','service_instance_id']).rate('PT1M') + - name: instance_deleted_rows_rate + exp: pg_stat_database_tup_deleted.sum(['datname','host_name','service_instance_id']).rate('PT1M') + - name: instance_inserted_rows_rate + exp: pg_stat_database_tup_inserted.sum(['datname','host_name','service_instance_id']).rate('PT1M') + - name: 
instance_updated_rows_rate + exp: pg_stat_database_tup_updated.sum(['datname','host_name','service_instance_id']).rate('PT1M') + - name: instance_returned_rows_rate + exp: pg_stat_database_tup_returned.sum(['datname','host_name','service_instance_id']).rate('PT1M') + ## locks + - name: instance_locks_count + exp: pg_locks_count.tag({tags -> tags.mode = tags.datname + ":" + tags.mode}).sum(['mode','host_name','service_instance_id']) + + ## sessions + - name: instance_active_sessions + exp: pg_stat_activity_count.tagEqual('state','active').sum(['datname' , 'host_name','service_instance_id']) + - name: instance_idle_sessions + exp: pg_stat_activity_count.tagMatch('state','idle|idle in transaction|idle in transaction (aborted)').sum(['datname','host_name','service_instance_id']) + + ## transactions + - name: instance_committed_transactions_rate + exp: pg_stat_database_xact_commit.sum(['datname','host_name','service_instance_id']).rate('PT1M') + - name: instance_rolled_back_transactions_rate + exp: pg_stat_database_xact_rollback.sum(['datname','host_name','service_instance_id']).rate('PT1M') + + ## cache and temporary file + - name: instance_cache_hit_rate + exp: (pg_stat_database_blks_hit*100 / (pg_stat_database_blks_read + pg_stat_database_blks_hit)).sum(['datname','host_name','service_instance_id']) + - name: instance_temporary_files_rate + exp: pg_stat_database_temp_bytes.sum(['datname','host_name','service_instance_id']).rate('PT1M') + + ## checkpoint + - name: instance_checkpoint_write_time_rate + exp: pg_stat_bgwriter_checkpoint_write_time_total.rate('PT1M') + - name: instance_checkpoint_sync_time_rate + exp: pg_stat_bgwriter_checkpoint_sync_time_total.rate('PT1M') + - name: instance_checkpoint_req_rate + exp: pg_stat_bgwriter_checkpoints_req_total.rate('PT1M') + - name: instance_checkpoints_timed_rate + exp: pg_stat_bgwriter_checkpoints_timed_total.rate('PT1M') + + ## conflicts and deadlocks + - name: instance_conflicts_rate + exp: 
pg_stat_database_conflicts.sum(['datname','host_name','service_instance_id']).rate('PT1M') + - name: instance_deadlocks_rate + exp: pg_stat_database_deadlocks.sum(['datname','host_name','service_instance_id']).rate('PT1M') + + ## buffers + - name: instance_buffers_checkpoint + exp: pg_stat_bgwriter_buffers_checkpoint_total.rate('PT1M') + - name: instance_buffers_clean + exp: pg_stat_bgwriter_buffers_clean_total.rate('PT1M') + - name: instance_buffers_backend_fsync + exp: pg_stat_bgwriter_buffers_backend_fsync_total.rate('PT1M') + - name: instance_buffers_alloc + exp: pg_stat_bgwriter_buffers_alloc_total.rate('PT1M') + - name: instance_buffers_backend + exp: pg_stat_bgwriter_buffers_backend_total.rate('PT1M') diff --git a/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-service.data.yaml new file mode 100644 index 000000000000..f8ebf167cf1a --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-service.data.yaml @@ -0,0 +1,368 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
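The expected values in these fixtures follow directly from the MAL expressions: `(pg_stat_database_blks_hit*100 / (pg_stat_database_blks_read + pg_stat_database_blks_hit))` yields 50.0 when both counters are 100.0, and `rate('PT1M')` divides a counter's increase by the elapsed minutes (the harness replays samples so that 100.0 becomes 25.0). A minimal sketch of that arithmetic, as plain Python rather than SkyWalking's actual MAL engine (both helper names are illustrative, not real APIs):

```python
def cache_hit_pct(blks_hit: float, blks_read: float) -> float:
    """Percentage of block reads served from shared buffers."""
    return blks_hit * 100 / (blks_read + blks_hit)

def rate_per_minute(first: float, last: float, elapsed_seconds: float) -> float:
    """Per-minute increase of a monotonically increasing counter,
    mirroring what rate('PT1M') computes over a sample window."""
    return (last - first) / (elapsed_seconds / 60)

# Both input counters are 100.0 in the fixture -> 50.0 cache hit rate
assert cache_hit_pct(100.0, 100.0) == 50.0

# e.g. a counter growing from 0 to 100 over 4 minutes -> 25.0 per minute,
# matching the 25.0 expected for every *_rate metric above
assert rate_per_minute(0.0, 100.0, 240.0) == 25.0
```

This is only a sanity check on the fixture math; the real evaluation happens inside the OAP MAL runtime.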
+ +input: + pg_stat_database_tup_fetched: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_tup_deleted: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_tup_inserted: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_tup_updated: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_tup_returned: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_locks_count: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_activity_count: + - labels: + service_instance_id: test-instance + host_name: test-host + state: active + value: 100.0 + - labels: + service_instance_id: test-instance + host_name: test-host + state: idle + value: 100.0 + pg_stat_database_xact_commit: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_xact_rollback: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_blks_hit: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_blks_read: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_temp_bytes: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_checkpoint_write_time_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_checkpoint_sync_time_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_checkpoints_req_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_checkpoints_timed_total: + - labels: + 
service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_conflicts: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_database_deadlocks: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_buffers_checkpoint_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_buffers_clean_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_buffers_backend_fsync_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_buffers_alloc_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + pg_stat_bgwriter_buffers_backend_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 +expected: + meter_pg_fetched_rows_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_deleted_rows_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_inserted_rows_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_updated_rows_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_returned_rows_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + 
samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_locks_count: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_pg_active_sessions: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_pg_idle_sessions: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 100.0 + meter_pg_committed_transactions_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_rolled_back_transactions_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_cache_hit_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 50.0 + meter_pg_temporary_files_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_checkpoint_write_time_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_checkpoint_sync_time_rate: + entities: + 
- scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_checkpoint_req_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_checkpoints_timed_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_conflicts_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_deadlocks_rate: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_buffers_checkpoint: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_buffers_clean: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_buffers_backend_fsync: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + meter_pg_buffers_alloc: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 + 
meter_pg_buffers_backend: + entities: + - scope: SERVICE + service: 'postgresql::test-host' + layer: POSTGRESQL + samples: + - labels: + host_name: 'postgresql::test-host' + service_instance_id: test-instance + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-service.yaml new file mode 100644 index 000000000000..f5b7ed02ddff --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/postgresql/postgresql-service.yaml @@ -0,0 +1,95 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'postgresql-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.host_name = 'postgresql::' + tags.host_name}).service(['host_name'] , Layer.POSTGRESQL) +metricPrefix: meter_pg +metricsRules: + # dashboards + ## rows + - name: fetched_rows_rate + exp: pg_stat_database_tup_fetched.sum(['service_instance_id','host_name']).rate('PT1M') + - name: deleted_rows_rate + exp: pg_stat_database_tup_deleted.sum(['service_instance_id','host_name']).rate('PT1M') + - name: inserted_rows_rate + exp: pg_stat_database_tup_inserted.sum(['service_instance_id','host_name']).rate('PT1M') + - name: updated_rows_rate + exp: pg_stat_database_tup_updated.sum(['service_instance_id','host_name']).rate('PT1M') + - name: returned_rows_rate + exp: pg_stat_database_tup_returned.sum(['service_instance_id','host_name']).rate('PT1M') + ## locks + - name: locks_count + exp: pg_locks_count.sum(['service_instance_id','host_name']) + + ## sessions + - name: active_sessions + exp: pg_stat_activity_count.tagEqual('state','active').sum(['service_instance_id' , 'host_name']) + - name: idle_sessions + exp: pg_stat_activity_count.tagMatch('state','idle|idle in transaction|idle in transaction (aborted)').sum(['service_instance_id' , 'host_name']) + + ## transactions + - name: committed_transactions_rate + exp: pg_stat_database_xact_commit.sum(['service_instance_id','host_name']).rate('PT1M') + - name: rolled_back_transactions_rate + exp: 
pg_stat_database_xact_rollback.sum(['service_instance_id','host_name']).rate('PT1M') + + ## cache and temporary file + - name: cache_hit_rate + exp: (pg_stat_database_blks_hit*100 / (pg_stat_database_blks_read + pg_stat_database_blks_hit)).avg(['service_instance_id','host_name']) + - name: temporary_files_rate + exp: pg_stat_database_temp_bytes.sum(['service_instance_id','host_name']).rate('PT1M') + + ## checkpoint + - name: checkpoint_write_time_rate + exp: pg_stat_bgwriter_checkpoint_write_time_total.rate('PT1M').sum(['service_instance_id','host_name']) + - name: checkpoint_sync_time_rate + exp: pg_stat_bgwriter_checkpoint_sync_time_total.rate('PT1M').sum(['service_instance_id','host_name']) + - name: checkpoint_req_rate + exp: pg_stat_bgwriter_checkpoints_req_total.rate('PT1M').sum(['service_instance_id','host_name']) + - name: checkpoints_timed_rate + exp: pg_stat_bgwriter_checkpoints_timed_total.rate('PT1M').sum(['service_instance_id','host_name']) + + ## conflicts and deadlocks + - name: conflicts_rate + exp: pg_stat_database_conflicts.sum(['service_instance_id','host_name']).rate('PT1M') + - name: deadlocks_rate + exp: pg_stat_database_deadlocks.sum(['service_instance_id','host_name']).rate('PT1M') + + ## buffers + - name: buffers_checkpoint + exp: pg_stat_bgwriter_buffers_checkpoint_total.rate('PT1M').sum(['service_instance_id','host_name']) + - name: buffers_clean + exp: pg_stat_bgwriter_buffers_clean_total.rate('PT1M').sum(['service_instance_id','host_name']) + - name: buffers_backend_fsync + exp: pg_stat_bgwriter_buffers_backend_fsync_total.rate('PT1M').sum(['service_instance_id','host_name']) + - name: buffers_alloc + exp: pg_stat_bgwriter_buffers_alloc_total.rate('PT1M').sum(['service_instance_id','host_name']) + - name: buffers_backend + exp: pg_stat_bgwriter_buffers_backend_total.rate('PT1M').sum(['service_instance_id','host_name']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-broker.data.yaml 
b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-broker.data.yaml new file mode 100644 index 000000000000..1c771e86e212 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-broker.data.yaml @@ -0,0 +1,281 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + pulsar_active_connections: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_connection_created_total_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_connection_create_success_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_connection_create_fail_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_connection_closed_total_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_memory_bytes_used: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_memory_bytes_committed: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_memory_bytes_init: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_memory_pool_bytes_used: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_buffer_pool_used_bytes: + - labels: + cluster: test-cluster + node: test-node + pool: PS_Eden_Space + value: 100.0 + jvm_gc_collection_seconds_count: + - labels: + cluster: test-cluster + node: test-node + gc: PS Scavenge + value: 100.0 + jvm_gc_collection_seconds_sum: + - labels: + cluster: test-cluster + node: test-node + gc: PS Scavenge + value: 100.0 + jvm_threads_current: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_threads_daemon: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_threads_peak: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + jvm_threads_deadlocked: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 +expected: + meter_pulsar_broker_active_connections: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_total_connections: + entities: + - 
scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_connection_create_success_count: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_connection_create_fail_count: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_connection_closed_total_count: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_memory_used: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_memory_committed: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_memory_init: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_memory_pool_used: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + pool: PS_Eden_Space + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_buffer_pool_used_bytes: + 
entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + pool: PS_Eden_Space + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_gc_collection_seconds_count: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + gc: PS Scavenge + cluster: 'pulsar::test-cluster' + node: test-node + value: 25.0 + meter_pulsar_broker_jvm_gc_collection_seconds_sum: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + gc: PS Scavenge + cluster: 'pulsar::test-cluster' + node: test-node + value: 25.0 + meter_pulsar_broker_jvm_threads_current: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_threads_daemon: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_threads_peak: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_broker_jvm_threads_deadlocked: + entities: + - scope: SERVICE_INSTANCE + service: 'pulsar::test-cluster' + instance: test-node + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-broker.yaml b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-broker.yaml new file mode 100644 index 000000000000..038c253c9d88 --- /dev/null +++ 
b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-broker.yaml @@ -0,0 +1,76 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'pulsar-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'pulsar::' + tags.cluster}).instance(['cluster'], ['node'], Layer.PULSAR) +metricPrefix: meter_pulsar_broker + +# Metrics Rules +metricsRules: + + # Connection Metrics + - name: active_connections + exp: pulsar_active_connections.sum(['cluster', 'node']) + - name: total_connections + exp: pulsar_connection_created_total_count.sum(['cluster', 'node']) + - name: connection_create_success_count + exp: pulsar_connection_create_success_count.sum(['cluster', 'node']) + - name: connection_create_fail_count + exp: pulsar_connection_create_fail_count.sum(['cluster', 'node']) + - name: connection_closed_total_count + exp: pulsar_connection_closed_total_count.sum(['cluster', 'node']) + + # JVM Metrics + - name: jvm_memory_used + exp: jvm_memory_bytes_used.sum(['cluster', 'node']) + - name: jvm_memory_committed + exp: jvm_memory_bytes_committed.sum(['cluster', 'node']) + - name: jvm_memory_init + exp: jvm_memory_bytes_init.sum(['cluster', 'node']) + + - name: jvm_memory_pool_used + exp: jvm_memory_pool_bytes_used.sum(['cluster', 'node', 'pool']) + + - name: jvm_buffer_pool_used_bytes + exp: jvm_buffer_pool_used_bytes.sum(['cluster', 'node', 'pool']) + + - name: jvm_gc_collection_seconds_count + exp: jvm_gc_collection_seconds_count.sum(['cluster', 'node', 'gc']).rate('PT1M') + - name: jvm_gc_collection_seconds_sum + exp: 
jvm_gc_collection_seconds_sum.sum(['cluster', 'node', 'gc']).rate('PT1M') + + - name: jvm_threads_current + exp: jvm_threads_current.sum(['cluster', 'node']) + - name: jvm_threads_daemon + exp: jvm_threads_daemon.sum(['cluster', 'node']) + - name: jvm_threads_peak + exp: jvm_threads_peak.sum(['cluster', 'node']) + - name: jvm_threads_deadlocked + exp: jvm_threads_deadlocked.sum(['cluster', 'node']) diff --git a/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-cluster.data.yaml new file mode 100644 index 000000000000..3b3023aa4b3f --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-cluster.data.yaml @@ -0,0 +1,197 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + pulsar_broker_topics_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_subscriptions_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_producers_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_consumers_count: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_rate_in: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_rate_out: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_throughput_in: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_throughput_out: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_storage_size: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_storage_logical_size: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_storage_write_rate: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + pulsar_broker_storage_read_rate: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 +expected: + meter_pulsar_total_topics: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_total_subscriptions: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_total_producers: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_total_consumers: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 
'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_message_rate_in: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_message_rate_out: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_throughput_in: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_throughput_out: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_storage_size: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_storage_logical_size: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_storage_write_rate: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 + meter_pulsar_storage_read_rate: + entities: + - scope: SERVICE + service: 'pulsar::test-cluster' + layer: PULSAR + samples: + - labels: + cluster: 'pulsar::test-cluster' + node: test-node + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-cluster.yaml new file mode 100644 index 000000000000..d5a31c45c1ce --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/pulsar/pulsar-cluster.yaml @@ 
-0,0 +1,68 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> tags.job_name == 'pulsar-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'pulsar::' + tags.cluster}).service(['cluster'], Layer.PULSAR) +metricPrefix: meter_pulsar + +# Metrics Rules +metricsRules: + + # Topic and Subscription Metrics + - name: total_topics + exp: pulsar_broker_topics_count.sum(['cluster', 'node']) + - name: total_subscriptions + exp: pulsar_broker_subscriptions_count.sum(['cluster', 'node']) + + # Producer and 
Consumer Metrics + - name: total_producers + exp: pulsar_broker_producers_count.sum(['cluster', 'node']) + - name: total_consumers + exp: pulsar_broker_consumers_count.sum(['cluster', 'node']) + + # Message Rate and Throughput Metrics + - name: message_rate_in + exp: pulsar_broker_rate_in.sum(['cluster', 'node']) + - name: message_rate_out + exp: pulsar_broker_rate_out.sum(['cluster', 'node']) + - name: throughput_in + exp: pulsar_broker_throughput_in.sum(['cluster', 'node']) + - name: throughput_out + exp: pulsar_broker_throughput_out.sum(['cluster', 'node']) + + - name: storage_size + exp: pulsar_broker_storage_size.sum(['cluster', 'node']) + - name: storage_logical_size + exp: pulsar_broker_storage_logical_size.sum(['cluster', 'node']) + - name: storage_write_rate + exp: pulsar_broker_storage_write_rate.sum(['cluster', 'node']) + - name: storage_read_rate + exp: pulsar_broker_storage_read_rate.sum(['cluster', 'node']) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-cluster.data.yaml new file mode 100644 index 000000000000..1e339791e5ff --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-cluster.data.yaml @@ -0,0 +1,362 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +input: + rabbitmq_resident_memory_limit_bytes: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_process_resident_memory_bytes: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_disk_space_available_bytes: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_process_max_fds: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_process_open_fds: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_process_max_tcp_sockets: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_process_open_tcp_sockets: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queue_messages_ready: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queue_messages_unacked: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_received_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_confirmed_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_routed_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_received_confirm_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_unroutable_dropped_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_unroutable_returned_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queues: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queues_declared_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queues_created_total: + - labels: + 
cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queues_deleted_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_channels: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_channels_opened_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_channels_closed_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_connections: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_connections_opened_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_connections_closed_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 +expected: + meter_rabbitmq_memory_available_before_publisher_blocked: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 0.0 + meter_rabbitmq_disk_space_available_before_publisher_blocked: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_file_descriptors_available: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 0.0 + meter_rabbitmq_tcp_socket_available: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 0.0 + meter_rabbitmq_message_ready_delivered_consumers: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_message_unacknowledged_delivered_consumers: + entities: + - scope: SERVICE 
+ service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_messages_published: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_messages_confirmed: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_messages_routed: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_messages_unconfirmed: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 0.0 + meter_rabbitmq_messages_unroutable_dropped: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_messages_unroutable_returned: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_queues: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_queues_declared_total: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_queues_created_total: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - 
labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_queues_deleted_total: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_channels: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_channels_opened_total: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_channels_closed_total: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_connections: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_connections_opened_total: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_connections_closed_total: + entities: + - scope: SERVICE + service: 'rabbitmq::test-cluster' + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-cluster.yaml new file mode 100644 index 000000000000..9ac04d9ac75b --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-cluster.yaml @@ -0,0 +1,86 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor 
license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'rabbitmq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'rabbitmq::' + tags.cluster}).service(['cluster'], Layer.RABBITMQ) +metricPrefix: meter_rabbitmq +metricsRules: + - name: memory_available_before_publisher_blocked + exp: rabbitmq_resident_memory_limit_bytes.sum(['cluster', 'node']) - rabbitmq_process_resident_memory_bytes.sum(['cluster', 'node']) + - name: disk_space_available_before_publisher_blocked + exp: rabbitmq_disk_space_available_bytes.sum(['cluster', 'node']) + - name: file_descriptors_available + 
exp: rabbitmq_process_max_fds.sum(['cluster', 'node']) - rabbitmq_process_open_fds.sum(['cluster', 'node']) + - name: tcp_socket_available + exp: rabbitmq_process_max_tcp_sockets.sum(['cluster', 'node']) - rabbitmq_process_open_tcp_sockets.sum(['cluster', 'node']) + + - name: message_ready_delivered_consumers + exp: rabbitmq_queue_messages_ready.sum(['cluster', 'node']) + - name: message_unacknowledged_delivered_consumers + exp: rabbitmq_queue_messages_unacked.sum(['cluster', 'node']) + + - name: messages_published + exp: rabbitmq_global_messages_received_total.sum(['cluster', 'node']).rate('PT1M') + - name: messages_confirmed + exp: rabbitmq_global_messages_confirmed_total.sum(['cluster', 'node']).rate('PT1M') + - name: messages_routed + exp: rabbitmq_global_messages_routed_total.sum(['cluster', 'node']).rate('PT1M') + - name: messages_unconfirmed + exp: rabbitmq_global_messages_received_confirm_total.sum(['cluster', 'node']).rate('PT1M') - rabbitmq_global_messages_confirmed_total.sum(['cluster', 'node']).rate('PT1M') + - name: messages_unroutable_dropped + exp: rabbitmq_global_messages_unroutable_dropped_total.sum(['cluster', 'node']).rate('PT1M') + - name: messages_unroutable_returned + exp: rabbitmq_global_messages_unroutable_returned_total.sum(['cluster', 'node']).rate('PT1M') + + # queues + - name: queues + exp: rabbitmq_queues.sum(['cluster', 'node']) + - name: queues_declared_total + exp: rabbitmq_queues_declared_total.sum(['cluster', 'node']).rate('PT1M') + - name: queues_created_total + exp: rabbitmq_queues_created_total.sum(['cluster', 'node']).rate('PT1M') + - name: queues_deleted_total + exp: rabbitmq_queues_deleted_total.sum(['cluster', 'node']).rate('PT1M') + + # channels + - name: channels + exp: rabbitmq_channels.sum(['cluster', 'node']) + - name: channels_opened_total + exp: rabbitmq_channels_opened_total.sum(['cluster', 'node']).rate('PT1M') + - name: channels_closed_total + exp: rabbitmq_channels_closed_total.sum(['cluster', 
'node']).rate('PT1M') + + # connections + - name: connections + exp: rabbitmq_connections.sum(['cluster', 'node']) + - name: connections_opened_total + exp: rabbitmq_connections_opened_total.sum(['cluster', 'node']).rate('PT1M') + - name: connections_closed_total + exp: rabbitmq_connections_closed_total.sum(['cluster', 'node']).rate('PT1M') \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-node.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-node.data.yaml new file mode 100644 index 000000000000..6ea2fe82784e --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-node.data.yaml @@ -0,0 +1,377 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + rabbitmq_queue_messages_ready: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_received_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_channels: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_channel_consumers: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_connections: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queues: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_queue_messages_unacked: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_redelivered_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_delivered_consume_auto_ack_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_delivered_consume_manual_ack_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_delivered_get_auto_ack_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_global_messages_delivered_get_manual_ack_total: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + rabbitmq_consumers: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 + erlang_vm_allocators: + - labels: + cluster: test-cluster + node: test-node + alloc: test-value + usage: blocks_size + value: 100.0 + - labels: + cluster: test-cluster + node: test-node + alloc: test-value + usage: carriers_size + value: 100.0 + - labels: + cluster: test-cluster + node: test-node + alloc: test-value + usage: blocks_size + kind: mbcs + value: 100.0 + - labels: + cluster: test-cluster + node: test-node + alloc: test-value + usage: carriers_size + kind: mbcs + value: 100.0 + - labels: + cluster: test-cluster + node: test-node + alloc: test-value + usage: 
blocks_size + kind: mbcs_pool + value: 100.0 + - labels: + cluster: test-cluster + node: test-node + alloc: test-value + usage: carriers_size + kind: mbcs_pool + value: 100.0 + rabbitmq_process_resident_memory_bytes: + - labels: + cluster: test-cluster + node: test-node + value: 100.0 +expected: + meter_rabbitmq_node_queue_messages_ready: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_incoming_messages: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 25.0 + meter_rabbitmq_node_publisher_total: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 0.0 + meter_rabbitmq_node_connections_total: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_queue_total: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_unacknowledged_messages: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_outgoing_messages_total: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 
'rabbitmq::test-cluster' + node: test-node + value: 150.0 + meter_rabbitmq_node_consumer_total: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_channel_total: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_allocated_used_percent: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 10000.0 + meter_rabbitmq_node_allocated_unused_percent: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 0.0 + meter_rabbitmq_node_allocated_used_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 300.0 + meter_rabbitmq_node_allocated_unused_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 0.0 + meter_rabbitmq_node_allocated_total_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 300.0 + meter_rabbitmq_node_process_resident_memory_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - 
labels: + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_allocated_by_type: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + alloc: test-value + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 300.0 + meter_rabbitmq_node_allocated_multiblock_used: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + alloc: test-value + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_allocated_multiblock_unused: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + alloc: test-value + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_allocated_multiblock_pool_used: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + alloc: test-value + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_allocated_multiblock_pool_unused: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + alloc: test-value + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_allocated_singleblock_used: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + alloc: test-value + cluster: 'rabbitmq::test-cluster' + node: test-node + value: 100.0 + meter_rabbitmq_node_allocated_singleblock_unused: + entities: + - scope: SERVICE_INSTANCE + service: 'rabbitmq::test-cluster' + instance: test-node + layer: RABBITMQ + samples: + - labels: + alloc: test-value + cluster: 'rabbitmq::test-cluster' + 
node: test-node + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-node.yaml b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-node.yaml new file mode 100644 index 000000000000..c1edc7380d73 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rabbitmq/rabbitmq-node.yaml @@ -0,0 +1,80 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'rabbitmq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'rabbitmq::' + tags.cluster}).instance(['cluster'], ['node'], Layer.RABBITMQ) +metricPrefix: meter_rabbitmq_node +metricsRules: + - name: queue_messages_ready + exp: rabbitmq_queue_messages_ready.sum(['cluster', 'node']) + - name: incoming_messages + exp: rabbitmq_global_messages_received_total.sum(['cluster', 'node']).rate('PT1M') + - name: publisher_total + exp: rabbitmq_channels.sum(['cluster', 'node']) - rabbitmq_channel_consumers.sum(['cluster', 'node']) + - name: connections_total + exp: rabbitmq_connections.sum(['cluster', 'node']) + - name: queue_total + exp: rabbitmq_queues.sum(['cluster', 'node']) + - name: unacknowledged_messages + exp: rabbitmq_queue_messages_unacked.sum(['cluster', 'node']) + - name: outgoing_messages_total + exp: rabbitmq_global_messages_redelivered_total.sum(['cluster', 'node']).rate('PT1M') + rabbitmq_global_messages_delivered_consume_auto_ack_total.sum(['cluster', 'node']).rate('PT1M') + rabbitmq_global_messages_delivered_consume_manual_ack_total.sum(['cluster', 'node']).rate('PT1M') + rabbitmq_global_messages_delivered_get_auto_ack_total.sum(['cluster', 'node']).rate('PT1M') + rabbitmq_global_messages_delivered_get_manual_ack_total.sum(['cluster', 'node']).rate('PT1M') + - name: consumer_total + exp: 
rabbitmq_consumers.sum(['cluster', 'node']) + - name: channel_total + exp: rabbitmq_channels.sum(['cluster', 'node']) + + - name: allocated_used_percent + exp: erlang_vm_allocators.tagEqual('usage' , 'blocks_size').sum(['cluster', 'node']) / erlang_vm_allocators.tagEqual('usage' , 'carriers_size').sum(['cluster', 'node']) * 10000 + - name: allocated_unused_percent + exp: (erlang_vm_allocators.tagEqual('usage' , 'carriers_size').sum(['cluster', 'node']) - erlang_vm_allocators.tagEqual('usage' , 'blocks_size').sum(['cluster', 'node'])) / erlang_vm_allocators.tagEqual('usage' , 'carriers_size').sum(['cluster', 'node']) * 10000 + - name: allocated_used_bytes + exp: erlang_vm_allocators.tagEqual('usage' , 'blocks_size').sum(['cluster', 'node']) + - name: allocated_unused_bytes + exp: erlang_vm_allocators.tagEqual('usage' , 'carriers_size').sum(['cluster', 'node']) - erlang_vm_allocators.tagEqual('usage' , 'blocks_size').sum(['cluster', 'node']) + - name: allocated_total_bytes + exp: erlang_vm_allocators.tagEqual('usage' , 'carriers_size').sum(['cluster', 'node']) + - name: process_resident_memory_bytes + exp: rabbitmq_process_resident_memory_bytes.sum(['cluster', 'node']) + + - name: allocated_by_type + exp: erlang_vm_allocators.tagEqual('usage' , 'carriers_size').sum(['cluster', 'node', 'alloc']) + - name: allocated_multiblock_used + exp: erlang_vm_allocators.tagEqual('usage' , 'blocks_size' , 'kind', 'mbcs').sum(['cluster', 'node', 'alloc']) + - name: allocated_multiblock_unused + exp: erlang_vm_allocators.tagEqual('usage' , 'carriers_size' , 'kind', 'mbcs').sum(['cluster', 'node', 'alloc']) + - name: allocated_multiblock_pool_used + exp: erlang_vm_allocators.tagEqual('usage' , 'blocks_size' , 'kind', 'mbcs_pool').sum(['cluster', 'node', 'alloc']) + - name: allocated_multiblock_pool_unused + exp: erlang_vm_allocators.tagEqual('usage' , 'carriers_size' , 'kind', 'mbcs_pool').sum(['cluster', 'node', 'alloc']) + - name: allocated_singleblock_used + exp: 
erlang_vm_allocators.tagEqual('usage' , 'blocks_size' , 'kind', 'mbcs').sum(['cluster', 'node', 'alloc']) + - name: allocated_singleblock_unused + exp: erlang_vm_allocators.tagEqual('usage' , 'carriers_size' , 'kind', 'mbcs').sum(['cluster', 'node', 'alloc']) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/redis/redis-instance.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-instance.data.yaml new file mode 100644 index 000000000000..4713ec0f50d4 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-instance.data.yaml @@ -0,0 +1,212 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + redis_uptime_in_seconds: + - labels: + host_name: test-host + value: 100.0 + redis_connected_clients: + - labels: + host_name: test-host + value: 100.0 + redis_memory_max_bytes: + - labels: + host_name: test-host + value: 100.0 + redis_memory_used_bytes: + - labels: + host_name: test-host + value: 100.0 + redis_commands_total: + - labels: + cmd: get + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_keyspace_hits_total: + - labels: + host_name: test-host + value: 100.0 + redis_keyspace_misses_total: + - labels: + host_name: test-host + value: 100.0 + redis_net_input_bytes_total: + - labels: + host_name: test-host + value: 100.0 + redis_net_output_bytes_total: + - labels: + host_name: test-host + value: 100.0 + redis_db_keys: + - labels: + host_name: test-host + value: 100.0 + redis_expired_keys_total: + - labels: + host_name: test-host + value: 100.0 + redis_evicted_keys_total: + - labels: + host_name: test-host + value: 100.0 + redis_blocked_clients: + - labels: + host_name: test-host + value: 100.0 + redis_commands_duration_seconds_total: + - labels: + host_name: test-host + cmd: get + service_instance_id: test-instance + value: 100.0 +expected: + meter_redis_instance_uptime: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + meter_redis_instance_connected_clients: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + meter_redis_instance_memory_max_bytes: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + meter_redis_instance_memory_usage: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + 
meter_redis_instance_total_commands_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + instance: test-instance + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + cmd: get + service_instance_id: test-instance + value: 25.0 + meter_redis_instance_hit_rate: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 50.0 + meter_redis_instance_net_input_bytes_total: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 25.0 + meter_redis_instance_net_output_bytes_total: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 25.0 + meter_redis_instance_db_keys: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + meter_redis_instance_expired_keys_total: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + meter_redis_instance_evicted_keys_total: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + meter_redis_instance_redis_blocked_clients: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + value: 100.0 + meter_redis_instance_average_time_spent_by_command: + entities: + - scope: SERVICE_INSTANCE + service: 'redis::test-host' + instance: test-instance + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + cmd: get + service_instance_id: test-instance + value: 0.0 + meter_redis_instance_commands_duration_seconds_total_rate: + entities: + - 
scope: SERVICE_INSTANCE + service: 'redis::test-host' + instance: test-instance + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + cmd: get + service_instance_id: test-instance + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/redis/redis-instance.yaml b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-instance.yaml new file mode 100644 index 000000000000..f2aa2e03f71e --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-instance.yaml @@ -0,0 +1,69 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'redis-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.host_name = 'redis::' + tags.host_name}).service(['host_name'] , Layer.REDIS).instance(['host_name'], ['service_instance_id'], Layer.REDIS) +metricPrefix: meter_redis +metricsRules: + - name: instance_uptime + exp: redis_uptime_in_seconds + + - name: instance_connected_clients + exp: redis_connected_clients + - name: instance_memory_max_bytes + exp: redis_memory_max_bytes + - name: instance_memory_usage + exp: redis_memory_used_bytes * 100 / redis_memory_max_bytes + - name: instance_total_commands_rate + exp: redis_commands_total.sum(['cmd','host_name','service_instance_id']).rate('PT1M') + - name: instance_hit_rate + exp: redis_keyspace_hits_total * 100 / (redis_keyspace_misses_total + redis_keyspace_hits_total) + + + + - name: instance_net_input_bytes_total + exp: redis_net_input_bytes_total.rate('PT5M') + - name: instance_net_output_bytes_total + exp: redis_net_output_bytes_total.rate('PT5M') + + - name: instance_db_keys + exp: redis_db_keys + - name: instance_expired_keys_total + exp: redis_expired_keys_total + - name: instance_evicted_keys_total + exp: redis_evicted_keys_total + + - name: instance_redis_blocked_clients + exp: redis_blocked_clients + + - name: instance_average_time_spent_by_command + exp: (redis_commands_duration_seconds_total.sum(['host_name','cmd','service_instance_id']) / 
redis_commands_total.sum(['host_name','cmd','service_instance_id'])).rate('PT1M') + - name: instance_commands_duration_seconds_total_rate + exp: redis_commands_duration_seconds_total.sum(['host_name','cmd','service_instance_id']).rate('PT1M') \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/redis/redis-service.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-service.data.yaml new file mode 100644 index 000000000000..8858b829c48e --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-service.data.yaml @@ -0,0 +1,243 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + redis_uptime_in_seconds: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_connected_clients: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_blocked_clients: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_memory_used_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_memory_max_bytes: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_commands_total: + - labels: + cmd: get + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_keyspace_hits_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + redis_keyspace_misses_total: + - labels: + service_instance_id: test-instance + host_name: test-host + value: 100.0 + redis_net_input_bytes_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_net_output_bytes_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_db_keys: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_expired_keys_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_evicted_keys_total: + - labels: + host_name: test-host + service_instance_id: test-instance + value: 100.0 + redis_commands_duration_seconds_total: + - labels: + host_name: test-host + cmd: get + service_instance_id: test-instance + value: 100.0 +expected: + meter_redis_uptime: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 100.0 + meter_redis_connected_clients: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + 
host_name: 'redis::test-host' + service_instance_id: test-instance + value: 100.0 + meter_redis_blocked_clients: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 100.0 + meter_redis_memory_used_bytes: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 100.0 + meter_redis_memory_max_bytes: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 100.0 + meter_redis_total_commands_rate: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + cmd: get + service_instance_id: test-instance + value: 25.0 + meter_redis_hit_rate: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 50.0 + meter_redis_net_input_bytes_total: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 25.0 + meter_redis_net_output_bytes_total: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 25.0 + meter_redis_db_keys: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 100.0 + meter_redis_expired_keys_total: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: 
test-instance + value: 100.0 + meter_redis_evicted_keys_total: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + service_instance_id: test-instance + value: 100.0 + meter_redis_commands_duration: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + cmd: get + service_instance_id: test-instance + value: 100.0 + meter_redis_commands_total: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + cmd: get + service_instance_id: test-instance + value: 100.0 + meter_redis_commands_duration_seconds_total_rate: + entities: + - scope: SERVICE + service: 'redis::test-host' + layer: REDIS + samples: + - labels: + host_name: 'redis::test-host' + cmd: get + service_instance_id: test-instance + value: 25.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/redis/redis-service.yaml b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-service.yaml new file mode 100644 index 000000000000..8181ac0ccee4 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/redis/redis-service.yaml @@ -0,0 +1,70 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'redis-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.host_name = 'redis::' + tags.host_name}).service(['host_name'] , Layer.REDIS) +metricPrefix: meter_redis +metricsRules: + - name: uptime + exp: redis_uptime_in_seconds.max(['host_name','service_instance_id']) + - name: connected_clients + exp: redis_connected_clients.sum(['host_name','service_instance_id']) + - name: blocked_clients + exp: redis_blocked_clients.sum(['host_name','service_instance_id']) + - name: memory_used_bytes + exp: redis_memory_used_bytes.sum(['host_name','service_instance_id']) + - name: memory_max_bytes + exp: redis_memory_max_bytes.sum(['host_name','service_instance_id']) + - name: total_commands_rate + exp: redis_commands_total.sum(['cmd','host_name','service_instance_id']).rate('PT1M') + - name: hit_rate + exp: (redis_keyspace_hits_total * 100 / (redis_keyspace_misses_total + redis_keyspace_hits_total)).sum(['service_instance_id','host_name']) + + - name: net_input_bytes_total + exp: redis_net_input_bytes_total.sum(['host_name','service_instance_id']).rate('PT5M') + - name: net_output_bytes_total + exp: 
redis_net_output_bytes_total.sum(['host_name','service_instance_id']).rate('PT5M') + + - name: db_keys + exp: redis_db_keys.sum(['host_name','service_instance_id']) + - name: expired_keys_total + exp: redis_expired_keys_total.sum(['host_name','service_instance_id']) + - name: evicted_keys_total + exp: redis_evicted_keys_total.sum(['host_name','service_instance_id']) + + - name: commands_duration + exp: redis_commands_duration_seconds_total.sum(['host_name','cmd','service_instance_id']) + - name: commands_total + exp: redis_commands_total.sum(['host_name','cmd','service_instance_id']) + - name: commands_duration_seconds_total_rate + exp: redis_commands_duration_seconds_total.sum(['host_name','cmd','service_instance_id']).rate('PT1M') + + + diff --git a/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-broker.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-broker.data.yaml new file mode 100644 index 000000000000..8421448ff381 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-broker.data.yaml @@ -0,0 +1,81 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + rocketmq_broker_tps: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + rocketmq_broker_qps: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + rocketmq_producer_message_size: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 + rocketmq_consumer_message_size: + - labels: + cluster: test-cluster + broker: test-broker + value: 100.0 +expected: + meter_rocketmq_broker_produce_tps: + entities: + - scope: SERVICE_INSTANCE + service: 'rocketmq::test-cluster' + instance: test-broker + layer: ROCKETMQ + samples: + - labels: + broker: test-broker + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_broker_consume_qps: + entities: + - scope: SERVICE_INSTANCE + service: 'rocketmq::test-cluster' + instance: test-broker + layer: ROCKETMQ + samples: + - labels: + broker: test-broker + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_broker_producer_message_size: + entities: + - scope: SERVICE_INSTANCE + service: 'rocketmq::test-cluster' + instance: test-broker + layer: ROCKETMQ + samples: + - labels: + broker: test-broker + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_broker_consumer_message_size: + entities: + - scope: SERVICE_INSTANCE + service: 'rocketmq::test-cluster' + instance: test-broker + layer: ROCKETMQ + samples: + - labels: + broker: test-broker + cluster: 'rocketmq::test-cluster' + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-broker.yaml b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-broker.yaml new file mode 100644 index 000000000000..4a6f721372ff --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-broker.yaml @@ -0,0 +1,46 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'rocketmq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'rocketmq::' + tags.cluster}).instance(['cluster'], ['broker'], Layer.ROCKETMQ) +metricPrefix: meter_rocketmq_broker +metricsRules: + + - name: produce_tps + exp: rocketmq_broker_tps.sum(['cluster', 'broker']) + + - name: consume_qps + exp: rocketmq_broker_qps.sum(['cluster','broker']) + + - name: producer_message_size + exp: rocketmq_producer_message_size.sum(['cluster','broker']).downsampling(MAX) + + - name: consumer_message_size + exp: rocketmq_consumer_message_size.sum(['cluster','broker']).downsampling(MAX) \ No newline at end of file diff --git 
a/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-cluster.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-cluster.data.yaml new file mode 100644 index 000000000000..b5d2866c1f3b --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-cluster.data.yaml @@ -0,0 +1,219 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + rocketmq_brokeruntime_msg_put_total_today_now: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_brokeruntime_msg_puttotal_yesterdaymorning: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_brokeruntime_msg_gettotal_today_now: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_brokeruntime_msg_gettotal_yesterdaymorning: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_producer_tps: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_consumer_tps: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_producer_message_size: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_consumer_message_size: + - labels: + cluster: test-cluster + value: 100.0 + rocketmq_group_get_latency_by_storetime: + - labels: + cluster: test-cluster + broker: test-broker + topic: test-topic + group: test-value + value: 100.0 + rocketmq_brokeruntime_commitlog_disk_ratio: + - labels: + cluster: test-cluster + brokerIP: test-value + value: 100.0 + rocketmq_brokeruntime_pull_threadpoolqueue_headwait_timemills: + - labels: + cluster: test-cluster + brokerIP: test-value + value: 100.0 + rocketmq_brokeruntime_send_threadpoolqueue_headwait_timemills: + - labels: + cluster: test-cluster + brokerIP: test-value + value: 100.0 + rocketmq_producer_offset: + - labels: + cluster: test-cluster + topic: test-topic + broker: test-broker + value: 100.0 +expected: + meter_rocketmq_cluster_messages_produced_today: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 0.0 + meter_rocketmq_cluster_messages_consumed_today: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 0.0 + meter_rocketmq_cluster_total_producer_tps: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + 
cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_cluster_total_consumer_tps: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_cluster_producer_message_size: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_cluster_consumer_message_size: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_cluster_messages_produced_until_yesterday: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_cluster_messages_consumed_until_yesterday: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 100.0 + meter_rocketmq_cluster_max_consumer_latency: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + broker: test-broker + group: test-value + value: 100.0 + meter_rocketmq_cluster_max_commitLog_disk_ratio: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + brokerIP: test-value + value: 10000.0 + meter_rocketmq_cluster_commitLog_disk_ratio: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + brokerIP: test-value + value: 10000.0 + meter_rocketmq_cluster_pull_threadPool_queue_head_wait_time: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + 
- labels: + cluster: 'rocketmq::test-cluster' + brokerIP: test-value + value: 100.0 + meter_rocketmq_cluster_send_threadPool_queue_head_wait_time: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + brokerIP: test-value + value: 100.0 + meter_rocketmq_cluster_topic_count: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 1.0 + meter_rocketmq_cluster_broker_count: + entities: + - scope: SERVICE + service: 'rocketmq::test-cluster' + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + value: 1.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-cluster.yaml b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-cluster.yaml new file mode 100644 index 000000000000..ec61ec5ac4c7 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-cluster.yaml @@ -0,0 +1,81 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. 
The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> tags.job_name == 'rocketmq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'rocketmq::' + tags.cluster}).service(['cluster'], Layer.ROCKETMQ) +metricPrefix: meter_rocketmq_cluster + +metricsRules: + + - name: messages_produced_today + exp: rocketmq_brokeruntime_msg_put_total_today_now.sum(['cluster'])-rocketmq_brokeruntime_msg_puttotal_yesterdaymorning.sum(['cluster']) + + - name: messages_consumed_today + exp: rocketmq_brokeruntime_msg_gettotal_today_now.sum(['cluster'])-rocketmq_brokeruntime_msg_gettotal_yesterdaymorning.sum(['cluster']) + + - name: total_producer_tps + exp: rocketmq_producer_tps.sum(['cluster']) + + - name: total_consumer_tps + exp: rocketmq_consumer_tps.sum(['cluster']) + + - name: producer_message_size + exp: rocketmq_producer_message_size.sum(['cluster']).downsampling(MAX) + + - name: consumer_message_size + exp: rocketmq_consumer_message_size.sum(['cluster']).downsampling(MAX) + + - name: messages_produced_until_yesterday + exp: rocketmq_brokeruntime_msg_puttotal_yesterdaymorning.sum(['cluster']) + + - name: messages_consumed_until_yesterday + exp: rocketmq_brokeruntime_msg_gettotal_yesterdaymorning.sum(['cluster']) + + - name: max_consumer_latency + exp: rocketmq_group_get_latency_by_storetime.max(['cluster','broker','topic','group']) + + - name: 
max_commitLog_disk_ratio + exp: rocketmq_brokeruntime_commitlog_disk_ratio.max(['cluster','brokerIP'])*100 + + - name: commitLog_disk_ratio + exp: rocketmq_brokeruntime_commitlog_disk_ratio.sum(['cluster','brokerIP'])*100 + + - name: pull_threadPool_queue_head_wait_time + exp: rocketmq_brokeruntime_pull_threadpoolqueue_headwait_timemills.sum(['cluster','brokerIP']) + + - name: send_threadPool_queue_head_wait_time + exp: rocketmq_brokeruntime_send_threadpoolqueue_headwait_timemills.sum(['cluster','brokerIP']) + + - name: topic_count + exp: rocketmq_producer_offset.count(['cluster','topic']) + + - name: broker_count + exp: rocketmq_producer_offset.count(['cluster','broker']) \ No newline at end of file diff --git a/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-topic.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-topic.data.yaml new file mode 100644 index 000000000000..6399c16f49ce --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-topic.data.yaml @@ -0,0 +1,181 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + rocketmq_producer_message_size: + - labels: + cluster: test-cluster + topic: test-topic + value: 100.0 + rocketmq_consumer_message_size: + - labels: + cluster: test-cluster + topic: test-topic + group: test-value + value: 100.0 + rocketmq_group_get_latency_by_storetime: + - labels: + cluster: test-cluster + topic: test-topic + group: test-value + value: 100.0 + rocketmq_producer_tps: + - labels: + cluster: test-cluster + topic: test-topic + value: 100.0 + rocketmq_consumer_tps: + - labels: + cluster: test-cluster + topic: test-topic + value: 100.0 + rocketmq_producer_offset: + - labels: + cluster: test-cluster + topic: test-topic + broker: test-broker + value: 100.0 + rocketmq_consumer_offset: + - labels: + cluster: test-cluster + topic: test-topic + group: test-value + value: 100.0 +expected: + meter_rocketmq_topic_max_producer_message_size: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + value: 100.0 + meter_rocketmq_topic_max_consumer_message_size: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + group: test-value + value: 100.0 + meter_rocketmq_topic_consumer_latency: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + group: test-value + value: 100.0 + meter_rocketmq_topic_producer_tps: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + value: 100.0 + meter_rocketmq_topic_consumer_group_tps: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ 
+ samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + value: 100.0 + meter_rocketmq_topic_producer_offset: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + value: 100.0 + meter_rocketmq_topic_consumer_group_offset: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + group: test-value + value: 100.0 + meter_rocketmq_topic_producer_message_size: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + value: 100.0 + meter_rocketmq_topic_consumer_message_size: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + group: test-value + value: 100.0 + meter_rocketmq_topic_consumer_group_count: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + value: 1.0 + meter_rocketmq_topic_broker_count: + entities: + - scope: ENDPOINT + service: 'rocketmq::test-cluster' + endpoint: test-topic + layer: ROCKETMQ + samples: + - labels: + cluster: 'rocketmq::test-cluster' + topic: test-topic + value: 1.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-topic.yaml b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-topic.yaml new file mode 100644 index 000000000000..50751da5e3e6 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/rocketmq/rocketmq-topic.yaml @@ -0,0 +1,70 @@ +# Licensed to the Apache Software Foundation (ASF) under 
one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. +# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> + +filter: "{ tags -> tags.job_name == 'rocketmq-monitoring' }" # The OpenTelemetry job name +expSuffix: tag({tags -> tags.cluster = 'rocketmq::' + tags.cluster}).endpoint(['cluster'], ['topic'], Layer.ROCKETMQ) +metricPrefix: meter_rocketmq_topic + +metricsRules: + + - name: max_producer_message_size + exp: rocketmq_producer_message_size.max(['cluster','topic']) + + - name: max_consumer_message_size + exp: rocketmq_consumer_message_size.max(['cluster','topic','group']) + + - name: consumer_latency + exp: 
rocketmq_group_get_latency_by_storetime.sum(['cluster','topic','group']) + + - name: producer_tps + exp: rocketmq_producer_tps.sum(['cluster','topic']) + + - name: consumer_group_tps + exp: rocketmq_consumer_tps.sum(['cluster','topic']) + + - name: producer_offset + exp: rocketmq_producer_offset.sum(['cluster','topic']).downsampling(MAX) + + - name: consumer_group_offset + exp: rocketmq_consumer_offset.sum(['cluster','topic','group']).downsampling(MAX) + + - name: producer_message_size + exp: rocketmq_producer_message_size.sum(['cluster','topic']).downsampling(MAX) + + - name: consumer_message_size + exp: rocketmq_consumer_message_size.sum(['cluster','topic','group']).downsampling(MAX) + + - name: consumer_group_count + exp: rocketmq_consumer_offset.count(['cluster','topic','group']) + + - name: broker_count + exp: rocketmq_producer_offset.count(['cluster','topic','broker']) + diff --git a/test/script-cases/scripts/mal/test-otel-rules/service-decorate-attributes.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/service-decorate-attributes.data.yaml new file mode 100644 index 000000000000..9cc2efeb51f2 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/service-decorate-attributes.data.yaml @@ -0,0 +1,38 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +input: + control_last_status_by_service: + - labels: + service_name: test-service + value: 100.0 +expected: + meter_control_last_status_by_service: + entities: + - scope: SERVICE + service: '-|test-service|null|null|-' + layer: MESH + attr0: service + attr1: '-' + attr2: test-service + attr3: 'null' + attr4: 'null' + attr5: '-' + samples: + - labels: + service: '-|test-service|null|null|-' + control_bundle: + control_name: + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/service-decorate-attributes.yaml b/test/script-cases/scripts/mal/test-otel-rules/service-decorate-attributes.yaml new file mode 100644 index 000000000000..bb7595010964 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/service-decorate-attributes.yaml @@ -0,0 +1,40 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Test case: service-level decorate() with attr0-attr5. +# Validates the decorate closure operating on MeterEntity bean properties. +# The service name is a pipe-delimited composite: "-|svcName|namespace|cluster|-". +# decorate() sets attr0 to a literal marker and stores the split parts in attr1-attr5. 
+ +# Metric Values +# 0 = Not Satisfied +# 1 = Satisfied +# 2 = Not Evaluated +filter: "{ tags -> tags.job_name == 'control-monitor' }" +expPrefix: tag({ tags -> tags.service = "-|${tags.service_name}|${tags.service_namespace}|${tags.cluster_name}|-".toString() }) +expSuffix: |- + service(['service'], Layer.MESH).decorate({ me -> + me.attr0 = 'service' + String[] parts = (me.serviceName ?: '').split("\\|", -1) + me.attr1 = parts.length > 0 ? parts[0] : '' + me.attr2 = parts.length > 1 ? parts[1] : '' + me.attr3 = parts.length > 2 ? parts[2] : '' + me.attr4 = parts.length > 3 ? parts[3] : '' + me.attr5 = parts.length > 4 ? parts[4] : '' + }) +metricPrefix: meter_control +metricsRules: + - name: last_status_by_service + exp: control_last_status_by_service.tagNotEqual('service_name' , null).sum(['service','control_bundle','control_name']).downsampling(LATEST) diff --git a/test/script-cases/scripts/mal/test-otel-rules/service-gstring-regex-split.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/service-gstring-regex-split.data.yaml new file mode 100644 index 000000000000..c69a9d2eba65 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/service-gstring-regex-split.data.yaml @@ -0,0 +1,38 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +input: + oscal_control_last_status_by_service: + - labels: + service_name: test-service + value: 100.0 +expected: + meter_oscal_control_last_status_by_service: + entities: + - scope: SERVICE + service: '-|test-service|null|null|-' + layer: MESH + attr0: service + attr1: '-' + attr2: test-service + attr3: 'null' + attr4: 'null' + attr5: '-' + samples: + - labels: + service: '-|test-service|null|null|-' + oscal_control_bundle: + oscal_control_name: + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/service-gstring-regex-split.yaml b/test/script-cases/scripts/mal/test-otel-rules/service-gstring-regex-split.yaml new file mode 100644 index 000000000000..685acb8d29ba --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/service-gstring-regex-split.yaml @@ -0,0 +1,35 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +# Metric Values +# 0 = Not Satisfied +# 1 = Satisfied +# 2 = Not Evaluated +filter: "{ tags -> tags.job_name == 'oscal-control' }" # The OpenTelemetry job name +expPrefix: tag({ tags -> tags.service = "-|${tags.service_name}|${tags.service_namespace}|${tags.cluster_name}|-".toString() }) +expSuffix: |- + service(['service'], Layer.MESH).decorate({ me -> + me.attr0 = 'service' + def parts = (me.serviceName ?: '').split(/\|/, -1) + me.attr1 = parts.size() > 0 ? parts[0] : '' + me.attr2 = parts.size() > 1 ? parts[1] : '' + me.attr3 = parts.size() > 2 ? parts[2] : '' + me.attr4 = parts.size() > 3 ? parts[3] : '' + me.attr5 = parts.size() > 4 ? parts[4] : '' + }) +metricPrefix: meter_oscal_control +metricsRules: + - name: last_status_by_service + exp: oscal_control_last_status_by_service.tagNotEqual('service_name' , null).sum(['service','oscal_control_bundle','oscal_control_name']).downsampling(LATEST) diff --git a/test/script-cases/scripts/mal/test-otel-rules/vm.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/vm.data.yaml new file mode 100644 index 000000000000..cd9c95f7769f --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/vm.data.yaml @@ -0,0 +1,270 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + node_cpu_seconds_total: + - labels: + node_identifier_host_name: test-host + mode: user + value: 100.0 + node_load1: + - labels: + value: 100.0 + node_load5: + - labels: + value: 100.0 + node_load15: + - labels: + value: 100.0 + node_memory_MemTotal_bytes: + - labels: + value: 100.0 + node_memory_MemAvailable_bytes: + - labels: + value: 100.0 + node_memory_Buffers_bytes: + - labels: + value: 100.0 + node_memory_Cached_bytes: + - labels: + value: 100.0 + node_memory_SwapFree_bytes: + - labels: + value: 100.0 + node_memory_SwapTotal_bytes: + - labels: + value: 100.0 + node_filesystem_avail_bytes: + - labels: + node_identifier_host_name: test-host + mountpoint: / + value: 100.0 + node_filesystem_size_bytes: + - labels: + node_identifier_host_name: test-host + mountpoint: / + value: 100.0 + node_disk_read_bytes_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 + node_disk_written_bytes_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 + node_network_receive_bytes_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 + node_network_transmit_bytes_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 + node_netstat_Tcp_CurrEstab: + - labels: + value: 100.0 + node_sockstat_TCP_tw: + - labels: + value: 100.0 + node_sockstat_TCP_alloc: + - labels: + value: 100.0 + node_sockstat_sockets_used: + - labels: + value: 100.0 + node_sockstat_UDP_inuse: + - labels: + value: 100.0 + node_filefd_allocated: + - labels: + value: 100.0 +expected: + meter_vm_cpu_total_percentage: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + node_identifier_host_name: test-host + value: 2500.0 + meter_vm_cpu_average_used: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + node_identifier_host_name: test-host + mode: user + value: 2500.0 + meter_vm_cpu_load1: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + 
- labels: + value: 10000.0 + meter_vm_cpu_load5: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 10000.0 + meter_vm_cpu_load15: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 10000.0 + meter_vm_memory_total: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_memory_available: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_memory_used: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 0.0 + meter_vm_memory_buff_cache: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 200.0 + meter_vm_memory_swap_free: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_memory_swap_total: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_memory_swap_percentage: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: -0.0 + meter_vm_filesystem_percentage: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + node_identifier_host_name: test-host + mountpoint: / + value: -0.0 + meter_vm_disk_read: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + node_identifier_host_name: test-host + value: 25.0 + meter_vm_disk_written: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + node_identifier_host_name: test-host + value: 25.0 + meter_vm_network_receive: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + node_identifier_host_name: test-host + value: 0.0 + meter_vm_network_transmit: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + node_identifier_host_name: test-host + value: 0.0 + meter_vm_tcp_curr_estab: + entities: + - scope: SERVICE + 
layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_tcp_tw: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_tcp_alloc: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_sockets_used: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_udp_inuse: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 + meter_vm_filefd_allocated: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/vm.yaml b/test/script-cases/scripts/mal/test-otel-rules/vm.yaml new file mode 100644 index 000000000000..4937251af5f5 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/vm.yaml @@ -0,0 +1,97 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p> +# Examples: +# <pre> +# "PT20.345S" -- parses as "20.345 seconds" +# "PT15M" -- parses as "15 minutes" (where a minute is 60 seconds) +# "PT10H" -- parses as "10 hours" (where an hour is 3600 seconds) +# "P2D" -- parses as "2 days" (where a day is 24 hours or 86400 seconds) +# "P2DT3H4M" -- parses as "2 days, 3 hours and 4 minutes" +# "P-6H3M" -- parses as "-6 hours and +3 minutes" +# "-P6H3M" -- parses as "-6 hours and -3 minutes" +# "-P-6H+3M" -- parses as "+6 hours and -3 minutes" +# </pre> +filter: "{ tags -> tags.job_name == 'vm-monitoring' }" # The OpenTelemetry job name +expSuffix: service(['node_identifier_host_name'] , Layer.OS_LINUX) +metricPrefix: meter_vm +metricsRules: + + #node cpu + - name: cpu_total_percentage + exp: (node_cpu_seconds_total * 100).tagNotEqual('mode' , 'idle').sum(['node_identifier_host_name']).rate('PT1M') + - name: cpu_average_used + exp: (node_cpu_seconds_total * 100).sum(['node_identifier_host_name' , 'mode']).rate('PT1M') + - name: cpu_load1 + exp: node_load1 * 100 + - name: cpu_load5 + exp: node_load5 * 100 + - name: cpu_load15 + exp: node_load15 * 100 + + #node Memory + - name: memory_total + exp: node_memory_MemTotal_bytes + - name: memory_available + exp: node_memory_MemAvailable_bytes + - name: memory_used + exp: node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes + - name: memory_buff_cache + exp: node_memory_Buffers_bytes + node_memory_Cached_bytes + - name: memory_swap_free + exp: node_memory_SwapFree_bytes + - name: memory_swap_total + exp: node_memory_SwapTotal_bytes + - name: memory_swap_percentage + exp: 100 - ((node_memory_SwapFree_bytes * 100) / node_memory_SwapTotal_bytes) + + #node filesystem + - name: filesystem_percentage + exp: 100 - ((node_filesystem_avail_bytes * 100).sum(['node_identifier_host_name' , 'mountpoint']) / node_filesystem_size_bytes.sum(['node_identifier_host_name' , 'mountpoint'])) + + #node disk + - name: disk_read + exp: 
node_disk_read_bytes_total.sum(['node_identifier_host_name']).rate('PT1M') + - name: disk_written + exp: node_disk_written_bytes_total.sum(['node_identifier_host_name']).rate('PT1M') + + #node network + - name: network_receive + exp: node_network_receive_bytes_total.sum(['node_identifier_host_name']).irate() + - name: network_transmit + exp: node_network_transmit_bytes_total.sum(['node_identifier_host_name']).irate() + + #node netstat + - name: tcp_curr_estab + exp: node_netstat_Tcp_CurrEstab + - name: tcp_tw + exp: node_sockstat_TCP_tw + - name: tcp_alloc + exp: node_sockstat_TCP_alloc + - name: sockets_used + exp: node_sockstat_sockets_used + - name: udp_inuse + exp: node_sockstat_UDP_inuse + + #node filefd + - name: filefd_allocated + exp: node_filefd_allocated + + + diff --git a/test/script-cases/scripts/mal/test-otel-rules/windows.data.yaml b/test/script-cases/scripts/mal/test-otel-rules/windows.data.yaml new file mode 100644 index 000000000000..2da5275289f0 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/windows.data.yaml @@ -0,0 +1,147 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +input: + windows_cpu_time_total: + - labels: + node_identifier_host_name: test-host + mode: user + value: 100.0 + windows_cs_physical_memory_bytes: + - labels: + value: 100.0 + windows_os_physical_memory_free_bytes: + - labels: + value: 100.0 + windows_os_virtual_memory_free_bytes: + - labels: + value: 100.0 + windows_os_virtual_memory_bytes: + - labels: + value: 100.0 + windows_logical_disk_read_bytes_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 + windows_logical_disk_write_bytes_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 + windows_net_bytes_received_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 + windows_net_bytes_sent_total: + - labels: + node_identifier_host_name: test-host + value: 100.0 +expected: + meter_win_cpu_total_percentage: + entities: + - scope: SERVICE + service: test-host + layer: OS_WINDOWS + samples: + - labels: + node_identifier_host_name: test-host + value: 2500.0 + meter_win_cpu_average_used: + entities: + - scope: SERVICE + service: test-host + layer: OS_WINDOWS + samples: + - labels: + node_identifier_host_name: test-host + mode: user + value: 2500.0 + meter_win_memory_total: + entities: + - scope: SERVICE + layer: OS_WINDOWS + samples: + - labels: + value: 100.0 + meter_win_memory_available: + entities: + - scope: SERVICE + layer: OS_WINDOWS + samples: + - labels: + value: 100.0 + meter_win_memory_used: + entities: + - scope: SERVICE + layer: OS_WINDOWS + samples: + - labels: + value: 0.0 + meter_win_memory_virtual_memory_free: + entities: + - scope: SERVICE + layer: OS_WINDOWS + samples: + - labels: + value: 100.0 + meter_win_memory_virtual_memory_total: + entities: + - scope: SERVICE + layer: OS_WINDOWS + samples: + - labels: + value: 100.0 + meter_win_memory_virtual_memory_percentage: + entities: + - scope: SERVICE + layer: OS_WINDOWS + samples: + - labels: + value: -0.0 + meter_win_disk_read: + entities: + - scope: SERVICE + service: test-host + 
layer: OS_WINDOWS + samples: + - labels: + node_identifier_host_name: test-host + value: 25.0 + meter_win_disk_written: + entities: + - scope: SERVICE + service: test-host + layer: OS_WINDOWS + samples: + - labels: + node_identifier_host_name: test-host + value: 25.0 + meter_win_network_receive: + entities: + - scope: SERVICE + service: test-host + layer: OS_WINDOWS + samples: + - labels: + node_identifier_host_name: test-host + value: 0.0 + meter_win_network_transmit: + entities: + - scope: SERVICE + service: test-host + layer: OS_WINDOWS + samples: + - labels: + node_identifier_host_name: test-host + value: 0.0 diff --git a/test/script-cases/scripts/mal/test-otel-rules/windows.yaml b/test/script-cases/scripts/mal/test-otel-rules/windows.yaml new file mode 100644 index 000000000000..de2a651111d5 --- /dev/null +++ b/test/script-cases/scripts/mal/test-otel-rules/windows.yaml @@ -0,0 +1,72 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This will parse a textual representation of a duration. The formats +# accepted are based on the ISO-8601 duration format {@code PnDTnHnMn.nS} +# with days considered to be exactly 24 hours. 
+# <p>
+# Examples:
+# <pre>
+#    "PT20.345S" -- parses as "20.345 seconds"
+#    "PT15M"     -- parses as "15 minutes" (where a minute is 60 seconds)
+#    "PT10H"     -- parses as "10 hours" (where an hour is 3600 seconds)
+#    "P2D"       -- parses as "2 days" (where a day is 24 hours or 86400 seconds)
+#    "P2DT3H4M"  -- parses as "2 days, 3 hours and 4 minutes"
+#    "P-6H3M"    -- parses as "-6 hours and +3 minutes"
+#    "-P6H3M"    -- parses as "-6 hours and -3 minutes"
+#    "-P-6H+3M"  -- parses as "+6 hours and -3 minutes"
+# </pre>
+filter: "{ tags -> tags.job_name == 'windows-monitoring' }" # The OpenTelemetry job name
+expSuffix: service(['node_identifier_host_name'] , Layer.OS_WINDOWS)
+metricPrefix: meter_win
+metricsRules:
+  # cpu (Windows doesn't expose CPU load metrics)
+  - name: cpu_total_percentage
+    exp: (windows_cpu_time_total * 100).tagNotEqual('mode' , 'idle').sum(['node_identifier_host_name']).rate('PT1M')
+  - name: cpu_average_used
+    exp: (windows_cpu_time_total * 100).sum(['node_identifier_host_name' , 'mode']).rate('PT1M')
+
+  # memory
+  - name: memory_total
+    exp: windows_cs_physical_memory_bytes
+  - name: memory_available
+    exp: windows_os_physical_memory_free_bytes
+  - name: memory_used
+    exp: windows_cs_physical_memory_bytes - windows_os_physical_memory_free_bytes
+  - name: memory_virtual_memory_free
+    exp: windows_os_virtual_memory_free_bytes
+  - name: memory_virtual_memory_total
+    exp: windows_os_virtual_memory_bytes
+  - name: memory_virtual_memory_percentage
+    exp: 100 - ((windows_os_virtual_memory_free_bytes * 100) / windows_os_virtual_memory_bytes)
+
+  # disk
+  - name: disk_read
+    exp: windows_logical_disk_read_bytes_total.sum(['node_identifier_host_name']).rate('PT1M')
+  - name: disk_written
+    exp: windows_logical_disk_write_bytes_total.sum(['node_identifier_host_name']).rate('PT1M')
+
+  # network
+  - name: network_receive
+    exp: windows_net_bytes_received_total.sum(['node_identifier_host_name']).irate()
+  - name: network_transmit
+    exp: windows_net_bytes_sent_total.sum(['node_identifier_host_name']).irate()
+
+
+
+
+
+
+
diff --git a/test/script-cases/scripts/mal/test-telegraf-rules/vm.data.yaml b/test/script-cases/scripts/mal/test-telegraf-rules/vm.data.yaml
new file mode 100644
index 000000000000..ceeec7efa5ef
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-telegraf-rules/vm.data.yaml
@@ -0,0 +1,278 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+ +input: + cpu_usage_active: + - labels: + host: test-host + cpu: cpu-total + value: 100.0 + - labels: + host: test-host + cpu: cpu0 + value: 80.0 + system_load1: + - labels: + host: test-host + value: 100.0 + system_load5: + - labels: + host: test-host + value: 100.0 + system_load15: + - labels: + host: test-host + value: 100.0 + mem_total: + - labels: + host: test-host + value: 100.0 + mem_available: + - labels: + host: test-host + value: 100.0 + mem_used: + - labels: + host: test-host + value: 100.0 + mem_swap_free: + - labels: + host: test-host + value: 100.0 + mem_swap_total: + - labels: + host: test-host + value: 100.0 + disk_used_percent: + - labels: + host: test-host + device: eth0 + value: 100.0 + diskio_read_bytes: + - labels: + host: test-host + value: 100.0 + diskio_write_bytes: + - labels: + host: test-host + value: 100.0 + net_bytes_recv: + - labels: + host: test-host + value: 100.0 + net_bytes_sent: + - labels: + host: test-host + value: 100.0 + netstat_tcp_established: + - labels: + host: test-host + value: 100.0 + netstat_tcp_time_wait: + - labels: + host: test-host + value: 100.0 + netstat_tcp_listen: + - labels: + host: test-host + value: 100.0 + netstat_udp_socket: + - labels: + host: test-host + value: 100.0 +expected: + meter_vm_cpu_total_percentage: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + cpu: cpu-total + value: 100.0 + meter_vm_cpu_average_used: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + cpu: cpu0 + value: 80.0 + meter_vm_cpu_load1: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_cpu_load5: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_cpu_load15: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + 
samples: + - labels: + host: test-host + value: 100.0 + meter_vm_memory_total: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_memory_available: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_memory_used: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_memory_swap_free: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_memory_swap_total: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_memory_swap_percentage: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: -0.0 + meter_vm_filesystem_percentage: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + device: eth0 + value: 100.0 + meter_vm_disk_read: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 25.0 + meter_vm_disk_written: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 25.0 + meter_vm_network_receive: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 0.0 + meter_vm_network_transmit: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 0.0 + meter_vm_tcp_curr_estab: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_tcp_tw: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX 
+ samples: + - labels: + host: test-host + value: 100.0 + meter_vm_tcp_alloc: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 + meter_vm_udp_inuse: + entities: + - scope: SERVICE + service: test-host + layer: OS_LINUX + samples: + - labels: + host: test-host + value: 100.0 diff --git a/test/script-cases/scripts/mal/test-telegraf-rules/vm.yaml b/test/script-cases/scripts/mal/test-telegraf-rules/vm.yaml new file mode 100644 index 000000000000..e47c98a7d62e --- /dev/null +++ b/test/script-cases/scripts/mal/test-telegraf-rules/vm.yaml @@ -0,0 +1,72 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+expSuffix: service(['host'], Layer.OS_LINUX)
+metricPrefix: meter_vm
+metricsRules:
+
+  # cpu
+  - name: cpu_total_percentage
+    exp: cpu_usage_active.tagEqual('cpu', 'cpu-total')
+  - name: cpu_average_used
+    exp: cpu_usage_active.tagNotEqual('cpu', 'cpu-total').avg(['host', 'cpu'])
+  - name: cpu_load1
+    exp: system_load1
+  - name: cpu_load5
+    exp: system_load5
+  - name: cpu_load15
+    exp: system_load15
+
+  # memory
+  - name: memory_total
+    exp: mem_total
+  - name: memory_available
+    exp: mem_available
+  - name: memory_used
+    exp: mem_used
+
+  # swap
+  - name: memory_swap_free
+    exp: mem_swap_free
+  - name: memory_swap_total
+    exp: mem_swap_total
+  - name: memory_swap_percentage
+    exp: 100 - ((mem_swap_free / mem_swap_total) * 100)
+
+  # filesystem
+  - name: filesystem_percentage
+    exp: disk_used_percent.avg(['host','device'])
+
+  # disk
+  - name: disk_read
+    exp: diskio_read_bytes.rate('PT1M')
+  - name: disk_written
+    exp: diskio_write_bytes.rate('PT1M')
+
+  # network
+  - name: network_receive
+    exp: net_bytes_recv.irate()
+  - name: network_transmit
+    exp: net_bytes_sent.irate()
+
+  # netstat
+  - name: tcp_curr_estab
+    exp: netstat_tcp_established
+  - name: tcp_tw
+    exp: netstat_tcp_time_wait
+  - name: tcp_alloc
+    exp: netstat_tcp_listen
+  - name: udp_inuse
+    exp: netstat_udp_socket
\ No newline at end of file
diff --git a/test/script-cases/scripts/mal/test-zabbix-rules/agent.data.yaml b/test/script-cases/scripts/mal/test-zabbix-rules/agent.data.yaml
new file mode 100644
index 000000000000..23bd2d3056f1
--- /dev/null
+++ b/test/script-cases/scripts/mal/test-zabbix-rules/agent.data.yaml
@@ -0,0 +1,211 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +input: + system_cpu_load: + - labels: + host: test-value + 2: avg1 + value: 100.0 + - labels: + host: test-value + 2: avg5 + value: 100.0 + - labels: + host: test-value + 2: avg15 + value: 100.0 + system_cpu_util: + - labels: + 2: test-value + host: test-value + value: 100.0 + vm_memory_size: + - labels: + host: test-value + 1: total + value: 100.0 + - labels: + host: test-value + 1: available + value: 100.0 + system_swap_size: + - labels: + host: test-value + 2: free + value: 100.0 + - labels: + host: test-value + 2: total + value: 100.0 + - labels: + host: test-value + 2: pused + value: 100.0 + vfs_fs_inode: + - labels: + 1: test-value + host: test-value + 2: pused + value: 100.0 + vfs_fs_size: + - labels: + 1: test-value + 2: test-value + host: test-value + value: 100.0 + vfs_dev_read: + - labels: + value: 100.0 + vfs_dev_write: + - labels: + value: 100.0 +expected: + meter_vm_cpu_load1: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 10000.0 + meter_vm_cpu_load5: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 10000.0 + meter_vm_cpu_load15: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 10000.0 + meter_vm_cpu_average_used: + entities: + - scope: SERVICE + service: test-value 
+ layer: OS_LINUX + samples: + - labels: + 2: test-value + host: test-value + value: 100.0 + meter_vm_cpu_total_percentage: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 100.0 + meter_vm_memory_total: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 100.0 + meter_vm_memory_available: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 100.0 + meter_vm_memory_used: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 0.0 + meter_vm_memory_swap_free: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 100.0 + meter_vm_memory_swap_total: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + host: test-value + value: 100.0 + meter_vm_memory_swap_percentage: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + 2: pused + host: test-value + value: 100.0 + meter_vm_filesystem_percentage: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + 1: test-value + host: test-value + value: 100.0 + meter_vm_vfs_fs_size: + entities: + - scope: SERVICE + service: test-value + layer: OS_LINUX + samples: + - labels: + 1: test-value + 2: test-value + host: test-value + value: 100.0 + meter_vm_disk_read: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 102400.0 + meter_vm_disk_written: + entities: + - scope: SERVICE + layer: OS_LINUX + samples: + - labels: + value: 102400.0 diff --git a/test/script-cases/scripts/mal/test-zabbix-rules/agent.yaml b/test/script-cases/scripts/mal/test-zabbix-rules/agent.yaml new file mode 100644 index 000000000000..05a08a878983 --- /dev/null +++ 
b/test/script-cases/scripts/mal/test-zabbix-rules/agent.yaml
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+metricPrefix: meter_vm
+expSuffix: service(['host'], Layer.OS_LINUX)
+entities:
+  hostPatterns:
+    - .+
+  labels:
+requiredZabbixItemKeys:
+  # cpu
+  - system.cpu.load[all,avg15]
+  - system.cpu.load[all,avg1]
+  - system.cpu.load[all,avg5]
+  - system.cpu.util[,guest]
+  - system.cpu.util[,guest_nice]
+  - system.cpu.util[,idle]
+  - system.cpu.util[,interrupt]
+  - system.cpu.util[,iowait]
+  - system.cpu.util[,nice]
+  - system.cpu.util[,softirq]
+  - system.cpu.util[,steal]
+  - system.cpu.util[,system]
+  - system.cpu.util[,user]
+  # memory
+  - vm.memory.size[available]
+  - vm.memory.size[pavailable]
+  - vm.memory.size[total]
+  # swap
+  - system.swap.size[,free]
+  - system.swap.size[,total]
+  - system.swap.size[,pused]
+  # file
+  - vfs.fs.inode[/,pused]
+  - vfs.fs.size[/,total]
+  - vfs.fs.size[/,used]
+  - vfs.dev.read[,ops,avg1]
+  - vfs.dev.write[,ops,avg1]
+
+metrics:
+  # cpu
+  - name: cpu_load1
+    exp: system_cpu_load.tagEqual('2', 'avg1').avg(['host']) * 100
+  - name: cpu_load5
+    exp: system_cpu_load.tagEqual('2', 'avg5').avg(['host']) * 100
+  - name: cpu_load15
+    exp: system_cpu_load.tagEqual('2', 'avg15').avg(['host']) * 100
+  - name: cpu_average_used
+    exp: system_cpu_util.avg(['2', 'host'])
+  - name: cpu_total_percentage
+    exp: system_cpu_util.tagNotEqual('2', 'idle').sum(['host'])
+  # memory
+  - name: memory_total
+    exp: vm_memory_size.tagEqual('1', 'total').avg(['host'])
+  - name: memory_available
+    exp: vm_memory_size.tagEqual('1', 'available').avg(['host'])
+  - name: memory_used
+    exp: vm_memory_size.tagEqual('1', 'total').avg(['host']) - vm_memory_size.tagEqual('1', 'available').avg(['host'])
+  # swap
+  - name: memory_swap_free
+    exp: system_swap_size.tagEqual('2', 'free').avg(['host'])
+  - name: memory_swap_total
+    exp: system_swap_size.tagEqual('2', 'total').avg(['host'])
+  - name: memory_swap_percentage
+    exp: system_swap_size.tagEqual('2', 'pused')
+  # file
+  - name: filesystem_percentage
+    exp: vfs_fs_inode.tagEqual('2', 'pused').avg(['1', 'host'])
+  - name: vfs_fs_size
+    exp: vfs_fs_size.avg(['1', '2', 'host'])
+  - name: disk_read
+    # `* 1024` compensates for the `divide 1024` in the VM UI template configuration, which converts bytes to KB for OTEL/Prometheus Node Exporter metrics.
+    exp: vfs_dev_read * 1024
+  - name: disk_written
+    # `* 1024` compensates for the `divide 1024` in the VM UI template configuration, which converts bytes to KB for OTEL/Prometheus Node Exporter metrics.
+    exp: vfs_dev_write * 1024
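The ISO-8601 duration comment repeated in the OTel rule files above documents the window strings passed to `rate()` and `irate()` (e.g. `PT1M`). As a standalone sketch — `duration_seconds` is a hypothetical helper for illustration, not part of SkyWalking OAP, and it covers only the simple `PTnHnMnS` forms used in these rules — the windows can be converted to seconds like so:

```python
import re

def duration_seconds(text: str) -> float:
    # Hypothetical helper: parse simple ISO-8601 durations such as 'PT1M'
    # or 'PT20.345S' into seconds. Day-based and negative forms from the
    # comment ('P2D', '-P6H3M', ...) are deliberately not handled here.
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", text)
    if m is None or text == "PT":
        raise ValueError(f"unsupported duration: {text!r}")
    hours, minutes, seconds = (float(g) if g else 0.0 for g in m.groups())
    return hours * 3600 + minutes * 60 + seconds

print(duration_seconds("PT1M"))       # 60.0
print(duration_seconds("PT20.345S"))  # 20.345
print(duration_seconds("PT10H"))      # 36000.0
```

So the `PT1M` argument seen in expressions like `rate('PT1M')` denotes a one-minute window.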