[ET-VK][qconv] Add apply_relu support to q8ta conv operators #17506

Merged
meta-codesync[bot] merged 3 commits into gh/SS-JIA/434/base from gh/SS-JIA/434/head on Feb 20, 2026
Conversation

@SS-JIA
Contributor

@SS-JIA SS-JIA commented Feb 17, 2026

Stack from ghstack (oldest at bottom):

The quantized convolution pattern detector correctly identifies ReLU nodes between conv output and the output quantize node, but the pattern replacement did not pass this information to the fused q8ta operator. When the pattern replaced dequant → conv → relu → quant with q8ta_conv2d, the relu node was removed from the graph but its effect was not preserved. This silently removed all conv-relu non-linearity from int8 quantized models.

Add an apply_relu parameter throughout the full pipeline:

  • Custom op schemas and reference implementations (custom_ops_lib.py)
  • Pattern replacement (quantized_convolution.py)
  • C++ dispatch logic extracts apply_relu and passes it as a spec constant (Q8taConv2d.cpp, Q8taConv2dDW.cpp, Q8taConv2dPW.cpp, Q8taConv2dIm2Col.cpp)
  • GLSL shaders apply conditional max(value, 0) after dequantization and before requantization (q8ta_conv2d.glsl, q8ta_conv2d_dw.glsl, q8ta_conv2d_pw.glsl)
  • Test operator wrappers updated with proper legacy path handling (TestQ8taConv2d.cpp)

Differential Revision: D93511632
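To make the bug concrete, here is a minimal NumPy sketch of the fused-op semantics the description implies (dequantize → convolve → optional ReLU → requantize). The function name, the stride-1/no-padding loop, and all shapes are illustrative assumptions, not the actual schema in custom_ops_lib.py:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Per-tensor affine quantization to int8 (round-half-to-even, clamped).
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

def q8ta_conv2d_reference(q_in, in_scale, in_zp, weight, bias,
                          out_scale, out_zp, apply_relu):
    """Illustrative fused conv (stride 1, no padding): dequantize the int8
    input, convolve in float, optionally apply ReLU, then requantize.
    Mirrors the fused pattern dequant -> conv -> (relu) -> quant."""
    x = dequantize(q_in, in_scale, in_zp)          # (C_in, H, W)
    c_out, c_in, kh, kw = weight.shape
    h_out = x.shape[1] - kh + 1
    w_out = x.shape[2] - kw + 1
    y = np.zeros((c_out, h_out, w_out), dtype=np.float32)
    for oc in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                patch = x[:, i:i + kh, j:j + kw]
                y[oc, i, j] = np.sum(patch * weight[oc]) + bias[oc]
    if apply_relu:
        y = np.maximum(y, 0.0)  # the step that was silently dropped
    return quantize(y, out_scale, out_zp)
```

With `apply_relu=False`, negative conv outputs survive requantization unchanged, which is exactly the behavior the fix prevents when the original graph contained a relu node.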

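A note on the shader change: applying `max(value, 0)` in the dequantized domain before requantization is (assuming a positive output scale) equivalent to clamping the requantized int8 result at the output zero point. The values and quantization parameters below are hypothetical, chosen only to check that equivalence:

```python
import numpy as np

def requantize(y, scale, zp):
    # Per-tensor affine requantization to int8.
    return np.clip(np.round(y / scale) + zp, -128, 127).astype(np.int8)

# Hypothetical float accumulator values and output quant params.
y = np.array([-3.0, -0.2, 0.0, 0.7, 5.0], dtype=np.float32)
scale, zp = 0.05, 10

# ReLU in the float domain, then requantize (what the shaders do) ...
relu_then_quant = requantize(np.maximum(y, 0.0), scale, zp)
# ... matches requantizing first, then clamping at the output zero point.
quant_then_clamp = np.maximum(requantize(y, scale, zp), zp).astype(np.int8)
```

Either formulation would restore the dropped non-linearity; the PR takes the float-domain `max` since it slots in between the shader's existing dequantize and requantize steps.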
@pytorch-bot

pytorch-bot bot commented Feb 17, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17506

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures, 1 Pending, 3 Unrelated Failures

As of commit 22ac237 with merge base 7b843e4:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e., would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track of and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example:
`@pytorchbot label "release notes: none"`

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ssjia and others added 2 commits February 18, 2026 13:02
meta-codesync[bot] merged commit 2bda7d8 into gh/SS-JIA/434/base on Feb 20, 2026
170 of 178 checks passed
meta-codesync[bot] deleted the gh/SS-JIA/434/head branch on February 20, 2026 01:13
SS-JIA pushed a commit that referenced this pull request Feb 20, 2026
Pull Request resolved: #17506

ghstack-source-id: 342806070
@exported-using-ghexport

Differential Revision: [D93511632](https://our.internmc.facebook.com/intern/diff/D93511632/)

Labels

CLA Signed, fb-exported, meta-exported (the CLA Signed label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed)


2 participants