
Conversation

@illwieckz
Member

@illwieckz illwieckz commented Feb 8, 2026

Detect Moore Threads vendor and disable SSAO on buggy Moore Threads driver.
Detect Moore Threads hardware and disable ARB_texture_barrier on buggy Moore Threads driver.

Also reword some things here and there.

Moore Threads is a Chinese vendor of gaming GPUs similar to AMD/Intel/Nvidia, targeting the gaming PC market with drivers for both Linux and Windows.

I'll open an issue for the SSAO bug so we can track the status of it and see if I can report it to Moore Threads.

For now this makes the high and ultra presets work out of the box.

There is also a geometry bug similar to the Intel one (#909, #1354) that only happens when the camera is perfectly aligned with some axis, but it is almost invisible when playing and doesn't make the game annoying to look at, unlike the SSAO bug. I'll also open an issue for that.

This was tested on an MTT S80:

Edit: the issue is now there:

@slipher
Member

slipher commented Feb 8, 2026

Does the SSAO work if you set `r_readonlyDepthBuffer 2`?

@illwieckz
Member Author

illwieckz commented Feb 8, 2026

> Does the SSAO work if you set `r_readonlyDepthBuffer 2`?

Correct.

@illwieckz
Member Author

illwieckz commented Feb 8, 2026

@slipher when we detect this driver, should we declare `textureBarrierAvailable` to be false (broken), or should we declare `usingReadonlyDepth` to be 2 (always)?

@slipher
Member

slipher commented Feb 8, 2026

> @slipher when we detect this driver, should we declare `textureBarrierAvailable` to be false (broken), or should we declare `usingReadonlyDepth` to be 2 (always)?

Either one would work. We don't have any other cases using texture barrier, so it's hard to say whether the brokenness would generalize to other uses.

The important thing is to respect the semantics of `r_readonlyDepthBuffer`: only with the value 1 should the engine decide whether to use it; 0 should force it to be disabled and 2 should force it to be used. If you want to predict that texture barriers are broken in general, declaring them unavailable is a good idea and will trigger the desired semantics. Or if you don't, the logic can go in the `switch ( r_readonlyDepthBuffer.Get() )` block, inside `case 1`. (The AMD + bindless textures case also ought to go in `case 1` but I missed that in review.)
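For reference, a minimal sketch of those semantics, reusing the `textureBarrierAvailable` and `usingReadonlyDepth` names from this thread; the `driverWorkaroundNeeded` flag is hypothetical and this isn't the engine's exact code:

```cpp
// Sketch of the r_readonlyDepthBuffer cvar semantics described above:
// 0 = force off, 1 = the engine decides, 2 = force on.
switch ( r_readonlyDepthBuffer.Get() )
{
	case 0:
		usingReadonlyDepth = false;
		break;
	case 1:
		// Driver workarounds (Moore Threads, AMD + bindless textures) belong
		// here, so that the values 0 and 2 still override them.
		usingReadonlyDepth = textureBarrierAvailable && !driverWorkaroundNeeded;
		break;
	case 2:
		usingReadonlyDepth = true;
		break;
}
```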

@illwieckz
Member Author

illwieckz commented Feb 8, 2026

Semantically, can we consider `textureBarrier` to be available but broken in either the Moore Threads or the AMD case?

@slipher
Member

slipher commented Feb 9, 2026

Someone says that texture barriers "don't make sense" for a tiled GPU architecture in general. So for the Moore Threads case it seems good to declare the texture barrier extension unavailable. This is distinct from my AMD GPU, which has the right architecture to implement them and sometimes works, but fails in a specific configuration.
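For context, the pattern the extension enables looks like this (a simplified sketch with hypothetical helper names, not the engine's actual code): one pass writes a texture, a later pass samples that same texture, and the barrier in between makes the writes visible. A tiled GPU defers rasterization and renders tile by tile, so a mid-frame "everything drawn so far" point doesn't map well to the hardware.

```cpp
// Simplified sketch of an ARB_texture_barrier use (core since OpenGL 4.5):
drawDepthPrepass(); // hypothetical helper: writes the depth texture
glTextureBarrier(); // makes prior writes visible to subsequent reads
drawSSAOPass();     // hypothetical helper: samples that same depth texture
```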

@illwieckz illwieckz changed the title Detect Moore Threads vendor and disable SSAO on buggy Moore Threads driver Detect Moore Threads hardware and disable ARB_texture_barrier Feb 10, 2026
@illwieckz
Member Author

Good to know! So a tiled GPU architecture cannot claim any OpenGL 4.5 support?

Now the code simply disables ARB_texture_barrier when Moore Threads hardware is detected, even when the driver provides the extension.
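The detection is along these lines (a sketch reusing the `textureBarrierAvailable` name from above; the exact matched string, and whether it comes from `GL_VENDOR` or `GL_RENDERER`, are assumptions rather than the committed code; assumes GL headers and `<cstring>` are included):

```cpp
// Sketch: detect Moore Threads hardware from the GL vendor string and
// pretend ARB_texture_barrier isn't there since the implementation is broken.
const char* vendor = reinterpret_cast<const char*>( glGetString( GL_VENDOR ) );

if ( vendor && strstr( vendor, "Moore Threads" ) )
{
	textureBarrierAvailable = false;
}
```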

SSAO works.

@illwieckz
Member Author

That should now be ready to merge.

@illwieckz
Member Author

illwieckz commented Feb 10, 2026

> Good to know! So a tiled GPU architecture cannot claim any OpenGL 4.5 support?

To answer myself: I guess a driver for a tiled GPU can probably implement some slow emulation, or provide an implementation that defeats the optimizations brought by the tiled architecture, just to achieve 4.5 compliance.

The thing is that OpenGL guarantees that a feature is implemented, not that it is efficient or meaningful on the given hardware.

So considering it a bug to provide the feature on hardware that is not made for it doesn't look right. We'd better disable the feature because 1. the emulated implementation is buggy, or 2. we know it's not the fastest path on that hardware even if the emulation gets fixed one day.

As an example, some early Intel GMA chips provide OpenGL 2.1 on Linux but only 1.x on Windows because Mesa emulated some parts, and that is fine: providing the features makes software work out of the box, which can be better than not running at all. Mesa also did that with the VideoCore IV. One good example is desktop compositors being satisfied. This is especially true when the software requires some features from a given OpenGL version but will not use the others, including the emulated ones.

Here the same scenario happens with more modern hardware, the MTT S80. And I guess an OpenGL 4.5+ driver on Apple AGX would do the same: provide all the features, with some of them being slow or non-optimal because they don't match the hardware design.

So it's good to identify the hardware and select a different code path, whether the emulation is buggy or not.

I'll reword the comment in the code, but the logic now looks ready.

@illwieckz
Member Author

> I'll reword the comment in the code, but the logic now looks ready.

Done.

@slipher
Member

slipher commented Feb 11, 2026

LGTM

@illwieckz illwieckz merged commit 1672200 into master Feb 11, 2026
9 checks passed
@illwieckz illwieckz deleted the illwieckz/mtt branch February 11, 2026 02:59