Detect Moore Threads hardware and disable ARB_texture_barrier #1890
Conversation
Does the SSAO work if you set …

Correct.
@slipher should we test whether the driver declares that …
Either one would work. We don't have any other cases using texture barrier, so it's hard to say whether the brokenness would generalize to other uses. The important thing is to respect the semantics of …
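As a minimal sketch of what respecting those semantics could look like at the call site, assuming a hypothetical `glConfig.hasTextureBarrier` flag and hypothetical SSAO entry points (none of these names are from the actual engine code):

```cpp
// Hypothetical sketch: only use ARB_texture_barrier when it is considered
// available, and otherwise fall back to a path that never reads and writes
// the same texture within one pass.
if ( glConfig.hasTextureBarrier )
{
    // glTextureBarrier() makes prior writes to bound textures visible to
    // subsequent reads within the same rendering pass (ARB_texture_barrier).
    glTextureBarrier();
    RenderSSAOInPlace();       // hypothetical single-target SSAO path
}
else
{
    RenderSSAOWithPingPong();  // hypothetical fallback using two render targets
}
```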
Semantically, can we consider …
Someone says that texture barriers "don't make sense" for a tiled GPU architecture in general. So for the Moore Threads case it seems good to declare the texture barrier extension unavailable. This is distinct from my AMD GPU, which has the right architecture to implement it: it sometimes works, but fails in a specific configuration.
Good to know! So a tiled GPU architecture cannot claim any OpenGL 4.5 support? Now the code simply disables SSAO, which works.
Force-pushed from 7ce5ac7 to 359f9c9.
That should now be ready to merge.
To answer myself, I guess a driver for a tiled GPU can probably implement some slow emulation, or provide implementations that simply defeat the optimizations brought by the tiled architecture, just to provide 4.5 compliance. The thing is that OpenGL guarantees that a feature is implemented, not that it is efficient or meaningful on the given hardware. So considering it a bug to provide the feature on hardware that is not made for it doesn't look right. We'd better disable the feature because 1. the emulated implementation is buggy, or 2. we know it's not the fastest path on that hardware even if they fix the emulation one day.

As an example, some early Intel GMA chips provide OpenGL 2.1 on Linux but only 1.x on Windows because Mesa emulated some parts, and that is fine: providing the features makes software work out of the box, which can be better than not running at all. Mesa also did that with the VideoCore IV. A good example is desktop compositors being satisfied. This is especially true when the software requires some features from a given OpenGL version but will not use the others, including the emulated ones. Here the same scenario happens with more modern hardware, namely the MTT S80. And I guess a 4.5+ OpenGL driver on Apple AGX would do the same: provide all the features, with some of them being slow or non-optimal because they don't match the hardware design.

So it's good to identify the hardware to select a different code path, whether the emulation is buggy or not. I'll reword the comment in the code, but the logic now looks ready.
Force-pushed from 359f9c9 to ef49911.
Done.

LGTM
Detect Moore Threads hardware and disable ARB_texture_barrier on buggy Moore Threads driver.
Also reword some things around it.
Moore Threads is a Chinese vendor of gaming GPUs, similar to AMD/Intel/Nvidia, targeting the gaming PC market, with drivers for both Linux and Windows.
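To illustrate the approach, here is a rough sketch of the kind of detection involved (assumed function and flag names only, not the engine's actual code): look at the GL_VENDOR / GL_RENDERER strings reported by the driver, then stop treating ARB_texture_barrier as available on that hardware.

```cpp
#include <GL/gl.h>   // assumption: glGetString() is provided by the GL headers/loader in use
#include <cstring>

// Returns true when the driver identifies itself as Moore Threads hardware.
// The "Moore Threads" / "MTT" substrings are assumptions about what the
// driver actually reports in its identification strings.
static bool IsMooreThreadsHardware()
{
    const char* vendor   = reinterpret_cast<const char*>( glGetString( GL_VENDOR ) );
    const char* renderer = reinterpret_cast<const char*>( glGetString( GL_RENDERER ) );

    auto contains = []( const char* haystack, const char* needle ) {
        return haystack != nullptr && std::strstr( haystack, needle ) != nullptr;
    };

    return contains( vendor, "Moore Threads" ) || contains( renderer, "MTT" );
}

// Called once at renderer initialization, after the GL context exists.
// hasTextureBarrier is a hypothetical capability flag used by the renderer.
void ApplyMooreThreadsWorkarounds( bool& hasTextureBarrier )
{
    if ( IsMooreThreadsHardware() )
    {
        // The driver advertises ARB_texture_barrier, but the feature is broken
        // on this hardware, so pretend the extension is not available.
        hasTextureBarrier = false;
    }
}
```

The workaround is keyed on the hardware/driver identification rather than on a version or extension-string check because the driver does advertise the extension; it just doesn't work correctly there.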
I'll open an issue for the SSAO bug so we can track the status of it and see if I can report it to Moore Threads.
For now this makes the `high` and `ultra` presets work out of the box.

There is also a geometry bug similar to the Intel one (#909, #1354) that only happens when the camera is perfectly aligned on some axis, but it is almost invisible when playing and doesn't make the game look annoying, unlike the SSAO bug. I'll also make an issue for that.
This was tested on MTT S80:
Edit: the issue is now there: