Citra renderer: Direct3D

In the near future, can we expect that Citra will also be able to use Direct3D as a renderer, instead of OpenGL or the software renderer?


What’s better about Direct3D? We rarely hear any actual arguments why Direct3D would be better (except for standard marketing phrases).

Glad that Microsoft fanboys are not part of the Citra devs.

opengl/vulkan ftw

The one thing I really hated about DX12 in particular was the Windows 10-only release. And they didn't stop there: new CPU architectures like Kaby Lake from Intel and Ryzen from AMD will no longer be supported on Windows 8 and 7 either.

Microsoft is really trying hard to push their spyware-infested OS onto people.

It is simply easier to code it in OpenGL, due to the 3DS using OpenGL.

Don’t do the Trump: This is absolutely not true. We don’t even emulate the GPU at the API level anyway.

Well, the shader bytecode could be a useful target for the shader JIT. Other than that, mostly more compatibility with old video cards that have better support for Direct3D (Intel, ahem, ahem).

Anyway, the first benefit can be achieved through OpenGL (JITing to GLSL is hard as fuck, though).

TL;DR: There are a few benefits but, in general, they are not worth it. It would be better to focus just on OpenGL than on D3D11/12 support. Probably Vulkan in the future, if Apple decides to support it rather than their shitty Metal API.

Well… Running vertex shaders [if you meant those?] on the GPU is a bad idea (probably).
You will have to fight with different precision on different hardware (which is why I dropped the approach of doing shaders in GLSL for xqemu). I’ve tried hard to get a softfloat subset running in xqemu too, but even the most basic shaders hit the limits on my HD 4000. The performance is also horrible: there is almost no caching, and at the end of the day most drivers will probably recognize the shader complexity and emulate it on the CPU (which is slower than just writing your own JIT for the CPU).
We’d also still require the CPU JIT for geometry shaders in the case of Citra (as OpenGL geometry shaders are more limited and mostly have horrible performance too).
3DS shaders also have branch instructions and loops, which are rather powerful. Chances are not every GPU will support some of the more complex shaders. You’d also have to do more state switches, upload more data to the GPU, etc.
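To illustrate the precision fight: the PICA200 works with reduced-precision floats (commonly documented as a 24-bit format with a 16-bit mantissa), while desktop GPUs compute vertex shaders in 32-bit IEEE floats. A rough sketch of what emulating that narrower mantissa looks like (the exact format details and the truncation-instead-of-rounding here are simplifying assumptions):

```cpp
#include <cstdint>
#include <cstring>

// Truncate a 32-bit float's 23-bit mantissa down to 16 bits, mimicking
// the reduced precision of a PICA-style float24. Exponent-range
// clamping and proper rounding are omitted for brevity.
float to_float24_mantissa(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    bits &= 0xFFFFFF80u;  // drop the low 7 mantissa bits (23 - 16)
    std::memcpy(&f, &bits, sizeof(bits));
    return f;
}
```

A GLSL shader running at full float32 precision keeps the extra mantissa bits, so its results drift away from hardware-accurate values after a few dependent operations; that drift is what a softfloat path (or a CPU JIT with explicit quantization like the above) avoids.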

A useful target for the vertex shader JIT would be a multi-threaded approach, mimicking the actual hardware vertex-units.
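A minimal sketch of that multi-threaded idea, assuming the vertex stream can simply be striped across workers (the real vertex units and the actual shader program are stood in for by a trivial function here):

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical stand-in for the interpreted/JITed PICA vertex program.
static float run_shader(float in) { return in * 2.0f; }

// Process vertices in parallel, one worker thread per simulated
// hardware vertex unit, each taking an interleaved stripe of the stream.
std::vector<float> shade_vertices(const std::vector<float>& in, unsigned units) {
    std::vector<float> out(in.size());
    std::vector<std::thread> workers;
    for (unsigned u = 0; u < units; ++u) {
        workers.emplace_back([&, u] {
            for (std::size_t i = u; i < in.size(); i += units)
                out[i] = run_shader(in[i]);
        });
    }
    for (auto& w : workers) w.join();
    return out;
}
```

This works because vertex shader invocations are independent of each other; anything with cross-invocation state (like geometry shaders) would need a different partitioning.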

A good target for the fragment shader shader-gen would be GLSL and x86 JIT (for the sw-rast) at once.
We still have to rewrite the shader with integer math anyway, so there is certainly an opportunity there.
I’d also like to have a JIT for the sw-rasterizer anyway, and we could share more code, making it harder to break one of the renderers.
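The "one frontend, two backends" idea above can be sketched like this: a tiny shared IR that one backend emits as GLSL source and the other evaluates directly on the CPU (everything here, including the IR shape, is illustrative, not Citra's actual design):

```cpp
#include <string>
#include <vector>

// Toy shared IR: a chain of ops applied to an input x with a constant
// operand. A real shader-gen IR would model registers, swizzles, etc.
struct Inst { char op; float k; };  // op is '+' or '*'

// Backend 1: emit GLSL source for the hardware renderer.
std::string emit_glsl(const std::vector<Inst>& prog) {
    std::string s = "float shade(float x) {\n";
    for (const auto& i : prog)
        s += "  x = x " + std::string(1, i.op) + " " + std::to_string(i.k) + ";\n";
    s += "  return x;\n}\n";
    return s;
}

// Backend 2: evaluate the same IR on the CPU (stand-in for an x86 JIT
// backend used by the software rasterizer).
float run_cpu(const std::vector<Inst>& prog, float x) {
    for (const auto& i : prog) x = (i.op == '+') ? x + i.k : x * i.k;
    return x;
}
```

Because both renderers consume the same IR, a bug in the shared frontend shows up in both at once, which is exactly the "harder to break one of the renderers" property.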

Also, I’d love to move away from D3D and OGL, as I consider them more limited than Vulkan appears to be (never tried it, but I saw some presentations and skimmed over the sample code). We’d also benefit from better driver support with Vulkan (which is part of its design).

I’ve been looking into it lately, and there are a few reasons why the 3DS would be different.

First, I tested fast instructions vs. sanitized muls (I tested instructions like dpps, mulps, etc.) and I have yet to find a game that breaks due to the inaccuracy. So HLE is possible if the speed benefit seems like a good trade-off. Cemu actually does decompile shaders to GLSL and cache them.

Second, PICA branches are easily decompiled in 99% of cases, since they are mostly used when nesting is not possible (most probably there’s a limit on nesting structures within PICA).
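As a sketch of why that decompilation is usually easy: when a conditional jump only ever goes forward over a contiguous block, it reconstructs directly as a structured if/else. The field names and layout below are illustrative, not the real PICA encoding:

```cpp
#include <cstdio>
#include <string>

// Illustrative forward-branch record: when false, skip to `dest`;
// the else part spans `num` instructions starting at `dest`.
struct Branch { unsigned pc, dest, num; };

// Forward branches with both blocks contiguous map 1:1 onto GLSL
// if/else; backward jumps (loops, irreducible flow) need more work.
std::string decompile_if(const Branch& b) {
    if (b.dest <= b.pc) return "";  // backward jump: not a structured if
    char buf[128];
    std::snprintf(buf, sizeof buf,
                  "if (cond) { /* %u..%u */ } else { /* %u..%u */ }",
                  b.pc + 1, b.dest - 1, b.dest, b.dest + b.num - 1);
    return buf;
}
```

The remaining 1% would be branches that don't form such a clean diamond shape, which is where a GLSL decompiler has to fall back to something uglier (or to the CPU JIT).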

Third, caching GPU shaders can be done with the ARB_get_program_binary extension, which is supported by most drivers that support OpenGL 3.0.
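The extension boils down to two calls: dump the driver-compiled program to a blob after the first link, and feed the blob back on later runs. A rough sketch, assuming a current GL 3.0+ context with the extension available (error handling and file I/O omitted):

```cpp
// Saving: after the program has been linked once.
GLint len = 0;
glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);
std::vector<GLubyte> blob(len);
GLenum format = 0;
glGetProgramBinary(program, len, nullptr, &format, blob.data());
// ... write `format` and `blob` to disk, keyed by a hash of the source ...

// Loading on a later run: skips compilation entirely.
GLuint cached = glCreateProgram();
glProgramBinary(cached, format, blob.data(), len);
GLint ok = GL_FALSE;
glGetProgramiv(cached, GL_LINK_STATUS, &ok);
// The driver may reject a stale binary (e.g. after a driver update),
// so a failed link here must fall back to recompiling from source.
```

The fallback path is mandatory: the binary format is driver-specific and carries no compatibility guarantee, which is why the blob is a cache and never the only copy of the shader.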

This does not mean throwing away the current JIT, since there should always be an LLE option. I’m currently working on a recompiler called “Marssel”, in the style of dynarmic. Currently, I’m analyzing the flow of PICA shaders and trying to eliminate dead code and fix stale branching with constant propagation. My first goal is to make a more robust x86/x64 JIT based on a library I suggest you take a look at; it is way more robust than xbyak (it has a decent register allocator for starters, an ARM backend, runtime VEX encoding if available, multi-target JITing (x86 or x64 depending on the current machine), and is lightweight [~200 KB]). After that, I’ll use the same frontend to generate GLSL and/or SPIR-V as well.
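The constant propagation plus dead code elimination mentioned there can be sketched over a toy linear IR (this is a generic illustration of the two passes, not Marssel's actual code):

```cpp
#include <array>
#include <optional>
#include <vector>

// Toy IR: dst = a op b over an 8-register file; op is '+' or '*'.
struct Inst { char op; int dst, a, b; };

// Pass 1: fold instructions whose inputs are compile-time constants.
// Pass 2: drop writes that are never read before the output reg `out`.
std::vector<Inst> optimize(std::vector<Inst> prog,
                           std::array<std::optional<float>, 8>& regs, int out) {
    for (auto& i : prog) {
        if (regs[i.a] && regs[i.b]) {
            regs[i.dst] = (i.op == '+') ? *regs[i.a] + *regs[i.b]
                                        : *regs[i.a] * *regs[i.b];
            i.op = 'k';               // folded: result is now a constant
        } else {
            regs[i.dst].reset();      // runtime value, unknown statically
        }
    }
    // Backward liveness walk: keep only writes someone actually reads.
    std::array<bool, 8> live{};
    live[out] = true;
    std::vector<Inst> kept;
    for (auto it = prog.rbegin(); it != prog.rend(); ++it) {
        if (!live[it->dst]) continue; // dead write: drop it
        live[it->dst] = false;
        if (it->op != 'k') {          // folded instrs need no operands
            live[it->a] = live[it->b] = true;
            kept.push_back(*it);
        }
    }
    return {kept.rbegin(), kept.rend()};
}
```

On a shader whose inputs are all known uniforms, the two passes can collapse the whole program into precomputed register values, which is the best case for a recompiler.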

That was researched and implemented specifically to fix games that broke because of it. (For example, OoT3D failed to render most of the interface before that was fixed.) So it is very much required.

Oh yeah, I checked it again and you are right; it can be worked around, though. Either way, GLSL is not a priority right now.

FTR: I meant caching the vertex shading results, which are way slower to compute now. The continuous re-upload and recomputation of vertex data will be slow (which might not be the case if GLSL is actually faster than the CPU, but even then it’s easier to add a cache on the CPU than it is on the GPU).
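Such a CPU-side result cache is little more than a hash map from input key to shaded output. A minimal sketch (the key and the stand-in shader are hypothetical; a real key would cover the input attributes, shader code, and uniform state):

```cpp
#include <cstdint>
#include <unordered_map>

// Cache shaded vertex results so re-submitted geometry skips
// re-running the (expensive) vertex program.
struct VertexCache {
    std::unordered_map<uint64_t, float> cache;
    int misses = 0;

    // Hypothetical expensive vertex shader stand-in.
    float shade(float in) { ++misses; return in * 2.0f + 1.0f; }

    float shade_cached(uint64_t key, float in) {
        auto it = cache.find(key);
        if (it != cache.end()) return it->second;  // hit: reuse result
        float out = shade(in);
        cache.emplace(key, out);
        return out;
    }
};
```

Doing the same on the GPU would mean round-tripping results through transform feedback or compute buffers, which is why the post argues the CPU-side cache is the easier win.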

Back to topic? We’ve discussed GLSL vertex shaders enough now, I think.
I guess @paulconrad12345 is probably overloaded with too much technical information by now, too. Still, it shows how much thought goes into picking APIs and how things should be implemented, and that there is no ideal solution anywhere.

People with technical understanding will also see that D3D doesn’t solve any of the mentioned issues either, while being a Windows-exclusive solution (meaning we’d need another backend for Linux, Apple devices, and mobile).

By not using DirectX (or Direct3D, whatever), you are cutting off nearly HALF of the computers that could run 3DS games (computers with Intel graphics), including mine!

If your GPU doesn’t support OpenGL’s minimum version for Citra, you probably won’t get playable speeds regardless, even if there were a D3D backend. I have no idea where you are getting that “half” statistic from, but I really doubt it; honestly, it seems like you just have no idea what you’re talking about.


Looking at this, we can see the Intel HD Graphics 2000 and the Intel HD Graphics 3000 take up roughly 25% of the "DirectX 10 GPUs" category. The "DirectX 10 GPUs" category is 8.33% of all GPUs on Steam, so 25% of 8.33% is about 2% of all users.

That’s not even close to the N E A R L Y H A L F that you are claiming need DirectX. Hate to break it to you, but cards without OpenGL 3.3 support (a 10-year-old standard, mind you) are very much in the minority.

(That said, I’m not against removing that requirement, but that means software rendering/rasterizing only, and that’s WAY slower than hardware rendering. And lol, Citra would already be unplayably slow on such a computer even with hardware rendering on anyway.)


If this is allowed (I know I’m necrobumping): overall, D3D is friendlier for those with AMD hardware. OpenGL was always AMD’s weak point compared to Team Green’s. So Vulkan or D3D would be welcome; this MIGHT also be why, for example, I get speed drops when I open a menu in a game (I already opened a post about this, so I’m not explaining it here as well).

I have OpenGL 3.1 on my PC and it says I have no OpenGL 3.1; I have no Vulkan, but I have DirectX 11 and Direct3D 11, lmao.