Question about internal resolution

Issue:
Hello,
I wanted to test some games to see the new improvements, and now I'm facing a strange issue.
By default my resolution was scaled by 4x. With that I could get 22-23 FPS in the game I was testing (Professor Layton). To see if I could get more, I tried rendering at native resolution and my FPS went up to 28-29, which was very good. But here comes my problem: I tried rendering at 2x to see if I could get something like ~25 FPS, but the performance was the same as at 4x. Then I tried 8x and 9x, but again the performance was the same. So my question is: why do I get a huge performance boost at native resolution but constant performance at all the other resolutions?
Oh, and sorry for my English, I'm French and I still need to improve.
System Information

  • Operating System: Windows 10
  • CPU: Intel Core i7-3770K
  • GPU: NVIDIA GTX 1080 Ti
  • Citra Version (found in title bar): HEAD-a709e65
  • Game: Professor Layton, but I don't know if that matters here.

Because when Citra has to render something in software, it has to resize its output to match the 3DS framebuffer; this obviously doesn't need to happen if it's already rendering at the 3DS native resolution. I might've gotten the terminology mixed up, but that's the general idea.
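Roughly, the extra step looks like this. This is only a minimal sketch of the idea, not Citra's actual code; `Frame`, `RenderFrame`, `DownscaleToNative`, and `EmitFrame` are names I made up for illustration:

```cpp
#include <cstdint>
#include <vector>

// Made-up frame type: just a pixel buffer with dimensions.
struct Frame {
    int width = 0;
    int height = 0;
    std::vector<uint32_t> pixels;  // RGBA8, row-major
};

constexpr int kNativeWidth = 400;   // 3DS top screen
constexpr int kNativeHeight = 240;

// Stand-in for the renderer: produce a frame at the upscaled size.
Frame RenderFrame(int width, int height) {
    return Frame{width, height,
                 std::vector<uint32_t>(size_t(width) * height)};
}

// Nearest-neighbour downscale back to the 3DS framebuffer size.
Frame DownscaleToNative(const Frame& src) {
    Frame dst{kNativeWidth, kNativeHeight,
              std::vector<uint32_t>(size_t(kNativeWidth) * kNativeHeight)};
    for (int y = 0; y < dst.height; ++y)
        for (int x = 0; x < dst.width; ++x)
            dst.pixels[size_t(y) * dst.width + x] =
                src.pixels[size_t(y * src.height / dst.height) * src.width +
                           (x * src.width / dst.width)];
    return dst;
}

// Per-frame path: the extra resize pass runs only when scale > 1,
// which is where a native-vs-upscaled FPS gap would come from.
Frame EmitFrame(int scale) {
    Frame frame = RenderFrame(kNativeWidth * scale, kNativeHeight * scale);
    if (scale != 1)
        frame = DownscaleToNative(frame);
    return frame;
}

int main() {
    Frame f = EmitFrame(4);  // renders at 1600x960, then resizes to 400x240
    (void)f;
}
```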

This is what I thought at the beginning, and it explains the FPS gap between native and 2x. BUT it does not explain why the performance is the same no matter the scale. If there is no gap between 2x and 8x, then there shouldn't be any between 2x and native, unless I misunderstood something?

Suppose you have to multiply a number by 67367. That takes a lot of time when done by hand, doesn't it?
Next you multiply the same number by 78979 (or any other 5-digit number). Does it take more time? It shouldn't, at least not by much.

The thing is, you are running the same algorithm. So is Citra. It doesn't care whether it has to scale up 2x or 2000x; to it, it is only an algorithm (a really complex one, which takes a lot of time), and by the Fundamental Law of Computations:

Fundamental Law of Computations

Running an algorithm with the exact same initial values and constants in a fixed environment (one which doesn't change over time) will repeatedly give the same result in the same time.

So, scaling up 2000x doesn't mean 1000 times as much work as scaling up 2x, contrary to common sense. It only means a very small amount more. So it doesn't affect performance, at least not noticeably.
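To make the analogy concrete, here's a small self-contained sketch (my own illustration, nothing to do with Citra's code) that counts the digit-by-digit steps of schoolbook multiplication. The count depends only on how many digits the operands have, not on which 5-digit multiplier you picked:

```cpp
#include <cstdio>
#include <string>

// Count the elementary single-digit multiplications in schoolbook
// multiplication: one per pair of digits, regardless of their values.
long CountSteps(const std::string& a, const std::string& b) {
    long steps = 0;
    for (char da : a)
        for (char db : b) {
            (void)((da - '0') * (db - '0'));  // one single-digit multiply
            ++steps;
        }
    return steps;
}

int main() {
    // Same first operand, two different 5-digit multipliers:
    std::printf("x * 67367: %ld steps\n", CountSteps("123456789", "67367"));
    std::printf("x * 78979: %ld steps\n", CountSteps("123456789", "78979"));
    // Both print 45 steps (9 digits x 5 digits): same amount of work.
}
```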


Simple answer: it's because Citra uses your GPU for that. Since Citra doesn't use the GPU for much else, there's plenty of headroom for higher internal resolutions.

Well, it does. But the explanation for that is inside my post.

Well, I'm OK with Hexagon12's answer (there are still some things that seem weird, but I won't pursue the matter).
But Adityarup_Laha, your answer supposes that the compute time for a 200x200 image would be roughly the same as for a 20000x20000 image, and I don't think I can agree with that (or I misread, and I'm very sorry ^^)

Since all the other answers in here are pretty hand-wavy right now, I'll chime in.

Resolution upscaling is a feature that Citra supports. It works like this: the game uploads textures into the emulated GPU memory space. Citra copies them to the host GPU (i.e. your GPU), rescales the textures to the new size, and marks the memory region in the emulated GPU memory as cached. Whenever the game tries to do something with the textures, the GPU emulation checks whether the memory region is cached, and if it is, it tries to use the methods in the texture cache to emulate what the game is doing. If that fails, it copies the data from the GPU back to CPU memory, runs the game's code on the CPU, and copies the result back up to the GPU the next time it's cached.

This is the major difference between native and 2x. At native, we don't have to rescale the textures back to native resolution; it's just a straightforward copy. But when upscaled, we have to downscale the textures at this point. The cost of that rescaling isn't very different between 2x and 4x when compared to the cost of rescaling at all. That's why there is a big jump from native to 2x, and almost no jump from 2x to 4x. If you kept going higher, to something like 10x for example, you'd notice it's much slower than 4x, because at that point you start putting real load on the GPU; but 2x to 4x isn't that crazy.
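In very rough pseudo-C++, the flow described above looks something like this. It's a deliberately simplified sketch with made-up types and names (`HostGpu`, `UploadTexture`, `OnGameTouchesTexture`, and the address used in `main`); Citra's real texture cache is far more involved:

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// Made-up types for illustration only; not Citra's real structures.
struct Texture {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels;
};

struct HostGpu {
    // Emulated-memory address -> upscaled copy living on the host GPU.
    std::unordered_map<uint32_t, Texture> cache;
};

// Upload path: copy to the host GPU, rescale to the internal
// resolution, and mark the region as cached.
void UploadTexture(HostGpu& gpu, uint32_t addr, const Texture& t, int scale) {
    Texture scaled{t.width * scale, t.height * scale, {}};
    scaled.pixels.resize(size_t(scaled.width) * scaled.height);
    // (actual pixel rescaling elided for brevity)
    gpu.cache[addr] = std::move(scaled);
}

// Hypothetical fast path: try to emulate the game's operation
// directly on the cached GPU copy.
bool TryEmulateOnGpu(Texture&) { return true; }

// Access path: use the cached GPU copy if we can; otherwise fall back
// to the CPU, which costs a GPU->CPU readback (plus a downscale back
// to native size whenever the internal resolution is above 1x).
void OnGameTouchesTexture(HostGpu& gpu, uint32_t addr) {
    auto it = gpu.cache.find(addr);
    if (it != gpu.cache.end() && TryEmulateOnGpu(it->second))
        return;  // handled entirely inside the texture cache
    // Fallback: read back, downscale if upscaled, run the game's code
    // on the CPU, and re-upload to the GPU later.
    gpu.cache.erase(addr);
}

int main() {
    HostGpu gpu;
    Texture t{64, 64, std::vector<uint32_t>(64 * 64)};
    UploadTexture(gpu, 0x10000000, t, 2);  // address is made up
    OnGameTouchesTexture(gpu, 0x10000000);
}
```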

Hope that clears it up a bit. I don't have time to proofread this, so if it's confusing, I apologize in advance :stuck_out_tongue:


Thank you, that explains why I couldn't understand the other answers.
And don't worry, that was clear enough :grinning: