Programming GPUs is fun, but it usually comes with a few headaches. So I decided to share some of my 'GPU adventures', hoping to provide some useful tips and, why not, serve as a starting point for future discussions.
But enough introduction. So here I am today, testing a fairly complicated pixel shader (500+ instructions of ps 3.0 code), comparing its results between a GeForce 6800 and a GeForce 7800. The great thing with the GeForce 7800 is that you double your frame rate 'for free' (of course, 'for free' refers to development time; considering the current price of the beast, as far as I am concerned, it is well worth it :-).
The good news: the frame rate is high! The bad news: the result has artifacts on the GeForce 7800. "So," I ask myself, "by which miracle does the same code, on two almost identical cards (at least from a functionality point of view), using the same driver and the same version of everything else, end up producing a different result?" And of course, I tend to suspect something is wrong in my code (a good assumption in most cases - remember the last time you thought you had found a compiler bug?). I must be doing something wrong that results in undefined behaviour, which just happens to look fine on the GeForce 6800 and not on the GeForce 7800. Or maybe the internal precision changed? (Do not laugh, I have some stories! But that is another subject entirely.)
Well, as it turned out - after a non-negligible amount of time spent looking for bugs in my code (which was not a total waste of time...) - the driver settings were different. Yes, the ones in the Windows properties panel. I never really paid much attention to these, apart from antialiasing and vertical sync. And let me tell you: that was a big mistake! Here you find overrides for such things as antialiasing (so far so good), anisotropic filtering (!), and negative LOD bias clamp (!!). (Now I know why negative LOD bias did not work.) All were set to "Application controlled" on the GeForce 7800 and "Off" on the GeForce 6800. Turning anisotropic filtering off saved my day. Now, what I still don't understand is how "application controlled" meant "On" in my case. None of my (D3D) samplers are set to anisotropic filtering (most of them do not use mipmaps anyway)... If someone happens to know, please feel free to leave a comment here.
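Since then, I set every sampler state explicitly instead of trusting the defaults. A minimal sketch of what that looks like in D3D9/C++ - the function name and the sampler index are mine, just for illustration:

    #include <d3d9.h>

    // Force plain point sampling on one sampler, leaving nothing to the
    // driver's defaults. Sampler index 0 is just an example.
    void ForcePointSampling(IDirect3DDevice9* device)
    {
        device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
        device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
        device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE); // no mipmapping
        device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 1);        // no anisotropy
        float bias = 0.0f; // D3DSAMP_MIPMAPLODBIAS takes a float stored in a DWORD
        device->SetSamplerState(0, D3DSAMP_MIPMAPLODBIAS, *(DWORD*)&bias);
    }

Of course, as today's adventure shows, the control panel can still override some of this behind your back - but at least the application side is unambiguous.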
So, anyway, my point is: go into these settings now and turn off everything you don't need right away - especially if you are using shaders for image processing purposes. The scariest one (imho) goes to ATI, since you can actually change the way the Direct3D rasterizer treats texture coordinates to make it match the OpenGL rasterizer. If you use HLSL to do image processing, you probably understand what a nightmare it would be if someone happened to run your code with this setting on! (In short, texture samples would no longer map one-to-one onto the render target's pixels.) Note that I am not complaining about having these settings - it's great to have flexibility - I just hope there is a way to check and enforce all of them from the application. As demonstrated by today's adventure, that does not always seem to be the case.
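For the record, Direct3D 9's own convention already requires some care here: pixel centers sit at integer screen coordinates while texel centers sit at (i + 0.5) / size, so a naive fullscreen quad does not sample texels one-to-one to begin with. A minimal sketch of the usual half-pixel offset fix, assuming w and h are the render target dimensions (pre-transformed vertices, so no vertex shader involved):

    // Fullscreen quad with the D3D9 half-pixel offset; w and h are assumed
    // to be the render target dimensions. Shifting the quad by half a pixel
    // lines texel centers up with pixel centers for a 1:1 mapping.
    struct Vertex { float x, y, z, rhw, u, v; }; // D3DFVF_XYZRHW | D3DFVF_TEX1
    Vertex quad[4] = {
        { -0.5f,    -0.5f,    0.0f, 1.0f, 0.0f, 0.0f },
        { w - 0.5f, -0.5f,    0.0f, 1.0f, 1.0f, 0.0f },
        { -0.5f,    h - 0.5f, 0.0f, 1.0f, 0.0f, 1.0f },
        { w - 0.5f, h - 0.5f, 0.0f, 1.0f, 1.0f, 1.0f },
    };
    // drawn with device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(Vertex))

If a driver setting silently changes that convention to match OpenGL, everything built on this assumption breaks.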