A day well spent (encoding floats to RGBA)

Breaking news: sometimes seemingly trivial tasks take insane amounts of time! I am sure no one knew this before!

So it was yesterday: almost the whole day spent fighting rounding/precision errors while encoding floating point numbers into regular 8-bit RGBA textures. You know, the trivial stuff where you start with
inline float4 EncodeFloatRGBA( float v ) {
    return frac( float4(1.0, 256.0, 65536.0, 16777216.0) * v );
}
inline float DecodeFloatRGBA( float4 rgba ) {
    return dot( rgba, float4(1.0, 1.0/256.0, 1.0/65536.0, 1.0/16777216.0) );
}
and everything is fine until sometimes, somewhere, there's "something wrong". Must be rounding or quantization errors; or maybe I should use 255 instead of 256; plus optionally add or subtract 0.5/256.0 (or would that be 0.5/255.0?). Or maybe the error is entirely somewhere else, and I'm just chasing ghosts here!
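For what it's worth, the round trip can be reproduced on the CPU. Here's a small Python sketch (helper names are made up) that mimics the shader code above, plus the 8-bit rounding that storing into a texture would roughly apply, and prints the resulting error:

```python
import math

# CPU mimic of the shader snippet above; these helper names are hypothetical.
def encode_float_rgba(v):
    # frac( float4(1, 256, 65536, 16777216) * v )
    return [math.modf(v * s)[0] for s in (1.0, 256.0, 65536.0, 16777216.0)]

def quantize8(rgba):
    # roughly what storing into an 8-bit texture does: round to 1/255 steps
    return [round(c * 255.0) / 255.0 for c in rgba]

def decode_float_rgba(rgba):
    # dot( rgba, float4(1, 1/256, 1/65536, 1/16777216) )
    scales = (1.0, 1.0 / 256.0, 1.0 / 65536.0, 1.0 / 16777216.0)
    return sum(c * s for c, s in zip(rgba, scales))

v = 0.3
err = abs(decode_float_rgba(quantize8(encode_float_rgba(v))) - v)
print(err)  # on the order of 1e-3: far bigger than float precision
```

Whether actual hardware rounds exactly like `quantize8` is precisely the open question below, so treat this as an idealized model, not the GPU's behavior.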

What would you do then? Why, of course, build an Encoding Floats Into Textures Studio 2007! (don't tell me it's not a great idea for a commercial software package! game studios would pay insane amounts of money for a tool like this!) The images here are exactly that - render into a texture, encoding UV coordinate as RGBA, then read from that texture, displaying RGBA and error from the expected value in some weird way. Turns out image postprocessing filters in Unity are a pretty good tool to do all this. Yay!

Sometimes in situations like this I figure out that graphics hardware still leaves a lot to be desired. This last image shows some calculations that depend only on the horizontal UV coordinate, so they should produce some purely vertical pattern (sans the part at the bottom, that is expected to be different). Heh, you wish!

Have you tried exactly the same code on the CPU? My purely uneducated guess would be that such operations screw up precision anyway...
... gave it a bit of thought after a previous (offline) discussion. Corrected code:

// rgba_encoded_b = tex2D(s)
// x = (a >= rgba_encoded_b)

float4 t = sign( EncodeFloatRGBA(a) - rgba_encoded_b );
x = (dot( float4(8,4,2,1), t ) >= 0);
Does not quite work. I think this code depends on sign() returning zero in case arguments are equal; otherwise only the first component (dotted with 8) affects the result.
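To make the dependence explicit, here's a quick Python sketch (hypothetical names) of why that dot trick is a lexicographic compare, and why it only works if sign() returns zero for equal channels:

```python
def sign(x):
    # -1, 0 or +1, like HLSL sign()
    return (x > 0) - (x < 0)

def ge_lexicographic(a4, b4):
    # 8 > 4 + 2 + 1, so the first channel where a and b differ decides;
    # equal channels drop out only because sign() returns 0 for them.
    t = [sign(a - b) for a, b in zip(a4, b4)]
    return sum(w * s for w, s in zip((8, 4, 2, 1), t)) >= 0

# First channels equal -> the second channel decides:
print(ge_lexicographic((0.5, 0.2, 0.0, 0.0), (0.5, 0.1, 0.9, 0.9)))  # True
# If no channel ever compares exactly equal, every sign() is +/-1 and
# 8 - (4 + 2 + 1) = 1 > 0: only the first channel can affect the result.
print(ge_lexicographic((0.4, 0.9, 0.9, 0.9), (0.5, 0.0, 0.0, 0.0)))  # False
```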

Now, one argument comes from an 8-bit texture, while the other is encoded on the fly. So basically they are never equal (at least I couldn't make them equal). What would be needed here is some sort of QuantizeFloatAsYouWouldDoWhenWritingIntoATexture() function :)
I suppose you can simulate quantization by scaling and clamping. For example, something along these lines:

// t = sign( EncodeFloatRGBA(a) - rgba_encoded_b );

const float INV_EPSILON = 127.0; // or 255?
float4 d = EncodeFloatRGBA(a) - rgba_encoded_b;
float4 t = (saturate( d * INV_EPSILON + 0.5 ) - 0.5) * 2; // saturate = clamp to [0,1]
...but it still won't be exactly like the quantized version. For example, it looks like the Radeon X1600 quantizes like this: x*255.0 - 0.55781; the last number is approximate. No idea why it's a minus; I would think it should be a plus, but experiments tell otherwise.
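Simulated on the CPU, the clamp-to-[0,1] pseudo-sign above shows how close it gets to real sign(): differences of at least one 8-bit step map to exactly +/-1, while smaller differences land on a linear ramp near zero. A Python sketch of the assumed semantics:

```python
def saturate(x):
    # GPU-style clamp to [0, 1]
    return min(max(x, 0.0), 1.0)

def pseudo_sign(d, inv_epsilon=255.0):
    # the clamp-based pseudo-sign from the comment above
    return (saturate(d * inv_epsilon + 0.5) - 0.5) * 2.0

print(pseudo_sign(0.01), pseudo_sign(-0.01))  # +/-1, just like sign()
print(pseudo_sign(1e-4))  # a small value on the ramp, not +/-1
```

So the sign-of-difference test still behaves correctly for differences of a texel step or more; it's only the tiny sub-quantum differences (the very case under discussion) that end up in the dead zone.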

So my thinking is that the chances of getting quantized-float-myself and float-coming-from-8bit-texture equal are pretty slim.
Hmmm... I wonder if you would get the same quantization by doing 'dummy' texture reads from a 256x256 texture filled with f(u,v,0,1)... but even if that worked, it's of course 2 additional texture reads per pixel, which is basically crap :(