2007-03-25

 

Blog moved

I moved my blog to my own website. It is now at a new URL: aras-p.info/blog. If you have used the FeedBurner feed to read it, everything should point to the new blog now. If you've just been browsing the website, or have used Blogger's builtin feed, go to the new URL from now on. This blog will not be updated anymore.

2007-03-22

 

ARB_vertex_buffer_object is stupid

OpenGL vertex buffer functionality, I mock thee too! Why couldn't they make the specification simple and clear, and why can't the implementations work as expected?

It started out like this: converting some existing code that generates geometry on the fly. It used to generate everything into in-memory arrays and then Just Draw Them. Probably not the most optimal solution, but that's fine. Of course we can optimize that, right?

So, with all my knowledge of how things used to work in D3D, I start the "I'll just do the same in OpenGL" adventure. Create a single big dynamic vertex buffer and a single big dynamic element buffer; update small portions with glBufferSubData; "discard" the buffer (= glBufferData with a null pointer) when the end is reached; rinse and repeat.
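Here's roughly what that pattern looks like; a minimal C sketch, assuming the GL 1.5 style entry points are available (on Windows they'd come through an extension loader), with buffer sizes and function names made up for illustration:

#include <GL/gl.h>
#include <stddef.h>

#define VB_SIZE (1024*1024)   /* one big dynamic vertex buffer; size made up */

static GLuint s_VB;
static size_t s_Offset;       /* where the next chunk goes */

void InitDynamicVB(void)
{
    glGenBuffers(1, &s_VB);
    glBindBuffer(GL_ARRAY_BUFFER, s_VB);
    /* allocate storage with no data; we'll stream chunks into it */
    glBufferData(GL_ARRAY_BUFFER, VB_SIZE, NULL, GL_STREAM_DRAW);
    s_Offset = 0;
}

/* Append one freshly generated chunk; returns its byte offset for the draw call. */
size_t AppendChunk(const void* data, size_t size)
{
    glBindBuffer(GL_ARRAY_BUFFER, s_VB);
    if (s_Offset + size > VB_SIZE)
    {
        /* "discard": re-specify the whole buffer with a null pointer, so the
           driver can hand out fresh storage instead of stalling on pending draws */
        glBufferData(GL_ARRAY_BUFFER, VB_SIZE, NULL, GL_STREAM_DRAW);
        s_Offset = 0;
    }
    glBufferSubData(GL_ARRAY_BUFFER, (GLintptr)s_Offset, (GLsizeiptr)size, data);
    size_t offset = s_Offset;
    s_Offset += size;
    return offset;
}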

Now, let's for a moment ignore the fact that updating portions of an index buffer does not actually work on Mac OS X... Everything else is fine and it actually works! Except... it's quite a lot slower than just doing the old "render from memory" thing. Ok, must be some OS X specific thing... Nope, on a Windows box with a GeForce 6800GT it is still slower.

Now, there are three things that could have gone wrong: 1) I did something stupid (quite likely), 2) VBOs suck for dynamically updated chunks of geometry (could be... there's no way to update just one chunk without at least one extra memory copy), 3) both me and VBOs are stupid. If I were me, I'd bet on the third option.

What I don't get is: D3D has had a buffer model that is simple to understand and actually works for, like, 6 years now! Why couldn't the ARB_vertex_buffer_object guys just copy that? The world would be a better place! No, instead they only provide a way to map the whole buffer; updating chunks costs an extra memory copy; there are confusing usage parameters (when should I use STREAM and when DYNAMIC?); the performance costs are unclear (when is glBufferSubData faster than glMapBuffer?); etc. And in the end, when an OpenGL noob like me tries to actually make them work - he can't! It's slow!
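For the record, these are the two update paths the extension does give you, sketched with the same assumptions as above (made-up function names, GL 1.5 style entry points); which one is faster when is exactly what's left unclear:

#include <GL/gl.h>
#include <stddef.h>
#include <string.h>

void UpdateChunk_SubData(GLuint vb, size_t offset, const void* data, size_t size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vb);
    /* driver copies out of 'data' - at least one extra memcpy somewhere */
    glBufferSubData(GL_ARRAY_BUFFER, (GLintptr)offset, (GLsizeiptr)size, data);
}

void UpdateChunk_Map(GLuint vb, size_t offset, const void* data, size_t size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vb);
    /* you can only map the whole buffer; there's no "just this range" option */
    void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    if (ptr)
    {
        memcpy((char*)ptr + offset, data, size);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}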

2007-03-17

 

Back from Seattle

Just got back from the MVP Global Summit 2007 in Seattle. Among the usual things, like watching Bill's keynote, meeting other MVPs and the DirectX/XNA guys, and getting a grip on some NDA information, here are some of the other highlights:

Amsterdam airport:
Officer: You speak English sir?
Me: Yeah.
O (takes a look at my passport): Ah, you speak Russian of course!
M: No, not really.
O: But your language is very similar to Russian, right?
M: Hm...
Well, now we know who gets the Linguist of the Year award.

Seattle-Tacoma airport, lady at check-in: "what kind of passport is that?". It also takes her five tries to enter my last name properly from the letters printed in the passport, each time trying to persuade me that I must have changed the ticket date, of course!

Seattle-Tacoma airport, security: "sir, you have been selected for additional screening". Do they randomly select people for that quite involved process? Then why does this "selection" happen immediately after they take a look at my passport?

Random quotes:
Ten minutes' walk is a long distance! Ten minutes of walking distance in the States is a very good reason to buy a car. At least an SUV; preferably a Hummer.
DirectX SDK is the source of all sorts of high frequency goodness.
Sony is always good at announcements.
No? Rumours on the internet? Shock! Horror!

2007-03-03

 

A day well spent (encoding floats to RGBA)

Breaking news: sometimes seemingly trivial tasks take insane amounts of time! I am sure no one knew this before!

So it was yesterday - almost the whole day spent fighting rounding/precision errors when encoding floating point numbers into regular 8 bit RGBA textures. You know, the trivial stuff where you start with
// Pack a [0..1) float into the four 8-bit channels of an RGBA texture.
inline float4 EncodeFloatRGBA( float v ) {
    return frac( float4(1.0, 256.0, 65536.0, 16777216.0) * v );
}
// Reassemble the float from the four channels.
inline float DecodeFloatRGBA( float4 rgba ) {
    return dot( rgba, float4(1.0, 1.0/256.0, 1.0/65536.0, 1.0/16777216.0) );
}
and everything is fine until sometimes, somewhere, there's "something wrong". Must be rounding or quantization errors; or maybe I should use 255 instead of 256; plus optionally add or subtract 0.5/256.0 (or would that be 0.5/255.0?). Or maybe the error is entirely somewhere else, and I'm just chasing ghosts here!
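One way to check whether the error is real or I'm chasing ghosts is a quick CPU-side roundtrip of the same math, with the texture's 8 bit quantization simulated in between; a minimal C sketch with made-up helper names:

#include <math.h>
#include <stdio.h>

/* same frac() as in the shader */
static double frac(double x) { return x - floor(x); }

/* what storing into an 8 bit channel does: round to the nearest 1/255 step */
static double quantize8(double x) { return floor(x * 255.0 + 0.5) / 255.0; }

int main(void)
{
    const double scale[4] = { 1.0, 256.0, 65536.0, 16777216.0 };
    double maxErr = 0.0;
    for (int i = 0; i < 10000; ++i)
    {
        double v = i / 10000.0;

        /* EncodeFloatRGBA, then the texture's quantization */
        double rgba[4];
        for (int c = 0; c < 4; ++c)
            rgba[c] = quantize8(frac(v * scale[c]));

        /* DecodeFloatRGBA */
        double decoded = rgba[0] + rgba[1]/256.0 + rgba[2]/65536.0 + rgba[3]/16777216.0;

        double err = fabs(decoded - v);
        if (err > maxErr) maxErr = err;
    }
    printf("max roundtrip error: %g\n", maxErr);
    return 0;
}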

What would you do then? Why, of course, build an Encoding Floats Into Textures Studio 2007! (don't tell me it's not a great idea for a commercial software package! game studios would pay insane amounts of money for a tool like this!) The images here are exactly that - render into a texture, encoding the UV coordinates as RGBA, then read from that texture, displaying the RGBA values and the error from the expected value in some weird way. Turns out image postprocessing filters in Unity are a pretty good tool for all this. Yay!

Sometimes, in situations like this, I figure out that graphics hardware still leaves a lot to be desired. This last image shows some calculations that depend only on the horizontal UV coordinate, so they should produce a purely vertical pattern (sans the part at the bottom, which is expected to be different). Heh, you wish!