Lighting techniques

D2X-XL - Descent II update for modern systems with many new features and enhanced graphics.



Post by pATCheS »

Diedel wrote: I also tried to add OpenGL hardware lighting, but OpenGL only supports 8 lights (at least on NVidia and ATI hardware; actually this is up to the driver developer, as OpenGL only demands at least eight), and D2 levels usually have a lot of static lights already, not to mention dynamic lights during fights. So I tried to do this with a vertex shader, but there are such harsh limitations on shader programming that it looks like that will fail too. This is very disappointing. Hardware lighting looks quite a bit better than D2's static and dynamic lighting (particularly the latter). :cry:

I could work around that limitation for static lights, but dynamic lights render that task next to impossible.
The only way to get the lighting to look any better than it already does is to use per-pixel lighting. The OGL hardware lights are per-vertex only unless you use their values in a pixel shader. Regardless, it's still only 8 lights, so it's not really viable. I don't know how you would work around that limitation even with only static lights, short of using blending or combining lights.
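Just to illustrate, evaluating the fixed-function lights per pixel is only a few lines of GLSL. This is an untested sketch, assuming point lights and a vertex shader that passes eye-space position and normal through (eyePos/eyeNormal are made-up names):

Code: Select all

varying vec3 eyePos;     // eye-space position from the vertex shader
varying vec3 eyeNormal;  // eye-space normal from the vertex shader

void main()
{
    vec3 N = normalize(eyeNormal);
    vec3 color = vec3(0.0);
    for (int i = 0; i < 8; i++) {        // all 8 fixed-function lights
        vec3 toLight = gl_LightSource[i].position.xyz - eyePos;
        float d = length(toLight);
        float atten = 1.0 / (gl_LightSource[i].constantAttenuation +
                             gl_LightSource[i].linearAttenuation * d +
                             gl_LightSource[i].quadraticAttenuation * d * d);
        color += gl_LightSource[i].diffuse.rgb
               * max(dot(N, toLight / d), 0.0) * atten;
    }
    gl_FragColor = vec4(color, 1.0);
}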

You could calculate the intensity values per-vertex on the CPU (which D2 already does) and pass them into a vertex/pixel shader pair, which will interpolate the values per-pixel. This wouldn't be terribly difficult to slap on, I would think; however, it wouldn't look all that much better, since the lighting data is still per-vertex (i.e., putting a flare in the middle of a surface will not light it up very well compared to putting it in a corner).
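Something like this pair, roughly (untested; baseTex is a made-up name for the surface texture, and the CPU-computed light value is assumed to arrive as the vertex color):

Code: Select all

// vertex shader: pass the CPU-computed per-vertex light value through
varying vec4 vertLight;

void main()
{
    vertLight      = gl_Color;            // light intensity computed on the CPU
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}

Code: Select all

// fragment shader: the varying arrives already interpolated per pixel
varying vec4 vertLight;
uniform sampler2D baseTex;                // the surface texture

void main()
{
    gl_FragColor = texture2D(baseTex, gl_TexCoord[0].xy) * vertLight;
}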

Since pixel shaders have texture lookups, perhaps you could upload a texture each frame that stores light positions (dynamic lights only is best; use vertex lighting for static lights, for both performance and level-design reasons). You'd need to pack positions into multiple pixels to maintain the needed level of accuracy. If you used four pixels per light position on a 16x16 texture, that'd give you 64 lights to loop through for each pixel. You could pass in the actual number of lights as a parameter (oh, I forget the parameter types... fixed, I think; dunno if anything less than SM3.0 will let you base your loop on a non-constant, though). That's quite a few lights, though you can easily exceed that with more than three or four players in a multiplayer game.

Another thing to consider is light intensity and color. Since light is additive, you could scale the color to an appropriate intensity and store that in a fifth pixel. If you shift the whole color value down a bit or two on the CPU, fill those bits with overbright information, and shift back up by the same amount on the GPU, you can do overbright effects without significant loss in intensity precision (Q3A achieves its overbright effect by tweaking the textures and the display's gamma curve). Maybe using three pixels for position would be sufficient, so you could keep everything in nice 4-byte sets. You'd have to test that, obviously.
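To make the idea concrete, here's a rough, untested fragment shader using the three-pixels-for-position variant (so each light is one nicely aligned group of four texels: x, y, z, color). All the names (lightTex, numLights, posScale, posBias, worldPos, worldNormal) and the falloff formula are made up for illustration:

Code: Select all

varying vec3 worldPos;          // from the vertex shader
varying vec3 worldNormal;
uniform sampler2D lightTex;     // 16x16 RGBA light data texture, GL_NEAREST
uniform int numLights;
uniform float posScale;         // maps decoded [0,1) values back to level units
uniform vec3 posBias;

vec4 lightTexel(int index)
{
    // address texel centers so nearest filtering returns exact values
    float x = mod(float(index), 16.0);
    float y = floor(float(index) / 16.0);
    return texture2D(lightTex, (vec2(x, y) + 0.5) / 16.0);
}

float decode(vec4 p)            // 4 bytes -> one fixed-point value in [0,1)
{
    return dot(p, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}

void main()
{
    vec3 N = normalize(worldNormal);
    vec3 sum = vec3(0.0);
    for (int i = 0; i < 64; i++) {
        if (i >= numLights)     // non-constant loop bound: SM3.0-class hardware
            break;
        int base = i * 4;       // texels 0..2: position, texel 3: color
        vec3 lp = vec3(decode(lightTexel(base)),
                       decode(lightTexel(base + 1)),
                       decode(lightTexel(base + 2))) * posScale + posBias;
        vec3 toL = lp - worldPos;
        float d = length(toL);
        vec3 col = lightTexel(base + 3).rgb * 4.0;  // undo the 2-bit overbright shift
        sum += col * max(dot(N, toL / d), 0.0) / (1.0 + d * d);
    }
    gl_FragColor = vec4(sum, 1.0);
}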

And of course, since I have very little actual graphics programming experience, I have no idea how it'll perform. 256 texture samples per pixel is a lot to choke down for most video cards, so it'd be a good idea to combine lights somehow. You could resort to multiple 8x8 textures and switch between them with a texture unit, so you don't have to do any switching in your shader. Since lights on one side of a room won't necessarily affect the other side, you could separate levels into light regions by approximating cube volumes and use a different 8x8 texture for each region. That would duplicate storage (lights would frequently wind up in multiple textures), but it would guarantee constant performance, since no face in a cube would see more than one 8x8 light texture.

Post by Diedel »

No need to tell me anything about how D2 or OpenGL do their lighting. ;)

I know about per-pixel lighting, but that would be even more computationally expensive, and given the huge number of lights in a D2 level, I would have the same problems as with per-vertex lighting (passing all the relevant lights to the shader).

I have per-vertex lighting shader code that uses light data from a texture, but that is not as easy as you think, and it gets worse for fragment shaders (where per-pixel lighting is done), as they only receive projected textures (which would affect textures used to transport light data).

Read some stuff about GPGPU to find out more.

Post by pATCheS »

I wasn't telling you how D2 does things; you know far more about how it does things than I ever will. But even if you understood what I was saying without that statement, others might not have. Don't belittle my context-establishing sentences! :x :P

Now what the heck do you mean by "projected textures"? As far as the light data texture goes, transformations are not relevant, since the shader I was describing would be hardcoded to read specific texture locations. You'd also need to set the light data texture to nearest-pixel filtering. Obviously the whole point is knowing the position of the pixel relative to the light, so you need to ensure the coordinate systems are equivalent (only rotated/translated; scaling is unacceptable; perhaps this is what you're referring to by "projected"? :?).

Hm, thinking a little more about the actual expense of per-pixel lighting, only now did it hit me that you'd have to compute a normal for each light at every pixel. I still think it can be done, but this technique would definitely be relegated to higher-end cards.

There's no realistic way to pass in light normals to be interpolated by the vertex shader, is there... You could do it with SM3.0, since its vertex shaders can perform texture reads. But then you only get so many streams between the vertex and fragment shaders (I don't know how many), so you're still pretty limited in how many lights you can have per surface.
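For the record, the SM3.0 route would look something like this on the vertex side (untested sketch; texture2DLod is needed there since vertex shaders have no derivatives, and this assumes the position is stored in a directly readable format):

Code: Select all

// vertex shader with SM3.0 vertex texture fetch
uniform sampler2D lightTex;
varying vec3 lightDir;          // occupies one of the limited varying slots

void main()
{
    // read the first light's position from the data texture
    vec3 lightPos = texture2DLod(lightTex, vec2(0.5 / 16.0, 0.5 / 16.0), 0.0).xyz;
    lightDir    = lightPos - gl_Vertex.xyz;   // interpolated for the fragment shader
    gl_Position = ftransform();
}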


Aaaaahhhh, screw it, it's not worth doing :P The performance requirements are too high, the implementation would be a severe pain, and the benefits are too low. I guess in some ways this isn't too far off from the issues you had with shadows. The fact of the matter is, D2 has too many lights. It was SO ahead of its time. lol

Post by Diedel »

By the time a texel reaches the fragment shader, it's already an interpolated value, based on the texel's position in the 2D projection of the source texture into the 3D world.

Hence, if you pass in a rectangular texture containing data and happily expect it to arrive unchanged in the fragment shader, it won't.

Read this GPGPU tutorial - it's very comprehensive and rather easy to understand. ;)

You don't necessarily need to compute normals per pixel, as you can use the triangle's normal.

For bump maps, however, you do have to; but that is done in an extra preprocessing step, and the normals are passed in a normal map.
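The fragment side of that looks roughly like this (illustrative, untested sketch; normalMap and tangentLightDir are made-up names, and the light direction is assumed to arrive in tangent space from the vertex shader):

Code: Select all

varying vec3 tangentLightDir;   // light direction in tangent space
uniform sampler2D normalMap;    // precomputed per-pixel normals

void main()
{
    // unpack the stored normal from [0,1] to [-1,1]
    vec3 N = texture2D(normalMap, gl_TexCoord[0].xy).rgb * 2.0 - 1.0;
    float diffuse = max(dot(normalize(N), normalize(tangentLightDir)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}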

Post by pATCheS »

The fragment shader doesn't get texels unless it asks for them. And if you read from a texture on which you've disabled all filtering with this (copied from that GPGPU page):

Code: Select all

glTexParameteri(texture_target, GL_TEXTURE_MIN_FILTER, GL_NEAREST); /* no filtering when minified */
glTexParameteri(texture_target, GL_TEXTURE_MAG_FILTER, GL_NEAREST); /* no filtering when magnified */
It should return the exact, unfiltered texel values. The problem the GPGPU document addresses is multi-step data-parallel processing, which is what requires the orthogonal texture projection: the results of one computational step have to be rendered back to their corresponding texel positions. That functionality is not required for lighting, which can be computed in a single pass (multiple lights can be done in a loop).

For the normals, I wasn't talking about the surface normal; I was talking about the normalized surface-to-light direction that you need in order to perform the dot product that gives you the amount of light that pixel is receiving. If you transform light positions into tangent space, you can eliminate the stream for surface normals and use a constant instead (bump-mapping shaders do this so they don't have to transform normal-map coordinates), but that's about it. You'd still need the light's position in tangent space in order to get correct distance falloff.
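In other words, something like this per light (untested; the names are made up, and the falloff term is just for illustration):

Code: Select all

varying vec3 tangentPos;        // fragment position in tangent space
uniform vec3 tangentLightPos;   // light position in tangent space
uniform vec3 lightColor;

void main()
{
    vec3 toL = tangentLightPos - tangentPos;
    float d = length(toL);
    const vec3 N = vec3(0.0, 0.0, 1.0);     // constant surface normal in tangent space
    float diffuse = max(dot(N, toL / d), 0.0);
    gl_FragColor = vec4(lightColor * diffuse / (1.0 + d * d), 1.0);
}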

If you precompute light normals, it kills much of the point of per-pixel lighting, since the angle to the light changes across the whole surface, especially when the light is close to it. And you can't pass in enough light normals per vertex to get the card to interpolate them over the surface without doing multiple passes, which goes back to the original reason for using light data textures in the first place.

This all leads up to the problem of what formula should be used for computing the sum of the light. Obviously this technique makes colored weapon light much more practical, but if you have a weapon that can take the light on a surface to 1.0, what happens when you have two of them very close to the surface? The result should be additive, but 2.0 would wash out the texture completely. This would need a bit of fiddling; perhaps use a logarithmic mixing curve or something similar, scaled so that some overexposure is possible (overexposure is pretty :P).
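One candidate curve (untested; the exposure constant is just a tuning knob) would be an exponential one, which stays monotonic but compresses values above 1.0 instead of clipping them flat:

Code: Select all

uniform float exposure;         // e.g. 1.5; how fast the light sum saturates

vec3 mixLight(vec3 lightSum)
{
    // additive light in, [0,1) out; a sum of 2.0 no longer washes out completely
    return vec3(1.0) - exp(-lightSum * exposure);
}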

Like I said before, I'm confident it can be done, but it would be pretty bandwidth intensive and probably not worth the effort. It'd be cool as all hell to see, but it's not like it'll sell more copies of the game or get more people to play.

Post by Lehm »

Well, there is one way to get real-time lighting with 'only' 8 lights. Before attempting lightmapping (which would have worked great except for the way D2 handles multi-texturing), I was considering simply finding which groups of lights are visible to each other, averaging each group down to 8 lights, and then loading and unloading the groups as you progress through a level.

Post by Diedel »

Sorry Lehm,

You haven't looked into D2 levels enough to really judge the feasibility of hardware lighting using only 8 lights (level 1 of D2: Counterstrike, e.g., contains over 130 light sources!). If you had looked at D2X-XL's render options menu, you would have noticed that D2X-XL only considers the X nearest light sources anyway, where 4 <= X <= 32, and there are enough instances where you get lighting flaws when using fewer than 32.

Your lightmapping code was half-baked because it couldn't deal well with dynamic lights. It seemed like something you were interested in figuring out, but not in giving the fine-tuning and polish it needed to be really useful, which frankly isn't a very constructive way to participate in the development of software like D2X-XL.

If you really want to explore some new terrain (and have me polish it), you can find out how to put my lighting code into shaders. ;)