Today I will describe our new graphics pipeline:
- One or more geometry passes (rendered from a different camera) are done to produce the shadow map(s).
- Using framebuffer objects, I then render the scene into three buffers: one holds the lit & textured objects, another the normals, and a third the depth map.
- If needed, another pass calculates the relative movement of every object during the last frame and stores the direction of all moving pixels in a 2D "velocity" map. This is used to emulate the motion blur effect.
- All of these are combined (in what could be called deferred shading) by a 2D pass; this is where we compute the screen-space ambient occlusion and occasionally some other minor effects (the effect in rupture is done in this pass).
- The output of that pass (a 2D texture) is fed to a "blur" and a "radial blur" filter at much lower resolution (a quarter in each dimension). A per-pixel flag (stored in the alpha channel and calculated in the second step) determines which pixels are blurred and which are not.
- The combined output of all that is passed to a further full-resolution 2D "depth of field" effect. This pass also does the per-pixel "smearing", which is easily calculated using the velocity map. Additionally, there is a very light full-screen motion blur to enhance the "handycam" effect.
And then you have a frame. Enjoy.
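The pass ordering above can be sketched as a short program. This is only an illustrative outline; the pass names are mine, not the engine's actual API:

```python
# Hypothetical sketch of the frame's render-pass order described in the post.
# Pass names and comments are illustrative, not the engine's real interface.

def render_frame():
    trace = []
    trace.append("shadow_pass")       # one or more passes from a different camera -> shadow map(s)
    trace.append("geometry_pass")     # render into FBOs: lit & textured color, normals, depth
    trace.append("velocity_pass")     # optional: per-pixel motion directions into a 2D velocity map
    trace.append("deferred_combine")  # 2D pass: SSAO + other minor effects, combines the buffers
    trace.append("blur_pass")         # quarter-resolution blur / radial blur, masked by the alpha flag
    trace.append("depth_of_field")    # full-res DoF + velocity-based smearing + light full-screen blur
    return trace
```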
Cool, thanks :)
Just to clarify, you output:
fbo0,rgb: "lit & textured objects" (albedo?)
fbo1,rgb: world-space normals, xyz
fbo2,rgba: packed depth
(fbo3,rgba: screen-space motion vectors)
Then you do a single fullscreen pass (processing a varying number of light sources, calculating the blur amount, adding AO, etc.) and output:
fbo4,rgb: rgb-lit color
fbo4,a: amount of blur
Is that about right? What is a field-of-view effect?
You got that right. fbo0,a is the amount of shadowing the pixel has received (to be used in SSAO) and fbo1,a is the amount of blur, so fbo4,a is not needed.
I meant to say depth of field, sorry (farther objects appearing blurry).
Why not perform lighting in the "2D pass" as well (i.e. really deferred shading)? That would allow you to have hundreds of smallish light sources if you only run the lighting shader for the pixels each light affects. That's what we used to do in Theseis and it rulez. (Here's the first test I did of that technique before we integrated it into the Theseis 3D engine: http://nuclear.dnsalias.com/shots/deferred.png )
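The idea of shading only the pixels each light touches can be shown with a toy CPU sketch. On the GPU this is normally done by rasterizing light volumes; everything here (names, 2D positions, falloff) is simplified for illustration:

```python
# Toy sketch of deferred point lighting: each light only visits pixels
# inside its radius, so hundreds of small lights stay cheap.
# All names and the linear falloff are illustrative assumptions.

def apply_point_lights(positions, lights):
    """positions: dict mapping pixel -> world-space position (2-tuple here).
    lights: list of (light_pos, radius, intensity) tuples."""
    accum = {pix: 0.0 for pix in positions}
    for lpos, radius, intensity in lights:
        for pix, wpos in positions.items():
            dx = wpos[0] - lpos[0]
            dy = wpos[1] - lpos[1]
            d2 = dx * dx + dy * dy
            if d2 <= radius * radius:  # pixel is inside this light's volume
                accum[pix] += intensity * (1.0 - (d2 ** 0.5) / radius)
    return accum
```

A pixel outside every light's radius is never shaded, which is what makes the per-light cost proportional to the light's screen footprint rather than the whole frame.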
Maybe because multiple light sources are eye cancer.
I almost don't even bother with lighting. There is a very small per-vertex diffuse component (0.8+0.2*dot..), no speculars, no environment mapping, no bump mapping. That's all you need.
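If the elided term is the usual clamped Lambert factor dot(N, L), which is my guess rather than something stated in the comment, the diffuse term would look like this:

```python
# Hypothetical reading of the "0.8 + 0.2*dot.." term as a clamped
# Lambert factor: a 0.8 ambient floor plus a small 0.2 diffuse range.
def diffuse(n_dot_l):
    return 0.8 + 0.2 * max(n_dot_l, 0.0)
```

Faces pointing straight at the light reach 1.0, and back-facing geometry never drops below the 0.8 floor, which matches the "almost no lighting" look being described.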
Well, that might be all you need, but I definitely don't agree. Still, if it works for you, keep it up :)