I've been pondering order-independent transparency (OIT) for a few weeks now. I don't really have free time, so I haven't had a chance to implement this theory and see if it actually makes any sense.
A typical rendering pipeline goes roughly like this:
- Put each renderable into the appropriate list: opaque objects into one list, transparent/see-through objects into another.
- Sort the opaque list front to back and render its elements.
- (Deferred shading lighting pass.)
- Sort the transparent list back to front and render its elements.
- (Fullscreen effects rendered.)
The deferred shading pass renders all the properties needed for the lighting calculations into a fat G-buffer target.
More about deferred shading:
https://learnopengl.com/Advanced-Lighting/Deferred-Shading
https://www.hiagodesena.com/pbr-deferred-renderer.html
The PBR deferred renderer puts even more data into the G-buffer; Hiago de Sena's write-up shows the full layout.
Order-independent transparency can apparently be achieved with per-pixel linked lists, somehow.
Nvidia released this back in 2014 already... sigh, and it's already 2023...
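To convince myself the linked-list trick makes sense, here's a toy CPU simulation of it. All names are made up; on the GPU the node-counter bump and the head-pointer swap would be atomic operations (and the node pool a big storage buffer), but a plain Python list stands in here.

```python
# Toy CPU simulation of per-pixel linked lists for OIT.
WIDTH, HEIGHT = 4, 4
heads = [-1] * (WIDTH * HEIGHT)  # head-pointer buffer, -1 = empty list
nodes = []                       # shared node pool: (color, alpha, depth, next)

def insert_fragment(x, y, color, alpha, depth):
    """Prepend a fragment to the pixel's list; insertion order is irrelevant."""
    pixel = y * WIDTH + x
    nodes.append((color, alpha, depth, heads[pixel]))
    heads[pixel] = len(nodes) - 1  # atomic exchange on the GPU

def collect(x, y):
    """Walk the pixel's list and return fragments sorted front to back."""
    pixel = y * WIDTH + x
    frags, i = [], heads[pixel]
    while i != -1:
        color, alpha, depth, i = nodes[i]
        frags.append((depth, color, alpha))
    return sorted(frags)  # smallest depth first

insert_fragment(1, 1, (1.0, 0.0, 0.0), 0.5, 0.7)  # red, farther
insert_fragment(1, 1, (0.0, 0.0, 1.0), 0.5, 0.3)  # blue, nearer
print(collect(1, 1))  # blue fragment comes out first
```

The point of the structure is exactly the order independence: geometry can be rasterized in any order, and sorting happens only per pixel in the resolve pass.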
I was thinking maybe it could be done with 8 G-buffers as render targets, plus an index list (stencil buffer?) indicating which slot is free; each render result would then be stored in the next free G-buffer fragment until all slots are full. As a specialization, though, G-buffer #8 would be used purely for opaque result fragments.
The G-buffer would be laid out as in the memory estimate below (diffuse, normal, PBR properties, depth).
The render pipeline would then be:
- Put every renderable into a single list.
- Sort the list front to back and render its elements.
- (Deferred shading lighting pass.)
- Fullscreen resolve pass:
- Sort the transparent pixel fragments front to back (assuming the opaque result always sits in fragment slot 8).
- Start with the value from the opaque fragment.
- Only use transparent fragments whose depth is less than the opaque fragment's.
- Produce depth effects from the transparent fragments (if the current fragment is a front face and the next is a back face, we could possibly assume the material's thickness and produce dimming over distance, haze, or something similar).
- Produce a single pixel value from the fragments.
- (Fullscreen effects rendered.)
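The resolve step above can be sketched per pixel. This is a minimal Python sketch, not a real shader: it assumes fragments come in as (depth, color, alpha) tuples, discards anything behind the opaque hit, and composites front to back with "under" blending so the opaque value shows through the accumulated transmittance. The function name and data layout are made up for illustration.

```python
def resolve_pixel(opaque_color, opaque_depth, fragments):
    """Composite transparent fragments over the opaque base.

    fragments: list of (depth, (r, g, b), alpha); smaller depth = nearer.
    """
    # Discard fragments hidden behind the opaque surface.
    visible = [f for f in fragments if f[0] < opaque_depth]
    visible.sort()  # front to back

    accum = (0.0, 0.0, 0.0)
    transmittance = 1.0  # how much of what lies behind still shows through
    for depth, (r, g, b), a in visible:
        w = transmittance * a
        accum = (accum[0] + w * r, accum[1] + w * g, accum[2] + w * b)
        transmittance *= (1.0 - a)

    # Whatever light the transparent layers let through comes from the opaque hit.
    return tuple(accum[i] + transmittance * opaque_color[i] for i in range(3))

# One 50% black pane in front of a white opaque surface -> mid grey.
print(resolve_pixel((1.0, 1.0, 1.0), 1.0, [(0.5, (0.0, 0.0, 0.0), 0.5)]))
```

Front-to-back order is what lets the loop stop early on the GPU once transmittance falls near zero, which is the usual argument for sorting this way in a resolve pass.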
I think the achievable benefits would be:
- No need to worry about draw order; the pixels sort themselves.
- Transparent objects could have depth: with front-and-back-face rendering enabled, materials could provide translucency-style effects (skin, etc.).
- Deferred lighting could be applied to transparent objects, allowing more hacks.
Bad things:
- 8 fragments per pixel isn't a lot; the Nvidia demo of a transparent car would totally brick this. On the flip side, in real life transparency is rare while translucency and opacity are common.
- The unused slots in the G-buffer. Maybe these could be packed.
- Memory usage:
- Let's assume a 32-bit float per slot, so each render target is 4*32 bits; depth should probably be just one 32-bit float per pixel.
- A G-buffer would be [#1 diffuse (4), #2 normal (4), #3 PBR (4), depth (1)] = (4 + 4 + 4 + 1) * 4 bytes = 52 bytes per pixel per G-buffer.
- 8 G-buffers + an index for atomic buffer indexing (1 byte should be enough) = 52 * 8 + 1 = 417 bytes per pixel.
- On a 4K monitor: 3840 * 2160 * 417 bytes = 3,458,764,800 bytes, roughly 3459 megabytes (about 3.2 GiB).
- Maybe this was a bad idea after all.
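A quick sanity check of the arithmetic above, with the same assumptions (13 32-bit float slots per G-buffer, 8 G-buffers, one index byte):

```python
# Memory estimate for the 8-deep G-buffer scheme.
FLOATS_PER_GBUFFER = 4 + 4 + 4 + 1           # diffuse + normal + pbr + depth
BYTES_PER_GBUFFER = FLOATS_PER_GBUFFER * 4   # 52 bytes
BYTES_PER_PIXEL = 8 * BYTES_PER_GBUFFER + 1  # 8 layers + 1 index byte = 417
WIDTH, HEIGHT = 3840, 2160                   # 4K target

total = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(total, total / 2**30)  # total bytes and GiB
```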
Well, I might try this one day.