# Tiled deferred renderer for the PS4

This project was made as a custom engine project. I wrote the tiled deferred renderer on the PS4 and also worked on the shared API, since the aim was to make the engine cross-platform (Windows, DX12). The project lasted 8 weeks: weeks 1 through 4 were spent on a forward renderer on the PS4, the core of the shared APIs, and UI and text rendering; weeks 5 through 8 were spent writing the tiled deferred renderer. I am continuing this as a side project, since some issues still need to be solved and I want to move to a PBR lighting model.

## A quick overview

In the geometry pass I write to multiple render targets and a depth buffer. The two resulting textures contain the texture colors of all geometry rendered in the first pass, and the screen-space normals together with the specular intensity values of the materials. I then convert all light positions and rotations to screen space and use them, together with the screen-space normals and the light color values, to calculate the diffuse and specular terms. These are combined with the texture colors to obtain the final pixel color.
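The combine step at the end of this pass can be sketched as follows. This is my own reading of the description above (the function and type names are mine, not code from the project): the diffuse term is modulated by the texture (albedo) color and the specular term is added on top.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Color = std::array<float, 3>;

// Sketch of the final combine: albedo modulates the diffuse lighting,
// specular highlights are added afterwards, per channel.
Color CombineFinalColor(const Color& albedo, const Color& diffuse, const Color& specular)
{
    Color out{};
    for (int i = 0; i < 3; ++i)
        out[i] = albedo[i] * diffuse[i] + specular[i];
    return out;
}
```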

## Buffer breakdown

- **Screen-space normals**: required for the lighting calculations
- **Texture color values**: required for the final color calculation
- **Diffuse lighting**: used for the final color calculation
- **Specular lighting**: used for the final color calculation

## Light attenuation & tiled rendering

For light attenuation I used an implementation that is also used by the Unreal engine and the Frostbite engine. I learned about this equation from the book Real-Time Rendering, fourth edition.

It divides the distance to the origin of the light by the maximum light distance, which results in a value from zero to one inside the light's range. That ratio is raised to the power of four and subtracted from one, and the result is then squared. The author of the book uses the + symbol to indicate that negative values are clamped to zero before squaring.
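This windowing function can be sketched in C++ as follows. The function and parameter names are my own; the original text only describes the formula in prose.

```cpp
#include <algorithm>
#include <cassert>

// Distance windowing function as described above:
// falloff(d) = (1 - (d / r_max)^4)^2, with the inner term clamped to zero
// (the "+" notation), so the light's contribution reaches exactly zero
// at its maximum distance.
float WindowedFalloff(float distance, float maxLightDistance)
{
    float ratio = distance / maxLightDistance;
    float t = 1.0f - ratio * ratio * ratio * ratio; // 1 - (d / r_max)^4
    t = std::max(t, 0.0f);                          // the "+" clamp
    return t * t;                                   // squared
}
```

At the light's origin the falloff is one, and at or beyond the maximum distance it is exactly zero, which is what makes the tile culling below valid.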

## Tiled rendering optimization

The previously mentioned attenuation equation ensures that a light doesn't affect geometry beyond its range. This makes it possible to test which lights affect which pixels before calculating the final pixel color.

I split the screen into tiles of 64 by 64 pixels and calculate a bounding box for each tile in screen space, using the near and far depth values obtained from the first pass. For this I used an HTILE buffer, but the depth buffer can also be used. I obtain the nearest and farthest intersection distances for each pixel asynchronously in a compute shader and test all lights against the bounding boxes. I then use bitmasks to indicate, for each tile, which lights affect it, and store the masks in memory shared between the threads. This is done in a compute shader that I execute after the geometry pass and the compute passes that convert the light positions and rotations to view space.

I lay out the thread groups in two dimensions of 8 by 8 threads, since a thread group can have a maximum of 64 threads (internally executed as a wavefront, which has 64 threads). The number of thread groups is then calculated from the screen size. Since running one thread per pixel would be quite inefficient, I process a block of 8 by 8 pixels per thread.
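The per-tile culling can be sketched on the CPU as follows. The real implementation runs in a compute shader and stores the masks in thread-group shared memory; all names here are my own, and I use a sphere-vs-box test as a stand-in for whatever intersection test the project actually uses.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct AABB  { float min[3], max[3]; };        // tile bounding box
struct Light { float pos[3]; float radius; };  // point light, same space as the box

// Standard sphere-vs-AABB overlap test: clamp the light's center to the
// box, then compare the squared distance against the squared radius.
bool SphereIntersectsAABB(const Light& l, const AABB& box)
{
    float distSq = 0.0f;
    for (int i = 0; i < 3; ++i)
    {
        float closest = std::clamp(l.pos[i], box.min[i], box.max[i]);
        float d = l.pos[i] - closest;
        distSq += d * d;
    }
    return distSq <= l.radius * l.radius;
}

// Build the light bitmask for one tile: bit i is set when light i affects
// the tile. A single 64-bit mask covers up to 64 lights.
uint64_t CullLightsForTile(const AABB& tileBox, const std::vector<Light>& lights)
{
    uint64_t mask = 0;
    for (size_t i = 0; i < lights.size() && i < 64; ++i)
        if (SphereIntersectsAABB(lights[i], tileBox))
            mask |= uint64_t(1) << i;
    return mask;
}
```

In the lighting pass, each pixel then only needs to iterate over the set bits of its tile's mask instead of over every light in the scene.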

## Problem-solving

After the eight weeks this project lasted, I still wanted to solve the remaining artifacts and implement PBR. Since I have to do this as a side project, I will not be able to work on it as much as I would have liked. Bugs I encounter that might be considered interesting will be summarized and added as a link below.