Screen Space Planar Reflections in Ghost Recon Wildlands

Reflections done in screen-space

Screen space reflections (SSR) is a technique now used in numerous video games to provide reflection visuals. It is expected to be less expensive than actual reflection rendering and more accurate than cubemap-based reflections, as long as the reflection source is present on screen. Still, the technique isn’t that affordable in a real-time application, and it suffers from a number of graphical artifacts: SSR isn’t the final answer to reflections. However, in specific, controlled situations, these flaws can be alleviated and SSR can deliver astonishing results.

Motivation

Here we’re going to talk about another form of SSR that was developed to support the water rendering ambitions of Ghost Recon Wildlands. It aims to provide multiple high-quality reflection surfaces from close to far distances with little performance impact, for the specific case of planar reflectors.

Projection
Hash resolve
Filling the gaps
Optimizations
Multiple water planes

Projection

The concept on which Screen Space Planar Reflections (SSPR) is based is the Projection Hash Buffer: a screen-space texture that holds, at each reflected position, the location of the main-view pixel that should be projected there.

To this end, for every pixel in the main depth buffer:

  • The pixel is unprojected into world space
  • This world position is reflected across a known water plane and projected back to screen space
  • We write the screen location of the source pixel at the reflected screen position in the projection hash buffer via a UAV write.

In practice, we encode the projected pixel location in a single R32_UINT following the simple hash formula

uint ProjectionHash = PixelY << 16 | PixelX

(Why this ordering? Wait for the next section.)

float WaterHeight = …

float4 PS_ProjectHash(float2 ScreenUV) : SV_Target0
{
	// Reconstruct the world-space position of the source pixel
	float3 PosWS = Unproject(ScreenUV, MainDepthBuffer);

	// Mirror it across the water plane, then project it back to screen space
	float3 ReflPosWS = float3(PosWS.xy, 2 * WaterHeight - PosWS.z);
	float2 ReflPosUV = Project(ReflPosWS);

	uint2 SrcPosPixel = ScreenUV * FrameSize;
	uint2 ReflPosPixel = ReflPosUV * FrameSize;

	// Store where the source pixel lies in the main view, at the reflected
	// location (the hash buffer is assumed cleared to 0 every frame)
	ProjectionHashUAV[ReflPosPixel] = SrcPosPixel.y << 16 | SrcPosPixel.x;

	return 0; // Dummy output
}

Here, “ReflPosPixel” is the pixel-space location of the projected pixel in the reflection view. We use it to store in the projection hash buffer where the source pixel “SrcPosPixel” lies in the main view. Once encoded in the hash texture, it is ready to be used by the second pass.

Hash resolve

The idea is quite simple: a fullscreen pass is going to:

  • Fetch the projection hash texture
  • Decode the hashes to retrieve the location of the source color pixels
  • Fetch the source pixels and output them in the final reflection target.

float4 PS_ResolveHash(float2 ScreenUV) : SV_Target0
{
	uint Hash = ProjectionHashTex[uint2(ScreenUV * FrameSize)].x;

	// A hash of 0 means no source pixel was projected here
	if (Hash != 0)
	{
		// Decode the source pixel location and fetch its color
		uint2 SrcPosPixel = uint2(Hash & 0xFFFF, Hash >> 16);
		return MainColorTex[SrcPosPixel];
	}
	else
		return 0;
}

That was pretty straightforward, but at this point the results can be disappointing:

Concurrent reflective candidates resulting in blinking pixels

There are two glaring issues here:

  • The projection of the source positions into the reflection is not an injective transformation: two different pixels from the main view can land on the same pixel in the reflection and become “reflective concurrents”, with no control over which one should prevail. Hence the blinking pixels.
  • There are gaps in the reflection, caused by occlusion in the main view, which prevents valid reflection pixels from being projected.

That’s where the projection hash texture and the hashing function we chose eventually make sense. By using the InterlockedMax intrinsic when writing to the UAV, two competing hashes are ordered first by their high 16 bits, and therefore by their PixelY value.

// Read-modify-write max when writing to the projection hash UAV
uint ProjectionHash = SrcPosPixel.y << 16 | SrcPosPixel.x;
InterlockedMax(ProjectionHashUAV[ReflPosPixel], ProjectionHash);

The concurrent projections are now sorted bottom-to-top, and the source pixel locations stored in the projection hash are the ones closest to the water plane, and thus the closest to the camera in the reflection view. The projection is now stable.

Filling the gaps

Missing geometry is the number one issue with the SSR approach, and we need to find a way to fill it in, or the reflection effect will be badly broken.

First we’ll deal with the missing reflections at the screen borders, which are due to geometry that is absent from the main view but needed in the reflection. There’s no actual solution short of rendering a larger, out-of-screen main frame that we could fetch from to fill the borders. But in a real-world game where every microsecond counts, this is not an option.

Instead, we’ll add some stretch to the projected location, based on the distance between the source pixel and the water plane.

// Stretch grows with the source pixel's height above the water plane,
// with how steeply the camera looks down, and with proximity to the screen edge
float HeightStretch = PosWS.z - WaterHeight;
float AngleStretch = saturate(-CameraDirection.z);
float ScreenStretch = saturate(abs(ReflPosUV.x * 2 - 1) - Threshold);

ReflPosUV.x *= 1 + HeightStretch * AngleStretch * ScreenStretch * Intensity;

Reflection stretching to fill the missing pixels on the borders

Then it is time to deal with the holes in the projection, created by geometry that is occluded by closer pixels in the main view and thus could not be projected.

  • A classic temporal reprojection helps a lot: very little camera movement suffices to almost completely fill the cracks.
  • As a fallback for pixels that still couldn’t be filled with relevant information but could still be valid (i.e. not in the sky), we simply let the reflection surface generated on the previous frame “bleed” onto the current one. It isn’t strictly correct, but it gracefully avoids any discontinuity and fills the remaining gaps with a coherent color/luminosity (see the sketch after this list).
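
Below is a minimal sketch of what such an extended resolve could look like. ReprojectToPrevFrame (a reprojection using the previous frame’s camera matrices), PrevReflectionTex (last frame’s resolved reflection), IsSky and LinearClampSampler are hypothetical names introduced for illustration, not the actual implementation:

float4 PS_ResolveHashWithHistory(float2 ScreenUV) : SV_Target0
{
	uint Hash = ProjectionHashTex[uint2(ScreenUV * FrameSize)].x;

	// Valid projection: fetch the source color as in the basic resolve
	if (Hash != 0)
		return MainColorTex[uint2(Hash & 0xFFFF, Hash >> 16)];

	// Hole that could still hold a valid reflection (i.e. not in the sky):
	// let the previous frame's reflection "bleed" onto the current one
	if (!IsSky(ScreenUV))
	{
		float2 PrevUV = ReprojectToPrevFrame(ScreenUV);
		if (all(PrevUV >= 0) && all(PrevUV <= 1))
			return PrevReflectionTex.SampleLevel(LinearClampSampler, PrevUV, 0);
	}

	return 0; // No history available: leave the gap empty
}

The bleeding trades correctness for continuity: a slightly stale reflection color is far less noticeable than a gap filled with black or with the sky color.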

The reprojection and bleeding fill the gaps with coherent values

Optimizations

We have a nice real-time solution at this point, but let’s try to squeeze out some extra performance. We are going to use an additional, initially empty stencil and see how it can drastically change the cost of the SSPR.

  • During the projection pass, we discard the pixels whose height is below the water plane, as they have no chance of participating in the reflection. The surviving pixels are marked in the additional stencil.
  • The hash resolve pass then uses this mask to resolve the reflection only on the pixels that were previously discarded (i.e. whose stencil value has not been marked), as they are the only ones that can be reflective (see the sketch after this list).
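
As a minimal sketch of the rejection, assuming the projection pass is drawn with a depth-stencil state that marks every surviving pixel (e.g. a stencil pass operation of REPLACE with a reference value of 1):

// At the start of PS_ProjectHash, before the hash computation:
// below-water pixels are clipped, which also skips the stencil write,
// so they stay at 0 in the SSPR stencil
clip(PosWS.z - WaterHeight);

The hash resolve pass is then drawn with a stencil test that only passes where the stencil is still 0 (e.g. a comparison function of EQUAL with a reference value of 0), so only the pixels that can actually receive a reflection pay for the resolve.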

With these optimizations, the SSPR cost is linearly dependent on the percentage of the screen covered by pixels that aren’t sky and are located below the water plane.

On Ghost Recon Wildlands, we were then able to generate any water reflection surface for a cost of 0.3~0.4 ms on consoles at 1/4 resolution.

Multiple water planes

Having achieved our goal of an affordable reflection technique, we can compute it several times in a single frame, allowing us to handle multiple water surfaces.

To achieve this, we have to know which planes actually need SSPR rendered:

  • When rendering the water, each water pixel increments a counter indexed by its plane ID
  • These counters are then processed to determine which planes are actually visible, and we keep the N planes that cover the most screen space.
  • We generate N reflection layers in a texture array using SSPR.
  • On the next frame, the water uses its ID to fetch the array and retrieve its reflection (see the sketch after this list).
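
A minimal sketch of the counting step; PlaneVisibilityCounters and CountWaterPlane are hypothetical names, and the counter buffer is assumed to be cleared to 0 at the start of the frame:

RWBuffer<uint> PlaneVisibilityCounters; // one entry per known water plane

// Called from the water shader: each visible water pixel votes for its plane
void CountWaterPlane(uint PlaneID)
{
	InterlockedAdd(PlaneVisibilityCounters[PlaneID], 1);
}

The counters are then sorted by coverage, the heights of the top N planes drive N SSPR passes into a Texture2DArray, and the water shader samples the matching slice on the following frame.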

Multiplanar reflection on the pool and the lake surfaces

A complete example

Lit color buffer


SSPR stencil + Projection Hash

Resolved hash

Water rendering

12 Replies to “Screen Space Planar Reflections in Ghost Recon Wildlands”

  1. Hi Remi,
    Your solution is similar to the one I presented at this year’s Siggraph in the “Optimized pixel-projected reflections for planar reflectors” talk. What’s even more interesting is that I also developed the reflection stretching for screen borders that you describe in the blog post. It was presented at Siggraph 2015 by Balazs Torok in the “The rendering features of the Witcher 3: Wild Hunt” talk. I implemented this for standard raymarched SSR though. Looks like it would be cool to do some research together.
    AC

    1. Hi Adam,
      Very cool paper! I guess I wasn’t up to date on the Siggraph entries.
      Our techniques indeed have a lot in common, especially the use of an intermediate buffer to store our projections, even if we diverge afterwards on the encoding method used to sort the hashes and on the image correction. The SSR subject is clearly still an open problem.
      I’m working on other topics for now, but I always love to chat about pixels 🙂

      Rémi

  2. Yeah, my encoding is a bit more complicated, to allow handling arbitrary directions. It additionally supports filtering, which is nice at lower resolutions. I was also thinking about frame-to-frame hole filling but concluded that it would be cool not to rely on history data.

    Btw, are you doing anything about big missing areas in the reflection, so that they don’t stick around for too long? I guess there are some corner cases for this (i.e. no camera movement for some time), but with animated water offsets it’s probably not a big deal if the missing areas aren’t too big. The way I handled this for Witcher 3 was just a mipmap-based flood filling instead of keeping history values (also described in the W3 talk I mentioned).

    AC

  3. Hi Remi, great post! How do you handle non-flat water surfaces (waves) for the reflections?

  4. @Adam, well the big missing areas are indeed annoying. However I realized at some point that the visual nuisance didn’t really come from the lack of pixels, but from the wrong color/lighting filling the gaps. If they were filled with black, it just felt wrong; with the sky color, they were overbright.
    My solution was eventually to use the GI / Sky visibility / Indoor data to fill the remaining pixels with a coherent value. It helped a lot in terms of credibility.

  5. @Arnaud, actually I don’t 🙂
    If you play Ghost Recon you’ll see the rivers don’t have any reflections, as SSPR only works on planar reflectors. The world water mesh is made of identified water planes linked by rivers. The planes increment the multiplanar counters (which in the end decide which surfaces should have SSPR applied) while the rivers don’t.

  6. Hey Rémi,

    Thank you for this blog post! It’s awesome to see a behind-the-scenes look at the development process of this game. I had a blast playing it, clocking in at 68 hours and 100% completion.

    Reading your blog post made me go back to it again and have a look at reflections especially. Hopped into a chopper, blasted a few patrols on my way to get accustomed to the new controls, and finally found a lake. And I noticed that the current implementation of SSR does not play well with my ultra widescreen monitor (21:9, 3440×1440, all Ultra).

    You can clearly see that it appears to work much better on the right, and clearly fails a lot on the left. And we can also notice the sudden disappearance of the center mountain in the reflection when that mountain is off-screen: the cloud suddenly pops out in front of the shaded-out bottom of the mountain.

    Anyway, great post, even greater work on the game 🙂 From France, I wish you the best for whatever you work on next!

    1. Screen-space techniques like SSR usually only reconstruct pixels that were already on the screen before reflections (that’s why it’s one of the computationally cheaper methods to get there). In your video, the mountain in the reflection disappears at the same rate as the top of the screen hides it; that’s why you’re ‘missing’ some of the details.

      1. Yes and it is well explained in the post. The video is merely highlighting this.

        The only issue I see with ultra widescreen is that the stretching visually fails on the left of the screen, but is OK on the right.
