Published: March 3, 2022

So I had this idea for an improvement to the way I render lines in Mars First Logistics. I've had a lot of people ask about how that works, so here's a (somewhat technical) thread about it and the improvement I recently made.

The lines are rendered using edge detection. This is a post process effect where first everything is rendered to a texture and then we read that texture to work out where the lines should go. Here's what it looks like before and after applying the post process shader:

Image in tweet by Ian MacLarty
Image in tweet by Ian MacLarty

With edge detection we're looking for pixels whose "value" differs from that of their neighbouring pixels (I'll get into what "value" means shortly). We can then darken these pixels to make lines along the edges.

Image in tweet by Ian MacLarty
Image in tweet by Ian MacLarty
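The per-pixel comparison can be sketched in plain Python. This is a CPU-side illustration only (the real version is a post-process shader sampling neighbouring texels); the function name, 4-neighbour pattern, and threshold are my own choices, not from the thread.

```python
# A minimal sketch of value-based edge detection on a 2D grid of
# per-pixel "values" (in the game these come from the colour buffer
# and depth texture). Names and the threshold are illustrative.

def detect_edges(values, threshold=0.5):
    """Mark a pixel as an edge if any 4-neighbour's value differs
    from its own by more than `threshold`."""
    h, w = len(values), len(values[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    if abs(values[y][x] - values[ny][nx]) > threshold:
                        edges[y][x] = True
    return edges

# Example: a 1x4 "image" with a value jump between columns 1 and 2.
print(detect_edges([[0.0, 0.0, 1.0, 1.0]]))
# -> [[False, True, True, False]]
```

Pixels on either side of the jump get flagged, which is what produces the dark line along the edge.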

I look at several values to determine if a pixel is along an edge: its depth (distance from camera), surface normal and colour. This info is encoded in 2 textures: the depth texture generated by Unity and the camera’s 4 channel colour buffer (16 bits per channel).

Colours are stored as an index into a palette in the blue channel. This means I only need to use one channel for colours and it gives me complete control of the colours at different times of day. Here’s what the day time palette looks like:

Image in tweet by Ian MacLarty
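The indexing scheme is simple: the final colour is just an array lookup keyed by the index in the blue channel, with a different array swapped in for each time of day. A sketch, with made-up palettes (only the indexing scheme follows the thread):

```python
# Hypothetical palettes -- the real ones live in the game's lookup
# textures and these values are invented for illustration.
DAY_PALETTE   = [(0.9, 0.5, 0.3), (0.8, 0.3, 0.2), (0.4, 0.6, 0.9)]
NIGHT_PALETTE = [(0.3, 0.2, 0.3), (0.2, 0.1, 0.2), (0.1, 0.1, 0.4)]

def resolve_colour(colour_id, palette):
    # The post-process shader does the equivalent of this array lookup,
    # so the same scene recolours itself as the palette changes.
    return palette[colour_id]

print(resolve_colour(1, DAY_PALETTE))    # -> (0.8, 0.3, 0.2)
print(resolve_colour(1, NIGHT_PALETTE))  # -> (0.2, 0.1, 0.2)
```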

Instead of trying to store the x, y and z components of the normal separately, I store the dot product of the normal with the view direction and another orthogonal direction. These two dot products are stored in the green and alpha channels.

This gives pretty good results, but it’s not perfect. You can see little gaps in the lines where the dot products are not sufficiently different to be regarded as an edge:

Image in tweet by Ian MacLarty

Another issue is that on curved surfaces, at particular distances, the normals can change too rapidly from pixel to pixel and the whole surface gets detected as an edge.

The idea I had to fix both these issues was to pre-compute the “surface IDs” of each mesh and use these values instead of the normals for edge detection. A surface here means a set of vertices that share triangles.

This works because vertices along sharp edges of a mesh do not share triangles, so the pixels around sharp edges will have different surface ids. Here’s what the game looks like with all the surfaces coloured differently:

Image in tweet by Ian MacLarty
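Pre-computing surface ids amounts to finding connected components of vertices linked by shared triangles, then numbering the components. A union-find sketch, assuming an indexed triangle list (the mesh data layout is my assumption; the thread only confirms that ids are assigned by incrementing an integer, and that duplicated vertices along hard edges naturally split surfaces apart):

```python
# Pre-compute surface ids with union-find: vertices that appear in the
# same triangle are merged into one "surface", and each resulting
# component gets an incrementing integer id. Vertices along sharp
# edges are duplicated in the mesh, so they never share a triangle
# and end up in different surfaces.

def compute_surface_ids(num_vertices, triangles):
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b, c in triangles:
        union(a, b)
        union(b, c)

    # Assign ids by incrementing an integer per distinct root.
    ids, next_id, result = {}, 0, []
    for v in range(num_vertices):
        root = find(v)
        if root not in ids:
            ids[root] = next_id
            next_id += 1
        result.append(ids[root])
    return result

# Two triangles sharing an edge (one surface) plus a separate triangle:
print(compute_surface_ids(7, [(0, 1, 2), (1, 2, 3), (4, 5, 6)]))
# -> [0, 0, 0, 0, 1, 1, 1]
```

The resulting per-vertex id is then written into the mesh (e.g. as a UV channel or vertex attribute) so the shader can output it for edge detection.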

And here’s with edge detection applied. Perfect!

Image in tweet by Ian MacLarty
Image in tweet by Ian MacLarty

I was feeling pretty pleased with myself, but then I started noticing some weird artifacts on some surfaces, like this:

Image in tweet by Ian MacLarty

Using RenderDoc, I tracked this down to the surface ids losing precision somewhere. This is a closeup showing pixels that should all have the same surface id:

Image in tweet by Ian MacLarty

It wasn’t an issue with the texture channel precision, because the surface ids are small enough to be exactly represented by 16 bit floats (they’re all integers in the 0-600 range).

The weird thing was even if I set all vertices to have the same surface id in the vertex shader, the values arriving in the fragment shader would still differ slightly from pixel to pixel.

I eventually figured out it was the interpolation done during rasterization that was messing up the values. I guess this calculation must be done at a fairly low precision. If I turned off interpolation on surface ids the problem went away.

The problem now was that Unity’s surface shaders don’t support nointerpolation. It was possible to reproduce the surface shader features I needed in an unlit shader (basically just shadows and directional lighting), but this felt like it would be harder to maintain.

In the end a simple round(surfaceid) in the fragment shader seemed to fix the problem. Phew!
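The reason this works: the ids are integers, so any interpolation error small enough stays within half a unit of the true value, and rounding snaps it back before the edge comparison. A toy demonstration (the drift magnitude here is invented for illustration):

```python
# Interpolation can perturb an integer surface id by a tiny amount, so
# an exact equality test between neighbouring pixels fails even on a
# flat surface. Rounding recovers the intended integer.

surface_id = 342
interpolated = surface_id + 0.0004  # tiny precision error from rasterization

print(interpolated == surface_id)         # False: ids no longer compare equal
print(round(interpolated) == surface_id)  # True: round() recovers the id
```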

I did have to clean up a few of my models where I hadn’t marked surfaces as smooth that should have been, but that was worth doing anyway, if only to reduce the vertex count.

Image in tweet by Ian MacLarty
Image in tweet by Ian MacLarty

Even with surface ids, I do still keep the dot product of normal and view direction in a channel, because that’s still useful when using the depth to detect edges.

Consider the case where a surface is almost parallel to the camera’s view direction. The depth can change very rapidly from pixel to pixel, leading to false edges like this:

Image in tweet by Ian MacLarty

This can be fixed by biasing the depth edge threshold by the dot product of the normal and view direction. When the dot product is close to zero (a grazing angle), the threshold is raised so these rapid depth changes don't get detected as edges.

Image in tweet by Ian MacLarty
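A sketch of the biasing idea. The exact bias formula is my assumption (the thread only describes the idea); dividing the base threshold by |n·v| is one simple way to make grazing angles less sensitive:

```python
# Bias the depth-edge threshold by |dot(normal, view)|: when the
# surface is nearly parallel to the view direction, n_dot_v is near
# zero, the threshold grows, and rapid depth changes are no longer
# flagged as edges. The formula and constants are illustrative.

def is_depth_edge(depth_delta, n_dot_v, base_threshold=0.01):
    # Grazing angles (n_dot_v -> 0) inflate the threshold.
    threshold = base_threshold / max(abs(n_dot_v), 1e-4)
    return depth_delta > threshold

# Surface facing the camera: a small depth change is a real edge.
print(is_depth_edge(0.05, n_dot_v=0.9))   # -> True
# The same depth change at a grazing angle: suppressed.
print(is_depth_edge(0.05, n_dot_v=0.05))  # -> False
```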

Finally the red channel is used for gradients between palette colours, which is useful for things like sunsets.

Image in tweet by Ian MacLarty

Shadows are stored in the sign bit of the blue channel. Here’s my final layout of data in the colour buffer:

Image in tweet by Ian MacLarty

Thanks for reading! Here’s a bonus debug screenshot.

Image in tweet by Ian MacLarty

Extra tidbit: The wireframe effect behind dust is achieved using a colour mask. The dust shader only writes to the red and blue channels, preserving the surface-ids and normal-dot-view-dirs of the objects behind the dust, while replacing the colour ids.
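The colour-mask trick can be simulated on the CPU. I'm assuming here that green and alpha hold the surface id and normal-dot-view term (the channels the dust must preserve), with the gradient in red and the colour id in blue, per the layout described in the thread; the values are invented:

```python
# CPU-side equivalent of a GPU ColorMask: blend src into dest, but
# write only the channels where mask is True, leaving the rest of the
# destination pixel untouched.

def write_masked(dest, src, mask):
    return tuple(s if m else d for d, s, m in zip(dest, src, mask))

# RGBA = (gradient, surface_id, colour_id, n_dot_v) -- assumed layout.
scene_pixel = (0.2, 7.0, 3.0, 0.8)  # geometry behind the dust
dust_pixel  = (0.6, 0.0, 5.0, 0.0)  # what the dust shader outputs

# Dust writes R and B only, so edge detection still sees the
# surface id and n-dot-v of the geometry behind it.
print(write_masked(scene_pixel, dust_pixel, (True, False, True, False)))
# -> (0.6, 7.0, 5.0, 0.8)
```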

@ianmaclarty Super interesting info on how you dealt with this, some of it reminds me of Lucas Pope's work to do similar in Obra Dinn, smart work from both of you, looking forward to the game!

@TomNullpointer Thanks! Yeah I read his posts. It's interesting how everyone has a slightly different take on it. Manifold Garden is similar, but different again. One glove does not fit all.

@ianmaclarty Thanks for sharing! Do you have any techniques for anti-aliasing the outlines or controlling their thickness?

@antovsky FXAA works well. And no, all my lines are the same thickness.

@ianmaclarty Hey Ian, hope you don't mind, I've got another question! How did you mark up the surfaces so you got the desired final colours in your lookup table? I can see how a mesh routine could list surfaces, but how do you assign a specific id to each?

@TomNullpointer I don't mind at all. Although I'm not sure I follow. The colour ids (used to look up final colours in the palette) are separate from the surface ids (used only for edge detection). Surface ids are just assigned by incrementing an integer.

@ianmaclarty also I'm assuming you re-applied the correct colour in the post process stage after using the previous colour for the edge detection?

@TomNullpointer The colour is simply looked up in an array using the colour id in pp. I'm not really sure what you mean by previous colour?
