We have seen in a previous article why floating-point representation may make our objects shake in the scene. We will now see how to use space-change tricks to mitigate this problem.

## A brief reminder about World, Model and View space

When dealing with 3D coordinates, we generally use three coordinate systems: World space, Model space and View space. World space is fixed to the ground. Model space is fixed to the object, so there are lots of Model spaces in your scene, one for each object. View space is fixed to the camera. I won’t talk about Projection space, which defines the camera lens, in this article.

Coordinates of a point P are always relative to a space. Here, the coordinates of P relative to Model space are (1, 0, 1), but we can also define coordinates of P relative to the other spaces:

- P_model = (1, 0, 1)
- P_view = (-1, 0, 3)
- P_world = (4, 0, 5)
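These three descriptions of P are related by simple space changes. A minimal sketch of the Model-to-World conversion, assuming (consistently with the coordinates above, and ignoring rotation for simplicity) that the Model space origin sits at (3, 0, 4) in World space:

```python
def model_to_world(p_model, model_origin_world):
    # World coordinates = Model-space coordinates + Model origin in World space
    # (rotation omitted to keep the sketch short).
    return tuple(p + o for p, o in zip(p_model, model_origin_world))

p_model = (1.0, 0.0, 1.0)
model_origin = (3.0, 0.0, 4.0)  # assumed placement of Model space in World space
print(model_to_world(p_model, model_origin))  # → (4.0, 0.0, 5.0)
```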

## What about the precision of those coordinates?

One can see that P_world has a higher magnitude than P_model, because P is far away from the World space origin. Well, not so much in this example, but imagine Model spaces millions of miles away from the World space origin. What happens when we take floating-point precision error into account?

P will actually jump to the nearest representable floating-point position. But what happens if we locate P using P_model instead?
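We can measure this jump directly. Single-precision floats (the usual GPU format) have a fixed number of significant bits, so the spacing between representable values grows with magnitude. A small sketch, using Python's `struct` module to round a 64-bit float to 32-bit the way a GPU would store it (the coordinate values are made up for illustration):

```python
import struct

def f32(x):
    # Round a Python float to the nearest 32-bit float,
    # simulating single-precision GPU storage.
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the origin, the float32 grid is very fine...
print(abs(f32(1.05) - 1.05))                  # error below 1e-7
# ...but 4,000,000 units away, representable values are 0.25 apart,
# so the point snaps to the grid and loses the fractional part.
print(abs(f32(4_000_000.05) - 4_000_000.05))  # error of about 0.05
```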

P would jump to a much closer position. So close that you would barely see P jumping when moving it relative to Model space. However, you may have noticed that the Model space origin itself has to be aligned to World-space representable values. In that case, moving the Model space will make it jump from one position to another. So to use this trick, you have to define a fixed space close to your point of interest, and move your points relative to this reference space.
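To see why moving relative to a nearby fixed space helps, consider adding a small animation step in float32. The numbers below are hypothetical; `f32` simulates single-precision storage as before:

```python
import struct

def f32(x):
    # Round a Python float to the nearest 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

step = 0.001  # one small movement step, in scene units (hypothetical)

# Moving the point in World space, far from the origin: the step vanishes
# entirely, because 0.001 is smaller than the float32 grid spacing (0.25)
# at this magnitude.
p_world = f32(4_000_000.0)
print(f32(p_world + step) == p_world)  # True: the point did not move at all

# Moving the same point relative to a fixed reference space sitting next
# to it: the step is preserved almost exactly.
p_local = f32(0.0)
print(f32(p_local + step))             # ≈ 0.001: smooth motion
```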

## What about View space?

The View space moves a LOT when doing 3D rendering. It is probably the most dynamic space: it can be attached to a car in a racing game, to the player in an FPS, to the mouse in a CAD application, etc. So we really have to ensure this space moves smoothly if we don’t want to give our users a headache. To do so, we can use the same trick as before: instead of locating the camera relative to World space, we can define a “Tripod” space and locate the camera relative to this tripod.
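A sketch of that idea: the camera stores a small, precise offset from its tripod, and the tripod itself is only relocated (snapped onto the coarse float32 grid) when the camera wanders too far from it. The class name, the single-axis simplification and the recentering threshold are all assumptions for illustration:

```python
import struct

def f32(x):
    # Round a Python float to the nearest 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

class TripodCamera:
    RECENTER_DISTANCE = 1000.0  # assumed threshold for relocating the tripod

    def __init__(self, tripod_x):
        self.tripod_x = f32(tripod_x)  # coarse, rarely-updated anchor
        self.offset_x = 0.0            # small offset, where float32 is precise

    def move(self, dx):
        # All day-to-day motion happens in the small offset.
        self.offset_x = f32(self.offset_x + dx)
        if abs(self.offset_x) > self.RECENTER_DISTANCE:
            # Only now do we touch the large tripod coordinate,
            # accepting one coarse snap instead of constant jitter.
            self.tripod_x = f32(self.tripod_x + self.offset_x)
            self.offset_x = 0.0

cam = TripodCamera(4_000_000.0)
cam.move(0.001)      # this step would be lost in raw World coordinates
print(cam.offset_x)  # ≈ 0.001: the camera actually moved
```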

## Isn’t this just double precision with extra steps?

In the end, what we did was encode a position with two floats, giving us a precision comparable to a 64-bit double. But this trick can be implemented within your current render engine architecture. It doesn’t require using double-precision values in computations, and any render engine already has a “Translation” object that can be used to do this. Last but not least, it doesn’t change your current single-precision API.
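The payoff of the two-float encoding shows up when computing a camera-relative position. If both large coordinates are rounded to float32 *before* the subtraction, the small difference between them is destroyed; if both are stored relative to a shared nearby reference space, the large part cancels exactly and the small difference survives. Hypothetical numbers:

```python
import struct

def f32(x):
    # Round a Python float to the nearest 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

# Object and camera both far from the World origin, 0.1 units apart.
obj_world = 4_000_000.1
cam_world = 4_000_000.0

# Absolute World coordinates stored in float32: the 0.1 is rounded away
# before the subtraction, and the object lands exactly on the camera.
print(f32(obj_world) - f32(cam_world))  # 0.0

# Both stored relative to a reference origin at 4,000,000: the small
# difference is preserved.
obj_local = f32(0.1)  # obj_world minus the reference origin
cam_local = f32(0.0)  # cam_world minus the reference origin
print(obj_local - cam_local)            # ≈ 0.1
```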

Unfortunately, for various reasons, render engines require expressing all coordinates in View space at some point. Accumulating transformations like we did won’t, by itself, solve the precision error that appears when expressing coordinates in View space. We will see in the next article how to handle this with very local modifications to our matrix accumulation code.
