Having large worlds in a game is a challenge on many levels: storing the data, culling, instancing, and coordinates. The traditional way to store the position of an object is a float (single precision, 32-bit). While this works well for small levels of a few km, as the player (camera) moves away from the global origin (0,0,0), jitter starts to appear in movement and positioning due to limited floating-point precision.
To fix this problem, there are several solutions available:
- split the world into axis-aligned sectors (boxed areas), each with its own local coordinate system; the player moves from sector to sector, seamlessly switching to the current sector's coordinate system
- use transformable zones, as some games like Star Citizen do: a spaceship has its own zone, a planet its own zone, and so on
- just use plain double (64-bit floating point) to cover huge distances
I did not like the various areas and zones for logic; they seemed complicated to manage. I like the idea of a flat space with 64-bit floats. The main problem with this approach is that your math classes need to work in double precision and, worse, so do the shaders. The C++ math code would probably be fine with double, but on the GPU it would slow things down a lot, and it is not even practical (at least in the present day).
So, my solution is to keep the positions of objects as double and, when the scene is rendered from the camera position (which is also double), bring the objects in visible range into a local 32-bit float space near the camera (the camera position becomes the current origin of the world). This way the math stays in 32-bit float, along with the shaders. The operation that needs to be done in 64-bit float is the movement of objects, since their positions are kept in double precision. Scale and rotation are local to objects and have no connection to the world coordinates. There might be other operations, like ray casting into the world and various culling and checks, which can use the 64-bit float instantiations of the template matrix/vector/ray classes.
This scheme would allow a linear world without zone-switching logic and the inherent issues or mistakes associated with it. The "32-bit floatization" of the camera's current surroundings would be transparent to the gameplay programmer. Since we are in the future, I would like to avoid the hacks of the past and keep the systems as linear as possible, especially now that we are far less constrained hardware-wise, at least in several respects.
I do not know how this will work with zooming and very far away objects (extra-planetary objects in orbit, or even distant planets and stars), but tests will check that out. The Z-buffer range is not enough for distant objects, so a scaling technique or multi-pass rendering of the scene might be needed, along with post-processing and other related things to be handled, but in the end I really want a zone-free system, at least one transparent to the user.
The system is already implemented at the logic-component level; there is still work to be done in the world renderer system.