One of the most distinctive features of the space station’s interior design is the sheer scale of the place. The station has a very large interior, and the player can navigate it with complete freedom. This is in contrast to most other first-person games, which often allow the player only a very limited degree of freedom of movement. Typically, a player can move around only on the floor on which they have been placed, and if they want to reach higher areas they’ll need to find an elevator, some stairs, or a convenient stack of crates to jump up. In Human Orbit, this is not the case – because the player inhabits the body of a floating drone, they can move up and down with complete freedom. They can cruise along the ground around the ankles of the NPCs, or zip around in the relatively uncluttered spaces near the ceiling.
I think this is a good experience for the player. Having an extra degree of freedom in the way you can move creates an extra level of cognitive separation between the player (an AI) and the NPCs (humans): the player’s experience of the station is quite different to the (hypothetical) experience of the NPCs. It also means – and this is probably more important for most players – that you can zoom around the station much more quickly than the NPCs can; you won’t ever need to wait for an elevator, for a start. Another benefit is that the player can fly up to the very top of the station’s main cupola and look out across the whole station from the centre of it all. The cupola is high – very, very high – and looking out across the station from that height, across the expansive garden and into the modules lined neatly around its circumference, makes for quite a spectacular view. It’s also a great vantage point for surveying what is going on in the station: you’ll be able to see NPCs wandering across the gardens, relaxing near the cantina, chatting on the benches around the ornaments and performing other such human activities. It’s a great spot for surveillance.
The Price of Freedom
Unfortunately, allowing the player this level of freedom doesn’t come without a price – rendering all of this stuff is not cheap! Rendering the garden’s terrain, the lake and its waterfall, the citizenry, clutter, and the architecture of the station itself all adds up. On our development machines, which have quite modest specs, this has been taking a toll on the framerate, and we’ve been seeing some stuttering when exploring these complex areas of the station – if you’ve got a good eye, you may even have spotted a bit of it in the promo trailer that we released in December.
It was about time I fixed the situation, so the first thing to do was to start gathering data. I measured the relative performance of every shader we’re using in the game, put the results into a spreadsheet and found out what was costing the most. Then I wrote a little script that would scan the scene and identify when and where those expensive effects were being used, and Karl and I went over the list with a fine-toothed comb to find out where we could tweak those effects for better performance, or remove them without harming the look and feel. That saved us a little fps – and a few frames per second makes all the difference – but I knew that the real gains lay elsewhere.
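The actual script isn’t shown here, but the idea behind it can be sketched in a few lines of Python. Everything in this snippet – the object list, the shader names and the per-draw cost table – is a hypothetical stand-in for illustration, not our real tool or our real numbers:

```python
# Hypothetical sketch: given a cost table measured beforehand (e.g. ms per
# draw for each shader), flag every scene object whose shader blows the
# per-draw budget, worst offenders first.
def find_expensive_effects(scene_objects, shader_costs, budget_ms=0.5):
    """Return (object_name, shader, cost_ms) for objects over budget."""
    offenders = []
    for obj in scene_objects:
        cost = shader_costs.get(obj["shader"], 0.0)
        if cost > budget_ms:
            offenders.append((obj["name"], obj["shader"], cost))
    # Sort worst-first so the biggest potential wins get reviewed first.
    return sorted(offenders, key=lambda t: t[2], reverse=True)

scene = [
    {"name": "Waterfall", "shader": "RefractiveWater"},
    {"name": "Bench01",   "shader": "SimpleDiffuse"},
    {"name": "Lake",      "shader": "RefractiveWater"},
]
costs = {"RefractiveWater": 1.8, "SimpleDiffuse": 0.1}
print(find_expensive_effects(scene, costs))
# → [('Waterfall', 'RefractiveWater', 1.8), ('Lake', 'RefractiveWater', 1.8)]
```

The output of something like this is exactly the kind of list you can then walk through with an artist, deciding effect by effect what to tweak and what to cut.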
It was time to look at the way that our graphics data was being batched. I won’t go into too much detail about how batching works because there is already plenty of information about it on the Internet, but I will give a brief intro:
When your computer submits data to the graphics card to be processed, it hands that data over in groups that we call batches. In the worst case, a group of objects in a game will be split into one batch for every object–material pair – that is, each object costs one batch for each different material/texture it uses. If any of those objects share materials/textures, then the renderer is often smart enough to bundle them together and submit them in a single batch. Submitting a batch is a very slow process, so when there are too many batches you’ll see your FPS plummet. If you have a scene that you like the look of but it is split into too many batches, you can either make the scene simpler or arrange the input data in such a way that the batch count drops without altering the scene.
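The arithmetic above can be shown with a toy example. This is a hypothetical sketch – the object and material names are made up, and a real renderer’s batching rules are more involved – but it captures the difference between one batch per object and one batch per shared material:

```python
from collections import defaultdict

# Hypothetical sketch of the batching arithmetic. Each object is
# (name, material). Unbatched, every object costs its own draw call;
# a smart renderer can merge objects that share a material into one batch.
objects = [
    ("crate_a", "wood"), ("crate_b", "wood"),
    ("pipe_a", "metal"), ("pipe_b", "metal"),
    ("lamp", "glow"),
]

naive_batches = len(objects)        # one batch per object
by_material = defaultdict(list)
for name, mat in objects:
    by_material[mat].append(name)
merged_batches = len(by_material)   # one batch per shared material

print(naive_batches, merged_batches)  # → 5 3
```

Five objects but only three materials means three batches instead of five – and at the scale of a whole station module, that ratio is where the framerate lives or dies.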
That was exactly the case for Human Orbit, so I started looking at ways to reduce the number of batches being used in the scene without negatively altering the way that it looks. For objects on the station that don’t move, it is possible to combine their geometry data into one big clump. Another thing we can do is group materials/textures together where they are similar and where it makes sense to keep the texture data together – for instance, we have a table with a plain surface and some built-in lights with a striated pattern and a blue glow. We can pack these three materials into a single texture to improve batching.
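One wrinkle of packing several textures into one, sketched below: each mesh’s UV coordinates have to be squeezed into the sub-rectangle its texture now occupies in the shared image. This is a minimal, hypothetical illustration of the maths, not our actual packing code:

```python
# Hypothetical sketch of the UV fix-up needed after texture packing.
# rect = (x, y, w, h) is the sub-rectangle the original texture occupies
# in the combined texture, in normalised [0, 1] coordinates.
def remap_uv(u, v, rect):
    """Map a UV coordinate from the original texture into the atlas."""
    x, y, w, h = rect
    return (x + u * w, y + v * h)

# e.g. if the table-surface texture now occupies the left half of the
# combined texture, the centre of the old texture lands at (0.25, 0.5):
print(remap_uv(0.5, 0.5, (0.0, 0.0, 0.5, 1.0)))  # → (0.25, 0.5)
```

Once every mesh points at the shared texture with remapped UVs, those objects all use one material and can be submitted in one batch.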
Well, I spent all of Boxing Day going over one module of the station, writing long lists and checking them twice. It took a long time and I didn’t get very far – eventually I had to say “Forget this!” and switch to a different method. Writing up all those lists manually was too time-consuming and, frankly, very hard to get right! So, of course, I wrote a bit of code to do it for me. The script runs over the scene, checks which objects are structurally grouped together and, within those groups, which elements share materials and are suitable for having their meshes combined. The result was a tool that would scan the scene and then present me with a list of suitable combinations for review. I could have taken that list and manually set about combining things, but I decided that would be super boring, so instead I made a button that would process the list automatically. I could just press it, wander away for a couple of hours and come back to a processed scene.
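The core of that scan can be sketched like this. Again, a hypothetical sketch rather than the real tool – the field names and the “same parent + same material” grouping rule are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical sketch of the review tool: walk the scene, bucket the
# non-moving objects by (structural parent, material), and report every
# bucket with two or more members as a mesh-combining candidate.
def candidate_combines(scene_objects):
    buckets = defaultdict(list)
    for obj in scene_objects:
        if obj["static"]:
            buckets[(obj["parent"], obj["material"])].append(obj["name"])
    return [names for names in buckets.values() if len(names) > 1]

scene = [
    {"name": "chair_1", "parent": "Cantina", "material": "metal", "static": True},
    {"name": "chair_2", "parent": "Cantina", "material": "metal", "static": True},
    {"name": "drone",   "parent": "Cantina", "material": "metal", "static": False},
    {"name": "fern",    "parent": "Garden",  "material": "leaf",  "static": True},
]
print(candidate_combines(scene))  # → [['chair_1', 'chair_2']]
```

Note that the moving drone is excluded even though it shares a material – combining geometry only works for objects that never move relative to each other, which is exactly why the lists were so fiddly to build by hand.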
The results were good!
In the meantime, Karl had been experimenting with a different approach to the graphics altogether. Actually, it was an approach that would mean a switch to an entirely different version of the game engine. I’ll let Karl go into more detail about that in a future blog post, but for now I’ll highlight the part that is relevant to performance – the new engine version uses a completely different model for the way that it handles materials. I won’t go into detail, but what it meant for me was that I could now combine a great many more materials than I could before.
And, even better, the lighting model is gorgeous! In my screenshots the materials are badly configured and look all wrong – that’s my fault for not updating them before posting – but what I want to draw your attention to is the massive improvement in the lighting model: where the old screenshots look flat and boring, the new ones really pop! That alone is a big step up in the way things look, but it’s not what I’m talking about right now. I’m talking about performance – and the performance here is a massive improvement on before as well! For those keeping count, 699 FPS is 291% of the original framerate!
That’s all for now. This has been a long blog post, so I’ll sign out now before I bore you any further – but keep watching this space, Karl’s going to be posting some screenies soon which actually have correctly configured materials (not weird ones like I’ve just shown). Look forward to it!