The Engine

The engine for HollowFear was developed from scratch in XNA and uses no third-party components. In its current state, it is not a general-purpose engine; it is tailored and optimised for games with a top-down (isometric) view and open-world, outdoor environments. Of course, the engine is not cutting edge in any aspect. Nonetheless, it is more than just the sum of its parts: lightweight, optimised, and more than adequate for the needs of this game. To make sure the engine would run well on older graphics hardware, my main development rig still uses an NVIDIA GeForce 9600 GT, a GPU that is almost six years old now. The game runs flawlessly on it at 60 fps, with 2xFSAA, SSAO, and soft shadows, at 1920×1200.
 
The development was a beautiful and rewarding journey for me, and one that would have been impossible without the XNA and general game development community out there. Since my time is really limited at the moment, I cannot write any in-depth articles about the engine and the whole development process. I hope I will have time to do that in the future, after this project is finished; I feel I owe that to the aforementioned community.
 
So here is a brief description of some core aspects and elements of the engine, along with my thoughts on development and the codebase used. Even though no third-party components are used, the codebase for many core elements is taken from various sources, then modified, optimised, and extended to fit my needs. I hope this will be of help to all the struggling XNA indies out there and give you some basic ideas and guidelines. This is not a tutorial, and I cannot go into the technical details of implementation. Most of all, I want to show you, and encourage you, that all the code you need to create a fully functional engine is out there: in books, on the web, free, within your reach.
 

TERRAIN

The terrain is a low-poly, quadtree-based mesh, generated and manipulated in the engine's level editor. It is textured using vertex texture weights rather than opacity maps. The ground textures currently used are 512×512 pixels, and I think this will not change, as I believe that is the optimal performance/quality balance for the needs and scope of HollowFear.
 
The textures do, however, make use of normal maps. That is not normal practice when textures are hand painted, as in this case, but I wanted to experiment, and I believe ground normal maps help create the right atmosphere. I am satisfied with the results, even though they surely still need some tweaking. Some ground textures, such as rocks and snow, make use of specular maps stored in the main texture's alpha channel. The codebase used for the terrain quadtree and texturing is from Riemer's book and his web articles.
 
3D grass is not done using billboarding, but with a simple low-poly mesh and an alpha-blended texture (or alpha-tested at lower quality settings). The trick to getting decent looks, however, is in the design of the mesh (see the image), in an even but slightly randomised distribution across the terrain with randomised mesh rotation and scaling, and in an appropriate texture. After some playing around, I realised the texture works best with a gradient overlay from dark gray at the bottom to bright gray at the top. In my level editor I then set the color overlay of the grass meshes so that the color of the bottom part of the mesh at least roughly matches the terrain texture used as the 3D grass base. For the sake of performance, all grass meshes are merged into a single vertex buffer per terrain quadtree node. Grass is animated in a very simple manner, offsetting the top vertices in the vertex shader based on randomised vertex colors and three sinusoids of different magnitude.
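The sway idea above can be sketched as follows. This is a minimal Python illustration, not the engine's shader; the amplitudes, frequencies, and the use of a per-vertex random value (which the engine encodes in the vertex color) are assumptions for the sake of the example.

```python
import math

def grass_sway_offset(time, rand, amplitudes=(0.08, 0.04, 0.02),
                      frequencies=(1.0, 2.3, 3.7)):
    """Horizontal offset for a grass mesh's top vertices.

    `rand` is a per-vertex random value in [0, 1] used as a phase shift so
    neighbouring blades do not sway in lockstep. The three sinusoid
    amplitudes and frequencies are illustrative values, not the engine's.
    """
    phase = rand * 2.0 * math.pi
    return sum(a * math.sin(f * time + phase)
               for a, f in zip(amplitudes, frequencies))
```

In the real vertex shader the same sum would be evaluated per vertex and added to the world-space x/z position of the top vertices only.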
 
Highly optimised terrain decals are also supported, redrawing only the portions of terrain required (the details of the implementation are beyond the scope of this article).

WATER

Water is rendered using only two triangles spanning the entire terrain at a specified height, their vertices slowly moving up and down via a sine offset in the vertex shader. The base water pixel color is calculated with a Fresnel term between a custom solid color and the reflection color from an environment cubemap. To create soft terrain intersection edges, a linear geometry depth map is first rendered to an offscreen surface (half screen size) for access in the water pixel shader (this same depth map is then used for soft particle edges and SSAO as well). The water depth factor is also used to apply a "foam" texture along the coastline. Water bumps are created using two wave normal maps slowly sliding in opposite directions, and specular highlights are applied based on the same two normal maps.
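The Fresnel blend between the solid water color and the cubemap reflection might look like this. This is a hedged sketch using the common Schlick approximation; the bias and power constants are my own illustrative choices, not values from the engine.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def water_color(base, reflection, view_dir, normal, bias=0.2, power=5.0):
    """Blend the solid water color with the cubemap reflection color using a
    Schlick-style Fresnel term: more reflection at grazing angles, more base
    color when looking straight down. `bias`/`power` are illustrative."""
    cos_theta = max(0.0, dot(view_dir, normal))
    fresnel = min(1.0, bias + (1.0 - bias) * (1.0 - cos_theta) ** power)
    return lerp(base, reflection, fresnel)
```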
 
The basic water implementation ideas came, again, from Riemer's web articles and a recipe from his book.
 
Just recently, I coincidentally ran across this soft-edged XNA water sample. Even though my technique is a little different, I think it is a great sample to start with.

SHADOW MAPPING

Shadows were one of the features I found most intriguing to implement. After some research, I realised that using the "classic" shadow mapping technique made the most sense. I will not go into the details of its base implementation here, as you can find a number of great resources on the web; it has been a common, widely used technique for years now.
 
Softening the rough edges of basic shadow mapping, however, is another story. I implemented and experimented with a number of techniques; some of the results are compared in the image (shadow mapping with various edge filtering techniques).
 
The first technique I used was a 16-tap (4×4) PCF (percentage closer filtering). As you can see in the image, the result can be quite satisfactory if the shadow map is not too small. However, with 16 texture lookups per pixel, this technique is quite expensive. Looking for ways to optimise it, I realised I could get comparable results with just 8 texture fetches per pixel using a fixed Poisson disc sampling pattern (I tried rotating it as well, but that wasn't much of an improvement). Even though I wasn't completely satisfied with the result (not smooth enough, instability, biasing problems, still 8 fetches per pixel…), I decided to stick with it.
 
Up until recently, that is. When I implemented SSAO, I knew it was time to take my shadow filtering one step further as well, so I experimented with the VSM (variance shadow maps) and ESM (exponential shadow maps) techniques. They are similar in many ways, and not difficult to implement once you have basic shadow mapping done. In contrast with PCF, however, they both require a linear light-view depth map, and the occluder as well as the receiver must be rendered into it (PCF works with a logarithmic depth map, and the receivers can be left out). On the other hand, they both let you prefilter the map with a Gaussian blur and use fast hardware linear/anisotropic filtering and multisample antialiasing (MSAA). And last but definitely not least, they require just a single texture fetch per pixel when calculating the occlusion factor. Both are superior to PCF: better looking and faster to render. Furthermore, I found ESM to be superior to VSM in every aspect. It is faster (it doesn't require the squared depth in a second channel), easier to implement, and free of VSM's light bleeding artifacts; biasing is virtually a non-issue as well. The only drawback of ESM is a lack of shadow at the contact point of occluder and receiver. Luckily, I was able to overcome this problem with a little cheat (though it is very specific to my situation and not a general solution at all). A more expensive but good solution to the drawbacks of both VSM and ESM is actually combining them into the EVSM technique. I believe this is the right direction for the future, and you can already find some good papers on it out there.
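The single-fetch ESM occlusion test boils down to one exponential comparison. Here is a minimal sketch of the standard formulation; the sharpness constant `c` is an assumed typical value, and in the real shader the exponential of the occluder depth is what gets stored and Gaussian-prefiltered in the shadow map.

```python
import math

def esm_visibility(filtered_exp_depth, receiver_depth, c=80.0):
    """ESM shadow test with a single shadow-map fetch.

    `filtered_exp_depth` is the (prefilterable) value exp(c * z_occluder)
    sampled from the shadow map; `receiver_depth` is the linear light-view
    depth of the shaded pixel. Returns 1.0 when fully lit and falls off
    exponentially toward 0.0 as the receiver sinks below the occluder.
    """
    return min(1.0, filtered_exp_depth * math.exp(-c * receiver_depth))
```

Because the test is a product of exponentials, blurring `exp(c * z)` in the map before the comparison is what makes hardware filtering and MSAA legal here, unlike with a raw depth comparison.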
 
So, at last, I ended up using ESM with a single shadow map (as the directional light is the only one casting shadows) of size 1024×1024, prefiltered with a 3×3 bilateral Gaussian blur, using linear sampling and 2xMSAA. For the light-view depth map I use the RG32 surface format (two fixed-point 16-bit channels). I get away with that without any precision issues for two reasons: the depth is linearised, and the viewport distance is very small due to the specific, upside-down, fixed camera angle, with the near and far view planes only about 40 units apart. I write depth to both channels, but when blurring I blur only the first channel, leaving the second intact. I then use the first channel when calculating the ESM term, and the second when doing simple depth testing with no edge filtering at all. The latter is used for the 3D grass: it does not cast shadows, so I can leave it out of the depth buffer, and any sort of shadow edge filtering on it would be pointless. All in all, I am very satisfied with the final result now. It is a very fast and good-looking solution, at least compared to the previous 8-tap PCF filtering. Finally! ;)
 
I don't want to make any general judgements here, but in my situation, ESM turned out to be the best solution by far. You should definitely at least give it a try!
 
Some of the resources used:
http://xbox.create.msdn.com/en-US/education/catalog/sample/shadow_mapping_1
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html
http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html
http://www.punkuser.net/vsm/
http://devmaster.net/posts/3002/shader-effects-shadow-mapping
http://www.olhovsky.com/2011/07/exponential-shadow-map-filtering-in-hlsl/

LIGHTS & REFLECTIONS

The whole scene is lit by a single directional light, which is also the only light that casts shadows and creates specular highlights. In addition, there is a dynamic point light system that enables a fixed number of arbitrary point lights at a time, mainly used for fire and spell effects. Since this is a forward rendering engine, with everything done in a single pass (besides the pre-rendered view-space and light-view linear depth maps used for soft edge intersection, SSAO, and shadow mapping), I tried to keep the number of active point lights as low as possible for optimal performance. At the moment, the limit is set to 6 active lights at a time. I realised that in my scenario I would rarely need more, since the lifespan and range of these lights is normally very short, and even if an occasional "light flash" is dropped, it is not very noticeable when there are already 6 active point lights on screen.
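A fixed-size pool that simply drops requests over the limit captures the behaviour described above. This is my own sketch of the idea; the light representation and the class API are assumptions, not the engine's actual code.

```python
class PointLightPool:
    """Fixed-size pool of short-lived point lights; requests beyond the
    limit are dropped, matching the article's 6-light cap. The dict-based
    light representation here is a placeholder for illustration."""

    def __init__(self, limit=6):
        self.limit = limit
        self.active = []

    def update(self, dt):
        # Age out expired lights each frame.
        for light in self.active:
            light["ttl"] -= dt
        self.active = [l for l in self.active if l["ttl"] > 0.0]

    def try_add(self, position, color, ttl):
        if len(self.active) >= self.limit:
            return False  # the occasional dropped flash is barely noticeable
        self.active.append({"pos": position, "color": color, "ttl": ttl})
        return True
```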
 
Normal maps are used on all objects. The alpha channel of the normal maps is used for the glow (self-illumination) factor, and the alpha channel of the diffuse maps is used for the specular factor (I can afford that because only foliage actually uses the alpha channel for alpha blending/testing, and foliage has a preset, fixed specular factor). So, all in all, two textures hold all the lighting information needed.
 
For highly reflective surfaces (swords, armor, etc.), the engine makes use of environment cubemap reflections. No rocket science in the implementation, but definitely not a trivial task when it comes to creating or selecting appropriate cubemaps. I am not completely satisfied yet with the ones I am using at the moment.
 
Rim lighting is used in the shaders as a cheap model "outline" effect (no extra draw to an offscreen surface is needed). It does the job decently, as long as the models are not too low-poly.
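The rim term itself is the standard one: strongest where the surface normal turns away from the viewer. A minimal sketch (the falloff `power` is an illustrative choice, not the engine's value):

```python
def rim_factor(normal, view_dir, power=3.0):
    """Standard rim-lighting term: 0 where the surface faces the viewer,
    approaching 1 at silhouette edges. Both vectors are assumed to be unit
    length; `power` controls how tight the rim is."""
    n_dot_v = max(0.0, sum(n * v for n, v in zip(normal, view_dir)))
    return (1.0 - n_dot_v) ** power
```

The resulting factor is typically multiplied by a rim color and added to the lit pixel, which is why very low-poly models betray the trick: the interpolated normals change too coarsely across faces.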

SCREEN SPACE AO & BLOOM

Final image with SSAO
Ambient occlusion was one of the last features implemented. Given its popularity, I just had to give it a try ;) and I am happy I did; I like the results. Since this engine uses forward rendering, and lightweightness is its motto, I decided to implement the depth-only version of SSAO; extra rendering of position and normal buffers just for the sake of SSAO made no sense. On the other hand, I was already rendering a linearised depth buffer for other purposes, mainly soft particle and water edges. So the original depth-only SSAO, as first introduced by Crytek, was the obvious choice. After some research and experimentation, I implemented a technique presented by ArKano on GameDev.net, and further improved by Martins Upitis here. Big thanks to both of them for sharing their excellent work.
 
SSAO map
For optimisation purposes, the linearised depth (the technique doesn't work with the original logarithmic depth) and the SSAO are rendered at half the screen resolution. The core of my SSAO shader is basically just a slightly modified HLSL version of the aforementioned technique. The depth map is sampled 16 times. The SSAO map is blurred with a 3×3 Gaussian pass before being applied to the geometry using linear sampling (to get some more cheap smoothing). All in all, it gives quite nice results at a relatively low cost (low for a global illumination effect, that is ;). In general, though, the result of depth-only SSAO with a relatively low depth sample count and a half-sized buffer is more of a stylistic than a realistic effect. Well, that's exactly what we need in HollowFear :).
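The essence of depth-only SSAO can be sketched on a plain 2D depth grid: neighbours that are closer to the camera than the centre pixel contribute occlusion, attenuated by the depth difference. This is a heavily simplified Python stand-in for the 16-sample HLSL kernel; the kernel offsets, `strength`, and clamping are illustrative.

```python
def ssao_depth_only(depth, x, y, kernel, strength=4.0):
    """Depth-only AO for one pixel of a 2D depth grid (larger value =
    farther from camera). Each kernel sample that is closer than the centre
    adds occlusion, scaled by the depth difference; the result is an
    ambient visibility factor: 1.0 = fully open, 0.0 = fully occluded."""
    center = depth[y][x]
    occ = 0.0
    for dx, dy in kernel:
        sy = min(max(y + dy, 0), len(depth) - 1)
        sx = min(max(x + dx, 0), len(depth[0]) - 1)
        diff = center - depth[sy][sx]  # positive when the sample occludes us
        if diff > 0.0:
            occ += min(1.0, diff * strength)
    return max(0.0, 1.0 - occ / len(kernel))
```

A real implementation adds a range cutoff so distant silhouettes don't darken the background (the "halo" artifact the linked GameDev.net thread is about), but the structure is the same.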
 
SSAO as applied to geometry
The tricky part of getting a nice final result is fine-tuning the various SSAO parameters to fit your scene, and the way you apply it to your geometry/final image. Nothing difficult; it just takes some time to experiment. One of the main purposes of SSAO in my case was to nail the map prefabs to the ground. That is why I didn't consider lighting factors when applying the SSAO term to the ground (it is always fully applied). On the other hand, I took directional light exposure into account when applying it to objects, to create additional contrast and dynamics between the shadowed and lit-up parts. Again, more of a stylistic than a realistic effect.
 
Some of the resources used during development:
http://www.gamedev.net/topic/550699-ssao-no-halo-artifacts/
http://devlog-martinsh.blogspot.com/search/label/SSAO
A Simple and Practical Approach to SSAO
http://theorangeduck.com/page/pure-depth-ssao
 
 
Bloom is the only post-processing effect used in HollowFear. Again, the base code comes from the official XNA education catalogue, just slightly modified and optimised. It really isn't difficult to implement, and you should definitely give it a try; it is worth the while. Fine-tuning the bloom parameters is important to get the final image you want. After playing with it for a while, I realised it is a must to be able to set different bloom parameters for each scene/level, based on the time of day (directional light color, strength, and angle), the weather, and the atmosphere you want to create in general. So I incorporated that option into my level editor.
 
I also experimented with HDR rendering. After playing with it for a while, I realised that for the purposes of HollowFear, HDR does not add much appeal to the final image compared to bloom, at least in my personal opinion. And since bloom is cheaper to render, I decided to stick with it for the moment, and I believe it will remain so in the future.

SPECIAL EFFECTS

Special effects… well, nowadays I wouldn't call them special anymore, but rather common, must-have effects ;). Fire, explosions, smoke, trails, magic… HollowFear is no exception.
 
Most of these effects are done using a particle system built upon an excellent sample directly from the official XNA education catalogue. The codebase of the sample was extended to be more flexible, so a single particle system (the shader, actually) takes size, color, and duration parameters, meaning you can create slightly different effects using a single system, significantly reducing the number of systems needed. Another added feature is rotating particles in the vertex shader to face the direction they are moving, which is needed if you want to render sparks with trails, for example. Other than that, the mentioned sample is a great, optimised piece of code. Particles are updated entirely on the GPU; the CPU only takes care of adding new ones.
 
Projectile trails are not rendered as particles. They are rendered as a plane, consisting of two triangles, stretching in the direction the projectile is flying. Nothing fancy here. Of course, all active trails are batched together into a single vertex buffer to reduce the number of draw calls.
 
I also make use of simple horizontal planes and billboarded planes for some basic effects (using one texture and one vertex buffer for all of them) where a particle system would be overkill.
 
Primarily for melee weapon trails (like the very obvious sword swing trail), I developed a ribbon trail system. The base implementation is rather simple. You define the length of a ribbon by a number of frames of duration, say 20 frames. Then you create a triangle list (or strip) of 20 quads. Each frame, you shift each pair of vertices (top and bottom of the trail) over by one, and you're done. For the trails of slow objects, that should do it. A swing of a sword, however, is a very fast movement, finished in only a few frames. That creates a very angular trail, rather than the smooth, circular one we'd expect in this case. So, on top of the base system, we have to implement some kind of smoothing by inserting points between the base (control) points. First, I implemented Catmull-Rom spline smoothing, and although it worked well, the sword swing trail looked unnatural. Next, I smoothed it with a Bézier spline; it looked better, but still not perfect. Finally, I used B-splines, and I was satisfied with the result. I now use all three of them in different situations, as each has slightly different characteristics. To visualise the differences between them, I used a very handy Java applet.
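As a sketch of the point-insertion step, here is the standard Catmull-Rom formula for interpolating between two recorded control points of the ribbon (the point layout as 2D tuples is my simplification):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between control points p1 and p2, with p0
    and p3 as the surrounding ribbon points; t in [0, 1]. Each point is a
    tuple of coordinates. Evaluating this for several t values inserts the
    intermediate points that smooth a fast swing."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2.0 * b) + (-a + c) * t
               + (2.0 * a - 5.0 * b + 4.0 * c - d) * t2
               + (-a + 3.0 * b - 3.0 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3))
```

Catmull-Rom passes exactly through the control points, which is why a very fast swing can still look "jerky" at the controls themselves; B-splines trade that exactness for smoothness, which matches the article's experience.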

AI, PATHFINDING & COLLISION DETECTION

The core of the NPCs' intelligence in HollowFear is a fast A* pathfinding algorithm, operating on top of a terrain pathfinding grid constructed and edited in the engine's level editor. While researching online how to improve my own old implementation of A*, I ran across a far better, very optimised, and well-documented A* implementation by Gustavo Franco. Many thanks to him for sharing his excellent work.
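For readers new to the technique, here is a deliberately minimal A* on a 2D walkability grid; it is a didactic stand-in, not the optimised implementation the engine uses (4-connected movement and a Manhattan heuristic are my simplifications).

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D grid of booleans (True = walkable), 4-connected,
    with a Manhattan-distance heuristic. Returns a list of (x, y) cells
    from start to goal, or None if no path exists."""
    w, h = len(grid[0]), len(grid)
    open_heap = [(0, start)]
    came_from = {start: None}
    g = {start: 0}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:                      # reconstruct by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx]:
                ng = g[cur] + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    came_from[(nx, ny)] = cur
                    f = ng + abs(goal[0] - nx) + abs(goal[1] - ny)
                    heapq.heappush(open_heap, (f, (nx, ny)))
    return None
```

Optimised implementations like the one referenced above replace the dictionaries with flat arrays and a specialised priority queue, which is where most of the speed comes from.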
 
On top of the pathfinding, I developed a simple navigation system with chase and evade functionality, and a simple, state-based, parametric NPC decision-making system.
 
For the purpose of collision detection, every object in the world has a very low-poly collision mesh, the geometry of which is imported into the model tag using a custom model processor, making it available for collision checks on the CPU. First, a close-proximity check is done using the model's bounding sphere. Then, if needed, in most cases a ray vs. collision-mesh-triangle check is performed, with the ray transformed into the object's local space, rather than each triangle transformed into world space. For player vs. NPCs, a very simple, distance-based collision check is used.
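Transforming one ray instead of many triangles is the key trick here. A sketch of the idea, under the simplifying assumption that the object's world transform is just translation plus uniform scale (the engine would invert the full world matrix, including rotation):

```python
def ray_to_local(ray_origin, ray_dir, world_pos, world_scale):
    """Bring a world-space ray into an object's local space so the
    ray/triangle tests can run against untransformed collision-mesh
    triangles. Assumes world = uniform scale then translation; a full
    implementation applies the inverse world matrix instead."""
    inv_s = 1.0 / world_scale
    local_origin = tuple((o - p) * inv_s
                         for o, p in zip(ray_origin, world_pos))
    # Directions ignore translation; only the scale (and rotation,
    # omitted here) applies.
    local_dir = tuple(d * inv_s for d in ray_dir)
    return local_origin, local_dir
```

One local-space ray against N static triangles costs one inverse transform, versus N world transforms the other way around, which is exactly why the article's approach wins.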

WEATHER

In my opinion, nice and intense weather effects can significantly contribute to the atmosphere of any game set in an outdoor world. At the moment, the engine supports shader fog, rain, and snow effects.
 
Rain and snow are implemented as 3D particles, with each node of the terrain quadtree having its own particle vertex buffer. Each buffer is initialised with particles of random position, size, and velocity. Since, after creation, particles are updated entirely on the GPU based on the elapsed time passed to the shader, and every quadtree node gets rendered only when on screen, the problem of too many active raindrops or snowflakes is solved. Particles that are too close or too far are faded out in the pixel shader. The ceiling and floor for the particles are preset and fixed. Some of the ideas came from this article.
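The "update entirely on the GPU" part works because each particle's position is a pure function of its initial state and the elapsed time, wrapping between the fixed ceiling and floor. A small sketch of that stateless evaluation (the function name and parameters are mine, for illustration):

```python
def particle_height(initial_y, fall_speed, elapsed, ceiling, floor):
    """Stateless, GPU-style particle update: the current height is derived
    from the initial position and elapsed time alone, wrapping from floor
    back to ceiling, so no per-particle state lives on the CPU."""
    span = ceiling - floor
    return ceiling - (initial_y - floor + fall_speed * elapsed) % span
```

In the vertex shader the same expression runs per particle per frame; the CPU only ever uploads the initial random positions and velocities once per quadtree node.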
 
I like snow. No, I love snow. That is why I wanted to take a little extra step with it in my engine: I wanted to create some sort of real, volumetric snow accumulation. And I did get halfway there, using this NVIDIA sample as a base. Unfortunately, there were some glitches and performance problems; nothing that couldn't be solved, though. But given that I am alone on this project, and cannot afford to lose too much time on these extra eye candies if I ever want to finish it, I had to put it aside for now. It is one of the things I will definitely implement in the bright future to come ;).

ANIMATION

XNA does not include any runtime animation classes. Luckily, a very good sample is available in the official XNA education catalogue. It works perfectly, and I am making good use of it, with just a few minor modifications.
 
However, it has no support for animation blending, which you are very likely going to need no matter what kind of game you are making. So I developed a new blended animation class, which basically uses two base animation classes and linearly blends their resulting bone transformations based on a given blend duration. For animating NPCs, that did it. For the player character, it did not.
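The linear blend itself is simple. A sketch, with each bone reduced to a 3-tuple translation for brevity (a real implementation would decompose the matrices and slerp the rotations rather than lerp whole transforms):

```python
def blend_poses(pose_a, pose_b, t):
    """Linearly blend two skeleton poses, each a list of per-bone
    translations (3-tuples), by weight t in [0, 1]. t ramps from 0 to 1
    over the blend duration to cross-fade between animations."""
    return [tuple(a + (b - a) * t for a, b in zip(ba, bb))
            for ba, bb in zip(pose_a, pose_b)]
```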
 
In HollowFear, you can move around and swing your sword, or cast a spell, simultaneously. That requires the character animation system to be able to animate the lower part of the body (legs) separately from the upper part (torso), and then combine them into a final output, with the junction bone compensating for the rotational differences. I managed that using two blended animation classes, assigning each one only the bone indices it needs to transform, and voilà, the "multifunctional" player character was born.
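The per-bone partition boils down to picking each bone's transform from whichever animation owns that bone index. A minimal sketch (the junction-bone rotation compensation described above is deliberately omitted, and the pose representation is a placeholder):

```python
def combine_skeleton(lower_pose, upper_pose, upper_bones):
    """Combine two full-skeleton poses (lists of per-bone transforms) into
    one: bones whose indices are in `upper_bones` (e.g. torso and arms)
    come from the upper-body animation, the rest from the lower-body
    animation (e.g. a run cycle for the legs)."""
    return [upper_pose[i] if i in upper_bones else lower_pose[i]
            for i in range(len(lower_pose))]
```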

USER INTERFACE

User interface was the only component I wasn't really keen on developing myself from scratch. I've worked a lot on that sort of thing during the development of my business app, so rather than a challenge, it would have been a boring, routine job. So I searched the web for a simple and lightweight, but fast and stable GUI component for XNA, and couldn't find one (I'm not saying anything bad about the existing ones; they just weren't what I was looking for). I got some basic ideas from Riemer's book, and since the UI requirements of HollowFear are rather basic, I was done quickly. I did take the extra step of creating 2D animated menu backgrounds, which I think look pretty cool, thanks to the nice artwork as well.
 
At this point, I would like to add one more thing, a very personal opinion of mine: developing a simple GUI can actually be a nice sandbox, a good starting point for learning the basics of XNA and for upgrading your basic knowledge of C#, in terms of better understanding and practicing object-oriented programming principles (polymorphism, interfaces, abstract classes, delegates, events, etc.).

LEVEL EDITOR

All along the development of the engine, I was also developing a level editor for it. During the development of the engine's core components, it served as a testing platform. As those were more or less finished, I added some polish and gameplay elements to it, such as the creation and manipulation of various gameplay triggers, enemy spawn points, static enemies, etc. So now it serves, as its name suggests, as the level editor for HollowFear.
 
The editor is based on .NET Windows Forms. Given the power and ease of use of managed C# and the .NET framework's WinForms, it was no difficult task. Another case where XNA in combination with Visual Studio shines bright!

 

Here are some major, freely available online resource collections that I found of great help during the engine development in general:

Shawn Hargreaves Blog
Official XNA Education Catalogue
Riemer’s XNA tutorials
Gamedev.net forums and articles
Official XNA Community Forum
Nvidia GPU Programming guide
Nvidia GPU Gems
Nvidia Shader Library
Catalin’s Game Development Blog

Copyright

©2014 CodeRebellion Ltd.
All rights reserved.
