On Testability and Unity 3D

Testability is a crucial consideration when we write code, and for us it includes the ability to execute and unit test our code outside of Unity.

Unfortunately, Unity throws up a few challenges here: MonoBehaviours cannot be instantiated or unit tested outside the engine, and the same goes for most types inside UnityEngine.dll.

This has led us to develop ‘Uniject’ – a C# testability framework for Unity that offers:

  • Plain Old C#, testable MonoBehaviour equivalents
  • A robust and flexible way of creating GameObjects automatically, inferred from the code that drives them
  • Constructors!
  • An extremely flexible code base – in short, the benefits of dependency injection (DI) and inversion of control (IoC).

The first attempt

Here’s how to make an untestable zombie, taken from our latest game, The Clones of Corpus:

[RequireComponent(typeof(SphereCollider))]
[RequireComponent(typeof(AudioSource))]
[RequireComponent(typeof(NavMeshAgent))]
...
public class Zombie : MonoBehaviour {

    private AudioSource audioSource;
    ...

    void Start () {
        this.audioSource = GetComponent<AudioSource>();
        ...
    }
}

Problems:

  • Only Unity knows how to create MonoBehaviours
  • We depend on concrete types in the UnityEngine namespace that we can’t mock out

So, how might we make our zombie testable?

The key is to break its dependence on UnityEngine*, and instead depend on interfaces that mirror their UnityEngine equivalents, supplied as constructor parameters using the Dependency Injection pattern.

Here’s how our testable zombie looks (many dependencies omitted):

[GameObjectBoundary]
public class Zombie : Testable.TestableComponent {

    private IAudioSource audioSource;
    ...
    public Zombie(Testable.TestableGameObject obj, IAudioSource audioSource...) : base(obj) {
        this.audioSource = audioSource;
        ...
    }
}

We use Ninject, an inversion of control framework, to actually construct our objects at runtime.

A test!

We’re now working with Plain Old C# Objects; here’s one of our NUnit tests:

[Test]
public void testZombieKilled() {
    Zombie zombie = kernel.Get<Zombie>();
    zombie.kineticDamage(5);
    step(1);
    Assert.AreEqual(ZombieState.DYING, zombie.getState());
}

(Not shown is the testing base class that sets up Ninject for us and provides the means to ‘step’, simulating one or more frames).
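
As an illustration, here is a minimal sketch of what such a base class might look like; MockModule and ComponentRegistry are hypothetical stand-ins, not Uniject’s actual types:

// Hypothetical sketch of the NUnit base class; Uniject’s real harness differs.
public abstract class AbstractTestableFixture {

    protected Ninject.IKernel kernel;

    [NUnit.Framework.SetUp]
    public virtual void setUp() {
        // MockModule (hypothetical) binds every Testable interface to a fake.
        kernel = new Ninject.StandardKernel(new MockModule());
    }

    // Simulates one or more frames by pumping Update() on every live component.
    protected void step(int frames) {
        var components = kernel.Get<ComponentRegistry>(); // hypothetical registry
        for (int i = 0; i < frames; i++) {
            components.updateAll();
        }
    }
}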

How it works

Everything we need to instantiate a zombie is declared as a constructor parameter:

TestableGameObject

This is a dependency of the TestableComponent base class. It is equivalent to the UnityEngine.GameObject class; in the same way that MonoBehaviours belong to GameObjects, TestableComponents belong to a TestableGameObject.
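
To make that relationship concrete, here is a minimal sketch of the base class; the registration hook is our assumption, not Uniject’s actual API:

// Hedged sketch of TestableComponent; the real Uniject class differs.
public abstract class TestableComponent {

    public TestableGameObject Obj { get; private set; }

    protected TestableComponent(TestableGameObject obj) {
        this.Obj = obj;
        // Hypothetical hook letting the host (Unity or NUnit) drive Update().
        obj.registerComponent(this);
    }

    // Driven once per frame: by Unity in-game, or by the test harness under NUnit.
    public virtual void Update() { }
}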

IAudioSource

This mirrors the UnityEngine AudioSource class. There are a number of other parameters which are not shown, such as INavMeshAgent, ISphereCollider…
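
To give a flavour, such an interface might expose members like these (the member list here is our own minimal choice, mirroring part of the UnityEngine.AudioSource surface):

// Hedged sketch: a hand-written mirror of UnityEngine.AudioSource.
public interface IAudioSource {
    void Play();
    void Stop();
    bool isPlaying { get; }
    float volume { get; set; }
}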

Auto wiring

We configure Ninject with different Modules for running under NUnit and Unity. The NUnit module tells Ninject to use mock implementations of our interfaces, and the Unity module tells it to use our ‘real’ implementations that wrap their UnityEngine equivalents.
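
A hedged sketch of the two modules (the concrete class names are assumed; UnityAudioSource is sketched further below):

using Ninject.Modules;

// Loaded under NUnit: every Testable interface resolves to a mock.
public class MockModule : NinjectModule {
    public override void Load() {
        Bind<IAudioSource>().To<MockAudioSource>();
        Bind<TestableGameObject>().To<MockGameObject>();
    }
}

// Loaded in-game: everything resolves to a thin wrapper over UnityEngine.
public class UnityModule : NinjectModule {
    public override void Load() {
        Bind<IAudioSource>().To<UnityAudioSource>();
        Bind<TestableGameObject>().To<UnityGameObject>();
    }
}

The in-game kernel is then simply new Ninject.StandardKernel(new UnityModule()), while the test kernel swaps in MockModule.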

The Unity Ninject module contains some special scoping to ensure our TestableComponents are translated into an appropriate GameObject hierarchy.
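
The essential trick, sketched below under our own assumptions rather than from Uniject’s actual code, is a custom scope callback: every binding in one resolution graph is scoped to its root request, so a component, its TestableGameObject and its wrappers all end up sharing one GameObject.

// Hedged sketch: one scope object per top-level Get<T>() call.
private static object GameObjectScope(Ninject.Activation.IContext ctx) {
    Ninject.Activation.IRequest request = ctx.Request;
    while (request.ParentRequest != null) {
        request = request.ParentRequest;
    }
    return request;
}

// Applied to the relevant bindings in the Unity module, e.g.:
// Bind<TestableGameObject>().To<UnityGameObject>().InScope(GameObjectScope);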

An instantiation

So how does our zombie actually get created when we call the following?

kernel.Get<Zombie>();

Ninject sees that our Zombie requires an instance of TestableGameObject. This is bound to a class that wraps a UnityEngine.GameObject, so Ninject instantiates the wrapper, which in turn creates our Unity GameObject.

Next, Ninject tries to create our IAudioSource parameter. This is bound to a concrete class that wraps the UnityEngine.AudioSource class (a Unity component). This wrapper itself depends on having a GameObject to add the AudioSource to, which it takes as a constructor parameter. A custom Ninject scoping ensures that the same GameObject is supplied as was created for the TestableGameObject.
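
A sketch of what that wrapper could look like (the members follow our IAudioSource sketch above, not Uniject’s real code):

// Hedged sketch of the Unity-side wrapper.
public class UnityAudioSource : IAudioSource {

    private readonly UnityEngine.AudioSource source;

    // The custom scoping supplies the same GameObject the zombie lives on.
    public UnityAudioSource(UnityEngine.GameObject obj) {
        this.source = obj.AddComponent<UnityEngine.AudioSource>();
    }

    public void Play() { source.Play(); }
    public void Stop() { source.Stop(); }
    public bool isPlaying { get { return source.isPlaying; } }
    public float volume {
        get { return source.volume; }
        set { source.volume = value; }
    }
}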

This process continues for the remaining dependencies.

Portability

An interesting consequence of this decoupling is the ease of porting our code to another game engine. To get our code running on Windows Phone 7, one would merely need to provide XNA-based implementations of the interfaces in the Testable namespace.
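
For instance, a hypothetical XNA-backed audio source, again following our sketched interface rather than any real Uniject code:

// Hypothetical XNA implementation of IAudioSource.
public class XnaAudioSource : IAudioSource {

    private readonly Microsoft.Xna.Framework.Audio.SoundEffectInstance sound;

    public XnaAudioSource(Microsoft.Xna.Framework.Audio.SoundEffect effect) {
        this.sound = effect.CreateInstance();
    }

    public void Play() { sound.Play(); }
    public void Stop() { sound.Stop(); }
    public bool isPlaying {
        get { return sound.State == Microsoft.Xna.Framework.Audio.SoundState.Playing; }
    }
    public float volume {
        get { return sound.Volume; }
        set { sound.Volume = value; }
    }
}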

The original Last Stand, despite being published as a pure Java Android game, is playable as a standalone desktop Java application (and was mostly playtested this way).

Compatibility

The framework has been verified on Desktop, Android and iOS builds. Name mangling makes it unsuitable for Flash builds.

The Price!

Performance:

  • All calls to UnityEngine now go through an extra layer of interface indirection.
  • Object construction is slower, since Ninject resolves dependencies at runtime.

In practice, we did not notice either cost while making The Clones of Corpus.

What we did notice was the massive increase in productivity these patterns can bring, which is extensively documented elsewhere.

*Mostly – we still use some essential structs like Vector3.

On Concept Art

These are the original concepts for the characters in the introduction sequence.

Image Production

I am still slightly old-fashioned when it comes to the production of my images. All my work is planned in rough on paper, transferred and cleaned up on the lightboard, coloured with ink, and only then put into the computer to be composed and have tone added. I still feel that a piece of paper is the best way to lay down ideas, since you can arrange all your images in front of you. I colour before the image is put into the computer because it gives the image a natural, chaotic ink texture that I love.

I’ve used this picture of the office by way of illustration, as it were.

I started with the dimensions of the screen, as seen in the faint grey line. I then went about planning the image, trying to fit everything in whilst keeping the composition correct. I map out in blue pencil – a habit picked up from the animation part of my degree – as I can then go over the top with a darker pencil, so when traced on the lightboard only the clean markings of the dark pencil show through.

Once the clean image has been traced onto a new piece of paper, the image is inked in nothing more than a tone of grey ink. This ‘flat’ image is then scanned in on a flatbed.

Using Photoshop, the image is broken up into layers of varying tone. The top layer is then erased in areas where I want to add tone, and the process is repeated down through the increasingly dark layers. I do this in preference to just adding black, since it retains the texture of the ink and is a very simple method of giving a painted effect.

The final stage is to add any lighting effects, which is simply a matter of painting white over the top. Once that’s done, further alterations are made to the tone to balance out anything offset by the added light. I also add a little blur to give the image more depth.