Category Archives: Unit Testing

NUnit TestCaseSource

While working on this project, I found a need to abstract away the base type that the unit tests use (in this instance, a queue type). I was only testing a single type (PriorityQueue); however, I wanted to create a new type for which all the basic tests would be the same as those for the existing one. This led me to investigate the TestCaseSource attribute in NUnit.

As a result, I needed a way to re-use the tests. There are definitely multiple ways to do this; the simplest is probably to create a factory class and pass in a string parameter. The only thing that put me off this approach is that you end up with the following test case:

        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a"]
        public void Queue_Dequeue_CheckResultOrdering(
            string first, string last, params string[] queueItems)
        {

Becoming:

        [TestCase("PriorityQueue", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("PriorityQueue2", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("PriorityQueue", "a1", "a", "a1", "b", "c", "d", "a"]
        [TestCase("PriorityQueue2", "a1", "a", "a1", "b", "c", "d", "a"]
        public void Queue_Dequeue_CheckResultOrdering(
            string queueType, string first, string last, params string[] queueItems)
        {

This isn’t very scalable when adding a third or fourth type.
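
For completeness, the string-keyed factory that such tests would rely on might look something like the following (a hypothetical sketch; QueueFactory, CreateQueue and PriorityQueue2 are illustrative names, not types from the actual project):

    public static class QueueFactory
    {
        public static IQueue<string> CreateQueue(string queueType)
        {
            // Map the magic string from each TestCase onto a concrete queue type
            switch (queueType)
            {
                case "PriorityQueue": return new PQueue.PriorityQueue<string>();
                case "PriorityQueue2": return new PQueue.PriorityQueue2<string>();
                default: throw new ArgumentException($"Unknown queue type: {queueType}");
            }
        }
    }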

TestCaseSource

It turns out that the (or at least an) answer to this is to use NUnit’s TestCaseSource attribute. The NUnit code base dogfoods quite extensively, so it is not a bad place to look for examples of how this works; however, what I couldn’t find was a way to mix and match. To better illustrate the point, here’s the first test that I changed to use TestCaseSource:

        [Test]
        public void Queue_NoEntries_CheckCount()
        {
            // Arrange
            PQueue.PriorityQueue<string> queue = new PQueue.PriorityQueue<string>();

            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

Which became:

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        public void Queue_NoEntries_CheckCount(IQueue<string> queue)
        {
            // Arrange


            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

(For completeness, the TestableQueueItemFactory is here):

    public static class TestableQueueItemFactory
    {
        public static IEnumerable<IQueue<string>> ReturnQueueTypes()
        {
            yield return new PQueue.PriorityQueue<string>();
        }
    }

However, when you have a TestCase like the one above, there’s a need for the equivalent of this (which doesn’t work):

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9")]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a")]
        public void Queue_Dequeue_CheckResultOrdering(string first, string last, params string[] queueItems)
        {

A quick look at the NUnit code base reveals these attributes to be mutually exclusive.

Compromise

By no means is this a perfect solution, but the one that I settled on was to create a second TestCaseSource helper method, which looks like this (along with the test):

        private static IEnumerable Queue_Dequeue_CheckResultOrdering_TestCase()
        {
            foreach(var queueType in TestableQueueItemFactory.ReturnQueueTypes())
            {
                yield return new object[] { queueType, "test", "test9", new string[] { "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9" } };
                yield return new object[] { queueType, "a1", "a", new string[] { "a1", "b", "c", "d", "a" } };
            }
        }

        [Test, TestCaseSource("Queue_Dequeue_CheckResultOrdering_TestCase")]
        public void Queue_Dequeue_CheckResultOrdering(
            IQueue <string> queue, string first, string last, params string[] queueItems)
        {

As you can see, the second helper method doesn’t really help readability, so it’s certainly not a perfect solution; in fact, with a single queue type, this makes the code more complex and less readable. However, when a second and third queue type are introduced, the tests suddenly become resilient to change.
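
To illustrate: once a second queue type exists (PriorityQueue2 here is hypothetical), the only change needed is in the factory, and every test driven by it picks up the new type automatically:

    public static class TestableQueueItemFactory
    {
        public static IEnumerable<IQueue<string>> ReturnQueueTypes()
        {
            yield return new PQueue.PriorityQueue<string>();

            // Hypothetical second implementation; each test using this source now runs twice
            yield return new PQueue.PriorityQueue2<string>();
        }
    }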

YAGNI

At first glance, this may appear to be an example of YAGNI. However, in this article, Martin Fowler does state:

Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify.

Which, I believe, is what we are doing here.

References

http://www.smaclellan.com/posts/parameterized-tests-made-simple/

http://stackoverflow.com/questions/16346903/how-to-use-multiple-testcasesource-attributes-for-an-n-unit-test

https://github.com/nunit/docs/wiki/TestCaseSource-Attribute

http://dotnetgeek.tumblr.com/post/2851360238/exploiting-nunit-attributes-valuesourceattribute

Testing for Exceptions using the Arrange Act Assert Pattern in C# 7

Unit testing massively benefits from following the Arrange / Act / Assert pattern. I’ve seen tests that are not written in this way, and they can be sprawling and indecipherable, either testing many different things in series, or testing nothing at all except the .Net Framework.

I recently found an issue while trying to test for an exception being thrown, which is that NUnit (and probably other frameworks) tests for an exception by accepting a delegate to test. Here’s an example:

        [Test]
        public void Test_ThrowException_ExceptionThrown()
        {
            // Arrange
            TestClass tc = new TestClass();

            // Act / Assert
            Assert.Throws(typeof(Exception), tc.ThrowException);
        }

We’re just testing a dummy class:

    public class TestClass
    {
        public void ThrowException()
        {
            throw new Exception("MyException");
        }
    }

C# 7 – Inline functions

If you look in the references at the bottom, you’ll see something more akin to this approach:

        [Test]
        public void Test_ThrowException_ExceptionThrown2()
        {
            // Arrange
            TestClass tc = new TestClass();

            // Act
            TestDelegate throwException = () => tc.ThrowException();            

            // Assert
            Assert.Throws(typeof(Exception), throwException);
        }

However, since C# 7, the option of a local function has arrived. The following has the same effect:

        [Test]
        public void Test_ThrowException_ExceptionThrown3()
        {
            // Arrange
            TestClass tc = new TestClass();

            // Act
            void CallThrowException()
            {
                tc.ThrowException();
            }

            // Assert
            Assert.Throws(typeof(Exception), CallThrowException);
        }

I think that I, personally, still prefer the anonymous function for this; however, the local function does present some options; for example:


        [Test]
        public void Test_ThrowException_ExceptionThrown4()
        {
            void CallThrowException()
            {
                // Arrange
                TestClass tc = new TestClass();

                // Act
                tc.ThrowException();
            }

            // Assert
            Assert.Throws(typeof(Exception), CallThrowException);
        }

Now I’m not so sure that I still prefer the anonymous function.
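
As an aside, NUnit also provides a generic overload of Assert.Throws, which removes the typeof and works with either style of delegate; for example:

        [Test]
        public void Test_ThrowException_ExceptionThrown5()
        {
            // Arrange
            TestClass tc = new TestClass();

            // Act / Assert - the generic overload also returns the caught exception, should you need it
            Assert.Throws<Exception>(() => tc.ThrowException());
        }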

References

http://stackoverflow.com/questions/33897323/nunit-3-0-and-assert-throws

https://pmbanugo.wordpress.com/2014/06/16/exception-testing-pattern/

http://stackoverflow.com/questions/24070115/best-approach-towards-applying-the-arrange-act-assert-pattern-when-expecting-exc

Designing and Debugging Database Unit Tests

There are many systems out there in the wild, and some new ones being written, that use database logic extensively. This article discusses how and why these pieces of logic should be tested, along with whether they should exist at all.

In general, for unit tests, it’s worth asking the question of what, exactly, is being tested before starting. This is especially true in database tests; for example, consider a test where we update a field in a database, and then assert that the field is what it has been set to. Are you testing your trigger logic, or are you simply testing that Microsoft SQL Server works?

The second thing to consider is whether or not it makes any sense to use testable database logic in new code. That is, say we have a stored procedure that:
– Takes a product code
– Looks up what the VAT is for that product
– Calculates the total price
– Writes the result, along with the parameter and the price to a new table

Does it make sense for all that logic to be in the stored procedure, or would it make more sense to retrieve the values needed via one stored procedure, do the calculation in a testable server-side function, and call a second procedure to write the data?
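
To illustrate the second option, the calculation itself could live in plain, testable C#. The following is only a sketch (PriceCalculator and its methods are hypothetical names, not from an existing code base):

public static class PriceCalculator
{
    // Pure calculation: trivially unit testable, with no database involved
    public static decimal CalculateTax(decimal netAmount, decimal vatRate)
    {
        return netAmount * vatRate;
    }

    public static decimal CalculateTotal(decimal netAmount, decimal vatRate)
    {
        return netAmount + CalculateTax(netAmount, vatRate);
    }
}

The stored procedures are then reduced to a simple read and a simple write, with no logic of their own that needs testing.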

FIRST

Unit testing a database is a tricky business. First of all, if you have business logic in the database then it, almost by definition, depends on the state of the data. You obviously can simply run unit tests against the database and change the data, but let’s have a look at the FIRST principles, and see where database tests are inherently difficult.

Fast

It depends exactly what is meant by fast, but in comparison to a unit test that asserts some logic in C# code, database tests are slow (obviously, in comparison to conducting the test manually, they are very fast). Realistically, they are probably going to be sufficiently slow to warrant taking them out of your standard unit test suite. A sensible test project (that is, one that tests some actual code) may contain a good few hundred tests; if each takes 200ms, then 300 tests take a total of 60 seconds!

One thing that conducting DB tests does give you is an idea as to how fast (or slow) they actually are:

Isolated

It’s incredibly difficult to produce a database unit test that is isolated because, by its nature, a database has dependencies. Certainly, if anything you’re testing is dependent on a particular data state (for example, in the case above, the product that we are looking for must exist in a table, and have a VAT rate) then, unless this state is set up in the test itself, this rule is broken.

Repeatable

Again, this isn’t a small problem with databases. If I change Column A to test a trigger on the table, am I then able to change it again? What if the data is in a different state from the last time I ran the unit tests? I might get rogue failures or, worse, rogue passes. And what happens if the test crashes halfway through; how do we revert?

Self-verifying

In my example before, I changed Column A in order to test a trigger, and I’ll maybe check something that is updated by the trigger. Providing that the assertion is inside the test, the test is self-verifying. Obviously, this is easier to do wrong in a database context, because if I do nothing, the data is left in a state that can be externally verified.

Timely

This refers to when a test is written. There’s nothing inherent about database tests that prevents them from being written before, or very shortly after, the code. However, see the comment above as to whether new code written like this makes sense.

Problems With A Database Test Project

Given what we’ve put above, let’s look at the outstanding issues that realistically need to be solved in order to use database tests:

1. Deployment. Running a standard code test will run the code wherever you are; however, a database test, whichever way you look at it, needs a database before it runs.

2. Rollback. Each test needs to be isolated, and so there needs to be a way to revert to the database state before the tests began.

3. Set-up. Any dependencies that the tests have must be created inside the test; therefore, if a table needs to have three rows in it, we need to add those rows within the test.

4. Assertion. What are we testing, and what makes sense to test; each test needs a defined purpose.

Example Project

In order to explore the various possibilities when setting up a database project, I’m going to use an example project:

Let’s start with some functionality to test. I’m going to do it this way around for two reasons: having code to test better illustrates the problems faced by database tests, and it is my belief that much of the database logic code is legacy and, therefore, already exists.

Here’s a new table, and a trigger that acts upon it:

CREATE TABLE [dbo].[SalesOrder]
(
    [Id] INT NOT NULL PRIMARY KEY, 
    [ProductCode] NCHAR(10) NOT NULL, 
    [NetAmount] DECIMAL(18, 2) NULL, 
    [Tax] DECIMAL(18, 2) NULL, 
    [TotalAmount] DECIMAL(18, 2) NULL, 
    [Comission] DECIMAL(18, 2) NULL
)
GO
 
CREATE TRIGGER SalesOrderAfterInsert ON SalesOrder
AFTER INSERT, UPDATE
AS
BEGIN
	DECLARE @CalcTax Decimal(18,2) 
	DECLARE @CalcComission Decimal(18,2) 
     
	SELECT @CalcTax = INSERTED.NetAmount * 0.20 FROM INSERTED
	SELECT @CalcComission = INSERTED.NetAmount * 0.10 FROM INSERTED
	 
    UPDATE S
    SET S.Tax = @CalcTax,
		S.Comission = @CalcComission,
		S.TotalAmount = S.NetAmount + S.Tax
	FROM INSERTED, SalesOrder S
    WHERE S.Id = INSERTED.Id
END
GO

This is for the purpose of illustration, so obviously, there are things here that might not make sense in real life; however, the logic is very testable. Let’s deploy this to a database, and do a quick manual test:

Once the database is published, we can check and test it in SSMS:

Quick edit the rows:

And test:

At first glance, this seems to work well. Let’s create a test:

[TestMethod]
public void CheckTotalAmount()
{
    using (SqlConnection sqlConnection = new SqlConnection(
        @"Data Source=TLAPTOP\PCM2014;Initial Catalog=MySqlDatabase;Integrated Security=SSPI;"))
    {
        sqlConnection.Open();
        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "INSERT INTO SalesOrder (Id, ProductCode, NetAmount) " +
                "VALUES (2, 'test', 10)";
            sqlCommand.ExecuteNonQuery();
        }
 
        using (SqlCommand sqlCommandCheck = sqlConnection.CreateCommand())
        {
            sqlCommandCheck.CommandText = $"SELECT TotalAmount FROM SalesOrder WHERE Id = 1";
            decimal result = decimal.Parse(sqlCommandCheck.ExecuteScalar().ToString());
 
        }
    }
}

Okay – there are a number of problems with this test, but let’s pretend for a minute that we don’t know what they are; the test passes:

Let’s run it again, just to be sure:

Oops.

Let’s firstly check this against the test principles that we discussed before.
1. Is it fast? 337ms means that we can run 3 of these per second. So that’s a ‘no’.
2. Is it isolated? Does it have a single reason to fail, and can it live independently? If we accept that the engine itself is a reason to fail, but ignore that, then we can look specifically at the test, which asserts nothing. What’s more, it is doing two separate things to the DB, so realistically either can fail.
3. Is it Repeatable? Clearly not.
4. Is it self-verifying? No – it isn’t, because we have no assertions in it. Although we know that on the first run, both queries worked, we don’t know why.
5. Timely – well, we did write it directly after the code, so that’s probably a tick.

So, we know that the second run didn’t work. A quick look at the DB will tell us why:

Of course, the test committed a transaction to the database; as a result, any subsequent run fails with a primary key violation when it tries to insert Id 2 again.

The Solution

What follows is a suggested solution for this kind of problem, along with the beginnings of a framework for database testing. The tests here are using MSTest, but the exact same concept is easily achievable in NUnit and, I imagine, every other testing framework.

Base Test Class

The first thing is to create a deployment task:

The deployment task drives MSBuild through the Microsoft.Build assemblies; BasicFileLogger here is an ILogger implementation (see the ILogger reference at the end of this post). It might look a little like this:

public static bool DeployDatabase(string projectFile)
{
    ILogger logger = new BasicFileLogger();
 
    Dictionary<string, string> globalProperties = new Dictionary<string, string>()
    {
        { "Configuration", "Debug" },
        { "Platform", "x86" },
        { "SqlPublishProfilePath", @"MySqlDatabase.publish.xml" }
    };
 
    ProjectCollection pc = new ProjectCollection(
        globalProperties, new List<ILogger>() { logger }, ToolsetDefinitionLocations.Registry);
        
    BuildParameters buildParameters = new BuildParameters(pc);            
    BuildRequestData buildRequestData = new BuildRequestData(
        projectFile, globalProperties, null, new string[] { "Build", "Publish" }, null);
 
    BuildResult buildResult = BuildManager.DefaultBuildManager.Build(
        buildParameters, buildRequestData);
 
    return (buildResult.OverallResult == BuildResultCode.Success);
}

Publish Profiles

This uses a publish profile. These are basically XML files that tell the build how to publish your database; here’s an example of one:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <IncludeCompositeObjects>True</IncludeCompositeObjects>
    <TargetDatabaseName>MySqlDatabase</TargetDatabaseName>
    <DeployScriptFileName>MySqlDatabase.sql</DeployScriptFileName>
    <TargetConnectionString>Data Source=TLAPTOP\PCM2014;Integrated Security=True;Persist Security Info=False;Pooling=False;MultipleActiveResultSets=False;Connect Timeout=60;Encrypt=False;TrustServerCertificate=True</TargetConnectionString>
    <ProfileVersionNumber>1</ProfileVersionNumber>
  </PropertyGroup>
</Project>

You can get Visual Studio to generate this for you, by selecting to “Deploy…” the database, and then selecting “Save Profile As…”:

Database Connection

Now that we’ve deployed the database, the next step is to connect. One way of doing this is to configure the connection string in the app.config of your test project:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add name="MySqlDatabase" 
         connectionString="Data Source=TLAPTOP\PCM2014; Initial Catalog=MySqlDatabase; Integrated Security=true" />
  </connectionStrings>
</configuration>

You can then connect using the following method:


ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];
 
_sqlConnection = new SqlConnection(connectionString.ConnectionString);
_sqlConnection.Open();

This sort of functionality could form the basis of a base test class; for example:

[TestClass]
public class BaseTest
{
    protected SqlConnection _sqlConnection;
 
    [TestInitialize]        
    public virtual void SetupTest()
    {
        ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];
        _sqlConnection = new SqlConnection(connectionString.ConnectionString);
        _sqlConnection.Open();
    }
 
    [TestCleanup]
    public virtual void TearDownTest()
    {
        _sqlConnection.Close();
    }
}

Transactions

So, we now have a deployment task and a connection; the next step is to run the tests in a way in which they are repeatable. The key here is to use transactions. Going back to the base class, we can wrap this functionality into methods that are simply inherited by all unit tests.

public class BaseTest
{
    protected SqlConnection _sqlConnection;
    protected SqlTransaction _sqlTransaction;
 
 
    [TestInitialize]
    public virtual void SetupTest()
    {
        ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];
        _sqlConnection = new SqlConnection(connectionString.ConnectionString);
        _sqlConnection.Open();
        _sqlTransaction = _sqlConnection.BeginTransaction();
    }
 
    [TestCleanup]
    public virtual void TearDownTest()
    {
        _sqlTransaction.Rollback();
        _sqlConnection.Close();
    }
}

Refactor The Base Class

Let’s put all this together, and move some parts out into a common helper class. Note that each SqlCommand has to be explicitly enlisted in the transaction, which is what UseNewTestCommand does below:

public class ConnectionHelper
{
    SqlConnection _sqlConnection;
    SqlTransaction _sqlTransaction;
 
    public SqlConnection OpenTestConnection()
    {
        ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];
 
        _sqlConnection = new SqlConnection(connectionString.ConnectionString);
        _sqlConnection.Open();
        _sqlTransaction = _sqlConnection.BeginTransaction();
 
        return _sqlConnection;
    }
 
    public SqlCommand UseNewTestCommand()
    {
        SqlCommand sqlCommand = _sqlConnection.CreateCommand();
        sqlCommand.Transaction = _sqlTransaction;
        return sqlCommand;
    }
 
    public void CloseTestConnection()
    {
        _sqlTransaction.Rollback();
        _sqlConnection.Close();
    }
}

The base test now looks like this:


[TestClass]
public class BaseTest
{
    protected ConnectionHelper _connectionHelper;
 
    // [ClassInitialize] methods must be public static and take a TestContext parameter
    [ClassInitialize]
    public static void SetupTestClass(TestContext testContext)
    {
        DatabaseDeployment.DeployDatabase(@"..\MySqlDatabase\MySqlDatabase.sqlproj");
    }
 
    [TestInitialize]
    public virtual void SetupTest()
    {
        
        _connectionHelper = new ConnectionHelper();
        _connectionHelper.OpenTestConnection();
    }
 
    [TestCleanup]
    public virtual void TearDownTest()
    {
        _connectionHelper.CloseTestConnection();
    }
}

In Summary

We now have a base test class that will deploy the database, establish a new connection and transaction and then, on completion of each test, roll back the transaction. Here’s what the earlier test now looks like:

[TestClass]
public class UnitTest2 : BaseTest
{
    [TestMethod]
    public void CheckTotalAmount3()
    {
 
        // Arrange
        using (SqlCommand sqlCommand = _connectionHelper.UseNewTestCommand())
        {
            sqlCommand.CommandText =
                "INSERT INTO SalesOrder (Id, ProductCode, NetAmount) " +
                "VALUES (2, 'test', 10)";
            sqlCommand.ExecuteNonQuery();
        }
 
        // Act
        using (SqlCommand sqlCommand = _connectionHelper.UseNewTestCommand())
        {                
            sqlCommand.CommandText = $"SELECT TotalAmount FROM SalesOrder WHERE Id = 2";
            decimal result = decimal.Parse(sqlCommand.ExecuteScalar().ToString());

            // Assert
            Assert.AreEqual(12, result);
        }
    }
}

Debugging Unit Tests

The idea behind the framework described above is that the data is never committed to the database; as a consequence of this, the tests are repeatable, because nothing ever changes. The unfortunate side-effect here is that debugging the test is made more difficult as, if it fails, it is not possible to see directly which changes have been made. There are a couple of ways around this, one of which is to debug the test, manually fire a commit, look at the data, and continue. However, a SQL expert recently introduced me to the concept of “dirty reads”.

Dirty Reads

Dirty reads are achieved by issuing the following command to SQL Server:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

This allows you to see changes in the database which are still pending (that is, they have yet to be committed). What this means is that you can see the state of the data as it currently is; it also doesn’t place a lock on the data. One of the big issues with using this approach is that you can see half-committed transactions; of course, in this instance, that’s exactly what you want! Let’s debug our unit test:

Now let’s have a look at the SalesOrder table:

Not only does this not return anything, it doesn’t return at all. We’ve locked the table, and held it in a transaction. Let’s apply our dirty read and see what happens:

Instantly, we get the SalesOrder. If we now complete the test and run the query again, the data is gone.
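
For reference, the same dirty-read check can be scripted from a second connection (for example, from a scratch console application) while the test is paused at a breakpoint. This is only a sketch, re-using the connection string from the earlier app.config:

using (SqlConnection connection = new SqlConnection(
    ConfigurationManager.ConnectionStrings["MySqlDatabase"].ConnectionString))
{
    connection.Open();

    using (SqlCommand command = connection.CreateCommand())
    {
        // Read uncommitted data, so the rows inserted by the paused test are visible
        command.CommandText =
            "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; " +
            "SELECT Id, NetAmount, Tax, TotalAmount FROM SalesOrder";

        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine("{0}: {1}", reader["Id"], reader["TotalAmount"]);
            }
        }
    }
}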

References

https://pragprog.com/magazines/2012-01/unit-tests-are-first

http://stackoverflow.com/questions/13843990/how-can-i-programatically-publish-a-sql-server-database-project

https://social.msdn.microsoft.com/Forums/vstudio/en-US/ec95c513-f972-45ad-b108-5fcfd27f39bc/how-to-build-a-solution-within-c-net-40-?forum=msbuild

http://stackoverflow.com/questions/10438258/using-microsoft-build-evaluation-to-publish-a-database-project-sqlproj

https://msdn.microsoft.com/en-us/library/microsoft.build.framework.ilogger.aspx

https://msdn.microsoft.com/en-us/library/hh272681(v=vs.103).aspx

Using MSTest DataRow as a Substitute for NUnit TestCase

I used to believe that NUnit’s TestCase attribute (that is, the ability to define a test and then simply pass it alternate parameters) was denied to MSTest users. It appears that this is, at least now, fallacious.

The following article implies that this is a recent change:

Taking the MSTest Framework forward with “MSTest V2”

This particular example is in a UWP application:

        [DataTestMethod]
        [DataRow(1, 2, 3, 6)]
        [DataRow(8, 2, 3, 13)]
        [DataRow(8, 5, 3, 12)]
        public void AddNumbers(int num1, int num2, int num3, int total)
        {
            Assert.AreEqual(num1 + num2 + num3, total);
        }

Will result in a failing test (the final DataRow doesn’t add up), and:

        [DataTestMethod]
        [DataRow(1, 2, 3, 6)]
        [DataRow(8, 2, 3, 13)]
        [DataRow(8, 5, 3, 16)]
        public void AddNumbers(int num1, int num2, int num3, int total)
        {
            Assert.AreEqual(num1 + num2 + num3, total);
        }

Results in a passing one.

If you want additional information relating to the test, you can use this syntax:


        [DataTestMethod]
        [DataRow(1, 2, 3, 6, DisplayName = "First test")]
        [DataRow(8, 2, 3, 13, DisplayName = "Second test")]
        [DataRow(8, 5, 3, 15, DisplayName = "This will fail")]
        public void AddNumbers(int num1, int num2, int num3, int total)
        {
            Assert.AreEqual(num1 + num2 + num3, total);
        }

Given the constant problems that I have with finding the correct NUnit test adaptor, and trying to work out which are the right libraries, I think, despite coming late to this party, MS might actually drag people back to MSTest with this.

Intelli-Test (Part 2)

I recently posted an article which morphed into a discovery of the Intelli-Test feature in VS2015.

My initial findings related to creating a basic Intelli-Test, and then having that create a new unit test for me. However, once you’ve created an Intelli-Test, you can modify it; here’s the original one that was created for me:

        [PexGenericArguments(typeof(int))]
        [PexMethod]
        internal void ClearClassTest<T>(T classToClear)
        {
            Program.ClearClass<T>(classToClear);
            // TODO: add assertions to method ProgramTest.ClearClassTest(!!0)
        }

When this creates a test, it looks like this:


[TestMethod]
[PexGeneratedBy(typeof(ProgramTest))]
public void ClearClassTest861()
{
    this.ClearClassTest<int>(0);
}

So, I tried adding some additional generic type arguments:


    class TestClass
    {
        public string Test1 { get; set; }
        public string Test2 { get; set; }
    }

    /// <summary>This class contains parameterized unit tests for Program</summary>
    [PexClass(typeof(Program))]
    [PexAllowedExceptionFromTypeUnderTest(typeof(InvalidOperationException))]
    [PexAllowedExceptionFromTypeUnderTest(typeof(ArgumentException), AcceptExceptionSubtypes = true)]
    [TestClass]
    public partial class ProgramTest
    {
        /// <summary>Test stub for ClearClass(!!0)</summary>
        [PexGenericArguments(typeof(int))]
        [PexGenericArguments(typeof(string))]
        [PexGenericArguments(typeof(float))]
        [PexGenericArguments(typeof(TestClass))]
        [PexMethod]
        internal void ClearClassTest<T>(T classToClear)
        {
            Program.ClearClass<T>(classToClear);
            // TODO: add assertions to method ProgramTest.ClearClassTest(!!0)
        }
    }

Just re-select “Run intelli-test” and it updates the ProgramTest.ClearClassTest.g.cs, generating 6 new tests. To be honest, this was a bit disappointing. I had expected an “intelligent” test – that is, one that tests several outcomes. To simplify what was happening, I tried creating a simple method:


        public static int TestAdd(int num1, int num2)
        {
            return num1 + num2;
        }

Creating the test for this resulted in this:


        [PexMethod]
        internal int TestAddTest(int num1, int num2)
        {
            int result = Program.TestAdd(num1, num2);
            return result;
            // TODO: add assertions to method ProgramTest.TestAddTest(Int32, Int32)
        }

And that, in turn, led to:


[TestMethod]
[PexGeneratedBy(typeof(ProgramTest))]
public void TestAddTest989()
{
    int i;
    i = this.TestAddTest(0, 0);
    Assert.AreEqual<int>(0, i);
}

So, then I considered what this was actually doing. The purpose of it seems to be to execute code; that is, code coverage; so what happens if I create a method like this:


        public static int TestAdd(int num1, int num2)
        {
            if (num1 > num2)
                return num1 + num2;
            else
                return num2 - num1;
        }

And finally, you see the usefulness of this technology. If it only created a single test, passing in 0,0, then it would only cover one code path. The minimum set of inputs to cover all code paths is 0,0 and 1,0, and that’s exactly what it generates; the test stub looks the same:


        /// <summary>Test stub for TestAdd(Int32, Int32)</summary>
        [PexMethod]
        internal int TestAddTest(int num1, int num2)
        {
            int result = Program.TestAdd(num1, num2);
            return result;
            // TODO: add assertions to method ProgramTest.TestAddTest(Int32, Int32)
        }

But running the Intelli-Test creates two methods:


[TestMethod]
[PexGeneratedBy(typeof(ProgramTest))]
public void TestAddTest989()
{
    int i;
    i = this.TestAddTest(0, 0);
    Assert.AreEqual<int>(0, i);
}

[TestMethod]
[PexGeneratedBy(typeof(ProgramTest))]
public void TestAddTest365()
{
    int i;
    i = this.TestAddTest(1, 0);
    Assert.AreEqual<int>(1, i);
}

In an effort to confuse the system, I changed the base function:


        public static int TestAdd(int num1, int num2)
        {
            num1++;

            if (num1 == num2)
            {
                throw new Exception("cannot be the same");
            }

            if (num1 > num2)
                return num1 + num2;
            else
                return num2 - num1;
            
        }

And re-ran the test; it resulted in 3 tests:

[Image: intellitest2]

Not sure where the numbers came from, but this tests every code path. The exception results in a failure, but you can mark that as a pass by simply right-clicking and selecting “Allow”, which changes the test to look like this:


[TestMethod]
[PexGeneratedBy(typeof(ProgramTest))]
[ExpectedException(typeof(Exception))]
public void TestAddTestThrowsException26601()
{
    int i;
    i = this.TestAddTest(502, 503);
}

Summary

This is basically the opposite of the modern testing philosophy, which is to test interactions and behaviour; what this does is establish every code path in a function and run it. So it should establish that the code doesn’t crash, but it obviously can’t establish that the code does what it’s intended to do, nor whether it is sensible to pass, for example, a null value. Having said that, if an object can be null, and there’s no defensive code path to deal with it being passed as null, then this will be pointed out.

In general, this is intended for people dealing with, and refactoring, legacy code so that they can adopt the “Strangling Vines” development pattern in order to be sure that the code still does what it did before.

Generic Method to Clear a Class and Intelli-Test

Recently, I published this article on copying a class dynamically. I then found that I could use the same approach to clear a class. Here’s the method:

        private static void ClearClass<T>(T classToClear)
        {
            if (classToClear == null)
                throw new Exception("Must not specify null parameters");

            var properties = classToClear.GetType().GetProperties();

            foreach (var p in properties.Where(prop => prop.CanWrite))
            {
                p.SetValue(classToClear, GetDefault(p.PropertyType));
            }
        }

        /// <summary>
        /// Taken from http://stackoverflow.com/questions/474841/dynamically-getting-default-of-a-parameter-type
        /// </summary>
        /// <param name="type"></param>
        /// <returns></returns>
        public static object GetDefault(Type type)
        {
            return type.IsValueType ? Activator.CreateInstance(type) : null;
        }
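
Before moving on, here’s a minimal usage sketch (Person is a hypothetical class, and the call happens inside the same class as ClearClass, since that method is private):

        class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        private static void ClearClassExample()
        {
            Person person = new Person { Name = "Test", Age = 30 };

            ClearClass(person);

            // person.Name is now null and person.Age is now 0, because GetDefault
            // returns the default value for each writable property's type
        }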

As you can see from GetDefault, I had a little help from Jon Skeet with this one. Once I’d written it, I thought I’d have a play with the IntelliTest feature: if you right-click the method and select “Create IntelliTest”, you’re presented with this:

[Images: IT1, IT2]

It generated this:


    /// <summary>This class contains parameterized unit tests for Program</summary>
    [PexClass(typeof(Program))]
    [PexAllowedExceptionFromTypeUnderTest(typeof(InvalidOperationException))]
    [PexAllowedExceptionFromTypeUnderTest(typeof(ArgumentException), AcceptExceptionSubtypes = true)]
    [TestClass]
    public partial class ProgramTest
    {
        /// <summary>Test stub for ClearClass(!!0)</summary>
        [PexGenericArguments(typeof(int))]
        [PexMethod]
        internal void ClearClassTest<T>(T classToClear)
        {
            Program.ClearClass<T>(classToClear);
            // TODO: add assertions to method ProgramTest.ClearClassTest(!!0)
        }
    }

The interesting thing about this is that it can’t be found as a test. What actually happens is that this creates an IntelliTest, on which, as far as I can see, you have to right-click and select “Run IntelliTest”. This then creates an actual unit test:

[Image: IT3]

It looks something like this:


namespace ConsoleApplication13.Tests
{
    public partial class ProgramTest
    {

[TestMethod]
[PexGeneratedBy(typeof(ProgramTest))]
public void ClearClassTest861()
{
    this.ClearClassTest<int>(0);
}
    }
}

That then can be found and run:

[Image: IT4]

Obviously, looking at the unit test, it’s not a particularly good one; it effectively tests that your code doesn’t crash, so it increases your code coverage, but doesn’t really test anything per se.

Testing by Interface

Given a standard interface, how to retrieve all implementing classes and run the interface methods.

Unit testing through test-driven development is definitely a good idea; but what if you have a number of methods that all effectively look the same? That is, each method might do something completely different, but as far as its interface goes, it’s identical.

Imagine, for example, that you have a method that calls to the DB, and accepts a number of parameters in, and returns a given parameter. In a unit testing scenario, the DB would be mocked out, and the method called directly from the unit test. Okay, so in this case, you may want some test coverage that your methods call a mocked out DB function, don’t crash, accept a given object, accept null, etc…

Facing the same problem, it occurred to me that it should be possible to write a single test method that would test every existing and future implementation of this, without having to laboriously re-create the test each time I create a method; what’s more, as soon as I create my method name that implements the interface, I get a failing test.

Below are an interface and some test classes; they are entirely for the purpose of illustration:

    public class ModelClass
    {
        public string TestProperty { get; set; }
    }


    public interface ITest
    {
        void method1();

        void method2(ModelClass model, int i);
    }

    public class Class1 : ITest
    {
        public void method1()
        {
        }

        public void method2(ModelClass model, int i)
        {
            if (model == null) throw new Exception("test");
            //if (string.IsNullOrWhiteSpace(model.TestProperty)) throw new Exception("Doh");
        }
    }

    public class Class2 : ITest
    {
        public void method1()
        {
            //throw new NotImplementedException();
        }

        public void method2(ModelClass model, int i )
        {
            
        }
    }

    public class Class3
    {
        public void  NonInterfaceMethod(ModelClass model)
        {
            throw new Exception("Doh!");
        }
    }

    public class Class4 : ITest
    {
        public void method1()
        {
            //throw new NotImplementedException();
        }

        public void method2(ModelClass model, int i)
        {
            
        }

        public void test()
        {

        }
    }

 

As you can see, there are a number of interface and non-interface methods here. There’s nothing particularly interesting, although have a look at Class1.method2(), which should do nothing, but swapping the commented statement in should cause a test failure if my test method works. Also, have a look at Class3.NonInterfaceMethod() – this should never be called, but will throw an exception if it is.

The following is the test code:

        [TestMethod]
        public void TestITestImplementations()
        {
            // Use reflection to get the available methods for the interface
            Type desiredType = typeof(ITest);
            Assembly assembly = desiredType.Assembly;
            var interfaceMethods = desiredType.GetMethods();
            
            // Iterate through each implementation of the interface
            foreach (Type type in assembly.GetTypes())
            {
                if (desiredType.IsAssignableFrom(type) && !type.IsInterface)
                {
                    // Where an implementation is found, instantiate it 
                    // and build a list of available methods to call
                    var classInstance = Activator.CreateInstance(type, null);
                    var methods = type.GetMethods()
                        .Where(m => interfaceMethods.Any(i => i.Name == m.Name)
                            && m.IsPublic
                            && !m.DeclaringType.Equals(typeof(object)));
                    foreach(var method in methods)
                    {
                        // Establish the available parameters and pass them to the call
                        var p = method.GetParameters();
                        object[] p2 = p.Select(a => Activator.CreateInstance(a.ParameterType)).ToArray();

                        try
                        {
                            // Call the method and, where a value should be returned, ensure one is
                            object result = method.Invoke(classInstance, p2);
                            Assert.IsFalse(method.ReturnType != typeof(void) && result == null);
                        }
                        catch(Exception ex)
                        {
                            // Where an error is thrown, print a sensible error
                            Assert.Fail("Call failed: {0}.{1}\nException: {2}", 
                                type.Name, method.Name, ex);
                        }
                    }
                }
            }            
        }

The code above is relatively straightforward and, if I’m being honest, is only a cursory test. It tests that the methods can be called without throwing an error and, where a value should be returned, checks that it’s not null.

Obviously, it might be perfectly valid that it is null, or an exception might be the desired behaviour. This code is basically just a starting point, but it does provide some very basic test coverage where otherwise, there might be none.

It is also true to say that the code doesn’t deal with overloads, which are not necessary in my particular circumstance.

Unit Tests Are Not Discoverable

I recently had a situation where I loaded a solution containing a suite of NUnit tests, but the test explorer would not recognise them. The following is a series of checks to make when unit tests are not being discovered; most of them are applicable to all test frameworks:

1. Tests must be declared as public. For example:

        public void MyTestMethod()
        {

2. Tests must be decorated with a test attribute.

For NUnit this is [Test]:

        [Test]
        public void MyNUnitTest()
        {

Or [TestCase]:

        [TestCase(1)]
        [TestCase(2)]
        public void MyParameterisedTest(int testNum)
        {

For MSTest this is [TestMethod]:


        [TestMethod]
        public void MyTestMethod()
        {

3. If using NUnit, check that the correct version of the test adapter is installed (remember that, at the time of writing, v3 is not an official release).

The current release test adapter is here

The test adapter for NUnit 3 is here

As usual, this is more for my own reference, but if it helps anyone else then all to the good. Also, if you think of or encounter another then please let me know in the comments and I’ll add it on.

Coded UI Test – Recording mouse clicks

Coded UI Tests are something I’ve always thought would be useful, but have only recently actually used. This post details the first issue that I came up against.

The Problem

When you record a Coded UI test, you tell VS you want to do so, run whatever application that you want to test, and then record a series of tests; usually this means clicking the screen. Here’s what one of the UI projects looks like after you finish recording:

[Image: CodedUIStructure]

The designer file looks something like this for a series of clicks:

        public void RunBuildAnalysisAndClose()
        {
            #region Variable Declarations
            WpfButton uIValidateURIButton = this.UIWpfWindow.UIValidateURIButton;
            WpfButton uIBuildAnalysisButton = this.UIWpfWindow.UIBuildAnalysisButton;
            WinButton uICloseButton = this.UIBuildAnalysisWindow1.UICloseButton;
            WinButton uICloseButton1 = this.UIItemWindow.UICloseButton;
            #endregion

            // Click 'Validate URI' button
            Mouse.Click(uIValidateURIButton, new Point(677, 6));

            // Click 'Build Analysis' button
            Mouse.Click(uIBuildAnalysisButton, new Point(619, 9));

            // Click 'Close' button
            Mouse.Click(uICloseButton, new Point(16, 5));

            // Click 'Close' button
            Mouse.Click(uICloseButton1, new Point(30, 9));
        }

The recording that generated this was a quick test of a codeplex project of mine called TFS Utils.

My initial thought about this was: what happens when the buttons move around the screen, or the resolution changes? So I tried both of those things… and it still works. The points above are relative; what’s more, they are optional. The following is the same code, copied over to UIMap.cs with the points removed.

        public void RunBuildAnalysisAndCloseManual()
        {
            #region Variable Declarations
            WpfButton uIValidateURIButton = this.UIWpfWindow.UIValidateURIButton;
            WpfButton uIBuildAnalysisButton = this.UIWpfWindow.UIBuildAnalysisButton;
            WinButton uICloseButton = this.UIBuildAnalysisWindow1.UICloseButton;
            WinButton uICloseButton1 = this.UIItemWindow.UICloseButton;
            #endregion

            // Click 'Validate URI' button
            Mouse.Click(uIValidateURIButton);

            // Click 'Build Analysis' button
            Mouse.Click(uIBuildAnalysisButton);

            // Click 'Close' button
            Mouse.Click(uICloseButton);

            // Click 'Close' button
            Mouse.Click(uICloseButton1);
        }

Quick Point on the Points

Since this Point seemed effectively pointless, I had a look around, and it turns out there is a reason, which is menu buttons: for those, you need to specify a particular point within the button, rather than just anywhere. I would argue that the default recording should be what’s above, as it seems cleaner. I’m not entirely sure who I would argue that with, though.

Disabled Controls

If you look at the menu in the project referenced, you’ll see that the menu items enable only when the URI is validated. This took me a while to work out, because if the control isn’t available, it just breaks the test:

        public void RunBuildAnalysisAndCloseManual()
        {
            #region Variable Declarations
            WpfButton uIValidateURIButton = this.UIWpfWindow.UIValidateURIButton;
            WpfButton uIBuildAnalysisButton = this.UIWpfWindow.UIBuildAnalysisButton;
            WinButton uICloseButton = this.UIBuildAnalysisWindow1.UICloseButton;
            WinButton uICloseButton1 = this.UIItemWindow.UICloseButton;
            #endregion

            // Click 'Validate URI' button
            Mouse.Click(uIValidateURIButton);

            // Wait as long as 30 seconds for this control to become enabled
            uIBuildAnalysisButton.WaitForControlEnabled(30000);

            // Click 'Build Analysis' button
            Mouse.Click(uIBuildAnalysisButton);
            
            // Click 'Close' button
            Mouse.Click(uICloseButton);

            // Click 'Close' button
            Mouse.Click(uICloseButton1);
        }

It’s not difficult to see how this could be extracted into a helper method:

        private void ClickButton(WpfButton button)
        {
            button.WaitForControlEnabled(30000);                        
            Mouse.Click(button);
        }
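
With that helper in place, the WPF button clicks in the hand-rolled test reduce to something like this (a sketch only; the two Close buttons are WinButtons, so they would need an equivalent overload):

            // Click 'Validate URI', then wait for 'Build Analysis' to enable and click it
            ClickButton(uIValidateURIButton);
            ClickButton(uIBuildAnalysisButton);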

Conclusion

From the above investigation, I’ve come to a few conclusions:
1. Don’t rely on the designer file (and don’t bother changing it, as it’s generated code)
2. Hand roll tests – you get far more control

Unit Testing Methods With Random Elements (in MVVM Cross)

Okay, quick spoiler for this: you can’t. You can’t, not really; obviously, you can write the test, but unit tests should be deterministic, and a random element, by definition, is not.

Solution

I imagine there are a few ways of solving this. The way shown in this post is specific to MVVM Cross, but should work with any system that uses an IoC container. In brief, we’re simply going to mock out the system Random class.

How?

Well, since System.Random is the domain of Microsoft, we’ll start with a wrapper; and since this is MVVM Cross, we’ll make it a service:

    class RandomService : IRandomService
    {
        private static Random _rnd = null;

        public virtual int SelectRandomNumber(int max)
        {
            if (_rnd == null)
            {
                _rnd = new Random();
            }

            return _rnd.Next(max);
        }
    }

Couple of notes on this:
1. I haven’t posted the interface but it’s just the one method.
2. The reason for the Random instance being static is that the default seed is taken from the system clock, meaning that if you created a new instance on each call in quick succession, you could get the same numbers returned (see the sketch after this list).
3. This is not thread safe.
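
To illustrate point 2: on the .NET Framework, the parameterless Random constructor seeds itself from the system clock, so two instances created in quick succession will typically produce identical sequences. A quick sketch:

        // On the .NET Framework, both of these are usually seeded from the same
        // tick count, so the "random" sequences they produce are identical
        Random first = new Random();
        Random second = new Random();

        Console.WriteLine(first.Next(100));
        Console.WriteLine(second.Next(100));   // very likely the same value as above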

Okay – all that out of the way, the code is pretty basic. Now let’s call it:

        public static T SelectRandomElement<T>(this IEnumerable<T> enumeration)
        {
            var service = Mvx.Resolve<IRandomService>();            
            // Count() is the exclusive upper bound: Random.Next(max) returns 0..max-1
            int idx = service.SelectRandomNumber(enumeration.Count());

            return enumeration.ElementAt(idx);
        }

Right, so you’ll recognise the extension method from the last post, but now it retrieves the instance of the random service; here’s where we register that:

        protected override void InitViewModel()
        {
            Mvx.ConstructAndRegisterSingleton<IRandomService, RandomService>();
        }

You can actually register it anywhere you like, as long as it’s before the service is first resolved.

Okay, so now we should have unchanged functionality; everything works as before.

The Unit Tests

The first task here is to create the mock RNG:

    class MockRandomService : IRandomService
    {
        static int _lastNumber = 0;

        public int SelectRandomNumber(int max)
        {
            // Cycle deterministically through the valid indices (0..max - 1)
            if (_lastNumber < max - 1)
                return ++_lastNumber;
            else
            {
                _lastNumber = 0;
                return _lastNumber;
            }
        }
    }

This now allows me to determine what the next random number will be.
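
One more note: the second test below uses a predicate overload of SelectRandomElement, which comes from the earlier post and isn’t shown here. Judging by the assertion in that test, it appears to exclude elements that match the predicate; a sketch under that assumption might look like this:

        public static T SelectRandomElement<T>(
            this IEnumerable<T> enumeration, Func<T, bool> predicate)
        {
            // Assumed behaviour: pick a random element that does NOT match the predicate
            return enumeration.Where(x => !predicate(x)).SelectRandomElement();
        }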

MVVM Cross Unit Testing

To set up a test for MVVM Cross using the IoC container, you need to add some additional libraries to the test project first:

[Image: http://pmichaels.net/wp-content/uploads/2014/07/mvvmtest.png]


This will add Cirrious.MvvmCross.Test.Core:

[Image: http://pmichaels.net/wp-content/uploads/2014/07/refs.png]

And that is important, because it allows you to declare your test class as follows:


    [TestClass]
    public class ExtensionMethodTests : MvxIoCSupportingTest
    {

Inheriting from MvxIoCSupportingTest allows you to call base.Setup(), which prevents the IoC container from crashing when you call it in a test. Here’s the full unit test code:

[TestClass]
public class ExtensionMethodTests : MvxIoCSupportingTest
{
    [TestMethod]
    public void TestSelectRandomElement()
    {
        base.Setup();

        Mvx.ConstructAndRegisterSingleton<IRandomService, MockRandomService>();

        List<int> testCollection = new List<int>();

        testCollection.Add(1);
        testCollection.Add(3);
        testCollection.Add(5);
        testCollection.Add(7);
        testCollection.Add(9);

        // Cycle through all elements
        for (int i = 0; i <= 5; i++)
        {
            int e = testCollection.SelectRandomElement();
            Assert.AreNotEqual(e, 0);
        }
    }

    [TestMethod]
    public void TestSelectRandomElementPredicate()
    {
        base.Setup();

        Mvx.ConstructAndRegisterSingleton<IRandomService, MockRandomService>();

        List<int> testCollection = new List<int>();

        testCollection.Add(1);
        testCollection.Add(3);
        testCollection.Add(5);
        testCollection.Add(7);
        testCollection.Add(9);

        // Cycle through all elements
        for (int i = 0; i <= 5; i++)
        {
            int e = testCollection.SelectRandomElement(n => n < 2);
            Assert.AreNotEqual(e, 1);
        }
    }
}

Conclusion

So, I now have a custom RNG and unit tests that will tell me what happens when I call the method for each element. Obviously these tests are not exhaustive, but they are deterministic.