Category Archives: C#

Using Azure Functions to Send an E-mail Alert from a Service Bus

In this post, I discussed creating an Azure service bus that sends an e-mail as an action once a message has expired; and in this post, I covered Azure functions and setting a basic one up.

These two pieces of functionality seem to be crying out to be together. After all, if your functionality to send an e-mail is in the cloud, you don’t have to worry about your server being down (which, if your message has expired, is a real possibility).

Create the Azure Function

The first thing to do is to create the Azure function to send an e-mail. Remember that we’ll be hooking into the service bus, and so we’ll create the function a little differently.

The first few steps are the same, though:

The new function is here:

We’ll create a custom function again:

Although this looks familiar from the last post, the next part does differ slightly. This time, we’ll set up a Service Bus Trigger:

This requires the connection string to your service bus…

As you can see above, the service bus connection is blank, and there are no possible entries… onto App Settings:

App Settings

On the App Settings tab, you can configure settings that pertain to your Azure Function App. Select “Manage App Settings”. Here we can set up a connection string:
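For reference, a Service Bus connection string takes roughly this shape (the namespace and key here are placeholders):

Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<your-key>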

Now, we should be able to see that from the Function:

Does it work?

What does this function do out of the box?
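If memory serves, the out-of-the-box template is just a logging stub, along these lines:

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}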

Well, having populated the queue with 50 messages that time out after 30 seconds, the function kicked in and started logging that it was picking up messages after 30 seconds – so that’s a promising sign!

The messages are processed and removed from the dead-letter queue. This happens so quickly that it’s easy to interpret it (as I did) as a bug (i.e. that messages are not being dead-lettered at all). However, as we can see from the function logs, they are.

This did, however, leave me with a concern that the messages were being disposed of before they had been successfully processed. To check this, I changed the function slightly:
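A minimal sketch of the kind of change (assuming we just throw before handling the message, so that processing fails and the message should remain dead-lettered):

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    // Fail deliberately, so that the message is never completed
    throw new Exception("Deliberate failure - this message should remain dead-lettered");
}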

So, it crashes correctly:

And here, safe and sound, are 50 freshly dead-lettered messages:

Function Code

Now we have a function, we need to make it send an e-mail… so we’ll need some code. Let’s start with what we created here.


using System;
using System.Threading.Tasks;
using System.Net.Mail;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
    message.To.Add("to.address@hotmail.co.uk");
    message.Subject = "Message in queue has expired";
    message.From = new System.Net.Mail.MailAddress("from.address@hotmail.co.uk");
    message.Body = messageText;
    System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
    smtp.Port = 587;
    smtp.UseDefaultCredentials = false;
    smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "p@ssw0rd");
    smtp.EnableSsl = true;
    smtp.Send(message);

    log.Info($"End C# ServiceBus queue trigger function processed message: {myQueueItem}");
}


This doesn’t work:

2017-06-27T16:47:56.928 Function started (Id=1188dbdb-4963-4e55-af5c-4be1f71a1ca5)
2017-06-27T16:47:56.928 Start C# ServiceBus queue trigger function processed message: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA32
2017-06-27T16:47:56.928 Function completed (Failure, Id=1188dbdb-4963-4e55-af5c-4be1f71a1ca5, Duration=0ms)
2017-06-27T16:47:57.147 Exception while executing function: Functions.ServiceBusQueueTriggerCSharp1. mscorlib: Exception has been thrown by the target of an invocation. f-ServiceBusQueueTriggerCSharp1__-1971403142: Cannot complete.
2017-06-27T16:47:57.557 Exception while executing function: Functions.ServiceBusQueueTriggerCSharp1. mscorlib: Exception has been thrown by the target of an invocation. f-ServiceBusQueueTriggerCSharp1__-1971403142: Cannot complete.

Debugging Azure

A quick side note on debugging Azure. There are a number of resources with details of how this should work on the web, and I’ll probably have a later post of my own experiences, but it’s a pretty flaky experience, and I ended up using trial and error to determine the issue.

Working code

using System;
using System.Threading.Tasks;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
    
    message.To.Add("to.address@hotmail.co.uk");    
    message.Subject = "Message in queue has expired";    
    message.From = new System.Net.Mail.MailAddress("from.address@hotmail.co.uk");
    message.Body = myQueueItem;
        
    System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
    smtp.Port = 587;
    smtp.UseDefaultCredentials = false;
    smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "p@ssw0rd");
    smtp.EnableSsl = true;
    smtp.Send(message);

    log.Info($"End C# ServiceBus queue trigger function processed message: {myQueueItem}");
}

So, the problem was just that I was referencing an unknown variable (messageText). I’m unsure exactly why I needed to travel to the mountains of Mordor to determine this – a simple error message in the online text would have sufficed.

The other issue that I faced was a security challenge; however, once I’d persuaded Azure that this really was me, everything sprang into life:

Credit Considerations

Unlike in previous posts, where I’ve identified the Azure cost as negligible, functions are the fastest way to use up credit I have found so far; especially functions such as the one I’ve created here. I left the (non-working) function above active, but failing, all night, and it used up over £40 worth of credit, continually trying, and failing, to process the dead-letter queue… I think the lights might even have dimmed in Redmond for a split second! The moral of the story is: be careful when you’re debugging this – you can’t just leave at the end of the night with a function that doesn’t work, but is still active.

Summary

This concept is extremely compelling. I can have a service bus queue that is processed and monitored by an Azure function. If aliens land and steal the entire office (all the servers, dev PCs and programmers), this function will continue to run. There is obviously a mindset shift here, and it doesn’t make sense to move everything into this kind of model, but consider the possibilities. Imagine a system that books holidays: it processes the customer request and adds it to a queue; the aeroplane booking system picks that from the queue and books the ticket on the plane; the car hire system takes the message to book a car; once they’re all complete, each adds a message to say so (but they remain agnostic of each other); finally, if any one part of the system fails, an Azure function could sit there monitoring and cancel the whole lot. I’ve never worked in this kind of industry, so there’s a lot that I’ve probably not considered, but the essence is that you can have active functionality on (even catastrophic) failure – which is a brand new concept.

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus

https://stackoverflow.com/questions/10043219/view-content-of-an-azure-service-bus-queue

Service Bus Explorer:

https://code.msdn.microsoft.com/Service-Bus-Explorer-f2abca5a

http://markheath.net/post/remote-debugging-azure-functions

Sending e-mails:

https://stackoverflow.com/questions/25216202/smtp-live-com-mailbox-unavailable-the-server-response-was-5-7-3-requested-ac

Using BenchmarkDotNet to profile string comparison

Introduction

String comparison and manipulation of strings are some of the slowest and most expensive (in terms of GC) things that you can do in .Net. In my head, I’ve always believed that using String.Compare outperforms string1.ToUpper() == string2.ToUpper(), which I think I once saw on a StackOverflow post.

In this post, I will do some actual testing on the various methods using BenchmarkDotNet (which I have previously written about).

Setting Up BenchmarkDotNet

There’s not much to this – just install a NuGet package:

Install-Package BenchmarkDotNet

Other than that, you just need to decorate your methods with:

[Benchmark]

You can’t (at the time of writing) specify method parameters, but you can decorate a set-up method, or specify some parameters in a public variable:


        [Params("test1", "test2", "I am an aardvark")]
        public string _string1;

        [Params("test1", "Test2", "I Am an AARDVARK")]
        public string _string2;

Finally, in the main method, you run the class:


        static void Main(string[] args)
        {
            BenchmarkRunner.Run<StringCompareCaseSensitive>();
        }

Once run, the results are output into the following directory:

bin\Debug\BenchmarkDotNet.Artifacts\results

Comparing strings

Case sensitive

The following are the ways that I can think of to compare a string where the case is known:

string1 == string2

string1.Equals(string2) – with various flags

string.Compare(string1, string2)

string.CompareOrdinal(string1, string2)

string1.CompareTo(string2)

string1.IndexOf(string2) – with various flags
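As a sketch, here’s roughly how these might translate into benchmark methods (the method names, and the bool return values, are mine):

using System;
using BenchmarkDotNet.Attributes;

public class StringCompareCaseSensitive
{
    [Params("test1", "test2", "I am an aardvark")]
    public string _string1;

    [Params("test1", "Test2", "I Am an AARDVARK")]
    public string _string2;

    [Benchmark]
    public bool EqualsOperator() => _string1 == _string2;

    [Benchmark]
    public bool EqualsMethod() => _string1.Equals(_string2);

    [Benchmark]
    public bool CompareMethod() => string.Compare(_string1, _string2) == 0;

    [Benchmark]
    public bool CompareOrdinalMethod() => string.CompareOrdinal(_string1, _string2) == 0;

    [Benchmark]
    public bool CompareToMethod() => _string1.CompareTo(_string2) == 0;

    [Benchmark]
    public bool IndexOfMethod() => _string1.IndexOf(_string2, StringComparison.Ordinal) >= 0;
}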

And the results were:

This is definitely not what I expected. String.Compare is actually slower than a straightforward comparison, and not by a small amount.

Case insensitive

The following are the ways that I can think of to compare a string where the case is not known:

string1.ToUpper() == string2.ToUpper()

string1.ToLower() == string2.ToLower()

string1.Equals(string2) – with various flags

string.Compare(string1, string2, true)

string1.IndexOf(string2) – with various flags

Results:

So, it looks like the most efficient string comparison is:

_string1.Equals(_string2, StringComparison.OrdinalIgnoreCase);

But why?

Nobody knows – Looking at the IL

The good thing about .Net is that, if you want to see what your code looks like once it’s “compiled”, you can. It’s not perfect, because you still can’t see the actual, executed code, but it still gives you a good idea of why it’s slow or fast. However, because all of the functions in question are system functions, looking at the IL for the test code is pretty much pointless.

Let’s run ildasm:

(bet you’re glad I included that screenshot)

The string comparison functions are in mscorlib.dll:

Here’s the code in there:

.method public hidebysig static int32  Compare(string strA,
                                               string strB,
                                               valuetype System.StringComparison comparisonType) cil managed
{
  .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 ) 
  // Code size       0 (0x0)
} // end of method String::Compare

To be honest, I spent a while burrowing down this particular rabbit hole… but finally decided to see what ILSpy had to say about it… it looks like there is a helper method in the string class that, for some reason, ildasm doesn’t show. Let’s have a look at what it does for:

string.Compare(_string1, _string2, true) == 0

The decompiled version is:

[__DynamicallyInvokable]
public static int Compare(string strA, string strB, bool ignoreCase)
{
    if (ignoreCase)
    {
        return CultureInfo.CurrentCulture.CompareInfo.Compare(strA, strB, CompareOptions.IgnoreCase);
    }
    return CultureInfo.CurrentCulture.CompareInfo.Compare(strA, strB, CompareOptions.None);
}

And the CompareInfo.Compare method:

public virtual int Compare(string string1, string string2, CompareOptions options)
{
    if (options == CompareOptions.OrdinalIgnoreCase)
    {
        return string.Compare(string1, string2, StringComparison.OrdinalIgnoreCase);
    }
    if ((options & CompareOptions.Ordinal) != CompareOptions.None)
    {
        if (options != CompareOptions.Ordinal)
        {
            throw new ArgumentException(Environment.GetResourceString("Argument_CompareOptionOrdinal"), "options");
        }
        return string.CompareOrdinal(string1, string2);
    }
    else
    {
        if ((options & ~(CompareOptions.IgnoreCase | CompareOptions.IgnoreNonSpace | CompareOptions.IgnoreSymbols | CompareOptions.IgnoreKanaType | CompareOptions.IgnoreWidth | CompareOptions.StringSort)) != CompareOptions.None)
        {
            throw new ArgumentException(Environment.GetResourceString("Argument_InvalidFlag"), "options");
        }
        if (string1 == null)
        {
            if (string2 == null)
            {
                return 0;
            }
            return -1;
        }
        else
        {
            if (string2 == null)
            {
                return 1;
            }
            return CompareInfo.InternalCompareString(this.m_dataHandle, this.m_handleOrigin, this.m_sortName, string1, 0, string1.Length, string2, 0, string2.Length, CompareInfo.GetNativeCompareFlags(options));
        }
    }
}

And further:

Well… I couldn’t get further, so I asked Microsoft… the impression is that this function is generated at runtime.

There was a link to some code in this answer, too. While I couldn’t really identify any actual comparison code from this, I did notice that there was a check like this:

#ifndef FEATURE_CORECLR

So… does .Net Core work any better?

Having created a new .Net Core project, I copied the files across (I was going to add them as a link, but InvariantCulture has been removed from, or rather not included in, Core).

Anyway, the results from .Net Core (for case sensitive checks) are:

And for case insensitive:

Conclusion

So, the clear winner across all tests for case sensitive checks is to use:

string1.Equals(string2)

And .Net Core is slightly faster than 4.6.2.

For case insensitive the clear winner is (by a large margin):

string1.Equals(string2, StringComparison.OrdinalIgnoreCase);

And, again, there’s around a 15 – 20% speed boost using .Net Core.

References

There is a GitHub repository for the code in this post here.

https://msdn.microsoft.com/en-us/library/fbh501kz%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396

https://github.com/dotnet/BenchmarkDotNet/issues/60

http://mattwarren.org/2016/02/17/adventures-in-benchmarking-memory-allocations/

https://www.hanselman.com/blog/BenchmarkingNETCode.aspx

http://pmichaels.net/2016/11/04/message-persistence-in-rabbitmq-and-benchmarkdotnet/

https://blog.codinghorror.com/the-real-cost-of-performance/

https://msdn.microsoft.com/en-us/library/aa309387%28v=vs.71%29.aspx?f=255&MSPPError=-2147217396

http://ilspy.net/

http://stackoverflow.com/questions/9491337/what-is-dllimportqcall

Serialising Interfaces in JSON (or using a JsonConverter in JSON.NET)

Imagine that you have the following interface:

    public interface IProduct
    {
        int Id { get; set; }
        decimal UnitPrice { get; set; }
    }

This is an interface, and so may have a number of implementations; however, we know that every implementation will contain at least these two properties, and what type they will be. If we wanted to serialise this, we’d probably write something like this:

        private static string SerialiseProduct(IProduct product)
        {
            string json = JsonConvert.SerializeObject(product);
            return json;
        }

If you were to call this from a console app, it would work fine:


        static void Main(string[] args)
        {
            IProduct product = new Product()
            {
                Id = 1,
                UnitPrice = 12.3m
            };

            string json = SerialiseProduct(product);
            Console.WriteLine(json);

Okay, so far so good. Now, let’s deserialise:


        private static IProduct DeserialiseProduct(string json)
        {
            IProduct product = JsonConvert.DeserializeObject<IProduct>(json);

            return product;
        }

And let’s call it:


        static void Main(string[] args)
        {
            IProduct product = new Product()
            {
                Id = 1,
                UnitPrice = 12.3m
            };

            string json = SerialiseProduct(product);
            Console.WriteLine(json);

            IProduct product2 = DeserialiseProduct(json);
            Console.WriteLine(product2.Id);
            
            Console.ReadLine();

        }

So, does that run fine?

Newtonsoft.Json.JsonSerializationException: ‘Could not create an instance of type SerialiseInterfaceJsonNet.IProduct. Type is an interface or abstract class and cannot be instantiated.’

No.

Why?

The reason is that you can’t instantiate an interface; for example:
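That is, something like this:

IProduct product = new IProduct();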

That doesn’t even compile, but effectively, that’s what’s happening behind the scenes.

Converters

Json.Net allows the use of something called a converter. What that means is that I can inject functionality into the deserialisation process that tells Json.Net what to do with this interface. Here’s a possible converter for our class:


    class ProductConverter : JsonConverter
    {
        public override bool CanConvert(Type objectType)
        {
            return (objectType == typeof(IProduct));
        }

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            return serializer.Deserialize(reader, typeof(Product));
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            serializer.Serialize(writer, value, typeof(Product));
        }
    }

It’s a relatively simple interface: you tell it how to identify your class, and then how to read and write the JSON.

Finally, you just need to tell the deserialiser to use this converter:


        private static IProduct DeserialiseProduct(string json)
        {
            var settings = new JsonSerializerSettings();
            settings.Converters.Add(new ProductConverter());

            IProduct product = JsonConvert.DeserializeObject<IProduct>(json, settings);

            return product;
        }

By using the settings parameter.
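Should you want the converter to be used when serialising too (so that WriteJson is called), the same settings can be passed to SerializeObject; a quick sketch:

        private static string SerialiseProduct(IProduct product)
        {
            var settings = new JsonSerializerSettings();
            settings.Converters.Add(new ProductConverter());

            string json = JsonConvert.SerializeObject(product, settings);
            return json;
        }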

References

http://www.jerriepelser.com/blog/custom-converters-in-json-net-case-study-1/

Console Application Builds, But Will Not Run

While doing some testing recently, I created a new, bog-standard console application and, on pressing F5, nothing happened.

The project built fine, but wouldn’t launch the console window.

Why (and how to fix)?

Well, I had installed the Azure Service Bus Client. Other than that, I can’t really say; however, the fix does kind of make sense:

Uncheck the “Prefer 32-bit” checkbox, and it all springs back to life!

NUnit TestCaseSource

While working on this project, I found a need to abstract away a base type that the unit tests use (in this instance, it was a queue type). I was only testing a single type (PriorityQueue); however, I wanted to create a new type, but all the basic tests for the new type are the same as the existing ones. This led me to investigate the TestCaseSource attribute in NUnit.

As a result, I needed a way to re-use the tests. There are definitely multiple ways to do this; the simplest one is probably to create a factory class, and pass in a string parameter. The only thing that put me off this is that you end up with the following test case:

        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a"]
        public void Queue_Dequeue_CheckResultOrdering(
            string first, string last, params string[] queueItems)
        {

Becoming:

        [TestCase("PriorityQueue", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("PriorityQueue2", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("PriorityQueue", "a1", "a", "a1", "b", "c", "d", "a"]
        [TestCase("PriorityQueue2", "a1", "a", "a1", "b", "c", "d", "a"]
        public void Queue_Dequeue_CheckResultOrdering(
            string queueType, string first, string last, params string[] queueItems)
        {

This isn’t very scalable when adding a third or fourth type.

TestCaseSource

It turns out that the (or at least an) answer to this is to use NUnit’s TestCaseSource attribute. The NUnit code base dogfoods quite extensively, so that is not a bad place to look for examples of how this works; however, what I couldn’t find was a way to mix and match. To better illustrate the point, here’s the first test that I changed to use TestCaseSource:

        [Test]
        public void Queue_NoEntries_CheckCount()
        {
            // Arrange
            PQueue.PriorityQueue<string> queue = new PQueue.PriorityQueue<string>();

            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

Which became:

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        public void Queue_NoEntries_CheckCount(IQueue<string> queue)
        {
            // Arrange


            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

(For completeness, the TestableQueueItemFactory is here):

    public static class TestableQueueItemFactory
    {
        public static IEnumerable<IQueue<string>> ReturnQueueTypes()
        {
            yield return new PQueue.PriorityQueue<string>();
        }
    }

However, when you have a TestCase like the one above, there’s a need for the equivalent of this (which doesn’t work):

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9")]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a")]
        public void Queue_Dequeue_CheckResultOrdering(string first, string last, params string[] queueItems)
        {

A quick look at the NUnit code base reveals these attributes to be mutually exclusive.

Compromise

By no means is this a perfect solution, but the one that I settled on was to create a second TestCaseSource helper method, which looks like this (along with the test):

        private static IEnumerable Queue_Dequeue_CheckResultOrdering_TestCase()
        {
            foreach(var queueType in TestableQueueItemFactory.ReturnQueueTypes())
            {
                yield return new object[] { queueType, "test", "test9", new string[] { "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9" } };
                yield return new object[] { queueType, "a1", "a", new string[] { "a1", "b", "c", "d", "a" } };
            }
        }

        [Test, TestCaseSource("Queue_Dequeue_CheckResultOrdering_TestCase")]
        public void Queue_Dequeue_CheckResultOrdering(
            IQueue <string> queue, string first, string last, params string[] queueItems)
        {

As you can see, the second helper method doesn’t really help readability, so it’s certainly not a perfect solution; in fact, with a single queue type, this makes the code more complex and less readable. However, when second and third queue types are introduced, the test suddenly becomes resilient.

YAGNI

At first glance, this may appear to be an example of YAGNI. However, in this article, Martin Fowler does state:

Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify.

Which, I believe, is what we are doing here.

References

http://www.smaclellan.com/posts/parameterized-tests-made-simple/

http://stackoverflow.com/questions/16346903/how-to-use-multiple-testcasesource-attributes-for-an-n-unit-test

https://github.com/nunit/docs/wiki/TestCaseSource-Attribute

http://dotnetgeek.tumblr.com/post/2851360238/exploiting-nunit-attributes-valuesourceattribute


Testing for Exceptions using the Arrange Act Assert Pattern in C# 7

Unit testing massively benefits from following the Arrange / Act / Assert pattern. I’ve seen tests that are not written in this way, and they can be sprawling and indecipherable, either testing many different things in series, or testing nothing at all except the .Net Framework.

I recently found an issue while trying to test for an exception being thrown, which is that NUnit (and probably other frameworks) tests for an exception by accepting a delegate to test. Here’s an example:

        [Test]
        public void Test_ThrowException_ExceptionThrown()
        {
            // Arrange
            TestClass tc = new TestClass();

            // Act / Assert
            Assert.Throws(typeof(Exception), tc.ThrowException);
        }

We’re just testing a dummy class:

    public class TestClass
    {
        public void ThrowException()
        {
            throw new Exception("MyException");
        }
    }

C# 7 – Inline functions

If you look in the references at the bottom, you’ll see something more akin to this approach:

        [Test]
        public void Test_ThrowException_ExceptionThrown2()
        {
            // Arrange
            TestClass tc = new TestClass();

            // Act
            TestDelegate throwException = () => tc.ThrowException();            

            // Assert
            Assert.Throws(typeof(Exception), throwException);
        }

However, since C# 7, the option of a local function has arrived. The following has the same effect:

        [Test]
        public void Test_ThrowException_ExceptionThrown3()
        {
            // Arrange
            TestClass tc = new TestClass();

            // Act
            void CallThrowException()
            {
                tc.ThrowException();
            }

            // Assert
            Assert.Throws(typeof(Exception), CallThrowException);
        }

I think that I, personally, still prefer the anonymous function for this; however, the local function does present some options; for example:


        [Test]
        public void Test_ThrowException_ExceptionThrown4()
        {
            void CallThrowException()
            {
                // Arrange
                TestClass tc = new TestClass();

                // Act
                tc.ThrowException();
            }

            // Assert
            Assert.Throws(typeof(Exception), CallThrowException);
        }

Now I’m not so sure that I still prefer the anonymous function.

References

http://stackoverflow.com/questions/33897323/nunit-3-0-and-assert-throws

https://pmbanugo.wordpress.com/2014/06/16/exception-testing-pattern/

http://stackoverflow.com/questions/24070115/best-approach-towards-applying-the-arrange-act-assert-pattern-when-expecting-exc

WPF Performance – TextBlock

WPF typically doesn’t have many performance issues and, where it does, this can usually be fixed by virtualisation. Having said that, even if you never need to use this, it’s useful to know that you can eke that last ounce of performance out of the system.

Here’s a basic program that contains a TextBlock:

<Window x:Class="TextBlockTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:TextBlockTest"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525"
        x:Name="MainWindowView">
    <Grid>
        <ScrollViewer>
            <ItemsControl ItemsSource="{Binding BigList, ElementName=MainWindowView}" Margin="0,-1,0,1">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <TextBlock Text="{Binding}"/>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </ScrollViewer>
    </Grid>
</Window>

Code behind:

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

namespace TextBlockTest
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public ObservableCollection<string> BigList { get; set; }

        public MainWindow()
        {
            BigList = new ObservableCollection<string>();
            for (int i = 0; i <= 10000; i++)
            {
                BigList.Add($"Item {i}");
            }

            InitializeComponent();
        }
    }
}

Let’s, for a minute, imagine this is slow, and profile it:

The layout is taking most of the time. Remember that each control needs to be created, and remember that the TextBlock does slightly more than just display text:

The whole panel took 3.46s. Not terrible performance, but can it be improved? The answer is: yes, it can! Very, very slightly.

Let’s create a Custom Control:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

namespace FastTextBlock
{
   
    public class MyTextBlockTest : Control
    {
        private FormattedText _formattedText;

        static MyTextBlockTest()
        {
            //DefaultStyleKeyProperty.OverrideMetadata(typeof(MyTextBlockTest), new FrameworkPropertyMetadata(typeof(MyTextBlockTest)));
        }

        public static readonly DependencyProperty TextProperty =
             DependencyProperty.Register(
                 "Text", 
                 typeof(string),
                 typeof(MyTextBlockTest), 
                 new FrameworkPropertyMetadata(string.Empty, FrameworkPropertyMetadataOptions.AffectsMeasure,
                    (o, e) => ((MyTextBlockTest)o).TextPropertyChanged((string)e.NewValue)));

        private void TextPropertyChanged(string text)
        {
            var typeface = new Typeface(
                new FontFamily("Times New Roman"),
                FontStyles.Normal, FontWeights.Normal, FontStretches.Normal);

            _formattedText = new FormattedText(
                text, CultureInfo.CurrentCulture,
                FlowDirection.LeftToRight, typeface, 15, Brushes.Black);
        }


        public string Text
        {
            get { return (string)GetValue(TextProperty); }
            set { SetValue(TextProperty, value); }
        }

        protected override void OnRender(DrawingContext drawingContext)
        {
            if (_formattedText != null)
            {
                drawingContext.DrawText(_formattedText, new Point());
            }
        }

        protected override Size MeasureOverride(Size constraint)
        {
            //return base.MeasureOverride(constraint);

            return _formattedText != null
            ? new Size(_formattedText.Width, _formattedText.Height)
            : new Size();
        }
    }
}

Here’s the new XAML:

    <Grid>
        <ScrollViewer>
            <ItemsControl ItemsSource="{Binding BigList, ElementName=MainWindowView}" Margin="0,-1,0,1">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <!--<TextBlock Text="{Binding}"/>-->
                        <controls:MyTextBlockTest Text="{Binding}" />
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </ScrollViewer>
    </Grid>

Shaves around 10ms off the time:

Even more time can be shaved by moving further up the hierarchy; that is, by inheriting from something more basic than `Control`. In fact, `Control` itself inherits from `FrameworkElement`:

public class MyTextBlockTest : FrameworkElement

Shaves another 10ms off:

Conclusion

Clearly, this isn’t a huge performance boost, and in 99% of use cases, this would be massively premature optimisation. However, the time that this really comes into its own is where you have a compound control (a control that consists of other controls), and you have lots of them (hundreds). See my next post for details.

References:

https://social.msdn.microsoft.com/Forums/en-US/94ddd25e-7093-4986-b8c8-b647924251f6/manual-rendering-of-a-wpf-user-control?forum=wpf

http://www.codemag.com/article/100023

http://stackoverflow.com/questions/20338044/how-do-i-make-a-custom-uielement-derived-class-that-contains-and-displays-othe

http://stackoverflow.com/questions/42494455/wpf-custom-control-inside-itemscontrol

Using Entity Framework Core with DBFirst

Say what you like about ORM frameworks, but they do decrease time to market. My impression, as someone that has generally had very little exposure to them, is that, whilst they can make it very quick to get something up and running, they make it very easy to shoot yourself in the foot.

With all the hype about .Net Core, I thought I’d give EF Core a go, and this post is a document of my initial set-up, which was, by no means, a straightforward process!

The General Idea of an ORM

The purpose of an ORM is to abstract data access, in a manner similar to the following:
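In code terms, the idea is roughly this (a sketch using the JourneyHeader entity and context that get generated later in this post):

using System.Collections.Generic;
using System.Linq;

public static class OrmExample
{
    // Instead of hand-writing SQL and mapping the rows yourself,
    // the ORM exposes the table as a collection of objects
    public static List<JourneyHeader> GetJourneys()
    {
        using (var context = new JourneyDatabaseContext())
        {
            return context.JourneyHeader.ToList();
        }
    }
}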

The great advantage of this is that you can very quickly make database changes and maintain a layer of abstraction between the DB and the accessing layer. Obviously, the downside is that you don’t have the same level of control over this access.

Pre-requisites

The first step is to install the SDK from here. Like everything else in this post, this web site is not as straightforward as it appears. Make sure that you select “Current” and “SDK”:

I used x64. That matters when you get further down.

Database First

In this particular installation, I’m using the “Database First” model. What that means is that I already have a database, and it is sat on an accessible machine. The following project will create classes to access that. For details of how to create a database using a VS project, see this article that I wrote on unit testing databases.

Project set-up

The first step is to create your project.

What is the difference between ASP.NET Core (.NET Core) and ASP.NET Core (.NET Framework)?

In this instance, we’ll go with .NET Core. The difference between the two is that one of them (.NET Framework) references the full .NET Framework, and so will not be cross platform. Obviously, picking .NET Core gives you the smallest footprint, and the least functionality.

Pick Web API here, as we’re essentially just writing a service that accesses a DB (see the diagram above).

The next step is to install Entity Framework Core:

https://www.nuget.org/packages/Microsoft.EntityFrameworkCore/

A note on project.json

This is a new file that has been introduced into the .Net Core world, and it is (IMHO) a huge improvement on the flaky confusion of NuGet package management. The idea is that you declare your dependencies, in a similar way to how you might declare your variables in a program. When you change this, or when you ask it to, VS will simply go and get these packages for you. This means, for example, that I can paste my project.json in this post, and you, the reader (or future me), can simply paste it directly into yours, and VS will do the rest… so:

{
  "dependencies": {
    "Microsoft.ApplicationInsights.AspNetCore": "2.0.0",
    "Microsoft.AspNetCore.Hosting.Abstractions": "1.1.0",
    "Microsoft.AspNetCore.Mvc": "1.1.1",
    "Microsoft.AspNetCore.Routing": "1.1.0",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
    "Microsoft.EntityFrameworkCore": "1.1.0",
    "Microsoft.EntityFrameworkCore.Design": "1.1.0",
    "Microsoft.EntityFrameworkCore.Relational.Design": "1.1.0",
    "Microsoft.EntityFrameworkCore.SqlServer": "1.1.0",
    "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.1.0-preview4-final",
    "Microsoft.Extensions.Configuration.Json": "1.1.0",
    "Microsoft.Extensions.Logging": "1.1.0",
    "Microsoft.Extensions.Logging.Console": "1.1.0",
    "Microsoft.Extensions.Logging.Debug": "1.1.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.1.0",
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "NETStandard.Library": "1.6.1",
    "System.Collections.NonGeneric": "4.3.0",
 
    "Microsoft.EntityFrameworkCore.SqlServer.Design": {
      "version": "1.1.0",
      "type": "build"
    },
    "Microsoft.EntityFrameworkCore.Tools": {
      "version": "1.1.0-preview4-final",
      "type": "build"
    }
  },
 
  "tools": {
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final",
    "Microsoft.EntityFrameworkCore.Tools": "1.1.0-preview4-final"
  },
 
  "frameworks": {
    "netcoreapp1.1": {
      "imports": [
        "dotnet5.6",
        "portable-net45+win8"
      ]
    }
  },
 
  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true
  },
 
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },
 
  "runtimes": {
    "win10-x64": {}
  },
 
  "publishOptions": {
    "include": [
      "wwwroot",
      "**/*.cshtml",
      "appsettings.json",
      "web.config"
    ]
  },
 
  "scripts": {
    "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
  }
}

If you now simply open a package manager window in VS and type:

dotnet restore

VS should do the rest.

The next thing that you’ll need to update is the appsettings.json:

{
 
  "ConnectionStrings": {
    "JourneyDB": "Data Source=ServerName\\DatabaseInstance; Initial Catalog=MyDatabase; Integrated Security=SSPI"
  },
  
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

Scaffolding

So, you should now have a project that’s ready to go. My suggestion, unless you’re reading this around a year from now (2018), when all this has been stabilised, is that you restart VS. In fact, this should be your first response if anything in this post doesn’t do what you expect (it is still in preview, so I’m not judging).

The next step is to call the following command:

Scaffold-DbContext "Server=ServerName\DatabaseInstance;Database=MyDatabase;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models

What should that do (because it doesn’t)?

Okay, it took me a good while to get this to actually work, so the chance of it working first time is pretty remote; but when it does work, you should get a mirror of your DB in the project:

Using it

As you can see, I have a `JourneyHeader`; here’s the code to get the contents of that table:

namespace JourneyService.Controllers
{
    [Route("api/[controller]")]
    public class JourneyController : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<JourneyHeader> Get()
        {
            using (JourneyDatabaseContext context = new JourneyDatabaseContext())
            {
                return context.JourneyHeader.ToList();
            }            
        }
    }
}

And here’s the proof that it works:

References

https://code.msdn.microsoft.com/How-to-using-Entity-1464feea

http://www.dotnetspeak.com/asp-net-core/write-your-first-api-with-asp-net-core-and-entity-framework-core/

https://docs.microsoft.com/en-us/ef/core/get-started/aspnetcore/existing-db

http://michaelcrump.net/getting-started-with-aspnetcore/

https://github.com/dotnet/core/blob/master/release-notes/rc4-download.md

https://blogs.msdn.microsoft.com/dotnet/2016/11/16/announcing-entity-framework-core-1-1/

http://stackoverflow.com/questions/42393977/setting-up-database-first-ef-project-using-scaffold-dbcontext

http://stackoverflow.com/questions/42393977/setting-up-database-first-ef-project-using-scaffold-dbcontext?noredirect=1#comment71956574_42393977

Getting Started With SignalR

SignalR is an open source framework allowing bi-directional communication between client and server. Basically, it uses a stack of technologies, the idea being that the SignalR framework will establish the “best” way to maintain a bi-directional data stream, starting with web sockets, and falling all the way back to simply polling the server.

The following gives the basics of establishing a web site that can accept SignalR, and a console app that can send messages to it.

Create project

Let’s go MVC:

Hubs

Hubs are the way in which the SignalR service communicates with its clients. Obviously, the term service here may not actually represent a service.

To add a hub class, select the project, right-click and “New Item..”:

This adds the file, along with new references:

The code that gets added is:

public void Hello()
{
    Clients.All.hello();
}

Clients.All returns a dynamic type, so we lose intellisense at this point. It’s important that the signature of this method is exactly correct, and that the class is decorated with the name of the hub; so let’s replace it with:


[HubName("MyHub1")]
public class MyHub1 : Hub
{
    public void Hello(string message)
    {
        Clients.All.Hello(message);
    }
}

Change Startup.cs:

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        ConfigureAuth(app);
 
        app.MapSignalR();
    }
}

For all this to actually do anything, the next thing to do is hook up the JavaScript:

$(function () {
    // Declare a proxy to reference the hub. 
    var hub = $.connection.MyHub1;
    // Create a function that the hub can call to broadcast messages.
    hub.client.hello = function (message) {
 
        alert("Hello");
    };
 
    
    $.connection.hub.start()
        .done(function () { console.log("MyHub1 Successfully Started"); })
        .fail(function () { console.log("Error: MyHub1 Not Successfully Started"); })
});

Effectively, once we receive a message, we’re just going to display an alert. Once the event handler is wired up, we try to start the hub.

Next, reference the required files in BundleConfig.cs:

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
    "~/Scripts/jquery-3.1.1.min.js").Include(
    "~/Scripts/jquery.signalR-2.2.1.js"));

These are referenced in _Layout.cshtml; remember also that, because SignalR references jQuery, you’ll need to remove other references to jQuery:


<title>@ViewBag.Title - My ASP.NET Application</title>
@Styles.Render("~/Content/css")
@Scripts.Render("~/bundles/modernizr")    
@Scripts.Render("~/bundles/signalr")    
<script type="text/javascript" src="~/signalr/hubs"></script>
<script type="text/javascript" src="~/Scripts/Notification.js"></script>

. . .

    </div>
    
    @Scripts.Render("~/bundles/bootstrap")
    @RenderSection("scripts", required: false)
</body>

Notes on Bundles

The purpose of bundling is to shrink the size of the bundled files, the idea being that small files make for a speedy web site.

Console App

The next step is to create an application that can fire a notification to the page. In this case, I’m using a console app, just because I like to see everything working with console apps.

Start with a NuGet Reference:

The code:

using System;
using Microsoft.AspNet.SignalR.Client;

class Program
{
    static void Main(string[] args)
    {
        Console.Write("Message: ");
        string message = Console.ReadLine();
 
        HubConnection connection = new HubConnection("http://localhost:4053/");
        IHubProxy hub = connection.CreateHubProxy("myHub1");
                    
        connection.Start().Wait();
        hub.Invoke<string>("Hello", message).Wait();            
 
        Console.WriteLine("Sent");
        Console.ReadLine();
    }
}

And that’s it – you should be able to send a message to the web site from the console app. The examples that are typically given elsewhere on the net are chat rooms, but this clearly has many more uses.

Some abstract notes that I made while researching this.

Version 1

Adding:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
 
    RouteTable.Routes.MapHubs(new HubConfiguration());
 
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
…

Gives:

Severity Code Description Project File Line Source Suppression State
Error CS0619 ‘SignalRRouteExtensions.MapHubs(RouteCollection, HubConfiguration)’ is obsolete: ‘Use IAppBuilder.MapSignalR in an Owin Startup class. See http://go.microsoft.com/fwlink/?LinkId=320578 for more details.’ SignalRTest3 C:\Users\Paul\documents\visual studio 14\Projects\SignalRTest3\SignalRTest3\Global.asax.cs 18 Build Active

This was for v1 of SignalR, which was superseded in v2.

CORS

While trying to get this working, the prospect of using CORS came up. This enables cross-domain requests, which are typically prohibited.
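For SignalR 2, this follows the standard Owin CORS pattern; a sketch, assuming the Microsoft.Owin.Cors NuGet package is installed:

using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Cors;
using Owin;

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Branch the pipeline, and allow cross-domain requests to the hub
        app.Map("/signalr", map =>
        {
            map.UseCors(CorsOptions.AllowAll);
            map.RunSignalR(new HubConfiguration());
        });
    }
}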

Proxies

The generated Proxy can be viewed (navigate to http://localhost:4053/signalr/hubs):

$.hubConnection.prototype.createHubProxies = function () {
    var proxies = {};
    this.starting(function () {
        // Register the hub proxies as subscribed
        // (instance, shouldSubscribe)
        registerHubProxies(proxies, true);
        this._registerSubscribedHubs();
    }).disconnected(function () {
        // Unsubscribe all hub proxies when we "disconnect".  This is to ensure that we do not re-add functional call backs.
        // (instance, shouldSubscribe)
        registerHubProxies(proxies, false);
    });

    proxies['MyHub1'] = this.createHubProxy('MyHub1');
    proxies['MyHub1'].client = { };
    proxies['MyHub1'].server = {
        hello: function (message) {
            return proxies['MyHub1'].invoke.apply(proxies['MyHub1'], $.merge(["Hello"], $.makeArray(arguments)));
        }
    };

    return proxies;
};

References:

https://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-javascript-client

https://docs.microsoft.com/en-us/aspnet/signalr/overview/getting-started/tutorial-getting-started-with-signalr


https://github.com/SignalR/SignalR/wiki/Faq

http://stackoverflow.com/questions/42108193/signalr-test-project-not-working-as-expected

http://www.jeffreyfritz.com/2015/05/where-did-my-asp-net-bundles-go-in-asp-net-5/

Designing and Debugging Database Unit Tests

There are many systems out there in the wild, and some new ones being written, that use database logic extensively. This article discusses how and why these pieces of logic should be tested, along with whether they should exist at all.

In general, for unit tests, it’s worth asking the question of what, exactly, is being tested before starting. This is especially true in database tests; for example, consider a test where we update a field in a database, and then assert that the field is what it has been set to. Are you testing your trigger logic, or are you simply testing that Microsoft SQL Server works?

The second thing to consider is whether or not it makes any sense to use testable database logic in new code. That is, say we have a stored procedure that:
– Takes a product code
– Looks up what the VAT is for that product
– Calculates the total price
– Writes the result, along with the parameter and the price to a new table

Does it make sense for all that logic to be in the stored procedure, or would it make more sense to retrieve the values needed via one stored procedure, do the calculation in a testable server-side function, and call a second procedure to write the data?
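A sketch of that middle, testable step (the method name and shape are mine):

// The VAT lookup and the final write stay in thin stored procedures;
// the calculation itself becomes trivially unit-testable
public static decimal CalculateTotalPrice(decimal netAmount, decimal vatRate)
{
    return netAmount * (1 + vatRate);
}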

FIRST

Unit testing a database is a tricky business. First of all, if you have business logic in the database then it, almost by definition, depends on the state of the data. You obviously can simply run unit tests against the database and change the data, but let’s have a look at the FIRST principles, and see where database tests are inherently difficult.

Fast

It depends exactly what is meant by fast, but in comparison to a unit test that asserts some logic in C# code, database tests are slow (obviously, in comparison to conducting the test manually, they are very fast). Realistically, they are probably going to be sufficiently slow to warrant taking them out of your standard unit test suite. A sensible test project (that is, one that tests some actual code) may contain a good few hundred tests, let’s assume they all take 200ms – that means that 300 tests take a total of 60 seconds!

One thing that conducting DB tests does give you is an idea as to how fast (or slow) they actually are:

Isolated

It’s incredibly difficult to produce a database unit test that is isolated because, by its nature, a database has dependencies. Certainly, if anything you’re testing is dependent on a particular data state (for example, in the case above, the product that we are looking for must exist in a table, and have a VAT rate) then, unless this state is set up in the test itself, this rule is broken.

Repeatable

Again, this isn’t a small problem with databases. If I change Column A to test a trigger on the table, am I then able to change it again? What if the data is in a different state when I run the unit tests from the last time? I might get rogue fails or, worse, rogue passes. What happens if the test crashes halfway through; how do we revert?

Self-verifying

In my example before, I changed Column A in order to test a trigger, and I’ll maybe check something that is updated by the trigger. Providing that the assertion is inside the test, the test is self-verifying. Obviously, this is easier to do wrong in a database context, because if I do nothing, the data is left in a state that can be externally verified.

Timely

This refers to when a test is written. There’s nothing inherent about database tests that prevent them from being written before, or very shortly after the code is written. However, see the comment above as to whether new code written like this makes sense.

Problems With A Database Test Project

Given what we’ve put above, let’s look at the outstanding issues that realistically need to be solved in order to use database tests:

1. Deployment. Running a standard code test will run the code wherever you are; however, a database test, whichever way you look at it, needs a database before it runs.

2. Rollback. Each test needs to be isolated, and so there needs to be a way to revert to the database state before the tests began.

3. Set-up. Any dependencies that the tests have, must be inside the test; therefore, if a table needs to have three rows in it, we need to add those rows within the test.

4. Assertion. What are we testing, and what makes sense to test; each test needs a defined purpose.

Example Project

In order to explore the various possibilities when setting up a database project, I’m going to use an example project:

Let’s start with some functionality to test. I’m going to do it this way around for two reasons: having code to test better illustrates the problems faced by database tests, and it is my belief that much of the database logic code is legacy and, therefore, already exists.

Here’s a new table, and a trigger that acts upon it:

CREATE TABLE [dbo].[SalesOrder]
(
    [Id] INT NOT NULL PRIMARY KEY, 
    [ProductCode] NCHAR(10) NOT NULL, 
    [NetAmount] DECIMAL(18, 2) NULL, 
    [Tax] DECIMAL(18, 2) NULL, 
    [TotalAmount] DECIMAL(18, 2) NULL, 
    [Comission] DECIMAL(18, 2) NULL
)
GO
 
CREATE TRIGGER SalesOrderAfterInsert ON SalesOrder
AFTER INSERT, UPDATE
AS
BEGIN
	DECLARE @CalcTax Decimal(18,2) 
	DECLARE @CalcComission Decimal(18,2) 
     
	SELECT @CalcTax = INSERTED.NetAmount * 0.20 FROM INSERTED
	SELECT @CalcComission = INSERTED.NetAmount * 0.10 FROM INSERTED
	 
    UPDATE S
    SET S.Tax = @CalcTax,
		S.Comission = @CalcComission,
		S.TotalAmount = S.NetAmount + S.Tax
	FROM INSERTED, SalesOrder S
    WHERE S.Id = INSERTED.Id
END
GO

This is for the purpose of illustration, so obviously, there are things here that might not make sense in real life; however, the logic is very testable. Let’s deploy this to a database, and do a quick manual test:

Once the database is published, we can check and test it in SSMS:

Quick edit the rows:

And test:

At first glance, this seems to work well. Let’s create a test:

[TestMethod]
public void CheckTotalAmount()
{
    using (SqlConnection sqlConnection = new SqlConnection(
        @"Data Source=TLAPTOP\PCM2014;Initial Catalog=MySqlDatabase;Integrated Security=SSPI;"))
    {
        sqlConnection.Open();
        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "INSERT INTO SalesOrder (Id, ProductCode, NetAmount) " +
                "VALUES (2, 'test', 10)";
            sqlCommand.ExecuteNonQuery();
        }
 
        using (SqlCommand sqlCommandCheck = sqlConnection.CreateCommand())
        {
            sqlCommandCheck.CommandText = $"SELECT TotalAmount FROM SalesOrder WHERE Id = 1";
            decimal result = decimal.Parse(sqlCommandCheck.ExecuteScalar().ToString());
 
        }
    }
}

Okay – there are a number of problems with this test, but let’s pretend for a minute that we don’t know what they are; the test passes:

Let’s run it again, just to be sure:

Oops.

Let’s firstly check this against the test principles that we discussed before.
1. Is it fast? 337ms means that we can run 3 of these per second. So that’s a ‘no’.
2. Is it Isolated? Does it have a single reason to fail, and can it live independently? If we accept that the engine itself is a reason to fail, but ignore that, then we can look specifically at the test, which asserts nothing. What’s more, it is doing two separate things to the DB, so both can realistically fail.
3. Is it Repeatable? Clearly not.
4. Is it self-verifying? No – it isn’t, because we have no assertions in it. Although we know that on the first run, both queries worked, we don’t know why.
5. Timely – well, we did write it directly after the code, so that’s probably a tick.

So, we know that the second run didn’t work. A quick look at the DB will tell us why:

Of course, the test committed a transaction to the database; as a result, any subsequent runs will fail.

The Solution

What follows is a suggested solution for this kind of problem, along with the beginnings of a framework for database testing. The tests here are using MSTest, but the exact same concept is easily achievable in NUnit and, I imagine, every other testing framework.

Base Test Class

The first thing is to create a deployment task:

The deployment task might look a little like this:

using System.Collections.Generic;
using Microsoft.Build.Evaluation;
using Microsoft.Build.Execution;
using Microsoft.Build.Framework;

public static bool DeployDatabase(string projectFile)
{
    // BasicFileLogger is a simple ILogger implementation that writes the
    // build output to a file (see the MSDN ILogger reference at the end)
    ILogger logger = new BasicFileLogger();

    // The publish profile tells MSBuild where (and how) to deploy the database
    Dictionary<string, string> globalProperties = new Dictionary<string, string>()
    {
        { "Configuration", "Debug" },
        { "Platform", "x86" },
        { "SqlPublishProfilePath", @"MySqlDatabase.publish.xml" }
    };

    ProjectCollection pc = new ProjectCollection(
        globalProperties, new List<ILogger>() { logger }, ToolsetDefinitionLocations.Registry);

    // Build and publish the database project in a single request
    BuildParameters buildParameters = new BuildParameters(pc);
    BuildRequestData buildRequestData = new BuildRequestData(
        projectFile, globalProperties, null, new string[] { "Build", "Publish" }, null);

    BuildResult buildResult = BuildManager.DefaultBuildManager.Build(
        buildParameters, buildRequestData);

    return (buildResult.OverallResult == BuildResultCode.Success);
}
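
Assuming this method lives in a static helper class called DatabaseDeployment (which is how the base test class below calls it), using it is straightforward:

bool deployed = DatabaseDeployment.DeployDatabase(@"..\MySqlDatabase\MySqlDatabase.sqlproj");
if (!deployed)
{
    // The logger's output file is the first place to look when this fails
    throw new InvalidOperationException("Database deployment failed");
}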

Publish Profiles

This uses a publish profile. These are basically XML files that tell the build how to publish your database; here’s an example of one:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <IncludeCompositeObjects>True</IncludeCompositeObjects>
    <TargetDatabaseName>MySqlDatabase</TargetDatabaseName>
    <DeployScriptFileName>MySqlDatabase.sql</DeployScriptFileName>
    <TargetConnectionString>Data Source=TLAPTOP\PCM2014;Integrated Security=True;Persist Security Info=False;Pooling=False;MultipleActiveResultSets=False;Connect Timeout=60;Encrypt=False;TrustServerCertificate=True</TargetConnectionString>
    <ProfileVersionNumber>1</ProfileVersionNumber>
  </PropertyGroup>
</Project>

You can get Visual Studio to generate this for you by choosing “Deploy…” on the database project, and then selecting “Save Profile As…”:

Database Connection

Now that we’ve deployed the database, the next step is to connect. One way of doing this is to configure the connection string in the app.config of your test project:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add name="MySqlDatabase" 
         connectionString="Data Source=TLAPTOP\PCM2014; Initial Catalog=MySqlDatabase; Integrated Security=true" />
  </connectionStrings>
</configuration>

You can then connect using the following method:


ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];
 
_sqlConnection = new SqlConnection(connectionString.ConnectionString);
_sqlConnection.Open();

This sort of functionality could form the basis of a base test class; for example:

[TestClass]
public class BaseTest
{
    protected SqlConnection _sqlConnection;
 
    [TestInitialize]        
    public virtual void SetupTest()
    {
        ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];
        _sqlConnection = new SqlConnection(connectionString.ConnectionString);
        _sqlConnection.Open();
    }
 
    [TestCleanup]
    public virtual void TearDownTest()
    {
        _sqlConnection.Close();
    }
}
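
Any test class can now inherit from this and use the connection directly; for example, a (purely hypothetical) smoke test might look like this:

[TestClass]
public class UnitTest1 : BaseTest
{
    [TestMethod]
    public void CheckSalesOrderTableExists()
    {
        using (SqlCommand sqlCommand = _sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "SELECT COUNT(*) FROM SalesOrder";

            // If the deployment worked, the table is queryable
            int count = (int)sqlCommand.ExecuteScalar();
            Assert.IsTrue(count >= 0);
        }
    }
}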

Transactions

So, we now have a deployment task and a connection; the next step is to run the tests in a way that makes them repeatable. The key here is to use transactions. Going back to the base class, we can wrap this functionality into methods that can simply be inherited by all unit tests.

[TestClass]
public class BaseTest
{
    protected SqlConnection _sqlConnection;
    protected SqlTransaction _sqlTransaction;

    [TestInitialize]
    public virtual void SetupTest()
    {
        ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];
        _sqlConnection = new SqlConnection(connectionString.ConnectionString);
        _sqlConnection.Open();
        _sqlTransaction = _sqlConnection.BeginTransaction();
    }
 
    [TestCleanup]
    public virtual void TearDownTest()
    {
        _sqlTransaction.Rollback();
        _sqlConnection.Close();
    }
}
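
There’s one caveat here, and it drives the refactoring below: once the connection has a pending transaction, ADO.NET insists that every command created on that connection is explicitly enlisted in it; if it isn’t, ExecuteNonQuery throws an InvalidOperationException. Left like this, every test would have to remember to do something along these lines:

using (SqlCommand sqlCommand = _sqlConnection.CreateCommand())
{
    // Without this line, ADO.NET throws: "ExecuteNonQuery requires the command
    // to have a transaction when the connection assigned to the command is in
    // a pending local transaction..."
    sqlCommand.Transaction = _sqlTransaction;

    sqlCommand.CommandText = "INSERT INTO SalesOrder (Id, ProductCode, NetAmount) VALUES (2, 'test', 10)";
    sqlCommand.ExecuteNonQuery();
}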

Refactor The Base Class

Let’s put all this together, and remove some parts that can be separated into a common helper class:

public class ConnectionHelper
{
    SqlConnection _sqlConnection;
    SqlTransaction _sqlTransaction;

    public SqlConnection OpenTestConnection()
    {
        ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["MySqlDatabase"];

        _sqlConnection = new SqlConnection(connectionString.ConnectionString);
        _sqlConnection.Open();

        // Everything the test does happens inside this transaction
        _sqlTransaction = _sqlConnection.BeginTransaction();

        return _sqlConnection;
    }

    public SqlCommand UseNewTestCommand()
    {
        SqlCommand sqlCommand = _sqlConnection.CreateCommand();

        // Enlist the command in the test transaction, so the caller
        // doesn't have to remember to
        sqlCommand.Transaction = _sqlTransaction;
        return sqlCommand;
    }

    public void CloseTestConnection()
    {
        // Roll back rather than commit - this is what makes the tests repeatable
        _sqlTransaction.Rollback();
        _sqlConnection.Close();
    }
}

The base test now looks like this:


[TestClass]
public class BaseTest
{
    protected ConnectionHelper _connectionHelper;
 
    // MSTest requires [ClassInitialize] methods to be static and to accept a TestContext
    [ClassInitialize]
    public static void SetupTestClass(TestContext testContext)
    {
        DatabaseDeployment.DeployDatabase(@"..\MySqlDatabase\MySqlDatabase.sqlproj");
    }

    [TestInitialize]
    public virtual void SetupTest()
    {
        _connectionHelper = new ConnectionHelper();
        _connectionHelper.OpenTestConnection();
    }
 
    [TestCleanup]
    public virtual void TearDownTest()
    {
        _connectionHelper.CloseTestConnection();
    }
}

In Summary

We now have a base test class that will deploy the database and establish a new connection and transaction; then, on completion of each test, it will roll back the transaction. Here’s what the above test now looks like:

[TestClass]
public class UnitTest2 : BaseTest
{
    [TestMethod]
    public void CheckTotalAmount3()
    {
 
        // Arrange
        using (SqlCommand sqlCommand = _connectionHelper.UseNewTestCommand())
        {
            sqlCommand.CommandText =
                "INSERT INTO SalesOrder (Id, ProductCode, NetAmount) " +
                "VALUES (2, 'test', 10)";
            sqlCommand.ExecuteNonQuery();
        }
 
        // Act
        using (SqlCommand sqlCommand = _connectionHelper.UseNewTestCommand())
        {                
            sqlCommand.CommandText = "SELECT TotalAmount FROM SalesOrder WHERE Id = 2";
            decimal result = decimal.Parse(sqlCommand.ExecuteScalar().ToString());

            // Assert
            Assert.AreEqual(12, result);
        }
    }
}

Debugging Unit Tests

The idea behind the framework described above is that the data is never committed to the database; as a consequence, the tests are repeatable, because nothing ever changes. The unfortunate side-effect is that debugging the tests becomes more difficult: if one fails, you can’t directly see which changes have been made. There are a couple of ways around this. One is to simply debug the test, manually fire a commit, look at the data, and continue. However, a SQL expert recently introduced me to the concept of “Dirty Reads”.

Dirty Reads

Dirty reads are achieved by issuing the following command to SQL Server:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

This allows you to see changes in the database that are still pending (that is, they have yet to be committed). It means you can see the state of the data as it currently is, and it doesn’t place a lock on that data. One of the big issues with this methodology is that you can see half-committed transactions; of course, in this instance, that’s exactly what you want! Let’s debug our unit test:

Now let’s have a look at the SalesOrder table:

Not only does this not return anything, it doesn’t return at all: we’ve locked the table and are holding it in a transaction. Let’s apply our dirty read and see what happens:

Instantly, we get the SalesOrder. If we now complete the test and run the query again, the data is gone:
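
If you’d rather not switch over to SSMS at all, the same trick works from code. The sketch below (DumpPendingSalesOrders is a hypothetical helper, not part of the framework above) opens a second connection, applies the dirty read, and writes out whatever the paused test has pending; you can call it from the Immediate window while sitting on a breakpoint:

public static class DebugHelper
{
    public static void DumpPendingSalesOrders()
    {
        ConnectionStringSettings connectionString =
            ConfigurationManager.ConnectionStrings["MySqlDatabase"];

        // A second, separate connection - the test's own connection is
        // busy holding the transaction we want to peek inside
        using (SqlConnection sqlConnection = new SqlConnection(connectionString.ConnectionString))
        {
            sqlConnection.Open();
            using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
            {
                // The dirty read: without this, the SELECT would block on
                // the test's open transaction
                sqlCommand.CommandText =
                    "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; " +
                    "SELECT Id, NetAmount, Tax, Comission, TotalAmount FROM SalesOrder";

                using (SqlDataReader reader = sqlCommand.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(
                            $"Id {reader["Id"]}: TotalAmount = {reader["TotalAmount"]}");
                    }
                }
            }
        }
    }
}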

References

https://pragprog.com/magazines/2012-01/unit-tests-are-first

http://stackoverflow.com/questions/13843990/how-can-i-programatically-publish-a-sql-server-database-project

https://social.msdn.microsoft.com/Forums/vstudio/en-US/ec95c513-f972-45ad-b108-5fcfd27f39bc/how-to-build-a-solution-within-c-net-40-?forum=msbuild

http://stackoverflow.com/questions/10438258/using-microsoft-build-evaluation-to-publish-a-database-project-sqlproj

https://msdn.microsoft.com/en-us/library/microsoft.build.framework.ilogger.aspx

https://msdn.microsoft.com/en-us/library/hh272681(v=vs.103).aspx