Tag Archives: c#

Azure Functions

Azure Functions are Microsoft's answer to "serverless" architecture. The concept behind serverless architecture is that you can create service functionality without needing to worry about a server. Obviously, there is one: it's not magic; it's just not your problem.

How?

Let’s start by creating a new Azure function app:

Once created, search “All resources”; you might need to give it a minute or two:

Next, it asks you to pick a function type. In this case, we're going to pick "Custom function":

Azure then displays a plethora of options for creating your function. We’re going to go for “Generic Webhook” (and name it):

A webhook is an HTTP callback, meaning that you can use it in the same way as you would any other HTTP service.

This creates your function (with some default code):
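
From memory, the generated template for a generic webhook looks roughly like the following; treat it as indicative rather than exact, as yours may differ slightly:

#r "Newtonsoft.Json"

using System.Net;
using Newtonsoft.Json;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("Webhook was triggered!");

    // Read the request body and pull out the "first" and "last" properties
    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    if (data.first == null || data.last == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, new
        {
            error = "Please pass first/last properties in the input object"
        });
    }

    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {data.first} {data.last}!"
    });
}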

We’ll leave the default code, and run it (because you can’t go wrong with default code – it always does exactly what you need… assuming what you need is what it does):

The right-hand panel shows the output from the function, which means that the function works; so, we now have a web-based function that works… well… says hello world (ish). How do we call it?

Using the function

The function has an allocated URL:

Given that we have a service and a connection URL, the rest is pretty straightforward. Let's try to connect from a console application:

    using System;
    using System.Net.Http;
    using System.Text;
    using Newtonsoft.Json;

    class Program
    {
        static void Main(string[] args)
        {
            HttpClient client = new HttpClient();
            string url = "https://pcm-test.azurewebsites.net/api/pcm_GenericWebhookCSharp1?code=Kk2397soUoaK7hbxQa6qUSMV2S/AvLCvjn508ujAJMMZiita5TsjkQ==";

            // The function expects a JSON object with "first" and "last" properties
            var inputObject = new
            {
                first = "pcm-Test-input-first",
                last = "pcm-Test-input-last"
            };
            string param = JsonConvert.SerializeObject(inputObject);
            HttpContent content = new StringContent(param, Encoding.UTF8, "application/json");

            // POST to the function's URL and display the response
            HttpResponseMessage response = client.PostAsync(url, content).Result;
            string results = response.Content.ReadAsStringAsync().Result;

            Console.WriteLine($"results: {results}");
            Console.ReadLine();
        }
    }

When run, this returns:

Conclusion

Let’s think about what we’ve just done here: we have set up a service, connected to that service from a remote source and returned data. Now, let’s think about what we haven’t done: any configuration; that is, other than clicking “Create Function”.

This "serverless" architecture seems to be the nth degree of SOA. If I wish, I can create one of these functions for each of the server activities in my application, and they are available to anything with an internet connection. It then becomes Microsoft's problem if my new website suddenly takes off and millions of people are trying to access it.

References

http://robertmayer.se/2016/04/19/azure-function-app-to-send-emails/

http://www.c-sharpcorner.com/article/azure-functions-create-generic-webhook-trigger/

Azure Service Bus – Send an e-mail on Message Timeout

A message queue has, in its architecture, two main points of failure; the first is the situation where a message is added to a queue, but never read (or at least not read within a specified period of time); this is called a Dead Letter, and it is the subject of this post. The second is where the message is corrupt, or it breaks the reading logic in some way; that is known as a Poison Message.

There are a number of reasons that a message might not get read in the specified time: the service reading and processing the messages might not be keeping up with the supply; it might have crashed; or the network connection might have failed.

One possible thing to do at this stage is to have a process that automatically notifies someone when a message has ended up in the dead letter queue.

Step One – specify a timeout

Here’s how you would specify a timeout on the message specifically:

           BrokeredMessage message = new BrokeredMessage(messageBody)
            {
                MessageId = id,
                TimeToLive = new TimeSpan(0, 5, 0)
            };

Or, you can set a default on the queue via the QueueDescription (typically this would be done when you initially create the queue):

                QueueDescription qd = new QueueDescription("TestQueue")
                {
                    DefaultMessageTimeToLive = new TimeSpan(0, 5, 0)
                };
                nm.CreateQueue(qd);

Should these values differ, the shortest time will be taken.
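
For example, a sketch reusing the objects above:

                // The queue default here is 10 minutes, but this particular message specifies
                // 5 minutes; the message will therefore expire after 5 minutes.
                QueueDescription qd = new QueueDescription("TestQueue")
                {
                    DefaultMessageTimeToLive = new TimeSpan(0, 10, 0)
                };
                nm.CreateQueue(qd);

                BrokeredMessage message = new BrokeredMessage(messageBody)
                {
                    MessageId = id,
                    TimeToLive = new TimeSpan(0, 5, 0)
                };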

What happens to the message by default?

I’ve added a message to the queue using the default timeout of 5 minutes; here it is happily sitting in the queue:

Looking at the properties of the queue, we can determine that the “TimeToLive” is, indeed, 5 minutes:

In addition, you can see that, by default, the flag telling Service Bus to move expired messages to a dead letter queue is not checked, meaning that the message will not be moved there when it expires.

5 Minutes later:

Nothing has happened to this queue except time passing, yet the message has now been discarded. It seems an odd behaviour; however, as with ReadAndDelete locks, there may be reasons that this behaviour is required.

Step Two – Dead Letters

If you want to actually do something with the expired message, the key is a concept called “Dead Lettering”. The following code will direct the Service Bus to put the offending message into the “Dead Letter Queue”:


                QueueDescription qd = new QueueDescription("TestQueue")
                {
                    DefaultMessageTimeToLive = new TimeSpan(0, 5, 0),
                    EnableDeadLetteringOnMessageExpiration = true
                };
                nm.CreateQueue(qd);

Here’s the result for the same test:

Step Three – Doing something with this…

Okay – so the message hasn’t been processed, and it’s now sat in a queue specially designed for that kind of thing, so what can we do with it? One possible thing is to create a piece of software that monitors this queue. This is an adaptation of the code that I originally created here:

        static void Main(string[] args)
        {
            System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
            sw.Start();

            if (!InitialiseClient())
            {
                Console.WriteLine("Unable to initialise client");
            }
            else
            {
                while (true)
                {
                    string message = ReadMessage("TestQueue/$DeadLetterQueue");

                    if (string.IsNullOrWhiteSpace(message)) break;
                    Console.WriteLine($"{DateTime.Now}: Message received: {message}");
                }
            }

            sw.Stop();
            Console.WriteLine($"Done ({sw.Elapsed.TotalSeconds}) seconds");
            Console.ReadLine();
        }

        private static bool InitialiseClient()
        {
            Uri uri = ServiceManagementHelper.GetServiceUri();
            TokenProvider tokenProvider = ServiceManagementHelper.GetTokenProvider(uri);

            NamespaceManager nm = new NamespaceManager(uri, tokenProvider);
            return nm.QueueExists("TestQueue");
        }

        private static string ReadMessage(string queueName)
        {
            QueueClient client = QueueManagementHelper.GetQueueClient(queueName, true);

            BrokeredMessage message = client.Receive();
            if (message == null) return string.Empty;
            string messageBody = message.GetBody<string>();

            //message.Complete();

            return messageBody;
        }
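
QueueManagementHelper comes from the earlier post; it isn't reproduced here, but GetQueueClient is essentially a thin wrapper around MessagingFactory, something along these lines (this is an assumption on my part, including the meaning of the bool parameter):

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class QueueManagementHelper
{
    // Assumed implementation: the real helper lives in the earlier post.
    // The bool parameter is taken here to mean "use ReceiveAndDelete rather than PeekLock".
    public static QueueClient GetQueueClient(string queueName, bool receiveAndDelete = false)
    {
        Uri uri = ServiceManagementHelper.GetServiceUri();
        TokenProvider tokenProvider = ServiceManagementHelper.GetTokenProvider(uri);

        MessagingFactory factory = MessagingFactory.Create(uri, tokenProvider);
        return factory.CreateQueueClient(queueName,
            receiveAndDelete ? ReceiveMode.ReceiveAndDelete : ReceiveMode.PeekLock);
    }
}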

If this was all that we had to monitor the queue, then somebody’s job would need to be to watch this application. That may make sense, depending on the nature of the business; however, we could simply notify the person in question that there’s a problem. Now, if only the internet had a concept of an offline messaging facility that works something akin to the postal service, only faster…

        static void Main(string[] args)
        {
            System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
            sw.Start();

            if (!InitialiseClient())
            {
                Console.WriteLine("Unable to initialise client");
            }
            else
            {
                while (true)
                {
                    string message = ReadMessage("TestQueue/$DeadLetterQueue");

                    if (string.IsNullOrWhiteSpace(message)) break;
                    Console.WriteLine($"{DateTime.Now}: Message received: {message}");

                    Console.WriteLine($"{DateTime.Now}: Send e-mail");
                    SendEmail(message);
                }
            }

            sw.Stop();
            Console.WriteLine($"Done ({sw.Elapsed.TotalSeconds}) seconds");
            Console.ReadLine();
        }

        private static void SendEmail(string messageText)
        {
            System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
            message.To.Add("notification.address@hotmail.co.uk");
            message.Subject = "Message in queue has expired";
            message.From = new System.Net.Mail.MailAddress("my.address@hotmail.co.uk");
            message.Body = messageText;
            System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
            smtp.Port = 587;
            smtp.UseDefaultCredentials = false;
            smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "passw0rd");
            smtp.EnableSsl = true;
            smtp.Send(message);
        }

In order to prevent a torrent of mails, you might want to put a delay in this code, or even maintain some kind of list so that you only send one mail per day.
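
A minimal sketch of that kind of throttling (the one-mail-per-day window is an arbitrary choice):

        private static DateTime _lastEmailSent = DateTime.MinValue;

        private static void SendEmailThrottled(string messageText)
        {
            // Only send at most one notification e-mail per day
            if (DateTime.Now - _lastEmailSent < TimeSpan.FromDays(1)) return;

            SendEmail(messageText);
            _lastEmailSent = DateTime.Now;
        }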

References

https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.queuedescription.enabledeadletteringonmessageexpiration?view=azureservicebus-4.0.0#Microsoft_ServiceBus_Messaging_QueueDescription_EnableDeadLetteringOnMessageExpiration

https://www.codit.eu/blog/2015/01/automatically-expire-messages-in-azure-service-bus-how-it-works/

https://stackoverflow.com/questions/9851319/how-to-add-smtp-hotmail-account-to-send-mail

Using BenchmarkDotNet to profile string comparison

Introduction

String comparison and manipulation are some of the slowest and most expensive (in terms of GC) operations that you can perform in .Net. In my head, I've always believed that using String.Compare outperforms string1.ToUpper() == string2.ToUpper(), which I think I once saw on a StackOverflow post.

In this post, I will do some actual testing on the various methods using BenchmarkDotNet (which I have previously written about).

Setting Up BenchmarkDotNet

There’s not much to this – just install a NuGet package:

Install-Package BenchmarkDotNet

Other than that, you just need to decorate your methods with:

[Benchmark]

You can't (at the time of writing) specify method parameters directly, but you can decorate a set-up method, or specify parameter values on a public field:


        [Params("test1", "test2", "I am an aardvark")]
        public string _string1;

        [Params("test1", "Test2", "I Am an AARDVARK")]
        public string _string2;

Finally, in the main method, you run the class:


        static void Main(string[] args)
        {
            BenchmarkRunner.Run<StringCompareCaseSensitive>();
        }

Once run, the results are output into the following directory:

bin\Debug\BenchmarkDotNet.Artifacts\results

Comparing strings

Case sensitive

The following are the ways that I can think of to compare a string where the case is known:

string1 == string2

string1.Equals(string2) – with various flags

string.Compare(string1, string2)

string.CompareOrdinal(string1, string2)

string1.CompareTo(string2)

string1.IndexOf(string2) – with various flags
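
The benchmark methods themselves aren't shown above; each is just a thin wrapper around the comparison in question, along these lines (a sketch only; the method names here are mine, and the real set is in the GitHub repository mentioned at the end of this post):

using System;
using BenchmarkDotNet.Attributes;

public class StringCompareCaseSensitive
{
    [Params("test1", "test2", "I am an aardvark")]
    public string _string1;

    [Params("test1", "Test2", "I Am an AARDVARK")]
    public string _string2;

    [Benchmark]
    public bool EqualsOperator() => _string1 == _string2;

    [Benchmark]
    public bool EqualsMethod() => _string1.Equals(_string2);

    [Benchmark]
    public int Compare() => string.Compare(_string1, _string2);

    [Benchmark]
    public int CompareOrdinal() => string.CompareOrdinal(_string1, _string2);

    [Benchmark]
    public int CompareTo() => _string1.CompareTo(_string2);

    [Benchmark]
    public int IndexOfOrdinal() => _string1.IndexOf(_string2, StringComparison.Ordinal);
}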

And the results were:

This is definitely not what I expected. String.Compare is actually slower than a straightforward comparison, and not by a small amount.

Case insensitive

The following are the ways that I can think of to compare a string where the case is not known:

string1.ToUpper() == string2.ToUpper()

string1.ToLower() == string2.ToLower()

string1.Equals(string2) – with various flags

string.Compare(string1, string2, true)

string1.IndexOf(string2) – with various flags

Results:

So, it looks like the most efficient string comparison is:

_string1.Equals(_string2, StringComparison.OrdinalIgnoreCase);

But why?

Nobody knows – Looking at the IL

The good thing about .Net is that, if you want to see what your code looks like once it's "compiled", you can. It's not perfect, because you still can't see the actual, executed code, but it still gives you a good idea of why it's slow or fast. However, because all of the functions in question are system functions, looking at the IL for the test code is pretty much pointless.

Let’s run ildasm:

(bet you’re glad I included that screenshot)

The string comparison functions are in mscorlib.dll:

Here’s the code in there:

.method public hidebysig static int32  Compare(string strA,
                                               string strB,
                                               valuetype System.StringComparison comparisonType) cil managed
{
  .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 ) 
  // Code size       0 (0x0)
} // end of method String::Compare

To be honest, I spent a while burrowing down this particular rabbit hole… but finally decided to see what ILSpy had to say about it… it looks like there is a helper method in the string class that, for some reason, ildasm doesn't show. Let's have a look at what it does for:

string.Compare(_string1, _string2, true) == 0

The decompiled version is:

[__DynamicallyInvokable]
public static int Compare(string strA, string strB, bool ignoreCase)
{
    if (ignoreCase)
    {
        return CultureInfo.CurrentCulture.CompareInfo.Compare(strA, strB, CompareOptions.IgnoreCase);
    }
    return CultureInfo.CurrentCulture.CompareInfo.Compare(strA, strB, CompareOptions.None);
}

And the static method CompareInfo.Compare:

public virtual int Compare(string string1, string string2, CompareOptions options)
{
    if (options == CompareOptions.OrdinalIgnoreCase)
    {
        return string.Compare(string1, string2, StringComparison.OrdinalIgnoreCase);
    }
    if ((options & CompareOptions.Ordinal) != CompareOptions.None)
    {
        if (options != CompareOptions.Ordinal)
        {
            throw new ArgumentException(Environment.GetResourceString("Argument_CompareOptionOrdinal"), "options");
        }
        return string.CompareOrdinal(string1, string2);
    }
    else
    {
        if ((options & ~(CompareOptions.IgnoreCase | CompareOptions.IgnoreNonSpace | CompareOptions.IgnoreSymbols | CompareOptions.IgnoreKanaType | CompareOptions.IgnoreWidth | CompareOptions.StringSort)) != CompareOptions.None)
        {
            throw new ArgumentException(Environment.GetResourceString("Argument_InvalidFlag"), "options");
        }
        if (string1 == null)
        {
            if (string2 == null)
            {
                return 0;
            }
            return -1;
        }
        else
        {
            if (string2 == null)
            {
                return 1;
            }
            return CompareInfo.InternalCompareString(this.m_dataHandle, this.m_handleOrigin, this.m_sortName, string1, 0, string1.Length, string2, 0, string2.Length, CompareInfo.GetNativeCompareFlags(options));
        }
    }
}

And further:

Well… I couldn’t get further, so I asked Microsoft… the impression is that this function is generated at runtime.

There was a link to some code in this answer, too. While I couldn’t really identify any actual comparison code from this, I did notice that there was a check like this:

#ifndef FEATURE_CORECLR

So… does .Net Core work any better?

Having created a new .Net Core project, I copied the files across (I was going to add them as a link, but InvariantCulture has been removed, or rather not included, in Core).

Anyway, the results from .Net Core (for case sensitive checks) are:

And case in-sensitive:

Conclusion

So, the clear winner across all tests for case sensitive checks is to use:

string1.Equals(string2)

And .Net Core is slightly faster than 4.6.2.

For case insensitive the clear winner is (by a large margin):

string1.Equals(string2, StringComparison.OrdinalIgnoreCase);

And, again, there’s around a 15 – 20% speed boost using .Net Core.

References

There is a GitHub repository for the code in this post here.

https://msdn.microsoft.com/en-us/library/fbh501kz%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396

https://github.com/dotnet/BenchmarkDotNet/issues/60

http://mattwarren.org/2016/02/17/adventures-in-benchmarking-memory-allocations/

https://www.hanselman.com/blog/BenchmarkingNETCode.aspx

http://pmichaels.net/2016/11/04/message-persistence-in-rabbitmq-and-benchmarkdotnet/

https://blog.codinghorror.com/the-real-cost-of-performance/

https://msdn.microsoft.com/en-us/library/aa309387%28v=vs.71%29.aspx?f=255&MSPPError=-2147217396

http://ilspy.net/

http://stackoverflow.com/questions/9491337/what-is-dllimportqcall

Serialising Interfaces in JSON (or using a JsonConverter in JSON.NET)

Imagine that you have the following interface:

    public interface IProduct
    {
        int Id { get; set; }
        decimal UnitPrice { get; set; }
    }

This is an interface, and so may have a number of implementations; however, we know that every implementation will contain at least these two properties, and what type they will be. If we wanted to serialise this, we'd probably write something like this:

        private static string SerialiseProduct(IProduct product)
        {
            string json = JsonConvert.SerializeObject(product);
            return json;
        }
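
(The concrete Product class isn't shown here; it's assumed to be nothing more than a straightforward implementation of the interface:)

    public class Product : IProduct
    {
        public int Id { get; set; }
        public decimal UnitPrice { get; set; }
    }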

If you were to call this from a console app, it would work fine:


        static void Main(string[] args)
        {
            IProduct product = new Product()
            {
                Id = 1,
                UnitPrice = 12.3m
            };

            string json = SerialiseProduct(product);
            Console.WriteLine(json);
        }

Okay, so far so good. Now, let’s deserialise:


        private static IProduct DeserialiseProduct(string json)
        {
            IProduct product = JsonConvert.DeserializeObject<IProduct>(json);

            return product;
        }

And let’s call it:


        static void Main(string[] args)
        {
            IProduct product = new Product()
            {
                Id = 1,
                UnitPrice = 12.3m
            };

            string json = SerialiseProduct(product);
            Console.WriteLine(json);

            IProduct product2 = DeserialiseProduct(json);
            Console.WriteLine(product2.Id);
            
            Console.ReadLine();

        }

So, does that run fine?

Newtonsoft.Json.JsonSerializationException: 'Could not create an instance of type SerialiseInterfaceJsonNet.IProduct. Type is an interface or abstract class and cannot be instantiated.'

No.

Why?

The reason is that you can't create an instance of an interface; for example:
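
            // This doesn't compile (error CS0144: cannot create an instance of the abstract type or interface)
            IProduct product = new IProduct();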

That doesn’t even compile, but effectively, that’s what’s happening behind the scenes.

Converters

Json.Net allows the use of something called a converter. What that means is that I can inject functionality into the deserialisation process that tells Json.Net what to do with this interface. Here’s a possible converter for our class:


    class ProductConverter : JsonConverter
    {
        public override bool CanConvert(Type objectType)
        {
            return (objectType == typeof(IProduct));
        }

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            return serializer.Deserialize(reader, typeof(Product));
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            serializer.Serialize(writer, value, typeof(Product));
        }
    }

It's a relatively simple interface: you tell it how to identify your class, and then how to read and write the JSON.

Finally, you just need to tell the deserialiser to use this converter:


        private static IProduct DeserialiseProduct(string json)
        {
            var settings = new JsonSerializerSettings();
            settings.Converters.Add(new ProductConverter());

            IProduct product = JsonConvert.DeserializeObject<IProduct>(json, settings);

            return product;
        }

By using the settings parameter.

References

http://www.jerriepelser.com/blog/custom-converters-in-json-net-case-study-1/

NUnit TestCaseSource

While working on this project, I found a need to abstract away a base type that the unit tests use (in this instance, it was a queue type). I was only testing a single type (PriorityQueue); however, I wanted to create a new type, but all the basic tests for the new type are the same as the existing ones. This led me to investigate the TestCaseSource attribute in NUnit.

As a result, I needed a way to re-use the tests. There are definitely multiple ways to do this; the simplest one is probably to create a factory class, and pass in a string parameter. The only thing that put me off this is that you end up with the following test case:

        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9")]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a")]
        public void Queue_Dequeue_CheckResultOrdering(
            string first, string last, params string[] queueItems)
        {

Becoming:

        [TestCase("PriorityQueue", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9")]
        [TestCase("PriorityQueue2", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9")]
        [TestCase("PriorityQueue", "a1", "a", "a1", "b", "c", "d", "a")]
        [TestCase("PriorityQueue2", "a1", "a", "a1", "b", "c", "d", "a")]
        public void Queue_Dequeue_CheckResultOrdering(
            string queueType, string first, string last, params string[] queueItems)
        {

This isn't very scalable when adding a third or fourth type.

TestCaseSource

It turns out that the (or at least an) answer to this is to use NUnit's TestCaseSource attribute. The NUnit code base dog foods quite extensively, so that is not a bad place to look for examples of how this works; however, what I couldn't find was a way to mix and match. To better illustrate the point; here's the first test that I changed to use TestCaseSource:

        [Test]
        public void Queue_NoEntries_CheckCount()
        {
            // Arrange
            PQueue.PriorityQueue<string> queue = new PQueue.PriorityQueue<string>();

            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

Which became:

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        public void Queue_NoEntries_CheckCount(IQueue<string> queue)
        {
            // Arrange


            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

(For completeness, the TestableQueueItemFactory is here):

    public static class TestableQueueItemFactory
    {
        public static IEnumerable<IQueue<string>> ReturnQueueTypes()
        {
            yield return new PQueue.PriorityQueue<string>();
        }
    }

However, when you have a TestCase like the one above, there’s a need for the equivalent of this (which doesn’t work):

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9")]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a")]
        public void Queue_Dequeue_CheckResultOrdering(string first, string last, params string[] queueItems)
        {

A quick look at the NUnit code base reveals these attributes to be mutually exclusive.

Compromise

By no means is this a perfect solution, but the one that I settled on was to create a second TestCaseSource helper method, which looks like this (along with the test):

        private static IEnumerable Queue_Dequeue_CheckResultOrdering_TestCase()
        {
            foreach(var queueType in TestableQueueItemFactory.ReturnQueueTypes())
            {
                yield return new object[] { queueType, "test", "test9", new string[] { "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9" } };
                yield return new object[] { queueType, "a1", "a", new string[] { "a1", "b", "c", "d", "a" } };
            }
        }

        [Test, TestCaseSource("Queue_Dequeue_CheckResultOrdering_TestCase")]
        public void Queue_Dequeue_CheckResultOrdering(
            IQueue <string> queue, string first, string last, params string[] queueItems)
        {

As you can see, the second helper method doesn't really help readability, so it's certainly not a perfect solution; in fact, with a single queue type, this makes the code more complex and less readable. However, when a second and third queue type are introduced, the test suddenly becomes resilient.
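
For instance, once a second implementation exists (PriorityQueue2 below is hypothetical), the only change needed is in the factory, and every test that uses it automatically runs against both types:

    public static class TestableQueueItemFactory
    {
        public static IEnumerable<IQueue<string>> ReturnQueueTypes()
        {
            yield return new PQueue.PriorityQueue<string>();

            // Hypothetical second implementation - the point is that this is the only change needed
            yield return new PQueue.PriorityQueue2<string>();
        }
    }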

YAGNI

At first glance, this may appear to be an example of YAGNI. However, in this article, Martin Fowler does state:

Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify.

Which, I believe, is what we are doing here.

References

http://www.smaclellan.com/posts/parameterized-tests-made-simple/

http://stackoverflow.com/questions/16346903/how-to-use-multiple-testcasesource-attributes-for-an-n-unit-test

https://github.com/nunit/docs/wiki/TestCaseSource-Attribute

http://dotnetgeek.tumblr.com/post/2851360238/exploiting-nunit-attributes-valuesourceattribute

https://github.com/nunit/docs/wiki/TestCaseSource-Attribute

WPF Performance – TextBlock

WPF typically doesn't have many performance issues and, where it does, this can usually be fixed by virtualisation. Having said that, even if you never need to use this, it's useful to know that you can eke that last ounce of performance out of the system.

Here’s a basic program that contains a TextBlock:

<Window x:Class="TextBlockTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:TextBlockTest"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525"
        x:Name="MainWindowView">
    <Grid>
        <ScrollViewer>
            <ItemsControl ItemsSource="{Binding BigList, ElementName=MainWindowView}" Margin="0,-1,0,1">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <TextBlock Text="{Binding}"/>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </ScrollViewer>
    </Grid>
</Window>

Code behind:

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

namespace TextBlockTest
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public ObservableCollection<string> BigList { get; set; }

        public MainWindow()
        {
            BigList = new ObservableCollection<string>();
            for (int i = 0; i <= 10000; i++)
            {
                BigList.Add($"Item {i}");
            }

            InitializeComponent();
        }
    }
}

Let’s, for a minute, imagine this is slow, and profile it:

The layout is taking most of the time. Remember that each control needs to be created, and that the TextBlock does slightly more than just display text:

The whole panel took 3.46s. Not terrible performance, but can it be improved? The answer is: yes, it can! Very, very slightly.

Let’s create a Custom Control:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

namespace FastTextBlock
{
   
    public class MyTextBlockTest : Control
    {
        private FormattedText _formattedText;

        static MyTextBlockTest()
        {
            //DefaultStyleKeyProperty.OverrideMetadata(typeof(MyTextBlockTest), new FrameworkPropertyMetadata(typeof(MyTextBlockTest)));
        }

        public static readonly DependencyProperty TextProperty =
             DependencyProperty.Register(
                 "Text", 
                 typeof(string),
                 typeof(MyTextBlockTest), 
                 new FrameworkPropertyMetadata(string.Empty, FrameworkPropertyMetadataOptions.AffectsMeasure,
                    (o, e) => ((MyTextBlockTest)o).TextPropertyChanged((string)e.NewValue)));

        private void TextPropertyChanged(string text)
        {
            var typeface = new Typeface(
                new FontFamily("Times New Roman"),
                FontStyles.Normal, FontWeights.Normal, FontStretches.Normal);

            _formattedText = new FormattedText(
                text, CultureInfo.CurrentCulture,
                FlowDirection.LeftToRight, typeface, 15, Brushes.Black);
        }


        public string Text
        {
            get { return (string)GetValue(TextProperty); }
            set { SetValue(TextProperty, value); }
        }

        protected override void OnRender(DrawingContext drawingContext)
        {
            if (_formattedText != null)
            {
                drawingContext.DrawText(_formattedText, new Point());
            }
        }

        protected override Size MeasureOverride(Size constraint)
        {
            //return base.MeasureOverride(constraint);

            return _formattedText != null
            ? new Size(_formattedText.Width, _formattedText.Height)
            : new Size();
        }
    }
}

Here’s the new XAML:

    <Grid>
        <ScrollViewer>
            <ItemsControl ItemsSource="{Binding BigList, ElementName=MainWindowView}" Margin="0,-1,0,1">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <!--<TextBlock Text="{Binding}"/>-->
                        <controls:MyTextBlockTest Text="{Binding}" />
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </ScrollViewer>
    </Grid>

Shaves around 10ms off the time:

Even more time can be shaved by moving up an element (that is, inheriting from something further up the hierarchy than `Control`). In fact, `Control` inherits from `FrameworkElement`:

public class MyTextBlockTest : FrameworkElement

Shaves another 10ms off:

Conclusion

Clearly, this isn’t a huge performance boost, and in 99% of use cases, this would be massively premature optimisation. However, the time that this really comes into its own is where you have a compound control (a control that consists of other controls), and you have lots of them (hundreds). See my next post for details.

References:

https://social.msdn.microsoft.com/Forums/en-US/94ddd25e-7093-4986-b8c8-b647924251f6/manual-rendering-of-a-wpf-user-control?forum=wpf

http://www.codemag.com/article/100023

http://stackoverflow.com/questions/20338044/how-do-i-make-a-custom-uielement-derived-class-that-contains-and-displays-othe

http://stackoverflow.com/questions/42494455/wpf-custom-control-inside-itemscontrol

Getting Started With SignalR

SignalR is an open source framework allowing bi-directional communication between client and server. Basically, it uses a stack of technologies; the idea being that the SignalR framework will establish the "best" way to maintain a bi-directional data stream, starting with web sockets and falling all the way back to simply polling the server.

The following gives the basics of establishing a web site that can accept SignalR, and a console app that can send messages to it.

Create project

Let’s go MVC:

Hubs

Hubs are the way in which the SignalR service communicates with its clients. Obviously, the term service here may not actually represent a service.

To add a hub class, select the project, right-click and “New Item..”:

This adds the file, along with new references:

The code that gets added is:

public void Hello()
{
    Clients.All.hello();
}

Clients.All returns a dynamic type, so we lose IntelliSense at this point. It's important that the signature of this method is exactly correct, and that the class is decorated with the name of the hub; so let's replace it with:


[HubName("MyHub1")]
public class MyHub1 : Hub
{
    public void Hello(string message)
    {
        Clients.All.Hello(message);
    }
}

Change Startup.cs:

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        ConfigureAuth(app);
 
        app.MapSignalR();
    }
}

For all this to actually do anything, the next thing to do is hook up the JavaScript:

$(function () {
    // Declare a proxy to reference the hub. 
    var hub = $.connection.MyHub1;
    // Create a function that the hub can call to broadcast messages.
    hub.client.hello = function (message) {
 
        alert("Hello");
    };
 
    
    $.connection.hub.start()
        .done(function () { console.log("MyHub1 Successfully Started"); })
        .fail(function () { console.log("Error: MyHub1 Not Successfully Started"); })
});

Effectively, once we receive a message, we’re just going to display an alert. Once the event handler is wired up, we try to start the hub.

Next, reference the required files in BundleConfig.cs:

bundles.Add(new ScriptBundle("~/bundles/signalr").Include(
    "~/Scripts/jquery-3.1.1.min.js").Include(
    "~/Scripts/jquery.signalR-2.2.1.js"));

These are referenced in _Layout.cshtml; remember also that, because SignalR references jQuery, you'll need to remove other references to jQuery:


<title>@ViewBag.Title - My ASP.NET Application</title>
@Styles.Render("~/Content/css")
@Scripts.Render("~/bundles/modernizr")    
@Scripts.Render("~/bundles/signalr")    
<script type="text/javascript" src="~/signalr/hubs"></script>
<script type="text/javascript" src="~/Scripts/Notification.js"></script>

. . .

    </div>
    
    @Scripts.Render("~/bundles/bootstrap")
    @RenderSection("scripts", required: false)
</body>

Notes on Bundles

The purpose of bundling is to shrink the size of the bundled files, the idea being that small files make for a speedy web site.

Console App

The next step is to create an application that can fire a notification to the page. In this case, I’m using a console app, just because I like to see everything working with console apps.

Start with a NuGet Reference:

The code:

class Program
{
    static void Main(string[] args)
    {
        Console.Write("Message: ");
        string message = Console.ReadLine();
 
        HubConnection connection = new HubConnection("http://localhost:4053/");
        IHubProxy hub = connection.CreateHubProxy("myHub1");
                    
        connection.Start().Wait();
        hub.Invoke<string>("Hello", message).Wait();            
 
        Console.WriteLine("Sent");
        Console.ReadLine();
    }
}

And that’s it – you should be able to send a message to the web site from the console app. The examples that are typically given elsewhere on the net are chat rooms, but this clearly has many more uses.

Some abstract notes that I made while researching this.

Adding:

Version 1

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
 
    RouteTable.Routes.MapHubs(new HubConfiguration());
 
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
…

Gives:

Severity Code Description Project File Line Source Suppression State
Error CS0619 ‘SignalRRouteExtensions.MapHubs(RouteCollection, HubConfiguration)’ is obsolete: ‘Use IAppBuilder.MapSignalR in an Owin Startup class. See http://go.microsoft.com/fwlink/?LinkId=320578 for more details.’ SignalRTest3 C:\Users\Paul\documents\visual studio 14\Projects\SignalRTest3\SignalRTest3\Global.asax.cs 18 Build Active

This was for v1 of SignalR – superseded in v2.

CORS

While trying to get this working, the prospect of using CORS came up. This enables cross-domain requests, which are typically prohibited.
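
For reference, enabling it for SignalR means pulling in the Microsoft.Owin.Cors package and mapping the hub pipeline with CORS allowed; a sketch (not needed for the same-domain example above) might look like this:

using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Cors;
using Owin;

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Allow cross-domain clients to connect to the SignalR endpoint
        app.Map("/signalr", map =>
        {
            map.UseCors(CorsOptions.AllowAll);
            map.RunSignalR(new HubConfiguration());
        });
    }
}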

Proxies

The generated Proxy can be viewed (navigate to http://localhost:4053/signalr/hubs):

    $.hubConnection.prototype.createHubProxies = function () {
        var proxies = {};
        this.starting(function () {
            // Register the hub proxies as subscribed
            // (instance, shouldSubscribe)
            registerHubProxies(proxies, true);

            this._registerSubscribedHubs();
        }).disconnected(function () {
            // Unsubscribe all hub proxies when we "disconnect".  This is to ensure that we do not re-add functional call backs.
            // (instance, shouldSubscribe)
            registerHubProxies(proxies, false);
        });

        proxies['MyHub1'] = this.createHubProxy('MyHub1'); 
        proxies['MyHub1'].client = { };
        proxies['MyHub1'].server = {
            hello: function (message) {
                return proxies['MyHub1'].invoke.apply(proxies['MyHub1'], $.merge(["Hello"], $.makeArray(arguments)));
             }
        };

        return proxies;
    };

References:

https://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-javascript-client

https://docs.microsoft.com/en-us/aspnet/signalr/overview/getting-started/tutorial-getting-started-with-signalr

https://docs.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/hubs-api-guide-javascript-client

https://github.com/SignalR/SignalR/wiki/Faq

http://stackoverflow.com/questions/42108193/signalr-test-project-not-working-as-expected

http://www.jeffreyfritz.com/2015/05/where-did-my-asp-net-bundles-go-in-asp-net-5/

Scientist.Net

The purpose of the library is to allow you to try new code in a small sample of production usage – effectively, testing in production. The idea is that if you're refactoring an important part of the system, you can re-write it, and then call your new code on occasion; the result is logged and, should it reveal a major issue, the new code can simply be switched off.

The first port of call is the GitHub repository:

Which adds this:

The following is some test code; there are two methods, an old, slow method, and a refactored new method:


class LegacyCode
{
    public void OldMethod1()
    {
        System.Threading.Thread.Sleep(1000);
        System.Console.WriteLine("This is old code");
    }
}
class RefactoredCode
{
    public void RefactoredNewMethod()
    {
        System.Console.WriteLine("RefactoredNewMethod called");
    }
}
static void Main(string[] args)
{
    System.Console.WriteLine("Start Test");
 
    for (int i = 1; i <= 100; i++)
    {
        Scientist.Science<bool>("Test", testNewCode =>
        {
            testNewCode.Use(() =>
            {
                new LegacyCode().OldMethod1();
                return true;
            });
            testNewCode.Try(() =>
            {
                new RefactoredCode().RefactoredNewMethod();
                return true;
            });
        });
    }
 
    System.Console.ReadLine();
}

In the code above you'll notice that the call to Scientist looks a little forced – that's because it insists on a return value from the experiments (an experiment being a trial of new code).

As you can see, Scientist is managing the calls between the new and old method:

One thing that wasn't immediately obvious to me here was exactly what it does with this, especially given that the Try and Use blocks were not always executed in a consistent order; the following test revealed it more clearly:

Because the order of the runs is randomly altered, I had assumed that which code was called was also randomly determined; in fact, both code paths are run. This is a hugely important distinction, because if you are changing data in one or the other, you need to factor this in.

Statistics

Scientist collects a number of statistics on the run; to see these, you need to implement an IResultPublisher; for example:

public class ResultPublisher : IResultPublisher
{
    public Task Publish<T, TClean>(Result<T, TClean> result)
    {
        System.Console.WriteLine($"Publishing results for experiment '{result.ExperimentName}'");
        System.Console.WriteLine($"Result: {(result.Matched ? "MATCH" : "MISMATCH")}");
        System.Console.WriteLine($"Control value: {result.Control.Value}");
        System.Console.WriteLine($"Control duration: {result.Control.Duration}");
        foreach (var observation in result.Candidates)
        {
            System.Console.WriteLine($"Candidate name: {observation.Name}");
            System.Console.WriteLine($"Candidate value: {observation.Value}");
            System.Console.WriteLine($"Candidate duration: {observation.Duration}");
        }
 
        return Task.FromResult(0);
    }
}
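
For Scientist to use this publisher, it needs to be registered; at the time of writing that's done via a static property (check the project README if this has since changed):

Scientist.ResultPublisher = new ResultPublisher();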

The code in here is executed for every call:

We’ve clearly sped up the call, but does it still do the same thing?

Matches… and mismatches

There’s a lot of information in the trace above. One thing that Scientist.Net does allow you to do is to compare the results of a function; let’s change the initial experiment a little:

public bool OldMethod1(int test)
{            
    System.Threading.Thread.Sleep(1000);
    System.Console.WriteLine("This is old code");
    return test >= 50;
}

public bool RefactoredNewMethod(int test)
{
    System.Console.WriteLine("RefactoredNewMethod called");
 
    return test >= 50;
}

for (int i = 1; i <= 100; i++)
{
    var result = Scientist.Science<bool>("Test", testNewCode =>
    {
        testNewCode.Use(() =>
        {
            return new LegacyCode().OldMethod1(i);                        
        });
        testNewCode.Try(() =>
        {
            return new RefactoredCode().RefactoredNewMethod(i);                        
        });
    });
}

Now we're returning a boolean flag to say whether the number is greater than or equal to 50. Finally, we need to change ResultPublisher (otherwise we won't be able to see the wood for the trees):


public Task Publish<T, TClean>(Result<T, TClean> result)
{
    if (result.Mismatched)
    {
        System.Console.WriteLine($"Publishing results for experiment '{result.ExperimentName}'");
        System.Console.WriteLine($"Result: {(result.Matched ? "MATCH" : "MISMATCH")}");
        System.Console.WriteLine($"Control value: {result.Control.Value}");
        System.Console.WriteLine($"Control duration: {result.Control.Duration}");
        foreach (var observation in result.Candidates)
        {
            System.Console.WriteLine($"Candidate name: {observation.Name}");
            System.Console.WriteLine($"Candidate value: {observation.Value}");
            System.Console.WriteLine($"Candidate duration: {observation.Duration}");
        }
    }
 
    return Task.FromResult(0);
}

If we run that:

Everything is the same. So, let’s break the new code:


public bool RefactoredNewMethod(int test)
{
    System.Console.WriteLine("RefactoredNewMethod called");
 
    return test > 50;
}

Now we have a bug in the new code, so what happens:

We have a mismatch. The new code is now behaving differently from the old, and Scientist has identified this.

Summary

I came across this on this episode of .Net Rocks with Phil Haack. There are more features here, too – you can control the way the comparison works, categorise the results, and so forth.
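
For example, the comparison can be overridden inside the experiment itself; a sketch (based on my reading of the library's README) would be:

var result = Scientist.Science<bool>("Test", experiment =>
{
    // Use custom logic to decide whether the control and candidate results match
    experiment.Compare((control, candidate) => control == candidate);

    experiment.Use(() => new LegacyCode().OldMethod1(75));
    experiment.Try(() => new RefactoredCode().RefactoredNewMethod(75));
});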

References

http://haacked.com/archive/2016/01/20/scientist/

https://visualstudiomagazine.com/articles/2016/11/01/testing-experimental-code.aspx

https://github.com/github/Scientist.net

Building Block Game in Unity 3D

Not sure this qualifies as a game, but it's a computerised version of the building blocks that you might give to a three-year-old. What can I say? It was a nice way to spend a Sunday afternoon!

Here’s what the finished game / program looks like:

The Script

There is only one script:

using UnityEngine;

public class BehaviourScript : MonoBehaviour
{
    private Vector3 screenPoint;
    private Vector3 offset;
 
    void OnMouseDown()
    {
        // Remember where (in screen space) the block was when the drag started,
        // and the offset between the mouse and the block's position
        screenPoint = Camera.main.WorldToScreenPoint(gameObject.transform.position);
        offset = gameObject.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenPoint.z));
    }
 
    void OnMouseDrag()
    {
        // Move the block with the mouse, but don't allow it to be dragged below the floor
        Vector3 cursorPoint = new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenPoint.z);
        Vector3 cursorPosition = Camera.main.ScreenToWorldPoint(cursorPoint) + offset;
 
        if (cursorPosition.y > 0)
        {
            transform.position = cursorPosition;
        }
    }
}

The Scene

Basically, the blocks are standard unit cubes with a wood texture, a rigid body and the above script attached:

Microsoft Cognitive Services – Text Recognition

Recently, at DDD North, I saw a talk on MS Cognitive Services. This came back to me and sparked my interest while I was looking at some TFS APIs (see later posts for why). However, in this post, I'm basically exploring what can be done with these services.

The Hype

  • Language: can detect the language that you pass
  • Topics: can determine the topic being discussed
  • Key Phrases: key points (which I believe may equate to nouns)
  • Sentiment: whether or not what you are saying is good or bad (I must admit, I don’t really understand that – but we can try some phrases to see what it comes up with)

For some reason that I can't really understand, topics requires over 100 documents, and so I won't be getting that to work, as I don't have a big enough text sample. The examples given in the marketing material seem to relate to people booking and reviewing holidays, and it feels a lot like these services are overly skewed toward that particular purpose.

Set-up

Register here:

https://www.microsoft.com/cognitive-services/

Registration is free (although I believe you need a live account).


Client

The internal name for this at MS is Project Oxford. You don’t have to install the client libraries (because they are just service calls), but you get some objects and helpers if you do:


Cognition

The following code is largely plagiarised from the links at the bottom of this page:

Here’s the Main function:

var requestDocs = PopulateDocuments();
Console.WriteLine($"-=Requests=-");
foreach (var eachReq in requestDocs.Documents)
{
    Console.WriteLine($"Id: {eachReq.Id} Text: {eachReq.Text}");
}
Console.WriteLine($"-=End Requests=-");
 
string req = JsonConvert.SerializeObject(requestDocs);
 
MakeRequests(req);
Console.WriteLine("Hit ENTER to exit...");
Console.ReadLine();

PopulateDocuments just fills the request's Documents collection with some test data:


private static LanguageRequest PopulateDocuments()
{
    LanguageRequest requestText = new Microsoft.ProjectOxford.Text.Language.LanguageRequest();
    requestText.Documents.Add(
        new Microsoft.ProjectOxford.Text.Core.Document()
        { Id = "One", Text = "The quick brown fox jumped over the hedge" });
    requestText.Documents.Add(
        new Microsoft.ProjectOxford.Text.Core.Document()
        { Id = "Two", Text = "March is a green month" });
    requestText.Documents.Add(
        new Microsoft.ProjectOxford.Text.Core.Document()
        { Id = "Three", Text = "When I press enter the program crashes" });
    requestText.Documents.Add(
        new Microsoft.ProjectOxford.Text.Core.Document()
        { Id = "4", Text = "Pressing return - the program crashes" });
    requestText.Documents.Add(
        new Microsoft.ProjectOxford.Text.Core.Document()
        { Id = "5", Text = "Los siento, no hablo Enspanol" });
 
    return requestText;
}

As you can see, I dropped some Spanish in there for the language detection. The MakeRequests method and its dependencies:


static async void MakeRequests(string req)
{
    using (var client = new HttpClient())
    {
        client.BaseAddress = new Uri(BaseUrl);
 
        // Request headers.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", AccountKey);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
 
        // Request body. Insert your text data here in JSON format.
        byte[] byteData = Encoding.UTF8.GetBytes(req);
 
        // Detect key phrases:
        var uri = "text/analytics/v2.0/keyPhrases";
        string response = await CallEndpoint(client, uri, byteData);
        Console.WriteLine("Key Phrases");
        Console.WriteLine(ParseResponseGeneric(response));
 
        // Detect language:
        var queryString = HttpUtility.ParseQueryString(string.Empty);
        queryString["numberOfLanguagesToDetect"] = NumLanguages.ToString(CultureInfo.InvariantCulture);
        uri = "text/analytics/v2.0/languages?" + queryString;
        response = await CallEndpoint(client, uri, byteData);
        Console.WriteLine("Detect language");
        Console.WriteLine(ParseResponseLanguage(response));
 
        // Detect topic:
        queryString = HttpUtility.ParseQueryString(string.Empty);
        queryString["minimumNumberOfDocuments"] = "1";
        uri = "text/analytics/v2.0/topics?" + queryString;
        response = await CallEndpoint(client, uri, byteData);
        Console.WriteLine("Detect topic");
        Console.WriteLine(ParseResponseGeneric(response));
 
        // Detect sentiment:
        uri = "text/analytics/v2.0/sentiment";
        response = await CallEndpoint(client, uri, byteData);
        Console.WriteLine("Detect sentiment");
        Console.WriteLine(ParseResponseSentiment(response));
    }
}
private static string ParseResponseSentiment(string response)
{
    if (!string.IsNullOrWhiteSpace(response))
    {
        SentimentResponse resp = JsonConvert.DeserializeObject<SentimentResponse>(response);
        string returnVal = string.Empty;
 
        foreach (var doc in resp.Documents)
        {
            returnVal += Environment.NewLine +
                $"Sentiment: {doc.Id}, Score: {doc.Score}";
        }
 
        return returnVal;
    }
 
    return null;
}
 
private static string ParseResponseLanguage(string response)
{
    if (!string.IsNullOrWhiteSpace(response))
    {
        LanguageResponse resp = JsonConvert.DeserializeObject<LanguageResponse>(response);
        string returnVal = string.Empty;
        foreach(var doc in resp.Documents)
        {
            var detectedLanguage = doc.DetectedLanguages.OrderByDescending(l => l.Score).First();
            returnVal += Environment.NewLine +
                $"Id: {doc.Id}, " +
                $"Language: {detectedLanguage.Name}, " +
                $"Score: {detectedLanguage.Score}";
        }
        return returnVal;
    }
 
    return null;
}
 
private static string ParseResponseGeneric(string response)
{            
    if (!string.IsNullOrWhiteSpace(response))
    {
        return Environment.NewLine + response;                
    }
 
    return null;
}
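
CallEndpoint isn't shown above; it's just a thin wrapper around HttpClient.PostAsync (MediaTypeHeaderValue lives in System.Net.Http.Headers), roughly as follows:

static async Task<string> CallEndpoint(HttpClient client, string uri, byte[] byteData)
{
    // POST the JSON payload to the given Text Analytics endpoint and return the raw response body
    using (var content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        var response = await client.PostAsync(uri, content);
        return await response.Content.ReadAsStringAsync();
    }
}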

The subscription key is given when you register (in the screen under “Set-up”). Keep an eye on the requests, too: 5000 seems like a lot, but when you’re testing, you might find you get through them faster than you expect.

Here’s the output:


Evaluation

So, the 5 phrases that I used were:

The quick brown fox jumped over the hedge

This is a basic sentence indicating an action.

The KeyPhrases API decided that the key points here were “hedge” and “quick brown fox”. It didn’t think that “jumped” was key to this sentence.

The Language API successfully worked out that it’s written in English.

The Sentiment API thought that this was a slightly negative statement.

March is a green month

This was a nonsense statement, but in a valid sentence structure.

The KeyPhrases API identified “green month” as being important, but not March.

The Language API successfully worked out that it’s written in English.

The Sentiment API thought this was a very positive statement.

When I press enter the program crashes

Again, a completely valid sentence, and one with a view to my ultimate idea for this API.

The KeyPhrases API spotted “program crashes”, but not why. I found this interesting because it seems to conflict with the other phrases, which seemed to identify nouns only.

Again, the Language API knew this was English.

The sentiment API identified that this was a negative statement… which I think I agree with.

Pressing return – the program crashes

The idea here was that it's basically the same sentence as above, but phrased differently.

The KeyPhrases API wasn’t fooled, and returned the same key phrase – this is good.

Still English, according to the Language API.

This is identified as a negative statement again, but oddly, not as negative as the previous one.

Los siento, no hablo Enspanol

I threw in a Spanish phrase because I felt the Language API hadn’t had much of a run.

The KeyPhrase API pulled out "hablo Espanol", which, based on my very rudimentary Spanish, means the opposite of what was said.

It was correctly identified as Spanish by the Language API.

The Sentiment API identified it as the most negative statement. Perhaps because it has the words "sorry" and "no" in it?

References

Sample code:

https://text-analytics-demo.azurewebsites.net/Home/SampleCode

https://elbruno.com/2016/04/13/cognitiveservices-text-analytics-api-new-operation-detect-key-topics-in-documents/

https://mrfoxsql.wordpress.com/2016/09/13/azure-cognitive-services-apis-with-sql-server-2016-clr/