Monthly Archives: October 2017

Mocking IPrincipal.Identity and Claims in NSubstitute

In ASP.NET, there is the concept of an identity, and built on top of this is a claims-based authentication system. That is, I can determine whether my user has "Administrator" privileges with the following syntax:

var claim = ClaimsIdentity.FindFirstValue("Administrator");

For more information about how claims work, see this excellent explanation. This post is not really concerned with how claims work, but rather with how to mock them out, which is much more difficult than you might guess.

In the references below, you'll see a number of different strategies for mocking out the claims and principal objects. There also seems to be a loose consensus that even attempting to do this is folly. However, I've cobbled together a set of mocks using NSubstitute that work. I'm not claiming that they will work in all cases, or in any situation other than the specific one that I was trying to solve; but they did work for that, and so I thought it useful enough to share.

var myController = new MyController();
 
var mockClaim = new Claim("Administrator", "test");
 
var identity = Substitute.For<ClaimsIdentity>();
identity.Name.Returns("test");
identity.IsAuthenticated.Returns(true);
identity.FindFirst(Arg.Any<string>()).Returns(mockClaim);
 
var claimsPrincipal = Substitute.For<ClaimsPrincipal>();
claimsPrincipal.HasClaim(Arg.Any<string>(), Arg.Any<string>()).Returns(true);
claimsPrincipal.HasClaim(Arg.Any<Predicate<Claim>>()).Returns(true);
claimsPrincipal.Identity.Returns(identity);
 
var httpContext = Substitute.For<HttpContextBase>();            
httpContext.User.Returns(claimsPrincipal);
 
var controllerContext = new ControllerContext(
    httpContext, new System.Web.Routing.RouteData(), myController);           
 
myController.ControllerContext = controllerContext;
 
// Act
var result = myController.TestMethod();
 
// Assert
// . . .

Remember that this is only necessary if you are trying to access claims via the identity inside `TestMethod()`. Also, I'll remind the reader that I assert only that this worked in the specific situation that I needed it to; but it's probably a good starting point for others.
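
For context, the kind of action these mocks are aimed at might look something like the following. This is only a sketch – the body of `TestMethod()` is hypothetical, and it assumes usings for System.Security.Claims and System.Web.Mvc:

public class MyController : Controller
{
    public ActionResult TestMethod()
    {
        // User is the ClaimsPrincipal substitute supplied via the ControllerContext
        if (!User.Identity.IsAuthenticated) return new HttpUnauthorizedResult();

        var identity = (ClaimsIdentity)User.Identity;
        Claim claim = identity.FindFirst("Administrator");

        return Content(claim?.Value ?? "No claim found");
    }
}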

References

https://volaresystems.com/blog/post/2010/08/19/Dont-mock-HttpContext

http://nsubstitute.github.io/help/set-return-value/

https://stackoverflow.com/questions/1389744/testing-controller-action-that-uses-user-identity-name

https://stackoverflow.com/questions/13579519/mock-authenticated-user-using-moq-in-unit-testing

https://stackoverflow.com/questions/14190066/is-there-any-way-i-can-mock-a-claims-principal-in-my-asp-net-mvc-web-application

https://stackoverflow.com/questions/22762338/how-do-i-mock-user-identity-getuserid/23960592

https://dotnetcodr.com/2013/02/11/introduction-to-claims-based-security-in-net4-5-with-c-part-1/

Asynchronous Debugging

Everyone who has spent time debugging errors in code that has multiple threads knows the pain of pressing F10 and seeing the cursor jump to a completely different part of the system (that is, everyone who has ever tried to).

There are a few tools in VS2017 that make this process slightly easier; and this post attempts to provide a brief summary. Obviously the examples in this post are massively contrived.

Errors

Let’s start with an error occurring inside a parallel loop. Here’s some code that will cause the error:

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
 
    Parallel.For(1, 10, (i) => RunProcess(i));
 
    Console.ReadLine();
}
 
static void RunProcess(int i)
{
    Task.Delay(500).GetAwaiter().GetResult();
 
    Console.WriteLine($"Running {i}");
 
    if (i == 3) throw new Exception("error");
}

For some reason, I get an error when a few of these threads have started. I need a tool that tells me some details about the local variables in each of the threads. Enter the Parallel Watch Window:

Figure 1 – Launch Parallel Watch Window

This gives me a familiar interface, and tells me which thread I’m currently on:

Figure 2 – Parallel Watch Window

However, what I really want to see is the data local to the thread; what if I put “i” in the “Add Watch” cell:

Figure 3 – Add a watch

As you can see, I have a horizontal list of watch expressions, so I can monitor variables in multiple threads at a time.

Flagging a thread

We know there’s an issue with one of these threads, so one possibility is to flag that thread:

Figure 4 – Flagging a thread

Then you can select to show only flagged threads:

Figure 5 – Filter flagged threads

Freezing non-relevant threads

The flags help you to trace only the threads that you care about; but if you want to run only those threads, you can freeze the others:

Figure 6 – Freeze Thread

Once you’ve frozen a thread, a small pause icon appears, and that thread will stop:

Figure 7 – Frozen Thread

In order to freeze other threads, simply highlight all the relevant threads (Ctrl-A) and select Freeze.

It’s worth remembering that you can’t freeze a thread that doesn’t exist yet (so your breakpoint in a Parallel.For loop might only show half the threads).

Manual thread hopping

By using freeze, you can stop the debugger from jumping between threads. You can then manually control this process by selecting a thread and choosing "Switch To Frame":

Figure 8 – Switch to Frame

You can switch to a frozen frame but, as soon as you try to progress, you'll flip back to the first non-frozen frame (unless you thaw it). The consequence of this is that it is possible to switch to a frozen frame, freeze all other frames and then press F10 – your program will then stop dead.

Stack Trace

In a single-threaded application (and, indeed, in a multi-threaded one), you can always view the stack trace of a given line of executing code. There is also a Parallel Stacks window:

Figure 9 – Parallel Stacks

Selecting any given method will give us the active threads, and allow switching:

Figure 10 – Active Threads

Parallel Stack Trace – Task View

The above view shows the threads created for your program; but most of the time you won't care which threads are created, only about the tasks that you've spawned (and they don't necessarily have a one-to-one relationship). You can simply switch the view in this window to show Tasks instead:

Figure 11 – Task View

Tasks & Threads Windows

There is a tool that allows you to view all active, blocked and scheduled tasks:

Figure 12 – Tasks Window

This allows you to freeze an entire task, switch to a given task, and Freeze All But This:

Figure 13 – Freeze All But This

There is an equivalent window for Threads. It is broadly the same idea; however, it does have one feature that the Tasks window does not, and that is the ability to rename a thread:

Figure 14 – Rename a Thread
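
Incidentally, you can also name a thread in code, which makes it much easier to pick out in this window. Here's a minimal sketch, re-using the RunProcess method from the example above (note that a thread's Name can only be set once, and that Parallel.For uses pooled threads, hence the guard):

static void RunProcess(int i)
{
    // Thread.Name throws if it has already been set, so only name unnamed threads
    if (Thread.CurrentThread.Name == null)
    {
        Thread.CurrentThread.Name = $"Worker {i}";
    }

    Task.Delay(500).GetAwaiter().GetResult();

    Console.WriteLine($"Running {i}");

    if (i == 3) throw new Exception("error");
}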

Flags

The other killer feature both of these windows have is the flag feature. Simply flag a thread, switch to it, and then select “Show Only Flagged Threads” (little flag icon). If you now remove the breakpoints, you can step through only your thread or task!

Breakpoints

So, what do you do when you have a breakpoint that you only want to fire for a single thread? Helpfully, the Breakpoints window has a filter feature:

Filter breakpoints on thread Id

Figure 15 – Filter Breakpoints
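
The filter is a simple expression over the fields MachineName, ProcessId, ProcessName, ThreadId and ThreadName, combined with & (and), || (or) and ! (not). For example, something like the following (the thread id and process name here are made up) restricts the breakpoint to a single thread in a single process:

ThreadId = 7740 & ProcessName = "ParallelDemo.exe"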

References

https://msdn.microsoft.com/en-us/library/dd554943.aspx

https://stackoverflow.com/questions/5304752/how-to-debug-a-single-thread-in-visual-studio

Short Walks – Instantiating an Object Without calling the Constructor

One of the things that caught my attention at DDD North was the mention of a way to instantiate an object without calling its constructor.

Disclaimer

Typically, classes have code in their constructors that is necessary for their functionality, so you may find that doing this will cause your program to fall over.

System.Runtime.Serialization

The name of the namespace is probably the first thing that betrays the fact that you shouldn't be doing this; but we're already halfway down the rabbit hole!

Here's some code that will create an instance of a class using reflection in the normal way:

    static void Main(string[] args)
    {
        var test = Activator.CreateInstance<MyTestClass>();
        test.MyMethod();

        Console.WriteLine("Hello World!");
        Console.ReadLine();
    }

    public class MyTestClass
    {
        public MyTestClass()
        {
            Console.WriteLine("MyTestClass Initialise");
        }

        public string test1 { get; set; }

        public void MyMethod()
        {
            Console.WriteLine("Test MyMethod.");
        }
    }

The output is:

And here’s the code that circumvents the constructor:

        static void Main(string[] args)
        {
            var test2 = FormatterServices.GetUninitializedObject(typeof(MyTestClass)) as MyTestClass;
            test2.MyMethod();

            Console.WriteLine("Hello World!");
            Console.ReadLine();
        }

And we haven’t invoked the constructor:

DDD North 2017

DDD North is held every year at around this time. This year, it was on 14th October at Bradford University. In terms of the sessions that I saw, I think it’s probably amongst the best that I’ve attended!

This is a rundown of the sessions and anything I thought I might need to look into further (which, to be fair, is most of it!)

Session 1 – Nathan Gloyn – Microservices

Nathan started this talk by saying that, if you worked in the world of Microservices, you wouldn’t learn anything. This quickly became blatantly untrue (at least for me).

During the talk, he mentioned the need for an external config management system, such as ZooKeeper. The reason is that, should you try to keep the config in a web.config-type file (and use transforms to … transform them), you risk having to redeploy an entire system for a configuration update.

That you need logging in a system running on servers that you have no direct access to is obvious; but he also suggested keeping correlation IDs on the messages, so that the logs can trace a message as it travels through the system.

Regarding data updates, the suggestion was to keep them separate from the app updates; and he raised the intriguing idea of versioning the messages so that the system can effectively self-update. For example, a message arrives in a subsystem declaring itself to be a version 1 message; the subsystem realises that the latest version is 2, and so the update script is run at that point. This allows the system to gradually update itself.

Session 2 – David Whitney – Metaprogramming

This was a bit of a strange talk – as it started off, I couldn't really work out what to make of it, as it seemed to be a case of "have you heard about this thing called reflection?". But as the talk progressed, especially when the creation of a unit test framework was demonstrated in around 20 lines of code, it suddenly became very interesting.

One very interesting technique was that you could use reflection to register a set of default implementations for interfaces. Obviously, whether this would work in a particular project depends on why you’re using an interface in the first place, but in most cases, you’ll just want to be able to mock out your interface for a unit test, so you’ll have a one-to-one relationship between interface and implementation.
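
As a rough sketch of the idea (the container here is a stand-in for whatever IoC container you happen to be using, and Register is whatever its registration method is called – neither is a real API):

var types = Assembly.GetExecutingAssembly().GetTypes();

foreach (var interfaceType in types.Where(t => t.IsInterface))
{
    // Find the concrete classes that implement this interface
    var implementations = types
        .Where(t => t.IsClass && !t.IsAbstract && interfaceType.IsAssignableFrom(t))
        .ToList();

    // Only register a default where the relationship is one-to-one
    if (implementations.Count == 1)
    {
        container.Register(interfaceType, implementations.Single());
    }
}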

The other point of note was the mention of a method that allows you to reflectively create a type without calling a constructor. I noted this down as CreateUnallocatedType but, for the life of me, I can't find a reference to it anywhere. Please add to the comments or tweet me if you know what this is actually called.

Session 3 – Stephen Haunts – Scaling Agile

I hadn’t realised this until the talk started, but Stephen Haunts was unwittingly the person that got me interested in message queuing (through his Pluralsight course). The talk was effectively a review of how Spotify has scaled an agile team.

Amongst the overview of the specific scrum terms, I was pleased to see that Stephen referred back to the agile manifesto which, IMHO, is something that is frequently forgotten about (or in some cases not even known about) in some companies that claim to implement scrum.

One interesting concept was that of a feature train: the idea being that you work on your feature and, when it's ready, it gets shipped in the next release. Stephen introduced us to a tool called LaunchDarkly – a paid-for product that allows feature switching.

Other established methods for scaling scrum teams that he mentioned were Scrum Nexus and Scaled Agile.

Session 4 – Stuart Lang – Async

https://speakerdeck.com/slang25/async-in-c-number-the-good-the-bad-and-the-ugly

This was probably my favourite talk of the day, although straight after dinner is possibly the worst slot to have, as everyone is ready for a little nap. However, Stuart kept the subject alive and interesting.

The main focus of the talk was the SynchronizationContext, specifically in relation to deadlocks. The principle is that, while the base implementation of SynchronizationContext (as used in a console app or a unit test) allows multiple delegates to be executed at any one time, all of the derivatives (WinForms, WPF and ASP.NET) do not. This means that, as much as it might be tempting to try these things out in a console app, or to try to cover them with unit tests, the behaviour will change as soon as you drop the code into your app.

Any app using .Result, .GetAwaiter().GetResult() or .Wait() will be susceptible to these deadlocks.

There were some workarounds that he mentioned; they were:
– SynchronizationContext.SetSynchronizationContext(null)
– .ConfigureAwait(false)
– Task.Run()

They all work by taking the captured synchronisation context out of the picture, thereby preventing the deadlock.
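
To make the deadlock concrete, here's a contrived sketch of the sort of code being described. Under WinForms, WPF or classic ASP.NET, .Result blocks the only thread the captured context has, while the continuation of the await is queued to that same context – so neither can ever complete:

public string GetData()
{
    // Blocks the context's only thread while waiting for DelayAsync to complete
    return DelayAsync().Result;
}

private async Task<string> DelayAsync()
{
    // Adding .ConfigureAwait(false) to the line below is one of the workarounds above:
    // the continuation then no longer needs the captured context, so .Result can complete
    await Task.Delay(1000);
    return "done";
}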

There is an async analyser rule in Roslyn called Async006.

There is an open source implementation of a context-free Task. It can be installed from NuGet but, at the time of writing, it is still a very early release.

The star of the show here is Microsoft.VisualStudio.Threading. It provides access to a JoinableTaskFactory, which gives the behaviour that you would expect when calling something like .Result: that is, it executes the asynchronous code synchronously. This means that not only do you not block yourself, but you don't spin up a load of pointless threads in the process.
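
A minimal sketch of what that looks like, using the JoinableTaskContext and JoinableTaskFactory types from the Microsoft.VisualStudio.Threading NuGet package:

var joinableTaskFactory = new JoinableTaskContext().Factory;

// Runs the asynchronous delegate to completion on the current thread,
// without deadlocking on the synchronisation context
string result = joinableTaskFactory.Run(async () =>
{
    await Task.Delay(1000);
    return "done";
});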

Finally, ASP.NET Core does not suffer from this problem, because it doesn't use a SynchronizationContext at all.

Session 5 – Zinat Wali – Alexa

The last session was a talk on Alexa, and how you might write a routine that will report, and then predict the pollen count. The data training model demonstrated would be familiar to people that have seen Azure’s version of the same thing.

Adding to an Existing Azure Blob

I briefly covered the concept of Storage Accounts and Blob Storage in this post; however, there is more to blobs than that simple use case. Here, I'll explore creating a blob file from a text stream, and then adding to that file.

As stated in the post referenced above, Azure provides a facility for storing files in what are known as Azure Blobs.

In order to upload a file to a blob, you need a storage account, and a container. Setting these up is a relatively straightforward process and, again, is covered in the post above.

Our application here will take the form of a simple console app that will prompt the user for some text, and then add it to the file in Azure.

Set-up

Once you’ve set-up your console app, you’ll need the Azure NuGet Storage package.

Also, add the connection string to your storage account into the app.config:

<connectionStrings>
    <add name="Storage" connectionString="DefaultEndpointsProtocol=https;AccountName=testblob;AccountKey=wibble/dslkdsjdljdsoicj/rkDL7Ocs+aBuq3hpUnUQ==;EndpointSuffix=core.windows.net"/>
</connectionStrings>

Here’s the basic code for the console app:

static void Main(string[] args)
{
    Console.Write("Please enter text to add to the blob: ");
    string text = Console.ReadLine();
 
    UploadNewText(text);
 
    Console.WriteLine("Done");
    Console.ReadLine();
}

I’ll bet you’re glad I posted that, otherwise you’d have been totally lost. The following snippets are possible implementations of the method UploadNewText().

Uploading to BlockBlob

The following code will upload a file to a blob container:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "test.txt";
string containerString = "mycontainer";
 
using (MemoryStream stream = new MemoryStream())
using (StreamWriter sw = new StreamWriter(stream))
{
    sw.Write(text);
    sw.Flush();
    stream.Position = 0;
 
    CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
    CloudBlobClient client = storage.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference(containerString);
    CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
    blob.UploadFromStream(stream);
}

(note that the name of the container in this code is case sensitive)

If we have a look at the storage account, a text file has, indeed been created:

New Blob

But, what if we want to add to that? Well, running the same code again will work, but it will replace the existing file. To prove that, I’ve changed the text to “Test data 2” and run it again:

Test Data

So, how do we update the file? One possibility is to download the existing file, add the new text to it, and upload it again; that would look something like this:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "test.txt";
string containerString = "mycontainer";
 
CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
CloudBlobClient client = storage.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference(containerString);
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
 
using (MemoryStream stream = new MemoryStream())
{
    blob.DownloadToStream(stream);
 
    using (StreamWriter sw = new StreamWriter(stream))
    {
        sw.Write(text);
        sw.Flush();
        stream.Position = 0;
 
        blob.UploadFromStream(stream);
    }
}

This obviously means two round trips to the server, which isn’t the best thing in the world. Another possible option is to use the Append Blob…

Azure Append Blob Storage

There is a blob type that allows you to append to it without downloading the existing contents first; for example:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "testAppend.txt";
string containerString = "mycontainer";
 
CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
CloudBlobClient client = storage.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference(containerString);
CloudAppendBlob blob = container.GetAppendBlobReference(fileName);
if (!blob.Exists()) blob.CreateOrReplace();
 
using (MemoryStream stream = new MemoryStream())
using (StreamWriter sw = new StreamWriter(stream))
{
    sw.Write("Test data 4");
    sw.Flush();
    stream.Position = 0;
 
    blob.AppendFromStream(stream);                
}

There are a few things to note here:

  • The reason that I changed the name of the blob is that you can’t append to a BlockBlob (at least not using an AppendBlob); so it has to have been created for the purpose of appending.
  • While UploadFromStream will just create the file if it doesn’t exist, with the AppendBlob, you need to do it explicitly.

PutBlock

The final alternative here is to use PutBlock. This can bridge the gap by allowing the addition of blocks to an existing block blob. However, you either need to maintain the block ID list manually, or download the existing block list; here's an example of creating, or adding to, a file using the PutBlock method:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "test4.txt";
string containerString = "mycontainer";
 
CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
CloudBlobClient client = storage.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference(containerString);
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
 
ShowBlobBlockList(blob);
 
using (MemoryStream stream = new MemoryStream())
using (StreamWriter sw = new StreamWriter(stream))
{
    sw.Write(text);
    sw.Flush();
    stream.Position = 0;
 
    double seconds = (DateTime.Now - new DateTime(2000, 1, 1)).TotalSeconds;
    string blockId = Convert.ToBase64String(
        ASCIIEncoding.ASCII.GetBytes(seconds.ToString()));
 
    Console.WriteLine(blockId);
    //string blockHash = GetMD5HashFromStream(bytes);                
 
    List<string> newList = new List<string>();
    if (blob.Exists())
    {
        IEnumerable<ListBlockItem> blockList = blob.DownloadBlockList();
 
        newList.AddRange(blockList.Select(a => a.Name));
    }
 
    newList.Add(blockId);
 
    blob.PutBlock(blockId, stream, null);
    blob.PutBlockList(newList.ToArray());
}

The code above owes a lot to the advice given on this Stack Overflow question.

In order to avoid conflicts in the block IDs, I've used a count of the seconds since an arbitrary date. Obviously, this won't work in all cases. Further, it's worth noting that the code above still makes two trips to the server (it has to download the block list).
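
One alternative, if you don't need the ids to be ordered, is to base the block id on a GUID. Block ids within a blob must all be the same length once encoded, and a GUID formatted without dashes gives you a fixed length for free; a sketch:

// A fixed-length, collision-free block id
string blockId = Convert.ToBase64String(
    ASCIIEncoding.ASCII.GetBytes(Guid.NewGuid().ToString("N")));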

The commented-out MD5 hash allows you to provide some form of check that the data is valid, should you choose to use it.

What is ShowBlobBlockList(blob)?

The following function will give some details relating to the existing blocks (it is shamelessly plagiarised from here):

public static void ShowBlobBlockList(CloudBlockBlob blockBlob)
{
    if (!blockBlob.Exists()) return;
 
    IEnumerable<ListBlockItem> blockList = blockBlob.DownloadBlockList(BlockListingFilter.All);
    int index = 0;
    foreach (ListBlockItem blockListItem in blockList)
    {
        index++;
        Console.WriteLine("Block# {0}, BlockID: {1}, Size: {2}, Committed: {3}",
            index, blockListItem.Name, blockListItem.Length, blockListItem.Committed);
    }
}

Summary

Despite blob storage being an established technology, these methods and techniques are sparsely documented on the web. Obviously there are the Microsoft docs, and they are helpful but, unfortunately, not exhaustive.

References

https://stackoverflow.com/questions/33088964/append-to-azure-append-blob-using-appendtextasync-results-in-missing-data

https://docs.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs

http://www.c-sharpcorner.com/UploadFile/40e97e/windows-azure-blockblob-putblock-method/

https://docs.microsoft.com/is-is/rest/api/storageservices/put-block

https://www.red-gate.com/simple-talk/cloud/platform-as-a-service/azure-blob-storage-part-4-uploading-large-blobs/

https://stackoverflow.com/questions/46368954/can-putblock-be-used-to-append-to-an-existing-blockblob-in-azure

Azure Recommendations

Azure provides a number of pre-configured machine learning services out of the box. One of these (still in beta at the time of writing) is Recommendations. The idea is that it will try to work out what, given a list of items, you would prefer, based on knowledge about your habits. There's a lot of information online about this but, briefly, it can work out your preference based on a combination of your past activity and the past activity of others that have shown an interest in the same item.

Obviously, the "items" could be products, films, aardvarks or sheep; Azure doesn't know anything about the content of what it's recommending. If you've bought* "A" in the past, and 75% of everyone else that has bought "A" has also bought "B", then there's a chance that you'll want "B". "A" could be an apple and "B" could be a pair of sunglasses; so, obviously, you need to be careful about the data that you feed it.

Recommendations API

The first thing to note is that the Recommendations API represents an earlier attempt by Microsoft to implement this, and is due to be discontinued early next year (2018). If you try to use it to follow any of the online tutorials then you'll get into a world of hurt.

Deploy Recommendations

The new method of creating a recommendations service is via a wizard (which, I believe, builds a custom ARM template behind the scenes). This is the start of the deployment, and gives you a screen similar to the following (once you've logged in):

As you can see, there’s clearly some re-branding in progress here; anyway, complete the form and create the service.

Another thing that has changed in the new version of this is that the free pricing tier has disappeared:

After a few screens, it starts the deployment process:

The next screen that is displayed shows all the connection strings and keys in one handy reference:

… they are just below this screenshot.

This should create four separate services:

Sample Project

Microsoft provides a sample project that should work out of the box (they actually provide more than one – some of which work better than others). This one uses AutoRest, but there’s another referenced at the bottom of this post.

In this project, open Recommendations.Sample.Program.cs and, at the top of the main function, enter the details that you noted after the creation of your service. If you didn't note them, you can still find them. You'll notice that four separate services were created: an App Service, a Storage Account, Application Insights and an App Service Plan.

recommendationsEndPointUri

Is found in the URL of the App Service:

apiAdminKey

Is found in the Application Settings of the App Service:

connectionString

Is the connection string of the storage account:

If you run this now, you should find that it will process and score the recommendations:

So, for example, we can see from the results above that people that bought DHF-01159 are recommended to buy DHF-01055 (although it doesn’t seem very convinced).

Footnotes

* The term that Azure uses here is “Purchase”. Different actions have different weighting (configurable), but by default, you would assume that buying something is more important than, for example, clicking on it (“Click” is another action). These actions can mean anything you choose; in the sheep example above, “Purchase” might mean shearing, and “Click” might mean photographing.

References

http://pmichaels.net/2017/08/06/deploying-azure-recommendation-service-using-arm-template/

https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-of-diagnostic-logs

https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-recommendations-ui-intro

https://gallery.cortanaintelligence.com/Tutorial/Recommendations-Solution

https://github.com/Microsoft/Cognitive-Recommendations-Windows

Short Walks – Running an Extension Method on a Null Item

I came across this issue recently, and realised that I didn't fully understand extension methods. My previous understanding was that an extension method was simply added to the original class (possibly in the same manner that weavers work); however, a construct similar to the following code changed my opinion:

class Program
{
    static void Main(string[] args)
    {
        var myList = GetList();            
        var newList = myList.Where(
            a => a.IsKosher());
        var evaluateList = newList.ToList();
 
        foreach(var a in evaluateList)
        {
            Console.WriteLine(a.Testing);
        }
    }
 
    static IEnumerable<TestClass> GetList()
    {
        return new List<TestClass>()
        {
            new TestClass() {Testing = "123"},
            null
        };
    }
}
 
public class TestClass
{
    public string Testing { get; set; }
}
 
public static class ExtensionTest
{
    public static bool IsKosher(this TestClass testClass)
    {
        return (!string.IsNullOrWhiteSpace(testClass.Testing));
    }
}

As you can see from the code, GetList() returns a collection containing a null entry. If you run this code, you'll find that it crashes inside the extension method, because testClass is null.
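
One way to make the code above safe is to check for null inside the extension method itself:

public static bool IsKosher(this TestClass testClass)
{
    // Guard against being "called on" a null reference
    return testClass != null
        && !string.IsNullOrWhiteSpace(testClass.Testing);
}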

A note on Linq

If you're investigating this in the wild, you might find it particularly difficult to track down because of the way that LINQ works. Even though the call to the extension method is on the line above, the code doesn't actually run until you use the result (in this case, via ToList()).

New understanding

As I now understand it, extension methods are simply a nice syntactic way of calling a static method. That is, had I declared my IsKosher method as a standard static method, it would behave in exactly the same way. To verify this, let's have a look at the IL; here's the IL for the extension method above:

IL Code for extension method

And here’s the IL for the same function as a standard static method:

IL code for static method

The only difference is the line at the top of the extension method calling the ExtensionAttribute constructor.
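
To put that another way, the call in the Where clause above could equally have been written as an explicit static call, and it would compile to essentially the same thing:

// These two lines are interchangeable - the extension syntax is just sugar
var newList = myList.Where(a => a.IsKosher());
var sameList = myList.Where(a => ExtensionTest.IsKosher(a));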

References

https://stackoverflow.com/questions/847209/in-c-what-happens-when-you-call-an-extension-method-on-a-null-object

SendGrid – Azure e-mail functionality

In this post, I discussed the prospect of sending an e-mail from an Azure function in order to alert someone that something had gone wrong. In one of the comments, it was suggested that I should look into a third party tool called “SendGrid”, and this post is the result of that investigation.

Azure Configuration

SendGrid is a third party application, and so the first thing you need to do is to create an account:

The free tier covers you for 25,000 e-mails per month. However, you do get a scary warning that, because this isn’t a Microsoft product, it is not covered by Azure credits.

Anyway, click Create and, after a while, your new SendGrid account should be created:

You'll need to get the API key; to do this, select Manage:

That takes you to https://app.sendgrid.com, where you can select to create an API Key:

Clearly, you wouldn't want a full-access account just to send an e-mail in real life… but Restricted Access has a form that would take longer to fill in, and I can't be mithered*… so we'll go with Full Access for now.

Once you’ve created it, and given it a name, you should have a key (remember that key – don’t write it down, or copy it, you must remember it!).

Code

Create a new Function App, and add the SendGrid NuGet package:

https://www.nuget.org/packages/Sendgrid/

In this case, let's create an HttpTrigger function (this will fire when a web address is accessed), the body of which needs to look something like this:

        [FunctionName("SendEmail")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, 
            TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // parse query parameter
            var addresses = req.GetQueryNameValuePairs();
            string[] addressArr = addresses
                .Where(a => a.Key == "address")
                .Select(a => a.Value).ToArray();

            if (addressArr.Count() == 0)
            {
                return req.CreateResponse(HttpStatusCode.BadRequest, 
                    "Please pass an address in the query string");
            }
            else
            {
                HttpStatusCode status = await CallSendEmailAsync(
                    "noreply@test.com", addressArr, "test e-mail",
                    "Once more unto the breach, dear friends, once more;\n" +
                    "Or close the wall up with our English dead.   \n" +
                    "In peace there's nothing so becomes a man     \n" +
                    "As modest stillness and humility:             \n" +
                    "But when the blast of war blows in our ears,  \n" +
                    "Then imitate the action of the tiger;         \n" +
                    "Stiffen the sinews, summon up the blood,");
                switch (status)
                {
                    case HttpStatusCode.OK:
                    case HttpStatusCode.Accepted:
                    {
                        return req.CreateResponse(status, "Mail Successfully Sent");
                    }
                    default:
                    {
                        return req.CreateResponse(status, "Unable to send e-mail");
                    }
                }
            }
        }

The `CallSendEmailAsync` helper method might look like this:

public static async Task<HttpStatusCode> CallSendEmailAsync(string from, string[] recipients, string subject, string body)
{
    EmailAddress fromAddress = new EmailAddress(from);
 
    SendGridMessage message = new SendGridMessage()
    {
        From = fromAddress,
        Subject = subject,
        HtmlContent = body
    };
 
    message.AddTos(recipients.Select(r => { return new EmailAddress(r); }).ToList());
   
    string sendGridApiKey = "AB.keythisisthekey.nkfdhfkjfhkjfd0ei8L9xTyaTCzy_sV5gPJNX-3";
 
    SendGridClient client = new SendGridClient(sendGridApiKey);
    Response response = await client.SendEmailAsync(message);
 
    return response.StatusCode;
}

The key is the string that I said you should remember earlier.
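
Incidentally, rather than hard-coding the key as I've done above, a better approach is to add it to the Function App's application settings and read it at runtime; for example (where "SendGridApiKey" is just a name I've picked for the setting):

// App settings surface as environment variables inside an Azure Function
string sendGridApiKey = Environment.GetEnvironmentVariable("SendGridApiKey");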

You can just paste the URL into a browser, giving it the e-mail addresses in the format below (each recipient needs its own address parameter, to match the Where clause in the code above):

https://url.com?address=my@email.com&address=my-second@email.com

As you will see from the link at the bottom of this post, OK (200) signifies that the message is valid, but not queued to be delivered; and Accepted (202) indicates that the message is valid and queued. My guess is that if you get an OK, it means that the mail has already been delivered.

Footnotes

* It probably took longer to type this out than to do it.

References

https://go.sendgrid.com/

https://sendgrid.com/docs/API_Reference/Web_API_v3/Mail/errors.html