Category Archives: C#

What can you do with a logic app? Part One – Send tweets at random intervals based on a defined data set

I thought I’d start another of my patented series. This one is about finding interesting things that can be done with Azure Logic Apps.

Let’s say, for example, that you have something that you want to say; for example, if you were Richard Dawkins or Ricky Gervais, you might want to repeatedly tell everyone that there is no God; or if you were Google, you might want to tell everyone how .Net runs on your platform; or if you were Microsoft, you might want to tell people how it’s a “Different Microsoft” these days.

The thing that I want to repeatedly tell everyone is that I’ve written some blog posts. For this purpose, I’m going to set up a logic app that, based on a random interval, sends a tweet from my account, informing people of one of my posts. It will get this information from a simple Azure storage table; let’s start there: first, we’ll need a storage account:

Then a table:

We’ll enter some data using Storage Explorer:

I entered a few records (three in this case – the train journey would need to be across Russia or something for me to fill my entire back catalogue in manually; I might come back and see about automatically scraping this data from WordPress one day).

In order to create our logic app, we need a single piece of custom logic. As you might expect, there’s no built-in action that produces a random result, so we’ll have to create that as a function:

For Logic App integration, a Generic WebHook seems to work best:

Here’s the code:

#r "Newtonsoft.Json"
using System;
using System.Net;
using Newtonsoft.Json;

static Random _rnd;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    if (_rnd == null) _rnd = new Random();

    // Read the "range" parameter passed in on the query string by the logic app
    string rangeStr = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "range", true) == 0)
        .Value;
    int range = int.Parse(rangeStr);

    // Pick a random number based on the range
    int num = _rnd.Next(range - 1) + 1;

    var response = req.CreateResponse(HttpStatusCode.OK, new
    {
        number = num
    });
    return response;
}

Okay – back to the logic app. Let’s create one:

The logic app that we want will be (currently) a recurrence; let’s start with every hour (if you’re following along then you might need to adjust this while testing – be careful, though, as it will send a tweet every second if you tell it to):

Add the function:

Add the input variables (remember that the parameters read by the function above are passed in via the query):

One thing to realise about Azure Functions is that they rely heavily on passing JSON around. For this purpose, you’ll use the JSON Parser action a lot. My advice would be to name them sensibly, and not “Parse JSON” and “Parse JSON 2” as I’ve done here:

The JSON Parser action requires a schema – that’s how it knows what your data looks like. You can get the schema by selecting the option to use a sample payload, and just grabbing the output from above (when you tested the function – if you didn’t test the function then you obviously trust me far more than you should and, as a reward, you can simply copy the output from below):
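
Based on the function above, that output is just a single “number” property, along these lines (the value itself will obviously vary):

{
  "number": 3
}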

That will then generate a schema for you:
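
For that payload, the generated schema will look roughly like this:

{
    "type": "object",
    "properties": {
        "number": {
            "type": "integer"
        }
    }
}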

Note: if you get the schema wrong then the run report will error, but it will give you a dump of the JSON that it had – so another approach would be to enter anything and then take the actual JSON from the logs.

Next, we’ll add a condition based on the output. Now that we’ve parsed the JSON, “number” (the output from the previous step) is available:

So, we’ll check if the number is 1 – meaning there’s a 1 in 10 chance that the condition will be true. We don’t care if it’s false, but if it’s true then we’ll send a tweet. Before we do that, though, we need to go to the data table and find out what to send. Inside the “true” branch, we’ll add an “Azure Table Storage – Get Entities” call:

This asks you for a storage connection (the name is just for you to name the connection to the storage account). Typically, after getting this data, you would use a for-each to run through the entries. Because there is currently no way to count the entries in the table, we’ll iterate through each entry, but we’ll do it slowly, and we’ll use our random function to ensure that not all of them are sent.

Let’s start with not sending all items:

All the subsequent logic is inside the true branch. The next thing is to work out how long to delay:

Now that we have a number between 1 and 60, we can wait for that length of time:

The next step is to send the tweet, but because we need specific data from the table, it’s back to our old friend: Parse JSON (it looks like around half of every workflow ends up being these parse tasks – although, obviously, you could bake this sort of thing into a function).

To get the data for the tweet, we’ll need to parse the JSON for the current item:

Once you’ve done this, you’ll have access to the parts of the record and can add the Tweet action:

And we have a successful run… and some tweets:

Setting up Entity Framework Core for a Console Application – One Error at a Time

Entity Framework can be difficult to get started with: especially if you come from a background of accessing the database directly, it can seem like there are endless meaningless errors. In this post, I try to set up EF Core using a .Net Core console application; in order to better understand the errors, we’ll do just the minimum in each step, and let the errors guide us.

The first step is to create a .Net Core Console Application.

NuGet Packages

To use Entity Framework, you’ll first need to install the NuGet packages; to follow this post, you’ll need these two (initially) 1:

PM> Install-Package Microsoft.EntityFrameworkCore
PM> Install-Package Microsoft.EntityFrameworkCore.Tools

Model / Entities

The idea behind Entity Framework is that you represent database entities, or tables as they used to be known, with in-memory objects. So the first step is to create a model:

namespace ConsoleApp1.Model
{
    public class MyData
    {
        public string FieldOne { get; set; }
    }
}

We’ve created the model, so the next thing is to create the DB:

PM> Update-Database

In the package manager console.

First Error – DbContext

The first error you get is:

No DbContext was found in assembly ‘ConsoleApp1’. Ensure that you’re using the correct assembly and that the type is neither abstract nor generic.

Okay, so let’s create a DbContext. The recommended pattern (as described here) is to inherit from DbContext:

using Microsoft.EntityFrameworkCore;

namespace ConsoleApp1
{
    public class MyDbContext : DbContext
    {
    }
}

Okay, we’ve created a DbContext – let’s go again:

PM> Update-Database

Second Error – Database Provider

The next error is:

System.InvalidOperationException: No database provider has been configured for this DbContext. A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider.

So we’ve moved on a little. The next thing we need to do is to configure a provider. Because, in this case, I’m using SQL Server, I’ll need another NuGet package:

PM> Install-Package Microsoft.EntityFrameworkCore.SqlServer

Then configure the DbContext to use it:

public class MyDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        string cn = @"Server=.\SQLEXPRESS;Database=test-db;User Id= . . .";
        optionsBuilder.UseSqlServer(cn);
    }
}

And again:

PM> Update-Database

Third Error – No Migrations

Strictly speaking, this isn’t an actual error; it’s more a sign that nothing has happened:

No migrations were applied. The database is already up to date.

A quick look in SSMS shows that, whilst it has created the DB, it hasn’t created the table:

So we need to add a migration? Well, if we call Add-Migration here, we’ll get this: 2

That’s because we need to tell EF what data we care about. So, in the DbContext, we can let it know that we’re interested in a data set (or, if you like, table) called MyData:

public class MyDbContext : DbContext
{
    public DbSet<MyData> MyData { get; set; }
}

Right – now we can call:

PM> Add-Migration FirstMigration

Fourth Error – Primary Key

The next error is more about EF’s inner workings:

System.InvalidOperationException: The entity type ‘MyData’ requires a primary key to be defined.

Definitely progress. Now we’re being moaned at because EF wants to know what the primary key for the table is, and we haven’t told it (Entity Framework, unlike SQL Server, insists on a primary key). That requires a small change to the model:

using System.ComponentModel.DataAnnotations;

namespace ConsoleApp1.Model
{
    public class MyData
    {
        [Key]
        public string FieldOne { get; set; }
    }
}

This time,

PM> Add-Migration FirstMigration

Produces this:

public partial class FirstMigration : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable(
            name: "MyData",
            columns: table => new
            {
                FieldOne = table.Column<string>(nullable: false)
            },
            constraints: table =>
            {
                table.PrimaryKey("PK_MyData", x => x.FieldOne);
            });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropTable(name: "MyData");
    }
}

Which looks much more like we’ll get a table – let’s try:

PM> update-database
Applying migration '20180224075857_FirstMigration'.


And it has, indeed, created a table!


Foot Notes

Short Walks – C# Pattern Matching to Match Ranges

Back in 2010, working at the time in a variety of languages, including VB, I asked this question on StackOverflow. In VB, you could put a range inside a switch statement, and I wanted to know how you could do that in C#. The (correct) answer at the time was that you can’t.

Fast forward just eight short years and, suddenly, it’s possible: the new pattern matching feature in C# 7.0 makes this work.

You can now write something like this (this is C# 7.1 because of Async Main):

static async Task Main(string[] args)
{
    for (int i = 0; i <= 20; i++)
    {
        switch (i)
        {
            case var test when test <= 2:
                Console.WriteLine("Less than 2");
                break;
            case var test when test > 2 && test < 10:
                Console.WriteLine("Between 2 and 10");
                break;
            case var test when test >= 10:
                Console.WriteLine("10 or more");
                break;
        }
        await Task.Delay(500);
    }
}


Creating a Scheduled Azure Function

I’ve previously written about creating Azure functions. I’ve also written about how to react to service bus queues. In this post, I wanted to cover creating a scheduled function. Basically, these allow you to create a scheduled task that executes at a given interval, or at a given time.

Timer Trigger

The first thing to do is create a function with a type of Timer Trigger:
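
The generated function will look something along these lines (the names here are just template defaults, and the schedule string is explained below):

public static class ScheduledFunction
{
    // Fires once every five minutes
    [FunctionName("ScheduledFunction")]
    public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, TraceWriter log)
    {
        log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
    }
}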

Schedule / CRON format

The next thing is to understand the schedule, or CRON, format. The format is:

{second} {minute} {hour} {day} {month} {day-of-week}

Scheduled Intervals

The example you’ll see when you create this looks like this:

0 */5 * * * *

The notation */[number] means “once every [number]”; so */5 means once every 5 – you then look at which placeholder it’s in to work out 5 of what; in this case, it means once every 5 minutes. So, for example:

*/10 * * * * *

Would be once every 10 seconds.

Scheduled Times

Specifying numbers means the schedule will execute at that time; so:

0 0 0 * * *

Would execute every time the hour, minute and second all hit zero – so once per day at midnight; and:

0 * * * * *

Would execute every time the second hits zero – so once per minute; and:

0 0 * * * 1

Would execute once per hour on a Monday (as the last placeholder is the day of the week).

Time constraints

These can be specified in any column in the format [lower bound]-[upper bound], and they restrict the timer to a range of values; for example:

0 */20 5-10 * * *

Means every 20 minutes between 5 and 10am (as you can see, the different types can be used in conjunction).

Asterisks (*)

You’ll notice above that there are asterisks in every placeholder where a value has not been specified. The asterisk signifies that the schedule will execute at every interval within that placeholder; for example:

* * * * * *

Means every second; and:

0 * * * * *

Means every minute.

Back to the function

Upon starting, the function will detail when the next several executions will take place:

But what if you don’t know what the schedule will be at compile time? As with many of the variables in an Azure Function, you can simply substitute the value for a placeholder:

public static void Run([TimerTrigger("%schedule%")]TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}

This value can then be provided inside the local.settings.json:

  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProto . . .",
    "AzureWebJobsDashboard": "DefaultEndpointsProto . . .",
    "schedule": "0 * * * * *"


Using Unity With Azure Functions

Azure Functions are a relatively new kid on the block when it comes to the Microsoft Azure stack. I’ve previously written about them, and their limitations. One such limitation seems to be that they don’t lend themselves very well to using dependency injection. However, it is certainly not impossible to make them do so.

In this particular post, we’ll have a look at how you might use an IoC container (Unity in this case) in order to leverage DI inside an Azure function.

New Azure Functions Project

I’ve covered this before in previous posts; in Visual Studio, you can now create a new Azure Functions project:

That done, you should have a project that looks something like this:

As you can see, the elephant in the room here is that there are no functions; let’s correct that:

Be sure to call your function something descriptive… like “Function1”. For the purposes of this post, it doesn’t matter what kind of function you create, but I’m going to create a “Generic Web Hook”.

Install Unity

The next step is to install Unity (this was the latest version at the time of writing):

Install-Package Unity -Version 5.5.6

Static Variables Inside Functions

It’s worth bearing in mind that a static variable works the way you would expect, were the function a locally hosted process. That is, if you write a function such as this:

private static int test = 0;

public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
    log.Info($"Index is {test}");

    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {test++}!"
    });
}

And access it from a web browser, or Postman, or both at the same time, you’ll get incrementing numbers:

Whilst the values are shared across the instances, you can’t cause a conflict by updating something in one function while reading it in another (I tried pretty hard to cause this to break). What this means, then, is that we can store an IoC container that will maintain state across function calls. Obviously, this is not intended for persisting state, so you should assume your state could be lost at any time (as indeed it can).

Registering the Unity Container

One method of doing this is to use the Lazy object. This pretty much passed me by in .Net 4 (which is, apparently, when it came out). It basically provides a slightly neater way of doing this kind of thing:

private List<string> _myList;
public List<string> MyList
{
    get
    {
        if (_myList == null)
            _myList = new List<string>();
        return _myList;
    }
}

The “lazy” method would be:

public Lazy<List<string>> MyList = new Lazy<List<string>>(() =>
{
    List<string> newList = new List<string>();
    return newList;
});

With that in mind, we can do something like this:

public static class Function1
{
    private static Lazy<IUnityContainer> _container =
        new Lazy<IUnityContainer>(() =>
        {
            IUnityContainer container = InitialiseUnityContainer();
            return container;
        });

    . . .
}

InitialiseUnityContainer needs to return a new instance of the container:

public static IUnityContainer InitialiseUnityContainer()
{
    UnityContainer container = new UnityContainer();
    container.RegisterType<IMyClass1, MyClass1>();
    container.RegisterType<IMyClass2, MyClass2>();
    return container;
}

After that, you’ll need to resolve the parent dependency; then you can use standard constructor injection. For example, if MyClass1 orchestrates your functionality, you could use something like this:
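
string output = _container.Value.Resolve<IMyClass1>().GetOutput();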


In Practice

Let’s apply all of that to our Functions App. Here are a couple of new interfaces and classes:

public interface IMyClass1
{
    string GetOutput();
}
public interface IMyClass2
{
    void AddString(List<string> strings);
}
public class MyClass1 : IMyClass1
{
    private readonly IMyClass2 _myClass2;
    public MyClass1(IMyClass2 myClass2)
    {
        _myClass2 = myClass2;
    }
    public string GetOutput()
    {
        List<string> teststrings = new List<string>();
        for (int i = 0; i <= 10; i++)
        {
            _myClass2.AddString(teststrings);
        }
        return string.Join(",", teststrings);
    }
}
public class MyClass2 : IMyClass2
{
    public void AddString(List<string> strings)
    {
        // The original implementation isn't shown here; anything that adds an entry to the list will do
        strings.Add("test");
    }
}

And the calling code looks like this:

public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    string output = _container.Value.Resolve<IMyClass1>().GetOutput();

    return req.CreateResponse(HttpStatusCode.OK, new
    {
        output
    });
}

Running it, we get an output that we might expect:


Using the Builder Pattern for Validation

When doing validation, there are a number of options for how you approach it: you could simply have a series of conditional statements testing logical criteria, you could follow the chain of responsibility pattern, use some form of polymorphism with the strategy pattern or even, as I outline below, try using the builder pattern.

Let’s first break down the options. We’ll start with the strategy pattern, because that’s where I started when I was looking into this. It’s a bit like a screwdriver – it’s usually the first thing you reach for and, if you encounter a nail, you might just tap it with the blunt end.

Strategy Pattern

The strategy pattern is just a way of implementing polymorphism: the idea being that you implement some form of logic, and then override key parts of it; for example, in the validation case, you might come up with an abstract base class such as this:

public abstract class ValidatorBase<T>
{
    public ValidationResult Validate(T validateElement)
    {
        ValidationResult result = new ValidationResult();

        if (CheckIsValid(validateElement))
            result = OnIsValid();
        else
            result = OnIsNotValid();

        return result;
    }

    protected virtual ValidationResult OnIsValid()
    {
        return null;
    }

    . . .
}

You can inherit from this for each type of validation and then override key parts of the class (such as `CheckIsValid`).
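
For example, a single check might look something like this (Customer is a made-up type for illustration, and I’m assuming that the elided members of ValidatorBase<T> include a protected abstract CheckIsValid(T) and that OnIsNotValid() is virtual, like OnIsValid()):

public class CustomerNameValidator : ValidatorBase<Customer>
{
    protected override bool CheckIsValid(Customer validateElement) =>
        !string.IsNullOrWhiteSpace(validateElement.Name);

    // Assumes ValidationResult exposes a settable IsValid property
    protected override ValidationResult OnIsNotValid() =>
        new ValidationResult() { IsValid = false };
}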

Finally, you can call all the validation in a single function such as this:

public bool Validate()
{
    bool isValid = true;

    IEnumerable<ValidatorBase> validators = typeof(ValidatorBase).Assembly.GetTypes()
        .Where(t => t.IsSubclassOf(typeof(ValidatorBase)) && !t.IsAbstract)
        .Select(t => (ValidatorBase)Activator.CreateInstance(t));

    foreach (ValidatorBase validator in validators)
    {
        ValidationResult result = validator.Validate();
        if (!result.IsValid)
            isValid = false;
        if (result.StopValidation)
            break;
    }

    return isValid;
}

There are good and bad sides to this pattern: it’s familiar and well tried; unfortunately, it results in a potential explosion of code volume (if you have 30 validation checks, you’ll need 30 classes), which makes it difficult to read. It also doesn’t deal with the scenario whereby one validation condition depends on the success of another.

So what about the chain of responsibility that we mentioned earlier?

Chain of responsibility

This pattern, as described in the linked article above, works by implementing a link between a class that validates your data, and the class that will validate it next: in other words, a linked list.

This idea does work well, and is relatively easy to implement; however, it can become unwieldy to use; for example, you might have a class like this:

private static bool InvokeValidation(ValidationRule rule)
{
    bool result = rule.ValidationFunction.Invoke();
    if (result && rule.Successor != null)
    {
        return InvokeValidation(rule.Successor);
    }
    return result;
}

But to build up the rules, you might have a series of calls such as this:

ValidationRule rule2 = new ValidationRule();
rule2.ValidationFunction = () => MyTest();

ValidationRule rule1 = new ValidationRule();
rule1.ValidationFunction = () => MyTest();
rule1.Successor = rule2;

As you can see, it doesn’t make for very readable code. Admittedly, with a little imagination, you could probably re-order it a little. What you could also do is use the Builder Pattern…

Builder Pattern

The builder pattern came to fame with ASP.NET Core where, during configuration, you could write something like:
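
// For example, a typical Startup.Configure fragment - the specific middleware doesn't matter
app.UseStaticFiles()
    .UseAuthentication()
    .UseMvc();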


So, the idea behind it is that you call a method that returns an instance of itself, allowing you to repeatedly call methods to build a state. This concept overlays quite neatly onto the concept of validation.

You might have something along the lines of:

public class Validator
{
    private List<ValidationRule> _logic = new List<ValidationRule>();

    public Validator AddRule(Func<bool> validationRule)
    {
        ValidationRule logic = new ValidationRule()
        {
            ValidationFunction = validationRule
        };
        _logic.Add(logic);
        return this;
    }
}

So now, you can call:

Validator validator = new Validator()
    .AddRule(() => MyTest())
    .AddRule(() => MyTest2());

I think you’ll agree, this makes the code much neater.
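
Of course, you still need something to actually run the rules; a minimal sketch of such a method on the Validator class (assuming, as above, that ValidationRule simply wraps the Func<bool>) might be:

public bool Validate()
{
    // Run each rule in the order it was added; stop at the first failure
    foreach (ValidationRule rule in _logic)
    {
        if (!rule.ValidationFunction.Invoke())
            return false;
    }
    return true;
}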


Working with Multiple Cloud Providers – Part 3 – Linking Azure and GCP

This is the third and final post in a short series on linking up Azure with GCP (for Christmas). In the first post, I set up a basic Azure function that updated some data in table storage, and then in the second post, I configured the GCP link from PubSub into BigQuery.

In this post, we’ll square this off by adapting the Azure function to post a message directly to PubSub; then, we’ll call the Azure function with Santa’s data, and watch that appear in BigQuery. At least, that was my plan – but Microsoft had other ideas.

It turns out that Azure functions have a dependency on Newtonsoft Json 9.0.1, and the GCP client libraries require 10+. So, instead of being a 10-minute job on Boxing Day to link the two, it turned into a mammoth task. Obviously, I spent the first few hours searching for a way around this – surely other people have faced this, and there’s a redirect, setting, or way of banging the keyboard that makes it work? Turns out not.

The next idea was to experiment with contacting the Google server directly, as is described here. Unfortunately, you still need the Auth libraries.

Finally, I swapped out the function for a WebJob. WebJobs give you a little more flexibility, and have no hard dependencies. So, on with the show (albeit a little more involved than expected).


In this post I described how to create a basic WebJob. Here, we’re going to do something similar. In our case, we’re going to listen for an Azure Service Bus Message, and then update the Azure Storage table (as described in the previous post), and call out to GCP to publish a message to PubSub.

Handling a Service Bus Message

We weren’t originally going to take this approach, but I found that WebJobs play much nicer with a Service Bus message than with trying to get them to fire on a specific endpoint. In terms of scalability, adding a queue in the middle can only be a good thing. We’ll square off the contactable endpoint at the end with a function that will simply convert the endpoint call into a message on the queue. Here’s what the WebJob Program looks like:

public static void ProcessQueueMessage(
    [ServiceBusTrigger("localsantaqueue")] string message,
    TextWriter log,
    [Table("Delivery")] ICollector<TableItem> outputTable)
{
    // Parse the incoming queue message
    TableItem item = Newtonsoft.Json.JsonConvert.DeserializeObject<TableItem>(message);
    if (string.IsNullOrWhiteSpace(item.PartitionKey)) item.PartitionKey = item.childName.First().ToString();
    if (string.IsNullOrWhiteSpace(item.RowKey)) item.RowKey = item.childName;

    // Update the Azure Storage table and push the same message out to GCP PubSub
    outputTable.Add(item);
    GCPHelper.AddMessageToPubSub(item).GetAwaiter().GetResult();

    log.WriteLine("DeliveryComplete Finished");
}

Effectively, this is the same logic as the function (obviously, we now have the GCPHelper, and we’ll come to that in a minute). First, here’s the code for the TableItem model:

public class TableItem : TableEntity
{
    public string childName { get; set; }
    public string present { get; set; }
}

We need to decorate the members with specific serialisation instructions; the reason being that this model is being used by both GCP (which only needs what you see on the screen) and Azure (which needs the inherited properties).


As described here, you’ll need to install the client package for GCP into the WebJob project (this is the package that wouldn’t play nicely with the Azure Function App that we created in post one of this series):

Install-Package Google.Cloud.PubSub.V1 -Pre

Here’s the helper code that I mentioned:

public static class GCPHelper
{
    public static async Task AddMessageToPubSub(TableItem toSend)
    {
        string jsonMsg = Newtonsoft.Json.JsonConvert.SerializeObject(toSend);

        // Point the GCP client libraries at the credentials file deployed alongside the WebJob
        Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS",
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Test-Project-8d8d83hs4hd.json"));
        GrpcEnvironment.SetLogger(new ConsoleLogger());

        PublisherClient publisher = PublisherClient.Create();
        string projectId = "test-project-123456";
        TopicName topicName = new TopicName(projectId, "test");
        SimplePublisher simplePublisher =
            await SimplePublisher.CreateAsync(topicName);
        string messageId =
            await simplePublisher.PublishAsync(jsonMsg);
        await simplePublisher.ShutdownAsync(TimeSpan.FromSeconds(15));
    }
}

I detailed in this post how to create a credentials file; you’ll need to do that to allow the WebJob to be authorised. The Json file referenced above was created using that process.

Azure Config

You’ll need to create an Azure message queue (I’ve called mine localsantaqueue):

I would also download the Service Bus Explorer (I’ll be using it later for testing).

GCP Config

We already have a DataFlow, a PubSub Topic and a BigQuery Database, so GCP should require no further configuration; except to ensure the permissions are correct.

The Service Account user (which I give more details of here) needs to have PubSub permissions. For now, we’ll make them an editor, although, in this instance, they probably only need publish:


We can do a quick test using the Service Bus Explorer and publish a message to the queue:
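
The message body just needs to be something that deserialises into the TableItem above; for example (the values are obviously made up):

{ "childName": "Joe", "present": "Lego" }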

The ultimate test is that we can then see this in the BigQuery Table:

Lastly, the Function

This won’t be a completely function-free post. The last step is to create a function that adds a message to the queue:

public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")]HttpRequestMessage req,
    TraceWriter log,
    [ServiceBus("localsantaqueue")] ICollector<string> queue)
{
    log.Info("C# HTTP trigger function processed a request.");

    var parameters = req.GetQueryNameValuePairs();
    string childName = parameters.First(a => a.Key == "childName").Value;
    string present = parameters.First(a => a.Key == "present").Value;

    // Build the message and add it to the service bus queue
    string json = $"{{ 'childName': '{childName}', 'present': '{present}' }}";
    queue.Add(json);

    return req.CreateResponse(HttpStatusCode.OK);
}

So now we have an endpoint for our imaginary Xamarin app to call into.


Both GCP and Azure are relatively immature platforms for this kind of interaction. The GCP client libraries seem to be missing functionality (and GCP is still heavily weighted away from .Net). The Azure libraries (especially functions) seem to be in a pickle, too – with strange dependencies that make it very difficult to communicate outside of Azure. As a result, this task (which should have taken an hour or so) took a great deal of time, and it was completely unnecessary.

Having said that, it is clearly possible to link the two systems, if a little long-winded.


Working with Multiple Cloud Providers – Part 1 – Azure Function

Regular readers (if there are such things to this blog) may have noticed that I’ve recently been writing a lot about two main cloud providers. I won’t link to all the articles, but if you’re interested, a quick search for either Azure or Google Cloud Platform will yield several results.

Since it’s Christmas, I thought I’d do something a bit different and try to combine them. This isn’t completely frivolous; both have advantages and disadvantages: GCP is very geared towards big data, whereas the Azure Service Fabric provides a lot of functionality that might fit well with a much smaller LOB app.

So, what if we had the following scenario:

Santa has to deliver presents to every child in the world in one night. Santa is only one man* and Google tells me there are 1.9B children in the world, so he contracts out a series of delivery drivers. There need to be around 79M deliveries every hour; let’s assume that each delivery driver can work 24 hours**. If each driver can deliver, say, 100 deliveries per hour, that means we need around 790,000 drivers. Every delivery driver has an app that links to their depot, recording deliveries, schedules, etc.

That would be a good app to write in, say, Xamarin, and maybe have an Azure service running it; here’s the obligatory box diagram:

The service might talk to the service bus, might control stock, send e-mails – all kinds of LOB jobs. Now, I’m not saying for a second that Azure can’t cope with this, but what if we suddenly want all of these instances to feed metrics into a single data store? There are 190*** countries in the world; if each has a depot, then there are ~416K messages / hour going into each Azure service, but 79M / hour going into a single DB. Because it’s Christmas, let’s assume that Azure can’t cope with this, or let’s say that GCP is a little cheaper at this scale; or that we have some Hadoop jobs that we’d like to use on the data. In theory, we can link these systems, which might look something like this:

So, we have multiple instances of the Azure architecture, and they all feed into a single GCP service.


At no point during this post will I attempt to publish 79M records / hour to GCP BigQuery. Neither will any Xamarin code be written or demonstrated – you have to use your imagination for that bit.

Proof of Concept

Given the disclaimer I’ve just made, calling this a proof of concept seems a little disingenuous; but let’s imagine that we know that the volumes aren’t a problem and concentrate on how to link these together.

Azure Service

Let’s start with the Azure Service. We’ll create an Azure function that accepts a HTTP message, updates a DB and then posts a message to Google PubSub.


For the purpose of this post, let’s store our individual instance data in Azure Table Storage. I might come back at a later date and work out how and whether it would make sense to use CosmosDB instead.

We’ll set-up a new table called Delivery:

Azure Function

Now we have somewhere to store the data, let’s create an Azure Function App that updates it. In this example, we’ll create a new Function App from VS:

In order to test this locally, change local.settings.json to point to your storage location described above.
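
The relevant section will look something like this – the santa_azure_table_storage key has to match the Connection name used on the Table binding in the code below, and the values are your storage account connection strings (truncated here):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProto . . .",
    "AzureWebJobsDashboard": "DefaultEndpointsProto . . .",
    "santa_azure_table_storage": "DefaultEndpointsProto . . ."
  }
}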

And here’s the code to update the table:

public static class DeliveryComplete
{
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req,
        TraceWriter log,
        [Table("Delivery", Connection = "santa_azure_table_storage")] ICollector<TableItem> outputTable)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // parse query parameters
        string childName = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "childName", true) == 0)
            .Value;
        string present = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "present", true) == 0)
            .Value;

        // write the delivery details to table storage
        var item = new TableItem()
        {
            childName = childName,
            present = present,
            RowKey = childName,
            PartitionKey = childName.First().ToString()
        };
        outputTable.Add(item);

        return req.CreateResponse(HttpStatusCode.OK);
    }

    public class TableItem : TableEntity
    {
        public string childName { get; set; }
        public string present { get; set; }
    }
}


There are two ways to test this: the first is to just press F5 – that will launch the function as a local service, and you can use Postman or similar to test it; the alternative is to deploy to the cloud. If you choose the latter, then your local.settings.json will not come with you, so you’ll need to add an app setting:

Remember to save this setting, otherwise, you’ll get an error saying that it can’t find your setting, and you won’t be able to work out why – ask me how I know!

Now, if you run a test …
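
For instance, with the function running locally, a request along these lines should do it (the port is whatever the local functions host reports when it starts, and the name and present are obviously made up):

POST http://localhost:7071/api/DeliveryComplete?childName=Joe&present=Lego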

You should be able to see your table updated (shown here using Storage Explorer):


We now have a working Azure function that updates a storage table with some basic information. In the next post, we’ll create a GCP service that pipes all this information into BigTable and then link the two systems.


* Remember, all the guys in Santa suits are just helpers.
** That brandy you leave out really hits the spot!
*** I just Googled this – it seems a bit low to me, too.


Short Walks – Using JoinableTaskFactory

While attending DDD North this year, I attended a talk on avoiding deadlocks in asynchronous programming. During this talk, I was introduced to the JoinableTaskFactory.

This became strangely useful very quickly, when I encountered a problem similar to that described here. There are a couple of solutions to this question, but the one with the least code churn is to simply make the code synchronous; however, if you do that by simply adding something like

.Result

to the end of the async calls, you’re very likely to end up with a deadlock.

One possible solution is to wrap the call using the JoinableTaskFactory, in the following way:

var jtf = new JoinableTaskFactory(new JoinableTaskContext());
var result = jtf.Run<DataResult>(() => _myClass.GetDataAsync());

This allows the task to return on the same synchronisation context without causing a deadlock.


Asynchronous Debugging

Everyone who has spent time debugging errors in code that has multiple threads knows the pain of pressing F10 and seeing the cursor jump to a completely different part of the system (that is, everyone who has ever tried to).

There are a few tools in VS2017 that make this process slightly easier; and this post attempts to provide a brief summary. Obviously the examples in this post are massively contrived.


Let’s start with an error occurring inside a parallel loop. Here’s some code that will cause the error:

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");

    Parallel.For(1, 10, (i) => RunProcess(i));
}

static void RunProcess(int i)
{
    Console.WriteLine($"Running {i}");
    if (i == 3) throw new Exception("error");
}

For some reason, I get an error when a few of these threads have started. I need a tool that tells me some details about the local variables in the threads specifically. Enter the Parallel Watch Window:

Figure 1 – Launch Parallel Watch Window

This gives me a familiar interface, and tells me which thread I’m currently on:

Figure 2 – Parallel Watch Window

However, what I really want to see is the data local to the thread; what if I put “i” in the “Add Watch” cell:

Figure 3 – Add a watch

As you can see, I have a horizontal list of watch expressions, so I can monitor variables in multiple threads at a time.

Flagging a thread

We know there’s an issue with one of these threads, so one possibility is to flag that thread:

Figure 4 – Flagging a thread

Then you can select to show only flagged threads:

Figure 5 – Filter flagged threads

Freezing non-relevant threads

The flags help to only trace the threads that you care about, but if you want to only run the threads that you care about, you can freeze the other threads:

Figure 6 – Freeze Thread

Once you’ve frozen a thread, a small pause icon appears, and that thread will stop:

Figure 7 – Frozen Thread

In order to freeze other threads, simply highlight all the relevant threads (Ctrl-A) and select Freeze.

It’s worth remembering that you can’t freeze a thread that doesn’t exist yet (so your breakpoint in a Parallel.For loop might only show half the threads).

Manual thread hopping

By using freeze, you can stop the debugger from jumping between threads. You can then manually control this process by simply selecting a thread and “Switch To Frame”:

Figure 8 – Switch to Frame

You can switch to a frozen frame but, as soon as you try to progress, you’ll flip back to the first non-frozen frame (unless you thaw it). The consequence of this is that it is possible to switch to a frozen frame, freeze all other frames and then press F10 – your program will then stop dead.

Stack Trace

In a single-threaded application (and in a multi-threaded one), you can always view the stack trace of a given line of executing code. There is also a Parallel Stacks view:

Figure 9 – Parallel Stacks

Selecting any given method will give us the active threads, and allow switching:

Figure 10 – Active Threads

Parallel Stack Trace – Task View

The above view gives you a view of the created threads for your program; but most of the time, you won’t care what threads are created, only the tasks that you’ve spawned (they do not necessarily have a 1-1 relationship). You can simply switch the view in this window to view Tasks instead:

Figure 11 – Task View

Tasks & Threads Windows

There is a tool that allows you to view all active, blocked and scheduled tasks:

Figure 12 – Tasks Window

This allows you to freeze an entire task, switch to a given task, and Freeze All But This:

Figure 13 – Freeze All But This

There is an equivalent window for Threads. It is broadly the same idea; however, it does have one feature that the Tasks window does not, and that is the ability to rename a thread:

Figure 14 – Rename a Thread


The other killer feature both of these windows have is the flag feature. Simply flag a thread, switch to it, and then select “Show Only Flagged Threads” (little flag icon). If you now remove the breakpoints, you can step through only your thread or task!


So, what to do when you have a breakpoint that you only wish to fire for a single thread? Helpfully, the breakpoints window has a filter feature:

Figure 15 – Filter Breakpoints
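
The filter accepts simple expressions over a handful of pseudo-variables – MachineName, ProcessId, ProcessName, ThreadId and ThreadName – which can be combined with & and ||; for example, to restrict the breakpoint to a single thread (the Id here is just an example):

ThreadId = 4260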