Setting up an e-mail Notification System using Logic Apps

One of the new features of Microsoft’s Azure offering is Logic Apps: these are basically a workflow system, not totally dissimilar to Windows Workflow (WF, so as not to get sued by panda bears). I’ve worked with a number of workflow systems in the past, from standard offerings to completely bespoke versions. The problem always seems to be that, once people start using them, they become the first thing you reach for to solve every problem. That’s not to say that you can’t solve every problem using a workflow (obviously, it depends which workflow and what you’re doing), but they are not always the best solution. In fact, they tend to be at their best when they are small and simple.

With that in mind, I thought I’d start with a very straightforward e-mail alert system. In fact, all this is going to do is read a service bus queue and send an e-mail. I discussed a similar idea here, but that was using a custom-written function.

Create a Logic App

The first step is to create a new Logic App project:

There are three options here: create a blank logic app, choose from a template (for example, process a service bus message), or define your own with a given trigger. We’ll start from a blank app:


Obviously, for a workflow to make sense, it has to start on an event or a schedule. In our case, we are going to run from a service bus entry, so let’s pick that from the menu that appears:

In this case, we’ll choose Peek-Lock, so that we don’t lose our message if something fails. I can now provide the connection details, or simply pick the service bus from a list that it already knows about:

It’s not immediately obvious, but you have to provide a connection name here:

If you choose Peek-Lock, you’ll be presented with an explanation of what that means, and then a screen such as the following:

In addition to picking the queue name, you can also choose the queue type (as well as listening to the queue itself, you can run your workflow from the dead-letter queue – which is very useful in its own right, and may even be a better use case for this type of workflow). Finally, you can choose how frequently to poll the queue.

If you now pick “New step”, you should be faced with an option:

In our case, let’s provide a condition (so that only queue messages with “e-mail” in the message result in an e-mail):

Before progressing to the next stage – let’s have a look at the output of this (you can do this by running the workflow and viewing the “raw output”):

Clearly the content data here is not what was entered. A quick search revealed that the data is Base64 encoded, so we have to make a small tweak in advanced mode:
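The tweak is just to wrap the trigger output in a decoding function. In advanced mode, the condition becomes a Logic Apps expression along these lines (`ContentData` is the property exposed by the Service Bus trigger; treat the exact shape as a sketch):

```
@contains(base64ToString(triggerBody()?['ContentData']), 'e-mail')
```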

Okay – finally, we can add the step that actually sends the e-mail. In this instance, I simply picked my e-mail provider, and allowed Azure access to my account:

The last step is to complete the message. Because we only took a “peek-lock”, we now need to manually complete the message. In the designer, we just need to add an action:

Then tell it that we want to use the service bus again. As you can see – that’s one of the options in the list:

Finally, it wants the name of the queue, and asks for the lock token – which it helpfully offers from dynamic content:


To test this, we can add a message to our test queue using the Service Bus Explorer:

I won’t bother with a screenshot of the e-mail, but I will show this:

Which provides a detailed overview of exactly what has happened in the run.


Having a workflow system built into Azure seems like a double-edged sword. On the one hand, you could potentially use it to easily augment functionality and quickly plug holes; on the other hand, you might find very complex workflows popping up all over the system, creating an indecipherable architecture.

Working with Multiple Cloud Providers – Part 3 – Linking Azure and GCP

This is the third and final post in a short series on linking up Azure with GCP (for Christmas). In the first post, I set up a basic Azure function that updated some data in table storage, and then in the second post, I configured the GCP link from PubSub into BigQuery.

In this post, we’ll square this off by adapting the Azure function to post a message directly to PubSub; then, we’ll call the Azure function with Santa’s data, and watch that appear in BigQuery. At least, that was my plan – but Microsoft had other ideas.

It turns out that Azure functions have a dependency on Newtonsoft.Json 9.0.1, and the GCP client libraries require 10+. So instead of being a ten-minute job on Boxing Day to link the two, it turned into a mammoth task. Obviously, I spent the first few hours searching for a way around this – surely other people have faced this, and there’s a redirect, setting, or way of banging the keyboard that makes it work? It turns out not.

The next idea was to experiment with contacting the Google server directly, as is described here. Unfortunately, you still need the Auth libraries.

Finally, I swapped out the function for a WebJob. WebJobs give you a little more flexibility, and have no hard dependencies. So, on with the show (albeit a little more involved than expected).


In this post I described how to create a basic WebJob. Here, we’re going to do something similar. In our case, we’re going to listen for an Azure Service Bus Message, and then update the Azure Storage table (as described in the previous post), and call out to GCP to publish a message to PubSub.

Handling a Service Bus Message

We weren’t originally going to take this approach, but I found that WebJobs play much nicer with a Service Bus message than with trying to get them to fire on a specific endpoint. In terms of scalability, adding a queue in the middle can only be a good thing. We’ll square off the contactable endpoint at the end with a function that will simply convert the endpoint to a message on the queue. Here’s what the WebJob Program looks like:

public static void ProcessQueueMessage(
    [ServiceBusTrigger("localsantaqueue")] string message,
    TextWriter log,
    [Table("Delivery")] ICollector<TableItem> outputTable)
{
    // Parse the incoming message
    TableItem item = Newtonsoft.Json.JsonConvert.DeserializeObject<TableItem>(message);
    if (string.IsNullOrWhiteSpace(item.PartitionKey)) item.PartitionKey = item.childName.First().ToString();
    if (string.IsNullOrWhiteSpace(item.RowKey)) item.RowKey = item.childName;

    // Update the Azure Storage table, then publish to GCP
    outputTable.Add(item);
    GCPHelper.AddMessageToPubSub(item).Wait();

    log.WriteLine("DeliveryComplete Finished");
}

Effectively, this is the same logic as the function (obviously, we now have the GCPHelper, and we’ll come to that in a minute). First, here’s the code for the TableItem model:

public class TableItem : TableEntity
{
    public string childName { get; set; }
    public string present { get; set; }
}

As you can see, this model is used by both GCP (which only needs the properties you see here) and Azure (which also needs the inherited TableEntity properties); where the two serialisers need to differ, we can decorate the members with specific serialisation instructions.


As described here, you’ll need to install the client package for GCP into the Azure Function App that we created in post one of this series (referenced above):

Install-Package Google.Cloud.PubSub.V1 -Pre

Here’s the helper code that I mentioned:

public static class GCPHelper
{
    public static async Task AddMessageToPubSub(TableItem toSend)
    {
        string jsonMsg = Newtonsoft.Json.JsonConvert.SerializeObject(toSend);

        // Point the GCP client libraries at the credentials file (see below)
        Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS",
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Test-Project-8d8d83hs4hd.json"));
        GrpcEnvironment.SetLogger(new ConsoleLogger());

        string projectId = "test-project-123456";
        TopicName topicName = new TopicName(projectId, "test");
        SimplePublisher simplePublisher =
            await SimplePublisher.CreateAsync(topicName);
        string messageId =
            await simplePublisher.PublishAsync(jsonMsg);
        await simplePublisher.ShutdownAsync(TimeSpan.FromSeconds(15));
    }
}

I detailed in this post how to create a credentials file; you’ll need to do that to allow the WebJob to be authorised. The Json file referenced above was created using that process.

Azure Config

You’ll need to create an Azure message queue (I’ve called mine localsantaqueue):
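If you’d rather script this than click through the portal, the queue can also be created with the Azure CLI; the resource group and namespace names below are placeholders (only localsantaqueue comes from this post):

```
az servicebus queue create \
    --resource-group <your-resource-group> \
    --namespace-name <your-servicebus-namespace> \
    --name localsantaqueue
```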

I would also download the Service Bus Explorer (I’ll be using it later for testing).

GCP Config

We already have a DataFlow, a PubSub Topic and a BigQuery Database, so GCP should require no further configuration, except to ensure the permissions are correct.

The Service Account user (which I give more details of here) needs to have PubSub permissions. For now, we’ll make them an editor, although in this instance they probably only need publish:


We can do a quick test using the Service Bus Explorer and publish a message to the queue:

The ultimate test is that we can then see this in the BigQuery Table:

Lastly, the Function

This won’t be a completely function-free post. The last step is to create a function that adds a message to the queue:

public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")]HttpRequestMessage req,
    TraceWriter log,
    [ServiceBus("localsantaqueue")] ICollector<string> queue)
{
    log.Info("C# HTTP trigger function processed a request.");
    var parameters = req.GetQueryNameValuePairs();
    string childName = parameters.First(a => a.Key == "childName").Value;
    string present = parameters.First(a => a.Key == "present").Value;
    string json = $"{{ 'childName': '{childName}', 'present': '{present}' }}";

    queue.Add(json);

    return req.CreateResponse(HttpStatusCode.OK);
}

So now we have an endpoint for our imaginary Xamarin app to call into.


Both GCP and Azure are relatively immature platforms for this kind of interaction. The GCP client libraries seem to be missing functionality (and GCP is still heavily weighted away from .Net). The Azure libraries (especially functions) seem to be in a pickle, too – with strange dependencies that make it very difficult to communicate outside of Azure. As a result, this task (which should have taken an hour or so) took a great deal of time, completely unnecessarily.

Having said that, it is clearly possible to link the two systems, if a little long-winded.


Getting Started With iOS for a C# Programmer – Part Five – Collision

In the previous post in this series, we covered moving an object around the screen. The next thing to consider is how to shoot the aliens, and how they can defeat the player.


The first stage is to create something to collide with. As with other game objects, our aliens will simply be rectangles at this stage. Let’s start with a familiar looking function:

    func createAlien(point: CGPoint) -> SKShapeNode {
        let size = CGSize(width: 40, height: 30)
        let alien = SKShapeNode(rectOf: size)
        alien.position = point
        alien.strokeColor = SKColor(red: 0.0/255.0, green: 200.0/255.0, blue: 0.0/255.0, alpha: 1.0)
        alien.lineWidth = 4
        alien.physicsBody = SKPhysicsBody(rectangleOf: size)
        alien.physicsBody?.affectedByGravity = false
        alien.physicsBody?.isDynamic = true = "alien"
        return alien
    }


So, that will create us a green rectangle – let’s cause them to appear at regular intervals:

    override func didMove(to view: SKView) {
        // ... existing set-up from the previous posts ...
        createAlienSpawnTimer()
    }

    func createAlienSpawnTimer() {
        _ = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(self.timerUpdate), userInfo: nil, repeats: true)
    }

The scheduledTimer calls self.timerUpdate:

    @objc func timerUpdate() {
        let xSpawn = CGFloat(CGFloat(arc4random_uniform(1334)) - CGFloat(667.0))
        let ySpawn = CGFloat(250)
        print(xSpawn, ySpawn)
        let spawnLocation = CGPoint(x: xSpawn, y: ySpawn)
        let newAlien = createAlien(point: spawnLocation)
        self.addChild(newAlien)
    }

So, every second, we’ll get a new alien… But they will just sit there at the minute; let’s get them to try and attack our player:

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
        player?.position.x += playerSpeed!
        self.enumerateChildNodes(withName: "bullet") {
            (node, _) in
            node.position.y += 1
        }
        moveAliens()
    }

    func moveAliens() {
        self.enumerateChildNodes(withName: "alien") {
            (node, _) in
            node.position.y -= 1
            if (node.position.x < (self.player?.position.x)!) {
                node.position.x += CGFloat(arc4random_uniform(5)) - 1 // Veer right
            } else if (node.position.x > (self.player?.position.x)!) {
                node.position.x += CGFloat(arc4random_uniform(5)) - 4 // Veer left
            }
        }
    }


The SpriteKit game engine actually handles most of the logic around collisions for you. There are a few changes needed to our game at this stage, though.


SKPhysicsContactDelegate is what actually surfaces the collision logic, so your GameScene class now needs to look more like this:

class GameScene: SKScene, SKPhysicsContactDelegate {

The game engine needs to be told where this SKPhysicsContactDelegate implementation is; in our case, it’s the same class:

    func createScene(){
        self.physicsBody = SKPhysicsBody(edgeLoopFrom: self.frame)
        self.physicsBody?.isDynamic = false
        self.physicsBody?.affectedByGravity = false
        self.physicsWorld.contactDelegate = self
        self.backgroundColor = SKColor(red: 255.0/255.0, green: 255.0/255.0, blue: 255.0/255.0, alpha: 1.0)
    }

Contact and Collision Masks

The next thing is that you need to tell SpriteKit how these objects interact with each other. There are three concepts here: contact, collision and category.


A category allows each object to adhere to a type of behaviour; for example, the aliens need to pass through each other, but not through bullets; likewise, if we introduced a different type of alien (maybe a different graphic), it might need the same collision behaviour as the existing ones.


The idea behind contact is that you get notified when two objects intersect each other; in our case, we’ll need to know when aliens intersect bullets, and when aliens intersect the player.


Collision deals with what happens when the objects intersect. Unlike contact, this isn’t about getting notified, but the physical interaction. Maybe we have a game where blocks are pushed out of the way – in which case, we might only need collision, but not contact; or, in our case, we don’t have any physical interaction, because contact between two opposing entities results in one of them being removed.


So, the result of all that is that we need to set three new properties on each new object:

        // Assumption: collisionAlien, collisionBullet and collisionPlayer are
        // UInt32 category constants (e.g. 0x1, 0x2, 0x4) declared on the scene
        alien.physicsBody?.categoryBitMask = collisionAlien
        alien.physicsBody?.collisionBitMask = 0
        alien.physicsBody?.contactTestBitMask = collisionPlayer = "alien"

. . .

        bullet.physicsBody?.categoryBitMask = collisionBullet
        bullet.physicsBody?.collisionBitMask = 0
        bullet.physicsBody?.contactTestBitMask = collisionAlien = "bullet"

. . .

        player.physicsBody?.categoryBitMask = collisionPlayer
        player.physicsBody?.collisionBitMask = 0
        player.physicsBody?.contactTestBitMask = 0 = "player"


Once this is done, you have access to the didBegin function; which, bizarrely named as it is, is the function that handles contact. Before we actually write any code in here, let’s create a helper method to determine if two nodes have come into contact:

    func AreTwoObjectsTouching(objA: String, nodeA: SKNode, objB: String, nodeB: SKNode, toRemove: String) -> Bool {
        if (objA == && objB == {
            if (toRemove == objA) {
                RemoveGameItem(item: nodeA)
            } else if (toRemove == objB) {
                RemoveGameItem(item: nodeB)
            }
            return true
        } else if (objB == && objA == {
            if (toRemove == objA) {
                RemoveGameItem(item: nodeB)
            } else if (toRemove == objB) {
                RemoveGameItem(item: nodeA)
            }
            return true
        } else {
            return false
        }
    }

Since the accepted standard for iOS is bad naming, I felt it my duty to continue the tradition. This helper method is effectively all the logic that occurs. As you can see, as we don’t know what has touched what, we have reversed checks and then simply remove the stated item (in our case, that is just a game rule). The didBegin function simply calls this:

    func didBegin(_ contact: SKPhysicsContact) {
        print("bodyA", contact.bodyA.node?.name ?? "")
        print("bodyB", contact.bodyB.node?.name ?? "")
        // If the player and an alien collide then the player dies
        if (AreTwoObjectsTouching(objA: "alien", nodeA: contact.bodyA.node!,
                                  objB: "player", nodeB: contact.bodyB.node!, toRemove: "player")) {
        } else if (AreTwoObjectsTouching(objA: "bullet", nodeA: contact.bodyA.node!,
                                         objB: "alien", nodeB: contact.bodyB.node!, toRemove: "alien")) {
            RemoveGameItem(item: contact.bodyB.node!)
            RemoveGameItem(item: contact.bodyA.node!)
        }
    }

The debug statements at the top are clearly not necessary; however, they do give some insight into what is happening in the game.


We now have a vaguely working game:

You can control the ship, fire, and the aliens are removed when they are hit, along with the player being removed when it is hit. The next stage is to replace the rectangles with images.


Short Walks – GCP Credit Alerts

One of the things that is quite unnerving when you start using GCP is the billing. Unlike Azure (with your MSDN monthly credits), GCP just has a single promotion and that’s it; consequently, there’s no real automatic shut-off before they actually charge you (unlike the regular mails I get from MS telling me they’ve suspended my account until next month).

When you start messing around with BigTable and BigQuery, you can eat up tens, or even hundreds of pounds very quickly, and you might not even realise you’ve done it.

GCP does have a warning, and you can set it to e-mail you at certain intervals within a spending limit:

However, this doesn’t include credit by default. That is, if Google give you a credit to start with (for example, because you’re trying out GCP or, I imagine, if you load your account up beforehand), then that doesn’t get included in your alerts.

Credit Checkbox

There is a checkbox that allows you to switch this behaviour, so that these credit totals are included:

And now you can see how much of your credit you’ve used:

And even receive e-mail warnings:

Disable Billing

One other thing you can do is to disable billing:

Unfortunately, this works differently from Azure, and effectively suspends your project:

Short Walks – NSubstitute – Subclassing and Partial Substitutions

I’ve had this issue a few times recently. Each time I have it, after I’ve worked out what it was, it makes sense, but I keep running into it. The resulting frustration is this post – that way, it’ll come up the next time I search for it on t’internet.

The Error

The error is as follows:

“NSubstitute.Exceptions.CouldNotSetReturnDueToNoLastCallException: ‘Could not find a call to return from.’”

Typically, it seems to occur in one of two circumstances: substituting a concrete class and partially substituting a concrete class; that is:

var myMock = Substitute.ForPartsOf<MyClass>();


var myMock = Substitute.For<MyClass>();


If you were to manually mock out an interface, how would you do that? Well, say you had IMyClass, you’d just do something like this:

public class MyClassMock : IMyClass 
{
	// New and imaginative behaviour goes here
}

All’s good – you get a brand new implementation of the interface, and you can do anything with it. But what would you do if you were trying to unit test a method inside MyClass that called another method inside MyClass; for example:

public class MyClass : IMyClass
{
	public bool Method1()
	{
		int rowCount = ReadNumberOfRowsInFileOnDisk();
		return rowCount > 10;
	}

	public int ReadNumberOfRowsInFileOnDisk()
	{
		// Opens a file, reads it, and returns the number of rows
	}
}

(Let’s not get bogged down in how realistic this scenario is, or whether or not this is good practice – it illustrates a point)

If you want to unit test Method1(), but don’t want to actually read a file from the disk, you’ll want to replace ReadNumberOfRowsInFileOnDisk(). The only real way that you can do this is to subclass the class; for example:

public class MyClassMock : MyClass

You can now test the behaviour on MyClass, via MyClassMock; however, you still can’t* override the method ReadNumberOfRowsInFileOnDisk() because it isn’t virtual. If you make the method virtual, you can override it in the subclass.

The same is true with NSubstitute – if you want to partially mock a class in this way, the same rules apply as if you were rolling your own.
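To make that concrete, here’s a minimal sketch using the names from above – the hand-rolled subclass stands in for what NSubstitute’s ForPartsOf does under the covers, and the stubbed row count of 42 is just an illustrative value:

```csharp
using System;

public class MyClass
{
    public bool Method1()
    {
        int rowCount = ReadNumberOfRowsInFileOnDisk();
        return rowCount > 10;
    }

    // virtual is the crucial part: without it, neither a subclass
    // nor NSubstitute can replace this method
    public virtual int ReadNumberOfRowsInFileOnDisk()
    {
        throw new InvalidOperationException("Don't hit the disk in a unit test");
    }
}

// The hand-rolled equivalent of:
//   var myMock = Substitute.ForPartsOf<MyClass>();
//   myMock.ReadNumberOfRowsInFileOnDisk().Returns(42);
public class MyClassMock : MyClass
{
    public override int ReadNumberOfRowsInFileOnDisk() => 42;
}

public class Program
{
    public static void Main()
    {
        var mock = new MyClassMock();
        Console.WriteLine(mock.Method1()); // 42 > 10, and no file access
    }
}
```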


* There may, or may not be one or two ways to get around this restriction, but let’s at least agree that they are, at best, unpleasant.

Getting Started With iOS for a C# Programmer – Part Four – Controlling an Object

In this post, we covered how to move an object on the screen; this time, we’re going to control it. However, before we do, let’s change the game orientation to landscape only, as that’s really the only thing that makes sense on an iPhone for this type of game.


The first thing to do is to change the code in the GameViewController to force the orientation to landscape:

    override var shouldAutorotate: Bool {
        return true
    }

    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        if UIDevice.current.userInterfaceIdiom == .phone || UIDevice.current.userInterfaceIdiom == .pad {
            return .landscape
        } else {
            return .all
        }
    }

When you run the simulator, the game will now appear on its side; to rectify that, select Hardware -> Rotate Left*:


As you may have noticed from the previous post, we do have a modicum of control over the rectangle; let’s now change that so that we can press to the left and have it move left, or on the right and have it move right.

Left and Right

It turns out that this is pretty easy once you understand how the co-ordinates work in Swift:

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for t in touches {
            self.touchDown(atPoint: t.location(in: player!))
        }
    }

And the touchDown function:

    func touchDown(atPoint pos : CGPoint) {
        print (pos.x, pos.y)
        let halfWidth = (player?.frame.width)! / 2;
        if (pos.x < -halfWidth) {
            playerSpeed? -= CGFloat(1.0);
        } else if (pos.x > halfWidth) {
            playerSpeed? += CGFloat(1.0);
        }
    }

Finally, we need the update event to do something with the playerSpeed:

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
        player?.position.x += playerSpeed!
    }

Now we have a game where we can move the player left and right.

Tidy up

To clean up the screen before we get into firing, it would be nice if the player was lower down the screen, and a slightly different colour:

    func CreatePlayer() -> SKShapeNode {
        let playerSize = CGSize(width: 100, height: 10)
        let player = SKShapeNode(rectOf: playerSize)
        print(self.frame.minY, self.frame.maxY, self.frame.minX, self.frame.maxX, self.frame.width, self.frame.height)
        player.position = CGPoint(x:self.frame.midX, y:-150)
        player.strokeColor = SKColor(red: 0.0/255.0, green: 0.0/255.0, blue: 200.0/255.0, alpha: 1.0)
        player.lineWidth = 1
        player.physicsBody = SKPhysicsBody(rectangleOf: playerSize)
        player.physicsBody?.affectedByGravity = false
        player.physicsBody?.isDynamic = true
        return player
    }

Remember, this is a landscape only game.


Okay – so firing is just an extra clause in our touchDown function:

    func touchDown(atPoint pos : CGPoint) {
        print (pos.x, pos.y)
        let halfWidth = (player?.frame.width)! / 2;
        if (pos.x < -halfWidth) {
            playerSpeed? -= CGFloat(1.0);
        } else if (pos.x > halfWidth) {
            playerSpeed? += CGFloat(1.0);
        } else {
            Fire()
        }
    }

    func Fire() {
        let bullet = CreateBullet(point: (player?.position)!)
        self.addChild(bullet)
    }

    func CreateBullet(point : CGPoint) -> SKShapeNode {
        let bulletSize = CGSize(width: 1, height: 10)
        let bullet = SKShapeNode(rectOf: bulletSize)
        //print(self.frame.minY, self.frame.maxY, self.frame.minX, self.frame.maxX, self.frame.width, self.frame.height)
        bullet.position = point
        bullet.strokeColor = SKColor(red: 0.0/255.0, green: 0.0/255.0, blue: 200.0/255.0, alpha: 1.0)
        bullet.lineWidth = 4
        bullet.physicsBody = SKPhysicsBody(rectangleOf: bulletSize)
        bullet.physicsBody?.affectedByGravity = false
        bullet.physicsBody?.isDynamic = true = "bullet"
        return bullet
    }

All we’ve actually done here is to create a new rectangle, and sourced it right at the centre of the player. We’ve added it to the self construct, so the automatic rendering will pick it up; next, we need to move it:

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
        player?.position.x += playerSpeed!
        self.enumerateChildNodes(withName: "bullet") {
            (node, _) in
            node.position.y += 1
        }
    }


* Remember, when holding an iPhone or iPad, the button should always be on the right hand side – without wishing to judge, anyone that feels different is wrong, and very possibly evil.


Working with Multiple Cloud Providers – Part 2 – Getting Data Into BigQuery

In this post, I described how we might attempt to help Santa and his delivery drivers to deliver presents to every child in the world, using the combined power of Google and Microsoft.

In this, the second part of the series (there will be one more), I’m going to describe how we might set up a GCP pipeline that feeds that data into BigQuery (Google’s big data warehouse offering). We’ll first set up BigQuery, then the PubSub topic, and finally, we’ll set up the dataflow, ready for Part 3, which will join the two systems together.


Once you navigate to the BigQuery section of the GCP console, you’ll be able to create a Dataset:

You can now set up a new table. As this is an illustration, we’ll keep it as simple as possible, but you can see that this might be much more complex:
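As a sketch, the minimal table for this series only needs the two fields used elsewhere in these posts; in BigQuery’s JSON schema format that would be something like:

```json
[
  { "name": "childName", "type": "STRING", "mode": "NULLABLE" },
  { "name": "present",   "type": "STRING", "mode": "NULLABLE" }
]
```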

One thing to bear in mind about BigQuery, and cloud data storage in general, is that it often makes sense to de-normalise your data – storage is often much cheaper than CPU time.


Now we have somewhere to put the data; we could simply have the Azure function write the data into BigQuery. However, we might then run into problems if the data flow suddenly spiked. For this reason, Google recommends the use of PubSub as a shock absorber.

Let’s create a PubSub topic. I’ve written about this in more detail here:


The last piece of the jigsaw is Dataflow. Dataflow can be used for much more complex tasks than simply taking data from one place and putting it in another, but in this case, that’s all we need. Before we can set up a new dataflow job, we’ll need to create a storage bucket:

We’ll create the bucket as Regional for now:

Remember that the bucket name must be unique (so no-one can ever pick pcm-data-flow-bucket again!)

Now, we’ll move onto the DataFlow itself. We get a number of dataflow templates out of the box; and we’ll use one of those. Let’s launch dataflow from the console:

Here we create a new Dataflow job:

We’ll pick “PubSub to BigQuery”:

You’ll then get asked for the name of the topic (which was created earlier) and the storage bucket (again, created earlier); your form should look broadly like this when you’re done:

I strongly recommend specifying a maximum number of workers, at least while you’re testing.


Finally, we’ll test it. PubSub allows you to publish a message:

Next, visit the Dataflow to see what’s happening:

Looks interesting! Finally, in BigQuery, we can see the data:


We now have the two separate cloud systems functioning independently. Step three will be to join them together.

Working with Multiple Cloud Providers – Part 1 – Azure Function

Regular readers (if there are such things to this blog) may have noticed that I’ve recently been writing a lot about two main cloud providers. I won’t link to all the articles, but if you’re interested, a quick search for either Azure or Google Cloud Platform will yield several results.

Since it’s Christmas, I thought I’d do something a bit different and try to combine them. This isn’t completely frivolous; both have advantages and disadvantages: GCP is very geared towards big data, whereas the Azure Service Fabric provides a lot of functionality that might fit well with a much smaller LOB app.

So, what if we had the following scenario:

Santa has to deliver presents to every child in the world in one night. Santa is only one man* and Google tells me there are 1.9B children in the world, so he contracts out to a series of delivery drivers. That means around 79M deliveries every hour, assuming each delivery driver can work 24 hours**. If each driver can make, say, 100 deliveries per hour, we need around 790,000 drivers. Every delivery driver has an app that links to their depot; recording deliveries, schedules, etc.
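The arithmetic above is easy to sanity-check with a quick sketch (the 100-deliveries-per-hour rate is the assumption from the text):

```csharp
using System;

public class Program
{
    public static void Main()
    {
        double children = 1_900_000_000;        // ~1.9B children worldwide
        double hoursAvailable = 24;             // drivers work the full day
        double deliveriesPerDriverHour = 100;   // assumed rate per driver

        double deliveriesPerHour = children / hoursAvailable;
        double driversNeeded = deliveriesPerHour / deliveriesPerDriverHour;

        Console.WriteLine($"{deliveriesPerHour:N0} deliveries per hour"); // ~79M
        Console.WriteLine($"{driversNeeded:N0} drivers needed");          // ~790K
    }
}
```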

That would be a good app to write in, say, Xamarin, and maybe have an Azure service running it; here’s the obligatory box diagram:

The service might talk to the service bus, might control stock, send e-mails, all kinds of LOB jobs. Now, I’m not saying for a second that Azure can’t cope with this, but what if we suddenly want all of these instances to feed metrics into a single data store? There are 190*** countries in the world; if each has a depot, then there are ~416K messages/hour going into each Azure service – but 79M/hour going into a single DB. Because it’s Christmas, let’s assume that Azure can’t cope with this, or let’s say that GCP is a little cheaper at this scale; or that we have some Hadoop jobs that we’d like to use on the data. In theory, we can link these systems; which might look something like this:

So, we have multiple instances of the Azure architecture, and they all feed into a single GCP service.


At no point during this post will I attempt to publish 79M records / hour to GCP BigQuery. Neither will any Xamarin code be written or demonstrated – you have to use your imagination for that bit.

Proof of Concept

Given the disclaimer I’ve just made, calling this a proof of concept seems a little disingenuous; but let’s imagine that we know that the volumes aren’t a problem and concentrate on how to link these together.

Azure Service

Let’s start with the Azure Service. We’ll create an Azure function that accepts a HTTP message, updates a DB and then posts a message to Google PubSub.


For the purpose of this post, let’s store our individual instance data in Azure Table Storage. I might come back at a later date and work out how and whether it would make sense to use CosmosDB instead.

We’ll set up a new table called Delivery:

Azure Function

Now we have somewhere to store the data, let’s create an Azure Function App that updates it. In this example, we’ll create a new Function App from VS:

In order to test this locally, change local.settings.json to point to your storage location described above.

And here’s the code to update the table:

    public static class DeliveryComplete
    {
        public static HttpResponseMessage Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, 
            TraceWriter log,            
            [Table("Delivery", Connection = "santa_azure_table_storage")] ICollector<TableItem> outputTable)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // Parse the query parameters
            string childName = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "childName", true) == 0)
                .Value;
            string present = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "present", true) == 0)
                .Value;

            var item = new TableItem()
            {
                childName = childName,
                present = present,                
                RowKey = childName,
                PartitionKey = childName.First().ToString()                
            };
            outputTable.Add(item);

            return req.CreateResponse(HttpStatusCode.OK);
        }

        public class TableItem : TableEntity
        {
            public string childName { get; set; }
            public string present { get; set; }
        }
    }


There are two ways to test this. The first is to just press F5: that will launch the function as a local service, and you can use Postman or similar to test it. The alternative is to deploy to the cloud. If you choose the latter, then your local.settings.json will not come with you, so you’ll need to add an app setting:

Remember to save this setting, otherwise, you’ll get an error saying that it can’t find your setting, and you won’t be able to work out why – ask me how I know!

Now, if you run a test …

You should be able to see your table updated (shown here using Storage Explorer):


We now have a working Azure function that updates a storage table with some basic information. In the next post, we’ll create a GCP service that pipes all this information into BigTable and then link the two systems.


* Remember, all the guys in Santa suits are just helpers.
** That brandy you leave out really hits the spot!
*** I just Googled this – it seems a bit low to me, too.


A C# Programmer’s Guide to Google Cloud Pub Sub Messaging

The Google Cloud Platform provides a Publish / Subscribe system called ‘PubSub’. In this post I wrote a basic guide on setting up RabbitMQ, and here I wrote about ActiveMQ. In this post I wrote about using the Azure messaging system. Here, I’m going to give an introduction to using the GCP PubSub system.


The above systems that I’ve written about in the past are fully featured (yes, including Azure) message bus systems. While the GCP offering is a Message Bus system of sorts, it is definitely lacking some of the features of the other platforms. I suppose this stems from the fact that, in the GCP case, it serves a specific purpose, and is heavily geared toward that purpose.

Other messaging systems do offer the Pub / Sub model. The idea is that you create a topic, and anyone that’s interested can subscribe to it. Once you’ve subscribed, you’re guaranteed* to get at least one delivery of each published message. You can also, kind of, simulate a message queue, because more than one subscriber can take messages from a single subscription.


If you want to follow along with the post, you’ll need to have a GCP subscription, and a GCP project configured.


In order to set-up a new topic, we’re going to navigate to the PubSub menu in the console (you may be prompted to Enable PubSub when you arrive).

As you can see, you’re inundated with choice here. Let’s go for “Create a topic”:

Cloud Shell

You’ve now created a topic; however, that isn’t the only way to do this. Google are big on the Cloud Shell, so you can create a topic using that, too; to do so, select the Cloud Shell icon:

Once you get the cloud shell, you can use the following command**:

gcloud beta pubsub topics create "test"

Subscriptions and Publishing

You can publish a message now if you like; either from the console:

Or from the Cloud Shell:

gcloud beta pubsub topics publish "test" "message"

Both will successfully publish a message that will get delivered to all subscribers. The problem is that you haven’t created any subscribers yet, so it just dissipates into the ether***.

You can see there are no subscriptions, because the console tells you****:

Let’s create one:

Again, you can create a subscription from the cloud shell:

gcloud beta pubsub subscriptions create "mysubscription" --topic "test"

So, we now have a subscription, and a message.

Consuming messages

In order to consume messages in this instance, let’s create a little cloud function. I’ve previously written about creating these here. Instead of creating a HTTP trigger, this time, we’re going to create a function that reacts to something on a cloud Pub/Sub topic:

Select the relevant topic; the default code just writes the message out to the console, so that’ll do:

/**
 * Triggered from a message on a Cloud Pub/Sub topic.
 * @param {!Object} event The Cloud Functions event.
 * @param {!Function} callback The callback function.
 */
exports.subscribe = function subscribe(event, callback) {
  // The Cloud Pub/Sub Message object.
  const pubsubMessage = event.data;

  // We're just going to log the message to prove that
  // it worked.
  console.log(Buffer.from(pubsubMessage.data, 'base64').toString());

  // Don't forget to call the callback.
  callback();
};
So, now we have a subscription:

Let’s see what happens when we artificially push a message to it.

If we now have a look at the Cloud Function, we can see that something has happened:

And if we select “View Logs”, we can see what:

It worked! Next…

Create Console App

Now we have something that will react to a message, let’s try and generate one programmatically, in C# from a console app. Obviously the first thing to do is to install a NuGet package that isn’t past the beta stage yet:

Install-Package Google.Cloud.PubSub.V1 -Pre


In this post I described how you might create a credentials file. You’ll need to do that again here (and, I think anywhere that you want to access GCP from outside of the cloud).

In APIs & Services, select “Create credentials”:

Again, select a JSON file:

The following code publishes a message to the topic:

static async Task Main(string[] args)
{
    // Tell the client library where to find the credentials file
    Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS",
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "my-credentials-file.json"));
    GrpcEnvironment.SetLogger(new ConsoleLogger());

    string projectId = "test-project-123456";
    var topicName = new TopicName(projectId, "test");

    // SimplePublisher handles batching and connection management for us
    SimplePublisher simplePublisher = await SimplePublisher.CreateAsync(topicName);
    string messageId = await simplePublisher.PublishAsync("test message");
    await simplePublisher.ShutdownAsync(TimeSpan.FromSeconds(15));
}

And we can see that message in the logs of the cloud function:


Unless you choose otherwise, the service account will look something like this:

The Editor permission that it gets by default is a sort of God permission. This can be made more fine-grained by removing it and selecting specific permissions; in this case, Pub/Sub -> Publisher. It’s worth bearing in mind that as soon as you remove all permissions, the account is removed, so try to maintain at least one permission (Project Browser seems suitably innocuous).


* Google keeps messages for up to 7 days, so the guarantee has a time limit.

** gcloud may need to be initialised. If it does then:

gcloud init
gcloud components install beta

*** This is a big limitation. Topic subscriptions in other systems work like this too, but those systems also give you the option of a queue – i.e. a place for messages to live that no-one is listening for.

**** If you create a subscription in the Cloud Shell, it will not show in the console until you refresh with F5 (there may be a timeout, but I didn’t wait that long). The problem here is that refreshing messes up the shell window.


Google Cloud Datastore – Setting up a new Datastore and accessing it from a console application

Datastore is a NoSQL offering from Google. It’s part of their Google Cloud Platform (GCP). The big mind shift, if you’re used to a relational database, is to remember that each row (although they aren’t really rows) in a table (they aren’t really tables) can be different. The best way I could think about it was a text document; each line can have a different number of words, numbers and symbols.

However, just because it isn’t relational, doesn’t mean you don’t have to consider the structure; in fact, it actually seems to mean that there is more onus on the designer to consider what and where the data will be used.


In order to follow this post, you’ll need an account on GCP, and a Cloud Platform Project.

Set-up a New Cloud Datastore

The first thing to do is to set-up a new Datastore:


The next step is to select a Zone. The big thing to consider, in terms of cost and speed, is to co-locate your data where possible. Specifically, you’ll incur egress charges (that is, you’ll be charged as your data leaves its zone), so your zone should be near to, and co-located with, anything that accesses it. Obviously, in this example, you’re accessing the data from where your machine is located, so pick a zone that is close to where you live.

In Britain, we’re in Europe-West-2:

Entities and Properties

The next thing is to set up a new entity. An entity is loosely analogous to a row (and its kind to a table).

Now we have an entity, the entity needs some properties. A property, again, is loosely analogous to a field, if fields were not required to be consistent throughout the table. I’m unsure how this works behind the scenes, but the console appears to simply null out the columns that have no value; I suspect this may just be a display artefact.
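To picture this schemaless shape: two entities of the same kind can legitimately carry different property sets. Sketched as JSON, with property names borrowed from the code later in this post and keys that are purely illustrative:

```json
[
  { "__key__": "entity1", "test1": "Hello, World", "test2": "Goodbye, World" },
  { "__key__": "entity2", "test1": "Hello again", "new field": "test" }
]
```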

You can set the value (as above), and then query the data, either in a table format (as below):

Or, you can use a SQL-like syntax (as below).
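That SQL-like query language is called GQL. As a sketch, using the kind and property names from the console-app code later in this post, a query might look like:

```sql
SELECT * FROM MyTest WHERE test1 = 'Hello, World'
```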


In order to access the datastore from outside the GCP, you’ll need a credentials file. You’ll need to start off in the Credentials screen:

In this instance, we’ll set-up a service account key:

This creates the key as a json file:

The file should look broadly like this:

{
  "type": "service_account",
  "project_id": "my-project-id",
  "private_key_id": "private_key_id",
  "private_key": "-----BEGIN PRIVATE KEY-----\nkeydata\n-----END PRIVATE KEY-----\n",
  "client_email": "",
  "client_id": "clientid",
  "auth_uri": "",
  "token_uri": "",
  "auth_provider_x509_cert_url": "",
  "client_x509_cert_url": ""
}

Keep hold of this file, as you’ll need it later.

Client Library

There is a .Net client library provided for accessing this functionality from your website or desktop app. What we’ll do next is access that entity from a console application. The obvious first step is to create one:

Credentials again

Remember that credentials file I said to hang on to? Well, now you need it. It needs to be accessible from your application; there are a number of ways to address this problem, and the one that I’m demonstrating here is probably not a sensible solution in real life, but for the purposes of testing, it works fine.

Copy the credentials file into your project directory and include it in the project, then, set the properties to:

Build Action: None
Copy to Output Directory: Copy if Newer
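If you prefer editing the project file directly, those two properties correspond to an entry like this in the .csproj (the file name here is whatever your credentials file is called):

```xml
<ItemGroup>
  <!-- "Build Action: None" + "Copy to Output Directory: Copy if Newer" -->
  <None Include="my-credentials-file.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```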

GCP Client Package

You’ll need to install the correct NuGet package:

Install-Package Google.Cloud.Datastore.V1

Your Project ID

As you use the GCP more, you’ll come to appreciate that the project ID is very important. You’ll need to make a note of it (if you can’t find it, simply select Home from the hamburger menu):

The Code

All the pieces are now in place, so let’s write some code to access the datastore:

// Tell the client library where to find the credentials file
Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS",
    Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "my-credentials-file.json"));
GrpcEnvironment.SetLogger(new ConsoleLogger());

// Your Google Cloud Platform project ID
string projectId = "my-project-id";

DatastoreClient datastoreClient = DatastoreClient.Create();
DatastoreDb db = DatastoreDb.Create(projectId, "TestNamespace", datastoreClient);

string kind = "MyTest";
string name = "newentitytest3";
KeyFactory keyFactory = db.CreateKeyFactory(kind);
Key key = keyFactory.CreateKey(name);

var task = new Entity
{
    Key = key,
    ["test1"] = "Hello, World",
    ["test2"] = "Goodbye, World",
    ["new field"] = "test"
};

// Write the entity inside a transaction
using (DatastoreTransaction transaction = db.BeginTransaction())
{
    transaction.Upsert(task);
    transaction.Commit();
}

If you now check, you should see that your Datastore has been updated:

There are a few things to note here; the first is that you will need to select the right Namespace and Kind. Namespace defaults to [default], and so you won’t see your new records until you select it.

When things go wrong

The above instructions are deceptively simple; however, getting this example working was by no means straightforward. Fortunately, when you have a problem with GCP and you ask on StackOverflow, you get answered by Jon Skeet. The following is a summary of an error that I encountered.


System.InvalidOperationException: ‘The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See for more information.’

The error occurred on the BeginTransaction line.

The ConsoleLogger above isn’t just there for show, and does give some additional information; in this case:

D1120 17:59:00.519509 Grpc.Core.Internal.UnmanagedLibrary Attempting to load native library "C:\Users\pmichaels\.nuget\packages\grpc.core\1.4.0\lib\netstandard1.5\..\..\runtimes\win\native\grpc_csharp_ext.x64.dll"
D1120 17:59:00.600298 Grpc.Core.Internal.NativeExtension gRPC native library loaded successfully.
E1120 17:59:02.176461 0 C:\jenkins\workspace\gRPC_build_artifacts\platform\windows\workspace_csharp_ext_windows_x64\src\core\lib\security\credentials\plugin\plugin_credentials.c:74: Getting metadata from plugin failed with error: Exception occured in metadata credentials plugin.

It turns out that the code was failing somewhere in here. Finally, with much help, I managed to track the error down to a firewall restriction.