Reading Azure Service Bus Queue Names from the Config File

In a previous post, I wrote about how you might read a message from a service bus queue. However, with Azure Functions (and WebJobs) comes the ability to have Microsoft do some of this plumbing code for you.

I have a queue here (taken from the service bus explorer):

I can read this in an Azure function; let’s create a new Azure Functions App:

This time, we’ll create a Service Bus Queue Triggered function:

Out of the box, that will give you this:

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([ServiceBusTrigger("testqueue", AccessRights.Listen, Connection = "")]string myQueueItem, TraceWriter log)
    {
        log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}

There are a few things that we’ll probably want to change here. The first is “Connection”. We can remove that parameter altogether, and then add a row to the local.settings.json file (which can be overridden later inside Azure). Out of the box, you get AzureWebJobsStorage and AzureWebJobsDashboard, which both accept a connection string to an Azure Storage Account. You can also add AzureWebJobsServiceBus, which accepts a connection string to the service bus:

"Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=teststorage1…",
    "AzureWebJobsDashboard": "DefaultEndpointsProtocol=https;AccountName=teststorage1…",
    "AzureWebJobsServiceBus": "Endpoint=sb://pcm-servicebustest.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=…"
  }

If you run the job, it will now pick up any outstanding entries in that queue. But what if you don’t know the queue name at compile time; or what if the queue name turns out to be different from what you expected? To illustrate the point: here, I’m looking for “testqueue1”, but the queue name (as you saw earlier) is “testqueue”:

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([ServiceBusTrigger("testqueue1", AccessRights.Listen)]string myQueueItem, TraceWriter log)
    {
        log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}

Obviously, if you’re looking for a queue that doesn’t exist, bad things happen:

To fix this, I have to change the code… which is, broadly speaking, a bad thing. What we can do is configure the queue name in the config file, like this:


"Values": {
    "AzureWebJobsStorage": " . . . ",
    . . .,
    "queue-name":  "testqueue"
  }

And we can have the function look in the config file by changing the queue name:

[FunctionName("Function1")]
public static void Run([ServiceBusTrigger("%queue-name%", AccessRights.Listen)]string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}

The pattern of supplying a variable name in the format “%variable-name%” seems to work across other triggers and bindings for Azure Functions.
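For example (and this is just an illustrative sketch – the function name and the container-name setting are assumptions, not something from the earlier screenshots), the same trick works with a blob trigger:

[FunctionName("BlobFunction")]
public static void Run([BlobTrigger("%container-name%/{name}")]Stream myBlob, string name, TraceWriter log)
{
    // %container-name% is resolved from the application settings at runtime
    log.Info($"C# Blob trigger function processed blob: {name}");
}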

Deployment

That’s now looking much better, but what happens when the function gets deployed? Let’s see:

We can now see that the function is deployed:

At the minute, it won’t do anything, because it’s looking for a queue name in a setting that only exists locally. Let’s fix that:

Remember to save the changes.

Looking at the logs confirms that this now runs correctly.

Short Walks – C# Pattern Matching to Match Ranges

Back in 2010, working at the time in a variety of languages, including VB, I asked this question on StackOverflow. In VB, you could put a range inside a switch statement, and I wanted to know how you could do that in C#. The (correct) answer at the time was that you can’t.

Fast forward just eight short years, and suddenly it is possible: the new pattern matching feature in C# 7.0 makes it so.

You can now write something like this (this is C# 7.1 because of Async Main):


static async Task Main(string[] args)
{            
    for (int i = 0; i <= 20; i++)
    {
        switch (i)
        {
            case var test when test <= 2:
                Console.WriteLine("2 or less");
                break;
 
            case var test when test > 2 && test < 10:
                Console.WriteLine("Between 2 and 10");
                break;
 
            case var test when test >= 10:
                Console.WriteLine("10 or more");
                break;
        }
 
        await Task.Delay(500);
    }
 
    Console.ReadLine();
}

References

https://docs.microsoft.com/en-us/dotnet/csharp/pattern-matching

https://visualstudiomagazine.com/articles/2017/02/01/pattern-matching.aspx

Creating a Scheduled Azure Function

I’ve previously written about creating Azure functions. I’ve also written about how to react to service bus queues. In this post, I wanted to cover creating a scheduled function. Basically, these allow you to create a scheduled task that executes at a given interval, or at a given time.

Timer Trigger

The first thing to do is create a function with a type of Timer Trigger:
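Out of the box (at the time of writing), that gives you something along these lines – the schedule in the attribute is the template default, which we’ll unpick below:

[FunctionName("TimerFunction")]
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}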

Schedule / CRON format

The next thing is to understand the schedule, or CRON, format. The format is:

{second} {minute} {hour} {day} {month} {day-of-week}

Scheduled Intervals

The example you’ll see when you create this looks like this:

0 */5 * * * *

The notation */[number] means “once every [number]”; so */5 means once every 5… and you then look at which placeholder it’s in to work out 5 of what; in this case, it means once every 5 minutes. So, for example:

*/10 * * * * *

Would be once every 10 seconds.

Scheduled Times

Specifying numbers means the schedule will execute at that time; so:

0 0 0 * * *

Would execute every time the hour, minute and second all hit zero – so once per day at midnight; and:

0 * * * * *

Would execute every time the second hits zero – so once per minute; and:

0 0 * * * 1

Would execute once per hour on a Monday (as the last placeholder is the day of the week).

Time constraints

These can be specified in any column in the format [lower bound]-[upper bound], and they restrict the timer to a range of values; for example:

0 */20 5-10 * * *

Means every 20 minutes between 5 and 10am (as you can see, the different types can be used in conjunction).

Asterisks (*)

You’ll notice above that there is an asterisk in every placeholder for which a value has not been specified. The asterisk signifies that the schedule will execute at every interval for that placeholder; for example:

* * * * * *

Means every second; and:

0 * * * * *

Means every minute.

Back to the function

Upon starting, the function will detail when the next several executions will take place:

But what if you don’t know what the schedule will be at compile time? As with many of the variables in an Azure Function, you can simply swap the value out for a placeholder:

[FunctionName("MyFunc")]
public static void Run([TimerTrigger("%schedule%")]TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}

This value can then be provided inside the local.settings.json:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProto . . .",
    "AzureWebJobsDashboard": "DefaultEndpointsProto . . .",
    "schedule": "0 * * * * *"
  }
}

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer

http://cronexpressiondescriptor.azurewebsites.net/?expression=1+*+*+*+*+*&locale=en

Using Unity With Azure Functions

Azure Functions are the relatively new kids on the block when it comes to the Microsoft Azure stack. I’ve previously written about them, and about their limitations. One such limitation seems to be that they don’t lend themselves very well to dependency injection. However, it is certainly not impossible to make them do so.

In this particular post, we’ll have a look at how you might use an IoC container (Unity in this case) in order to leverage DI inside an Azure function.

New Azure Functions Project

I’ve covered this before in previous posts; in Visual Studio, you can now create a new Azure Functions project:

That done, you should have a project that looks something like this:

As you can see, the elephant in the room here is that there are no functions; let’s correct that:

Be sure to call your function something descriptive… like “Function1”. For the purposes of this post, it doesn’t matter what kind of function you create, but I’m going to create a “Generic Web Hook”.

Install Unity

The next step is to install Unity (this was the latest version at the time of writing):

Install-Package Unity -Version 5.5.6

Static Variables Inside Functions

It’s worth bearing in mind that a static variable works the way you would expect were the function a locally hosted process. That is, if you write a function such as this:

// A static field is shared across invocations for as long as the process is alive
private static int test = 0;

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    System.Threading.Thread.Sleep(10000);
    log.Info($"Index is {test}");
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {test++}!"
    });
}

And access it from a web browser, or Postman, or both at the same time, you’ll get incrementing numbers:

Whilst the values are shared across the instances, you can’t cause a conflict by updating something in one function while reading it in another (I tried pretty hard to cause this to break). What this means, then, is that we can store an IoC container that will maintain state across function calls. Obviously, this is not intended for persisting state, so you should assume your state could be lost at any time (as indeed it can).

Registering the Unity Container

One method of doing this is to use the Lazy<T> object. This pretty much passed me by in .Net 4 (which is, apparently, when it came out). It basically provides a slightly neater way of doing this kind of thing:

private List<string> _myList;
public List<string> MyList
{
    get
    {
        if (_myList == null)
        {
            _myList = new List<string>();
        }
        return _myList;
    }
}

The “lazy” method would be:

public Lazy<List<string>> MyList = new Lazy<List<string>>(() =>
{
    List<string> newList = new List<string>();
    return newList;
});

With that in mind, we can do something like this:

public static class Function1
{
     private static Lazy<IUnityContainer> _container =
         new Lazy<IUnityContainer>(() =>
         {
             IUnityContainer container = InitialiseUnityContainer();
             return container;
         });

InitialiseUnityContainer needs to return a new instance of the container:

public static IUnityContainer InitialiseUnityContainer()
{
    UnityContainer container = new UnityContainer();
    container.RegisterType<IMyClass1, MyClass1>();
    container.RegisterType<IMyClass2, MyClass2>();
    return container;
}

After that, you’ll need to resolve the parent dependency; then you can use standard constructor injection. For example, if MyClass1 orchestrates your functionality, you could use:

_container.Value.Resolve<IMyClass1>().DoStuff();

In Practice

Let’s apply all of that to our Functions App. Here are two new classes, along with their interfaces:

public interface IMyClass1
{
    string GetOutput();
}
 
public interface IMyClass2
{
    void AddString(List<string> strings);
}
public class MyClass1 : IMyClass1
{
    private readonly IMyClass2 _myClass2;
 
    public MyClass1(IMyClass2 myClass2)
    {
        _myClass2 = myClass2;
    }
 
    public string GetOutput()
    {
        List<string> teststrings = new List<string>();
 
        for (int i = 0; i <= 10; i++)
        {
            _myClass2.AddString(teststrings);
        }
 
        return string.Join(",", teststrings);
    }
}
public class MyClass2 : IMyClass2
{
    public void AddString(List<string> strings)
    {
        Thread.Sleep(1000);
        strings.Add($"{DateTime.Now}");
    }
}

And the calling code looks like this:

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
 
    string output = _container.Value.Resolve<IMyClass1>().GetOutput();
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        output
    });
}

Running it, we get an output that we might expect:

References

https://github.com/Azure/azure-webjobs-sdk/issues/1206

Short Walks – Entity Framework – Exception calling “SetData” with “2” argument(s)

The full exception is a little more verbose, but not much more helpful:

Exception calling "SetData" with "2" argument(s): "Type 'Microsoft.VisualStudio.ProjectSystem.VS.Implementation.Package.Automation.OAProject' in assembly
'Microsoft.VisualStudio.ProjectSystem.VS.Implementation, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not marked as serializable."
At C:\myapp\packages\EntityFramework.6.2.0\tools\EntityFramework.psm1:722 char:5
+ $domain.SetData('startUpProject', $startUpProject)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : SerializationException

System.NullReferenceException: Object reference not set to an instance of an object.
at System.Data.Entity.Migrations.Extensions.ProjectExtensions.GetProjectTypes(Project project, Int32 shellVersion)
at System.Data.Entity.Migrations.Extensions.ProjectExtensions.IsWebProject(Project project)
at System.Data.Entity.Migrations.MigrationsDomainCommand.GetFacade(String configurationTypeName, Boolean useContextWorkingDirectory)
at System.Data.Entity.Migrations.UpdateDatabaseCommand.<>c__DisplayClass2.<.ctor>b__0()
at System.Data.Entity.Migrations.MigrationsDomainCommand.Execute(Action command)

When does it happen?

Typically, you get it when you’re trying to do an EF operation, for example:

Update-Database

But your start-up project does not have an app.config or web.config file that has a connection string pointing to that database.
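For example, the kind of entry EF is expecting to find in the start-up project’s config file is something along these lines (the context name and connection string here are purely illustrative):

<connectionStrings>
  <add name="MyAppContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyAppDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>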

Why does it happen?

EF looks in your start-up project to find your app.config or web.config and work out where your database is. Clearly, this could be a slightly better worded error.

Short Walks – XUnit Tests Not Appearing in Test Explorer

On occasion, you may go into Test Explorer knowing that you have xUnit tests within the solution; the tests are in a public class, they are public, and they are decorated correctly (for example, with [Fact]). However, they do not appear in the Test Explorer.

If you have MSTest tests, you may find that they do appear in the Test Explorer – only the xUnit tests do not.

Why?

To run xUnit tests from the command line, you’ll need the console runner package.

To run xUnit tests from within Visual Studio, you’ll need the Visual Studio runner package.
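At the time of writing, those are the console runner and the Visual Studio runner respectively (see the xUnit documentation referenced below); for example:

Install-Package xunit.runner.console
Install-Package xunit.runner.visualstudio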

References

https://xunit.github.io/docs/nuget-packages.html

Using the Builder Pattern for Validation

When doing validation, there are a number of options for how you approach it: you could simply have a series of conditional statements testing logical criteria, you could follow the chain of responsibility pattern, use some form of polymorphism with the strategy pattern, or even, as I outline below, try using the builder pattern.

Let’s first break down the options. We’ll start with the strategy pattern, because that’s where I started when I was looking into this. It’s a bit like a screwdriver – it’s usually the first thing you reach for and, if you encounter a nail, you might just tap it with the blunt end.

Strategy Pattern

The strategy pattern is just a way of implementing polymorphism: the idea being that you implement some form of logic, and then override key parts of it; for example, in the validation case, you might come up with an abstract base class such as this:

public abstract class ValidatorBase<T>
{        
    public ValidationResult Validate(T validateElement)
    {
        ValidationResult result = new ValidationResult();
 
        if (CheckIsValid(validateElement))
        {
            result = OnIsValid();                
        }
        else
        {
            result = OnIsNotValid();
        }
 
        return result;
    }
 
    protected virtual ValidationResult OnIsValid()
    {
        return null;
    }

    . . .

You can inherit from this for each type of validation and then override key parts of the class (such as `CheckIsValid`).
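For example, a concrete validator might look something like this (MyModel, its Name property and the exact CheckIsValid signature are assumptions for the sake of illustration):

public class NameNotEmptyValidator : ValidatorBase<MyModel>
{
    protected override bool CheckIsValid(MyModel validateElement)
    {
        // The actual rule for this validator: the model must have a name
        return !string.IsNullOrWhiteSpace(validateElement.Name);
    }
}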

Finally, you can call all the validation in a single function such as this:


public bool Validate<T>(T validateElement)
{
    bool isValid = true;

    // Find every concrete validator for this type in the assembly and instantiate it
    IEnumerable<ValidatorBase<T>> validators = typeof(ValidatorBase<T>)
        .Assembly.GetTypes()
        .Where(t => t.IsSubclassOf(typeof(ValidatorBase<T>)) && !t.IsAbstract)
        .Select(t => (ValidatorBase<T>)Activator.CreateInstance(t));

    foreach (ValidatorBase<T> validator in validators)
    {
        ValidationResult result = validator.Validate(validateElement);
        if (!result.IsValid)
        {
            isValid = false;
            Errors.AddRange(result.Errors);
        }

        if (result.StopValidation)
        {
            break;
        }
    }

    return isValid;
}

There are good and bad sides to this pattern: it’s familiar and well tried; unfortunately, it results in a potential explosion of code volume (if you have 30 validation checks, you’ll need 30 classes), which makes it difficult to read. It also doesn’t deal with the scenario whereby one validation condition depends on the success of another.

So what about the chain of responsibility that we mentioned earlier?

Chain of responsibility

This pattern, as described in the article linked in the references below, works by implementing a link between a class that validates your data, and the class that will validate it next: in other words, a linked list.

This idea does work well, and is relatively easy to implement; however, it can become unwieldy to use; for example, you might have a method like this:

private static bool InvokeValidation(ValidationRule rule)
{
    bool result = rule.ValidationFunction.Invoke();
    if (result && rule.Successor != null)
    {
        return InvokeValidation(rule.Successor);
    }
 
    return result;
}
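The ValidationRule class itself isn’t shown above; a minimal sketch of it might be something like:

public class ValidationRule
{
    // The check to run for this link in the chain
    public Func<bool> ValidationFunction { get; set; }

    // The rule to run next, if this one passes
    public ValidationRule Successor { get; set; }
}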

But to build up the rules, you might have a series of calls such as this:

ValidationRule rule2 = new ValidationRule();
rule.ValidationFunction = () => MyTest();

ValidationRule rule1 = new ValidationRule();
rule.ValidationFunction = () => MyTest();
Rule.Successor = rule2;

As you can see, it doesn’t make for very readable code. Admittedly, with a little imagination, you could probably re-order it a little. What you could also do is use the Builder Pattern…

Builder Pattern

The builder pattern came to fame with Asp.Net Core, where, during configuration, you could write something like:


services
    .AddMvc()
    .AddTagHelpersAsServices()
    .AddSessionStateTempDataProvider();

So, the idea behind it is that you call a method that returns an instance of itself, allowing you to repeatedly call methods to build a state. This concept overlays quite neatly onto the concept of validation.

You might have something along the lines of:


public class Validator
{
    private readonly List<ValidationRule> _logic = new List<ValidationRule>();

    public Validator AddRule(Func<bool> validationRule)
    {
        ValidationRule logic = new ValidationRule()
        {
            ValidationFunction = validationRule
        };
        _logic.Add(logic);

        return this;
    }

So now, you can call:

myValidator
    .AddRule(() => MyTest())
    .AddRule(() => MyTest2())
    …

I think you’ll agree, this makes the code much neater.
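For completeness, a minimal sketch of how the accumulated rules might then be executed (this is just one possible approach):

public bool Validate()
{
    // Run each rule in the order it was added, stopping at the first failure
    foreach (ValidationRule rule in _logic)
    {
        if (!rule.ValidationFunction.Invoke())
        {
            return false;
        }
    }

    return true;
}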

References

http://blogs.tedneward.com/patterns/Builder-CSharp/

http://piotrluksza.com/2016/04/19/chain-of-responsibility-elegant-way-to-handle-complex-validation/

NDC London 2018

As usual with my posts, the target audience is ultimately me. In this case, I’m documenting the talks that I’ve seen so that I can remember to come back and look at some of the technology in more depth. I spent the first two days at a workshop, building a web app in Asp.Net Core 2.0…

Asp.Net Core 2.0 Workshop

These were two intense days with David Fowler and Damien Edwards where we created a conference app (suspiciously similar to the NDC app) from the ground up.

Notable things that were introduced were HTML helper tags, the authentication and authorisation features of .Net Core 2.0, and the ability to quickly get this running in Azure.

Day One

Keynote – What is Programming Anyway? – Felienne Hermans

This was mainly relating to learning, and how we learn and teach, how we treat each other as developers, and the nature of programming in general. Oddly, these themes came up again several times during the conference, so it clearly either struck a chord, or it’s something that’s on everyone’s mind at this time.

Sondheim, Seurat and Software: finding art in code – Jon Skeet

Okay, so let’s start with: this is definitely not the kind of talk I would normally go to; however, it was Jon Skeet, so I suppose I thought he’d just talk about C# for an hour and this was just a clever title. There was C# in there – and NodaTime, but not much. It was mainly a talk about the nature of programming. It had the same sort of vibe as the keynote – what is programming, what is good programming (or more accurately, elegant programming). At points throughout the talk, Jon suddenly burst into song; so all in all, one of the more surreal talks I’ve seen.

Authorization is hard! Implementing Authorization in Web Applications and APIs – Brock Allen & Dominick Baier

This was sort of part two of a talk on identity server. They discussed a new open source project that Microsoft have released that allows you to control authorisation; so, you can configure a policy, and within that policy you can have roles and features. What this means (as far as I could tell – and I need to have a play) is that out of the box, you can state that only people with a specific role are able to access, say an API function; or, only people that have roles with a specific feature are able to access an API function.

The example given was a medical environment: a nurse, a doctor and a patient; whilst they all live in the same system, only the nurse and doctor are able to prescribe medication, and it is then possible to configure the policy such that the nurse is able to prescribe less.

I’m Pwned. You’re Pwned. We’re All Pwned – Troy Hunt

This was the first of two talks I saw by Troy. This one was on security; although, oddly, the second was not. He did what he normally does, which was to start tapping around the internet and show just how susceptible everyone is to an attack.

He also mentioned that the passwords he keeps in his database are available to be queried. I believe there’s an API endpoint, too. So the suggestion was that, instead of the usual “Your password must be 30 paragraphs long with a dollar sign, a hash and a semi-colon, and you have to change it every five minutes” restriction on password entry, it would be better to simply ensure that the password doesn’t exist in that database.

Compositional Uis – the Microservices Last Mile – Jimmy Bogard

The basic premise here is that, whilst many places have a partitioned and microservice-architected back end, most front ends are still just one single application, effectively gluing together all the services. His argument followed that the thing to then do was to think about ways that you could split the front end up. The examples he gave included Amazon, so this isn’t a problem that most people will realistically have to solve; but it’s certainly interesting; especially his suggestion that you could shape the model by introducing a kind of message bus architecture in the front end: so each separate part of the system is polled and in turn “asked” if it had anything to add to the current request; that part of the system would then be responsible for communicating with its service.

C# 7, 7.1 and 7.2 – Jon Skeet and Bill Wagner

This was actually two talks, but they kind of ended up as one, single, two hour talk on all the new C# features. I have previously written about some of the new features of C#7 +. However, there were a few bits that I either overlooked, or just missed: pattern matching being one that I overlooked. The concept of deconstruction was also mentioned: I need to research this more.

Day Two

Building a Raspberry Pi Kubernetes Cluster and running .Net Core – Scott Hanselman & Alex Ellis

This was a fascinating talk where Alex had effectively built a mini-cloud set-up using a Raspberry Pi tower (six of them, IIRC), using a piece of open source software called OpenFaaS to orchestrate them.

This is a particularly exciting area of growth in technology: the fact that you can buy a fully functional machine for around £20 – £30 and then chain them together to provide redundancy. The demonstration given was a series of flashing lights; they demonstrated pulling a cable out of one, and the software spotted this and moved the process across to another device.

An Opinionated Approach to Asp.Net Core – Scott Allen

In this talk, Scott presented a series of suggestions for code layout and architecture. There were a lot of ideas; obviously, these all work well for Scott, and there was a lot of stuff in there that made sense; for example, he suggested mirroring the file layout that MS have used in their open source projects.

How to win with Automation and Influence People – Gwen Diagram

Gwen gave a talk about the story of her time at Sky, and how she dealt with various challenges that arose from dealing with disparate technologies and personality traits within her testing team. She frequently referred back to the Dale Carnegie book “How to win friends and influence people” – which presumably inspired the talk.

Hack Your Career – Troy Hunt

It’s strange listening to Troy talk about something that isn’t security related. He basically gave a rundown of how he ended up in the position that he’s in, and the challenges that lie therein.

HTTP: History & Performance – Ana Balica

This was basically a review of the HTTP standards from the early days of the modern internet, to now. Scott Hanselman touched on a similar theme later on, which was that it helps to understand where technology has come from in order to understand why it is like it is.

GitHub Beyond your Browser – Phil Haack

Phil demonstrated some new features of the GitHub client (which is written in Electron). He also demonstrated a new feature of GitHub that allows you to work with a third party on the same code (a little like the feature that VS have introduced recently).

.Net Rocks Live with Jon Skeet and Bill Wagner – Carl Franklin & Richard Campbell

I suppose if you’re reading this and you don’t know what .Net Rocks is then you should probably stop – or go and listen to an episode and then come back. The interview was based around C#, and the new features. You should look out for the episode and listen to it!

Keynote – The Modern Cloud – Scott Guthrie

Obviously, if you ask Scott to talk about the cloud, he’s going to focus on a specific vendor. I’ll leave this, and come back to it in Scott’s later talk on a similar subject.

Web Apps can’t really do *that* can they? – Steve Sanderson

Steve covered some new areas of web technology here; specifically: Service Workers, Web Assembly, Credential Management and Payment Requests.

The highlight of this talk was when he demonstrated the use of Blazor which basically allows you to write C# and run it in place of Javascript.

The Hello World Show Live with Scott Hanselman, Troy Hunt, Felienne and Jon Skeet

I’d never heard of The Hello World Show before. To make matters worse, it is not the only YouTube programme with that name. Now that I’ve heard of it, I’ll definitely be watching some of the back catalogue.

I think the highlight of the show was Scott’s talk – which pretty much had me in stitches.

Tips & Tricks with Azure – Scott Guthrie

This is the talk that I referred to above. Scott described a series of useful features of Azure that many people weren’t aware of. For example, the Azure Advisor, which gives tailored recommendations for things like security settings, cost management, etc.

Other tips included the Security Centre, Hybrid Use Rights (reduced cost for a VM if you own the Windows license) and Cost Management.

Serverless – the brief past, the bewildering present, and the beautiful (?) future – Mike Roberts

Mike has worked with AWS for a while now, and imparted some of the experience that he had, gave a little history of how it all started, and talked about where it might be going.

Why I’m Not Leaving .Net – Mark Rendle

Mark introduced a series of tools and tricks in response to every reason he could think of that people gave for leaving .Net.

Amongst the useful information that he gave was a sort of ORM tool he’d written called Beeline. Basically, if all you’re doing with your ORM tool is reading from the DB and then serialising it to JSON, then this does that for you, but without populating a series of .Net classes first.

He also talked about CoreRT, which allows you to compile .Net ahead of time to native code. There’s a long way to go with it, but the idea is that you can produce an executable that will run with no runtime libraries.

Getting Started with iOS for a C# Programmer – Part 6 – Graphics

In the first post of this series, I set out to walk through creating a basic game in Swift. In the post preceding this one, I covered collision; now we are moving on to graphics.

Add Assets

The secret to graphics in Swift seems to be creating assets. If you look to the left hand side of your project, you should see an asset store:

This can be added to:

Create a new image here (in fact, we’ll need two):

Then drag and drop your icons. Mine were kindly provided by my daughter:

Use SKSpriteNode

Once you have an image, it’s a straight-forward process to map it to the game sprite (for which we are currently using a rectangle). As you can see in GameScene.swift, very little actually changes:

    func createAlien(point: CGPoint) -> SKSpriteNode {
        let size = CGSize(width: 40, height: 30)
        
        //let alien = SKShapeNode(rectOf: size)
        let alien = SKSpriteNode(imageNamed: "alienImage")
        alien.size = size
        
        //print(self.frame.minY, self.frame.maxY, self.frame.minX, self.frame.maxX, self.frame.width, self.frame.height)
        alien.position = point
        //alien.strokeColor = SKColor(red: 0.0/255.0, green: 200.0/255.0, blue: 0.0/255.0, alpha: 1.0)
        //alien.lineWidth = 4
        
        alien.physicsBody = SKPhysicsBody(rectangleOf: size)
        
        alien.physicsBody?.affectedByGravity = false
        alien.physicsBody?.isDynamic = true
        
        alien.physicsBody?.categoryBitMask = collisionAlien
        alien.physicsBody?.collisionBitMask = 0
        alien.physicsBody?.contactTestBitMask = collisionPlayer
        
        alien.name = "alien"
        
        return alien
    }

It’s worth bearing in mind that this will simply replace the existing rectangles with graphics. As you can probably see from the images above, mine are neither straight, trimmed, nor centered, and so it looks a little skewed in the game. Nevertheless, we’re now in the territory of a playable game:

We’re not there yet, though. The next post is the final one, in which we’ll add a score, deal with the overlapping aliens and probably reduce the size of the ship. Also, if you run this, you’ll see that after a short amount of time, it uses a huge amount of memory tracking all the aliens, so we’ll limit the number of aliens.

References

https://www.raywenderlich.com/49695/sprite-kit-tutorial-making-a-universal-app-part-1

https://developer.apple.com/library/content/documentation/Xcode/Reference/xcode_ref-Asset_Catalog_Format/

https://developer.apple.com/documentation/spritekit/skspritenode

Serverless Computing – A Paradigm Shift

In the beginning

When I first started programming (on the ZX Spectrum), you would typically write a program such as this (apologies if it’s syntactically incorrect, but I don’t have a Speccy interpreter to hand):

10 PRINT "Hello World"
20 GOTO 10

You could type that into a computer at Tandys and chuckle as the shop assistants tried to work out how to stop the program (sometimes, you might not even use “Hello World”, but something more profane).

However, no matter how long it took them to work out how to stop the program, they only paid for the electricity the computer used while it was on. Further, there would only ever be a finite and, presumably (I never tried), predictable number of “Hello World” messages produced in an hour.

The N-Tier Revolution

Fast forward a few years, and everyone’s talking about N-Tier computing. We’re writing programs that run on servers. Some of those servers are big and expensive, but pretty much the same statements are true. No matter how big and complex the program that you run on the server, it’s your server (well, actually, it probably belongs to a company that you work for in some capacity). For example, if you have a poorly written SQL Server procedure that scans an entire table, the same two statements are still true: no matter how long it takes to run, the price for execution is consistent, and the amount of time it takes to run is predictable (although you may decide that, if it’s slow, speeding it up might be a better use of your time than calculating exactly how slow it is).

Using Other People’s Computers

And now we come to cloud computing… or do we? The chronology is a little off on this, and that’s mainly because everyone keeps forgetting what cloud computing actually is. You’re renting time on somebody else’s computer. If I were twenty years older, I might have started this post by saying that “this was what was done back in the 70’s and 80’s”, in a manner of speaking. But I’m not, so we’ll jump back to the mid-to-late 90’s and SETI. Anyone who had a computer back in those days of dial-up connections and 14.4K modems will remember that SETI (the search for extra-terrestrial intelligence) was distributing a program for everyone to run on their computer instead of toaster screensavers*.

Wait – isn’t that what cloud computing is?

Yes – but SETI did it in reverse. You would dial up, download a chunk of data, and your machine would process it as a screensaver.

Everyone was happy to do that for free, because we all want to find aliens. But what if there had been a cost associated with each byte of data processed? Clearly something similar was in the mind of Amazon when they started with this idea.

Back to the Paradigm Shift

So, the title and gist of this post is that the two statements that have been consistent right through from programs written on the ZX Spectrum, to programs written in Turbo C and Pascal, to programs written in C# running on a dedicated server, have now changed. So, here are the four big changes brought about by the cloud**:

1. Development and Debugging

If you write a program, you can no longer be sure of the cost, or the execution time. This isn’t a scare post: both are broadly predictable; but the paradigm has now changed. If you’re in the middle of writing an Azure function, or a GCP BigQuery query, and it doesn’t work, you can’t just close your laptop and go for dinner while you have a think, because while you do, nodes are lighting up all over the world trying to complete your task. The lights are dimming in Seattle while your Azure function crashes again and again.

2. Scale and Accessibility

The second big change is the way that your code is scaled. For example, you might be used to parallelising slow code so that you can make use of all available threads; however, in our new world, if you do that, you may actually be making it harder for your cloud platform of choice to scale your code.

Because you pay per minute of computing time (typically this is more expensive than storage), code that is unnecessarily slow or inefficient may not cause your system to slow down – what it will probably do is cost you more money to run it at the same speed.

In some cases, you might find it’s more efficient to do the opposite of what you have thus-far believed to be accepted industry best practice. For example, it may prove more cost efficient to de-normalise data that you need to access.

3. Logging

This isn’t exactly new: you should always have been logging in your programs – it’s just common sense. However, there’s a new emphasis here: you’re not running this on your own server (you’re not even running it on a customer’s server) – it’s somebody else’s. That means that if it crashes, there’s a limited amount of investigation that you can do. As a result, you need to log profusely.

4. Microservices and Message Busses

IMHO, there are two reasons why the microservice architecture has become so popular in the cloud world. The first is that it’s easy – that is, spinning up a new Azure function endpoint takes minutes.

Secondly, it’s more scalable. Microservices make your code more scalable because it’s easy for the cloud provider to instantiate two instances of your program, and then three, and then a million. If your program does one small thing, then only that small thing needs to be instantiated. If your program does twenty different things, it can still scale, but it’ll cost more, because it will require more processing power.

Finally, instead of simply calling the service that you need, there is now the option to place a message on a queue; apart from separating your program into definable responsibility sectors, this means that, when your cloud provider of choice scales your service out, all the instances can pick up a message and deal with it.
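As a very rough sketch (using the older service bus client library; the queue name, connection string and message are placeholders), placing a message on a queue rather than calling a service directly might look something like this:

using Microsoft.ServiceBus.Messaging;

public static class OrderPublisher
{
    public static void PublishOrderPlaced(string connectionString)
    {
        // Rather than calling the order service directly, drop a message on a queue;
        // whichever instance the cloud provider has scaled out can then pick it up
        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "orders");
        client.Send(new BrokeredMessage("New order placed"));
    }
}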

Summary

So, what’s the conclusion: is cloud computing good, or bad? In five years’ time, the idea that someone in your organisation will know the spec of the machine your business-critical application is running on seems a little far-fetched. The days of having a trusted server that has a load of hugely important stuff on it, but nobody really knows what, and that has been running since 1973, are numbered.

Obviously, there’s a price to pay for everything. In the case of the cloud, it’s complexity – not of the code, and not really of the combined system, but typically you are introducing dozens of moving parts to a system. If you decide to segregate your database, you might find that you have several databases; tens, or even hundreds, of independent processes and endpoints; you could even spread all of that across multiple cloud providers. It’s not that hard to lose track of where these pieces are living and what they are doing.

Footnotes

* If you don’t get this reference then you’re probably under 30.

** Clearly this is an opinion piece.