Category Archives: Azure Functions

Reading Azure Service Bus Queue Names from the Config File

In this post, I wrote about how you might read a message from the service bus queue. However, with Azure Functions (and WebJobs) comes the ability to have Microsoft do some of this plumbing code for you.

I have a queue here (taken from the service bus explorer):

I can read this in an Azure function; let’s create a new Azure Functions App:

This time, we’ll create a Service Bus Queue Triggered function:

Out of the box, that will give you this:

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([ServiceBusTrigger("testqueue", AccessRights.Listen, Connection = "")]string myQueueItem, TraceWriter log)
    {
        log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}

There’s a few things that we’ll probably want to change here. The first is “Connection”. We can remove that parameter altogether, and then add a row to the local.settings.json file (which can be overridden later inside Azure). Out of the box, you get AzureWebJobsStorage and AzureWebJobsDashboard, which both accept a connection string to a Azure Storage Account. You can also add AzureWebJobsServiceBus, which accepts a connection string to the service bus:

"Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=teststorage1…",
    "AzureWebJobsDashboard": "DefaultEndpointsProtocol=https;AccountName=teststorage1…",
    "AzureWebJobsServiceBus": "Endpoint=sb://pcm-servicebustest.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=…"
  }

If you run the job, it will now pick up any outstanding entries in that queue. But what if you don't know the queue name at compile time; for example, what if the queue name turns out to be different? To illustrate the point: here, I'm looking for "testqueue1", but the queue name (as you saw earlier) is "testqueue":

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([ServiceBusTrigger("testqueue1", AccessRights.Listen)]string myQueueItem, TraceWriter log)
    {
        log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}

Obviously, if you’re looking for a queue that doesn’t exist, bad things happen:

To fix this, I have to change the code… which is, broadly speaking, a bad thing. What we can do instead is configure the queue name in the config file, like this:


"Values": {
    "AzureWebJobsStorage": " . . . ",
    . . .,
    "queue-name":  "testqueue"
  }

And we can have the function look in the config file by changing the queue name:

[FunctionName("Function1")]
public static void Run([ServiceBusTrigger("%queue-name%", AccessRights.Listen)]string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}

The pattern of supplying a variable name in the format “%variable-name%” seems to work across other triggers and bindings for Azure Functions.
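For example, here's a sketch (not from the original post) of the same substitution applied to a storage queue trigger; "storage-queue-name" is a hypothetical entry in local.settings.json (or the app settings in Azure):

[FunctionName("Function2")]
public static void Run([QueueTrigger("%storage-queue-name%")]string myQueueItem, TraceWriter log)
{
    // The queue name is resolved from configuration at startup, not baked into the code
    log.Info($"C# storage queue trigger function processed message: {myQueueItem}");
}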

Deployment

That’s now looking much better, but what happens when the function gets deployed? Let’s see:

We can now see that the function is deployed:

At the minute, it won’t do anything, because it’s looking for a queue name in a setting that only exists locally. Let’s fix that:

Remember to save the changes.

Looking at the logs confirms that this now runs correctly.

Creating a Scheduled Azure Function

I’ve previously written about creating Azure functions. I’ve also written about how to react to service bus queues. In this post, I wanted to cover creating a scheduled function. Basically, these allow you to create a scheduled task that executes at a given interval, or at a given time.

Timer Trigger

The first thing to do is create a function with a type of Timer Trigger:

Schedule / CRON format

The next thing is to understand the schedule, or CRON, format. Note that Azure Functions uses a six-field expression, rather than the five fields of standard cron; the extra field at the front is for seconds. The format is:

{second} {minute} {hour} {day} {month} {day-of-week}

Scheduled Intervals

The example you’ll see when you create this looks like this:

0 */5 * * * *

The notation */[number] means once every [number]; so */5 would mean once every 5… and you then look at the placeholder to work out 5 of what; in this case, it means once every 5 minutes. So, for example:

*/10 * * * * *

Would be once every 10 seconds.

Scheduled Times

Specifying numbers means the schedule will execute at that time; so:

0 0 0 * * *

Would execute every time the hour, minute and second all hit zero – so once per day at midnight; and:

0 * * * * *

Would execute every time the second hits zero – so once per minute; and:

0 0 * * * 1

Would execute once per hour on a Monday (as the last placeholder is the day of the week).

Time constraints

These can be specified in any column in the format [lower bound]-[upper bound], and they restrict the timer to a range of values; for example:

0 */20 5-10 * * *

Means every 20 minutes between 5 and 10am (as you can see, the different types can be used in conjunction).

Asterisks (*)

You’ll notice above that there are asterisks in every placeholder that a value has not been specified. The askerisk signifies that the schedule will execute at every interval within that placeholder; for example:

* * * * * *

Means every second; and:

0 * * * * *

Means every minute.

Back to the function

Upon starting, the function will detail when the next several executions will take place:

But what if you don’t know what the schedule will be at compile time. As with many of the variables in an Azure Function, you can simply substitute the value for a placeholder:

[FunctionName("MyFunc")]
public static void Run([TimerTrigger("%schedule%")]TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}

This value can then be provided inside the local.settings.json:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProto . . .",
    "AzureWebJobsDashboard": "DefaultEndpointsProto . . .",
    "schedule": "0 * * * * *"
  }
}

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer

http://cronexpressiondescriptor.azurewebsites.net/?expression=1+*+*+*+*+*&locale=en

Using Unity With Azure Functions

Azure Functions are a relatively new kid on the block when it comes to the Microsoft Azure stack. I’ve previously written about them, and their limitations. One such limitation seems to be that they don’t lend themselves very well to using dependency injection. However, it is certainly not impossible to make them do so.

In this particular post, we’ll have a look at how you might use an IoC container (Unity in this case) in order to leverage DI inside an Azure function.

New Azure Functions Project

I’ve covered this before in previous posts, in Visual Studio, you can now create a new Azure Functions project:

That done, you should have a project that looks something like this:

As you can see, the elephant in the room here is that there are no functions; let's correct that:

Be sure to call your function something descriptive… like “Function1”. For the purposes of this post, it doesn’t matter what kind of function you create, but I’m going to create a “Generic Web Hook”.

Install Unity

The next step is to install Unity (this was the latest version at the time of writing):

Install-Package Unity -Version 5.5.6

Static Variables Inside Functions

It’s worth bearing mind that a static variable works the way you would expect, were the function a locally hosted process. That is, if you write a function such as this:

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
    
    System.Threading.Thread.Sleep(10000);
    log.Info($"Index is {test}");
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {test++}!"
    });
}

And access it from a web browser, or Postman, or both at the same time, you'll get incrementing numbers:

Whilst the values are shared across the instances, you can’t cause a conflict by updating something in one function while reading it in another (I tried pretty hard to cause this to break). What this means, then, is that we can store an IoC container that will maintain state across function calls. Obviously, this is not intended for persisting state, so you should assume your state could be lost at any time (as indeed it can).
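That said, if you wanted to be belt-and-braces about mutating shared state like the counter above, Interlocked makes the increment atomic; a sketch (test being the static field from the earlier snippet):

// Atomic equivalent of test++ (returns the incremented value)
int next = System.Threading.Interlocked.Increment(ref test);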

Registering the Unity Container

One method of doing this is to use the Lazy object. This pretty much passed me by in .Net 4 (which is, apparently, when it came out). It basically provides a slightly neater way of doing this kind of thing:

private List<string> _myList;
public List<string> MyList
{
    get
    {
        if (_myList == null)
        {
            _myList = new List<string>();
        }
        return _myList;
    }
}

The “lazy” method would be:

public Lazy<List<string>> MyList = new Lazy<List<string>>(() =>
{
    List<string> newList = new List<string>();
    return newList;
});
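Incidentally, part of the reason Lazy is a good fit here is that it's thread-safe by default; the same example, with the mode made explicit:

public Lazy<List<string>> MyList = new Lazy<List<string>>(
    () => new List<string>(),
    System.Threading.LazyThreadSafetyMode.ExecutionAndPublication); // the default: the factory runs at most once, even under concurrent access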

With that in mind, we can do something like this:

public static class Function1
{
     private static Lazy<IUnityContainer> _container =
         new Lazy<IUnityContainer>(() =>
         {
             IUnityContainer container = InitialiseUnityContainer();
             return container;
         });

InitialiseUnityContainer needs to return a new instance of the container:

public static IUnityContainer InitialiseUnityContainer()
{
    UnityContainer container = new UnityContainer();
    container.RegisterType<IMyClass1, MyClass1>();
    container.RegisterType<IMyClass2, MyClass2>();
    return container;
}

After that, you’ll need to resolve the parent dependency, then you can use standard constructor injection; for example, if MyClass1 orchestrates your functionality; you could use:

_container.Value.Resolve<IMyClass1>().DoStuff();

In Practice

Let’s apply all of that to our Functions App. Here’s two new classes:

public interface IMyClass1
{
    string GetOutput();
}
 
public interface IMyClass2
{
    void AddString(List<string> strings);
}
public class MyClass1 : IMyClass1
{
    private readonly IMyClass2 _myClass2;
 
    public MyClass1(IMyClass2 myClass2)
    {
        _myClass2 = myClass2;
    }
 
    public string GetOutput()
    {
        List<string> teststrings = new List<string>();
 
        for (int i = 0; i <= 10; i++)
        {
            _myClass2.AddString(teststrings);
        }
 
        return string.Join(",", teststrings);
    }
}
public class MyClass2 : IMyClass2
{
    public void AddString(List<string> strings)
    {
        Thread.Sleep(1000);
        strings.Add($"{DateTime.Now}");
    }
}

And the calling code looks like this:

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
 
    string output = _container.Value.Resolve<IMyClass1>().GetOutput();
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        output
    });
}

Running it, we get an output that we might expect:

References

https://github.com/Azure/azure-webjobs-sdk/issues/1206

Working with Multiple Cloud Providers – Part 3 – Linking Azure and GCP

This is the third and final post in a short series on linking up Azure with GCP (for Christmas). In the first post, I set up a basic Azure function that updated some data in table storage, and then in the second post, I configured the GCP link from PubSub into BigQuery.

In the post, we’ll square this off by adapting the Azure function to post a message directly to PubSub; then, we’ll call the Azure function with Santa’a data, and watch that appear in BigQuery. At least, that was my plan – but Microsoft had other ideas.

It turns out that Azure functions have a dependency on Newtonsoft Json 9.0.1, and the GCP client libraries require 10+. So, instead of being a 10-minute job on Boxing Day to link the two, it turned into a mammoth task. Obviously, I spent the first few hours searching for a way around this – surely other people have faced this, and there's a redirect, setting, or way of banging the keyboard that makes it work? Turns out not.

The next idea was to experiment with contacting the Google server directly, as is described here. Unfortunately, you still need the Auth libraries.

Finally, I swapped out the function for a WebJob. WebJobs give you a little more flexibility, and have no hard dependencies. So, on with the show (albeit a little more involved than expected).

WebJob

In this post I described how to create a basic WebJob. Here, we’re going to do something similar. In our case, we’re going to listen for an Azure Service Bus Message, and then update the Azure Storage table (as described in the previous post), and call out to GCP to publish a message to PubSub.
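As a reminder (the hosting boilerplate isn't repeated below): the WebJob needs a JobHost configured for Service Bus before any ServiceBusTrigger will fire. In the WebJobs 2.x SDK, that looks something like this:

class Program
{
    // Minimal WebJobs bootstrap; the AzureWebJobsStorage and AzureWebJobsServiceBus
    // connection strings come from App.config (or the environment)
    static void Main()
    {
        JobHostConfiguration config = new JobHostConfiguration();
        config.UseServiceBus(); // from the Microsoft.Azure.WebJobs.ServiceBus package
        JobHost host = new JobHost(config);
        host.RunAndBlock();
    }
}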

Handling a Service Bus Message

We weren’t originally going to take this approach, but I found that WebJobs play much nicer with a Service Bus message, than with trying to get them to fire on a specific endpoint. In terms of scaleability, adding a queue in the middle can only be a good thing. We’ll square off the contactable endpoint at the end with a function that will simply convert the endpoint to a message on the queue. Here’s what the WebJob Program looks like:

public static void ProcessQueueMessage(
    [ServiceBusTrigger("localsantaqueue")] string message,
    TextWriter log,
    [Table("Delivery")] ICollector<TableItem> outputTable)
{
    Console.WriteLine("test");
 
    log.WriteLine(message);
 
    // parse query parameter
    TableItem item = Newtonsoft.Json.JsonConvert.DeserializeObject<TableItem>(message);
    if (string.IsNullOrWhiteSpace(item.PartitionKey)) item.PartitionKey = item.childName.First().ToString();
    if (string.IsNullOrWhiteSpace(item.RowKey)) item.RowKey = item.childName;
 
    outputTable.Add(item);
 
    GCPHelper.AddMessageToPubSub(item).GetAwaiter().GetResult();
    
    log.WriteLine("DeliveryComplete Finished");
 
}

Effectively, this is the same logic as the function (obviously, we now have the GCPHelper, and we'll come to that in a minute). First, here's the code for the TableItem model:


[JsonObject(MemberSerialization.OptIn)]
public class TableItem : TableEntity
{
    [JsonProperty]
    public string childName { get; set; }
 
    [JsonProperty]
    public string present { get; set; }
}

As you can see, we need to decorate the members with specific serialisation instructions. The reason being that this model is being used by both GCP (which only needs what you see on the screen) and Azure (which needs the inherited properties).
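To illustrate (a sketch, with made-up values): with MemberSerialization.OptIn, serialising a TableItem only emits the opted-in members, so the inherited TableEntity properties (PartitionKey, RowKey, Timestamp, ETag) never reach PubSub:

string json = Newtonsoft.Json.JsonConvert.SerializeObject(
    new TableItem { childName = "Jane", present = "bike", PartitionKey = "J", RowKey = "Jane" });
// json is {"childName":"Jane","present":"bike"} – no trace of the table keys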

GCPHelper

As described here, you’ll need to install the client package for GCP into the Azure Function App that we created in post one of this series (referenced above):

Install-Package Google.Cloud.PubSub.V1 -Pre

Here’s the helper code that I mentioned:

public static class GCPHelper
{
    public static async Task AddMessageToPubSub(TableItem toSend)
    {
        string jsonMsg = Newtonsoft.Json.JsonConvert.SerializeObject(toSend);

        // Point the GCP client libraries at the credentials file deployed alongside the WebJob
        Environment.SetEnvironmentVariable(
            "GOOGLE_APPLICATION_CREDENTIALS",
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Test-Project-8d8d83hs4hd.json"));
        GrpcEnvironment.SetLogger(new ConsoleLogger());

        string projectId = "test-project-123456";
        TopicName topicName = new TopicName(projectId, "test");
        SimplePublisher simplePublisher =
            await SimplePublisher.CreateAsync(topicName);
        string messageId =
            await simplePublisher.PublishAsync(jsonMsg);
        await simplePublisher.ShutdownAsync(TimeSpan.FromSeconds(15));
    }
}

I detailed in this post how to create a credentials file; you’ll need to do that to allow the WebJob to be authorised. The Json file referenced above was created using that process.

Azure Config

You’ll need to create an Azure message queue (I’ve called mine localsantaqueue):

I would also download the Service Bus Explorer (I’ll be using it later for testing).

GCP Config

We already have a DataFlow, a PubSub Topic and a BigQuery Database, so GCP should require no further configuration, except to ensure the permissions are correct.

The Service Account user (which I give more details of here) needs to have PubSub permissions. For now, we'll make them an editor, although in this instance, they probably only need publish:

Test

We can do a quick test using the Service Bus Explorer and publish a message to the queue:

The ultimate test is that we can then see this in the BigQuery Table:

Lastly, the Function

This won’t be a completely function free post. The last step is to create a function that adds a message to the queue:

[FunctionName("Function1")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")]HttpRequestMessage req,             
    TraceWriter log,
    [ServiceBus("localsantaqueue")] ICollector<string> queue)
{
    log.Info("C# HTTP trigger function processed a request.");
    var parameters = req.GetQueryNameValuePairs();
    string childName = parameters.First(a => a.Key == "childName").Value;
    string present = parameters.First(a => a.Key == "present").Value;
    string json = "{{ 'childName': '{childName}', 'present': '{present}' }} ";            
    queue.Add(json);
    

    return req.CreateResponse(HttpStatusCode.OK);
}
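As a side note, hand-building an interpolated JSON string like that is easy to get wrong; since Newtonsoft.Json is already referenced, a safer sketch would be to serialise an anonymous object:

// Produces valid, correctly escaped JSON without any string interpolation
string json = Newtonsoft.Json.JsonConvert.SerializeObject(new { childName, present });
queue.Add(json);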

So now we have an endpoint for our imaginary Xamarin app to call into.

Summary

Both GCP and Azure are relatively immature platforms for this kind of interaction. The GCP client libraries seem to be missing functionality (and GCP is still heavily weighted away from .Net). The Azure libraries (especially functions) seem to be in a pickle, too – with strange dependencies that make it very difficult to communicate outside of Azure. As a result, this task (which should have taken an hour or so) took a great deal of time, and it was completely unnecessary.

Having said that, it is clearly possible to link the two systems, if a little long-winded.

References

https://blog.falafel.com/rest-google-cloud-pubsub-with-oauth/

https://github.com/Azure/azure-functions-vs-build-sdk/issues/107

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus

https://stackoverflow.com/questions/48092003/adding-to-a-queue-using-an-azure-function-in-c-sharp/48092276#48092276

Working with Multiple Cloud Providers – Part 1 – Azure Function

Regular readers (if this blog has such things) may have noticed that I've recently been writing a lot about two main cloud providers. I won't link to all the articles but, if you're interested, a quick search for either Azure or Google Cloud Platform will yield several results.

Since it’s Christmas, I thought I’d do something a bit different and try to combine them. This isn’t completely frivolous; both have advantages and disadvantages: GCP is very geared towards big data, whereas the Azure Service Fabric provides a lot of functionality that might fit well with a much smaller LOB app.

So, what if we had the following scenario:

Santa has to deliver presents to every child in the world in one night. Santa is only one man*, and Google tells me there are 1.9B children in the world, so he contracts out a series of delivery drivers. That means around 79M deliveries every hour, assuming each delivery driver can work 24 hours**. If each driver can manage, say, 100 deliveries per hour, we need around 790,000 drivers. Every delivery driver has an app that links to their depot; recording deliveries, schedules, etc.

That would be a good app to write in, say, Xamarin, and maybe have an Azure service running it; here’s the obligatory box diagram:

The service might talk to the service bus, might control stock, send e-mails, all kinds of LOB jobs. Now, I'm not saying for a second that Azure can't cope with this, but what if we suddenly want all of these instances to feed metrics into a single data store? There are 190*** countries in the world; if each has a depot, then there's ~416K messages / hour going into each Azure service, but 79M / hour going into a single DB. Because it's Christmas, let's assume that Azure can't cope with this; or let's say that GCP is a little cheaper at this scale; or that we have some Hadoop jobs that we'd like to use on the data. In theory, we can link these systems; which might look something like this:

So, we have multiple instances of the Azure architecture, and they all feed into a single GCP service.

Disclaimer

At no point during this post will I attempt to publish 79M records / hour to GCP BigQuery. Neither will any Xamarin code be written or demonstrated – you have to use your imagination for that bit.

Proof of Concept

Given the disclaimer I’ve just made, calling this a proof of concept seems a little disingenuous; but let’s imagine that we know that the volumes aren’t a problem and concentrate on how to link these together.

Azure Service

Let’s start with the Azure Service. We’ll create an Azure function that accepts a HTTP message, updates a DB and then posts a message to Google PubSub.

Storage

For the purpose of this post, let’s store our individual instance data in Azure Table Storage. I might come back at a later date and work out how and whether it would make sense to use CosmosDB instead.

We’ll set-up a new table called Delivery:

Azure Function

Now we have somewhere to store the data, let’s create an Azure Function App that updates it. In this example, we’ll create a new Function App from VS:

In order to test this locally, change local.settings.json to point to your storage location described above.
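That means adding an entry under Values whose name matches the Connection property on the Table binding in the code below; a sketch (the connection string is a placeholder):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName= . . .",
    "santa_azure_table_storage": "DefaultEndpointsProtocol=https;AccountName= . . ."
  }
}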

And here’s the code to update the table:


    public static class DeliveryComplete
    {
        [FunctionName("DeliveryComplete")]
        public static HttpResponseMessage Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, 
            TraceWriter log,            
            [Table("Delivery", Connection = "santa_azure_table_storage")] ICollector<TableItem> outputTable)
        {
            log.Info("C# HTTP trigger function processed a request.");
 
            // parse query parameter
            string childName = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "childName", true) == 0)
                .Value;
 
            string present = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "present", true) == 0)
                .Value;            
 
            var item = new TableItem()
            {
                childName = childName,
                present = present,                
                RowKey = childName,
                PartitionKey = childName.First().ToString()                
            };
 
            outputTable.Add(item);            
 
            return req.CreateResponse(HttpStatusCode.OK);
        }
 
        public class TableItem : TableEntity
        {
            public string childName { get; set; }
            public string present { get; set; }
        }
    }

Testing

There are two ways to test this. The first is to just press F5; that will launch the function as a local service, and you can use Postman or similar to test it. The alternative is to deploy to the cloud; if you choose the latter, then your local.settings.json will not come with you, so you'll need to add an app setting:

Remember to save this setting, otherwise, you’ll get an error saying that it can’t find your setting, and you won’t be able to work out why – ask me how I know!

Now, if you run a test …

You should be able to see your table updated (shown here using Storage Explorer):

Summary

We now have a working Azure function that updates a storage table with some basic information. In the next post, we'll create a GCP service that pipes all this information into BigQuery, and then link the two systems.

Footnotes

* Remember, all the guys in Santa suits are just helpers.
** That brandy you leave out really hits the spot!
*** I just Googled this – it seems a bit low to me, too.

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings#manage-app-service-settings

https://anthonychu.ca/post/azure-functions-update-delete-table-storage/

https://stackoverflow.com/questions/44961482/how-to-specify-output-bindings-of-azure-function-from-visual-studio-2017-preview

SendGrid – Azure e-mail functionality

In this post, I discussed the prospect of sending an e-mail from an Azure function in order to alert someone that something had gone wrong. In one of the comments, it was suggested that I should look into a third party tool called “SendGrid”, and this post is the result of that investigation.

Azure Configuration

SendGrid is a third party application, and so the first thing you need to do is to create an account:

The free tier covers you for 25,000 e-mails per month. However, you do get a scary warning that, because this isn’t a Microsoft product, it is not covered by Azure credits.

Anyway, click create and, after a while, your new SendGrid Account should be created:

You’ll need to get the API Key: to do this, select Manage:

That takes you to https://app.sendgrid.com, where you can select to create an API Key:

Clearly, you wouldn’t want a full access account to just send an e-mail in real life… but Restricted Access has a form that would take longer to fill in, and I can’t be mythered*… so we’ll go with full access for now.

Once you’ve created it, and given it a name, you should have a key (remember that key – don’t write it down, or copy it, you must remember it!).

Code

Create a new Function App, and add the SendGrid NuGet package:

https://www.nuget.org/packages/Sendgrid/

In this case, let’s create a HttpTrigger function (this will fire when a web address is accessed); the body of which needs to look something like this:

        [FunctionName("SendEmail")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, 
            TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // parse query parameter
            var addresses = req.GetQueryNameValuePairs();
            string[] addressArr = addresses
                .Where(a => a.Key == "address")
                .Select(a => a.Value).ToArray();

            if (addressArr.Count() == 0)
            {
                return req.CreateResponse(HttpStatusCode.BadRequest, 
                    "Please pass an address in the query string");
            }
            else
            {
                HttpStatusCode status = await CallSendEmailAsync(
                    "noreply@test.com", addressArr, "test e-mail",
                    "Once more unto the breach, dear friends, once more;\n" +
                    "Or close the wall up with our English dead.   \n" +
                    "In peace there's nothing so becomes a man     \n" +
                    "As modest stillness and humility:             \n" +
                    "But when the blast of war blows in our ears,  \n" +
                    "Then imitate the action of the tiger;         \n" +
                    "Stiffen the sinews, summon up the blood,");
                switch (status)
                {
                    case HttpStatusCode.OK:
                    case HttpStatusCode.Accepted:
                    {
                        return req.CreateResponse(status, "Mail Successfully Sent");
                    }
                    default:
                    {
                        return req.CreateResponse(status, "Unable to send e-mail");
                    }
                }
            }
        }

The `CallSendEmailAsync` helper method might look like this:

public static async Task<HttpStatusCode> CallSendEmailAsync(string from, string[] recipients, string subject, string body)
{
    EmailAddress fromAddress = new EmailAddress(from);
 
    SendGridMessage message = new SendGridMessage()
    {
        From = fromAddress,
        Subject = subject,
        HtmlContent = body
    };
 
    message.AddTos(recipients.Select(r => { return new EmailAddress(r); }).ToList());
   
    string sendGridApiKey = "AB.keythisisthekey.nkfdhfkjfhkjfd0ei8L9xTyaTCzy_sV5gPJNX-3";
 
    SendGridClient client = new SendGridClient(sendGridApiKey);
    Response response = await client.SendEmailAsync(message);
 
    return response.StatusCode;
}
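Incidentally, hard-coding the key is fine for a demo but, in real code, you'd read it from the app settings (which, for functions, surface as environment variables); something like this, assuming a hypothetical setting named SendGridApiKey:

// Assumes a "SendGridApiKey" entry in local.settings.json (or the app settings in Azure)
string sendGridApiKey = Environment.GetEnvironmentVariable("SendGridApiKey");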

The key is the string that I said you should remember earlier.

You can just paste the URL into a browser, giving it the e-mail addresses in the query string; note that each recipient needs its own address parameter:

https://url.com?address=my@email.com&address=my-second@email.com

As you will see from the link at the bottom of this post, OK (200) signifies that the message is valid, but not queued to be delivered; and Accepted (202) indicates that the message is valid and queued. My guess is that if you get an OK, it means that the mail has already been delivered.

Footnotes

* It probably took longer to type this out than to do it.

References

https://go.sendgrid.com/

https://sendgrid.com/docs/API_Reference/Web_API_v3/Mail/errors.html

Function Apps in Azure

With Update 15.3.1 for Visual Studio came the ability to create Function Apps in VS. Functions were previously restricted to writing code in the browser directly on Azure*.

Set-up

The first step is to download and install, or use the Visual Studio Installer to update to, the latest version of VS (at the time of writing, this was 15.3.3 – but, as stated above, it was 15.3.1 that introduced the Function App update).

Once this is done, you need to launch the Visual Studio Installer again.

Select the Azure Workload (if you haven’t already):

The Microsoft article, referenced at the bottom of this post, answers the issue of what happens if this doesn't work on its own; it says:

If for some reason the tools don’t get automatically updated from the gallery…

I’ve now done this twice on two separate machines and, on both occasions, the tools have not automatically been updated from the gallery (it also sounds like the author of the article doesn’t really know why this is the case). Assuming that the reader of this article will suffer the same fate, they should update the Azure gallery extension (if you don’t have to do that then please leave a comment – I’m interested to know if it ever works):

Close everything (including the installer) and this appears:

Finally, we see the new app type:

Function Apps

Once you create a new function app, you get an empty project:

To add a new function, you can right click on the solution (as you would for a new class file) and select new function:

New Function

You then, helpfully, get asked what kind of function you would like:

Function Type

Let’s select Generic WebHook:

Generic Web Hook

We now have some template code, so let’s try and run it:

Running it gives this neat little screen that wouldn’t have looked out of place on my BBS in 1995**:

The bottom line gives an address, so we can just type that into a browser:

As you can see, we do get a “WebHook Triggered” message… but things kind of go downhill from there!

There are a couple of reasons for this; the WebHook only deals with a post and, as per the default code, it needs some JSON for the body; let’s use Postman to create a new request:

This looks much better, and the console tells us that we’re firing:

Publish the App

Okay – so the function works locally, which is impressive (debugging on Azure wasn’t the easiest of things). Now we want to push it to the cloud.

This goes away for a while, compiles the app and then deploys it for us:

Your function app should now be in Azure:

Now you’ll need to find it’s URL. As already detailed in this article, you get the function URL from here:

If we switch Postman over to the cloud, we get the same result***:

Footnotes

* Actually, this is probably untrue. It was probably possible to write them in VS and publish them. There were a few add-ons knocking about in the VS gallery that claimed to allow just that.

** It was called The Twilight Zone BBS; although, if I'm being honest, while the ANSI art on it was impressive, it wasn't my art work.

*** Locally, it wasn’t that fussed about the body format (it could be text), but once it was in the cloud, it insisted on JSON.

References

https://blogs.msdn.microsoft.com/webdev/2017/05/10/azure-function-tools-for-visual-studio-2017/

http://pmichaels.net/2017/07/16/azure-functions/

Using Azure Functions to Send an E-mail Alert from a Service Bus

In this post, I discussed creating an Azure service bus that sends an e-mail as an action once a message has expired; and in this post, I covered Azure functions and setting a basic one up.

These two pieces of functionality seem to be crying out to be together. After all, if your functionality to send an e-mail is in the cloud, you don’t have to worry about your server being down (which, if your message has expired, is a real possibility).

Create the Azure Function

The first thing to do is to create the Azure function to send an e-mail. Remember that we’ll be hooking into the service bus, and so we’ll create the function a little differently.

The first few steps are the same, though:

The new function is here:

We’ll create a custom function again:

Although this looks familiar from the last post, the next part does differ slightly. This time, we’ll set up a Service Bus Trigger:

This requires the connection string to your service bus…

As you can see above, the service bus connection is blank, and there are no possible entries… onto App Settings:

App Settings

On the App Settings tab, you can configure settings that pertain to your Azure Function App. Select "Manage App Settings". Here we can set up a connection string:

Now, we should be able to see that from the Function:

Does it work?

What does this function do out of the box?

Well, having populated the queue with 50 messages that time out after 30 seconds, the function kicked in and started logging that it was picking up messages after 30 seconds – so that’s a promising sign!

The messages are processed and removed from the dead letter queue. This process happens so quickly that it's easy to interpret it as a bug (as I did) – i.e. that messages are not being dead-lettered. However, as we can see from the function logs, they are.

This did, however, leave me with a concern that the messages were being disposed of before they had been successfully processed. To check this, I changed the function slightly:
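The change itself isn't shown here, but it was essentially just forcing the function to fail before it completed; something along these lines:

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    // Deliberately fail, so the message is abandoned rather than completed
    throw new InvalidOperationException("Deliberate failure – checking the message isn't lost");
}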

So, it crashes correctly:

And here, safe and sound, are 50 freshly dead-lettered messages:

Function Code

Now we have a function, we need to make it send an e-mail… so we’ll need some code. Let’s start with what we created here.


using System;
using System.Threading.Tasks;
using System.Net.Mail;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
    message.To.Add("to.address@hotmail.co.uk");
    message.Subject = "Message in queue has expired";
    message.From = new System.Net.Mail.MailAddress("from.address@hotmail.co.uk");
    message.Body = messageText;
    System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
    smtp.Port = 587;
    smtp.UseDefaultCredentials = false;
    smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "p@ssw0rd");
    smtp.EnableSsl = true;
    smtp.Send(message);

    log.Info($"End C# ServiceBus queue trigger function processed message: {myQueueItem}");
}


This doesn’t work:

2017-06-27T16:47:56.928 Function started (Id=1188dbdb-4963-4e55-af5c-4be1f71a1ca5)
2017-06-27T16:47:56.928 Start C# ServiceBus queue trigger function processed message: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA32
2017-06-27T16:47:56.928 Function completed (Failure, Id=1188dbdb-4963-4e55-af5c-4be1f71a1ca5, Duration=0ms)
2017-06-27T16:47:57.147 Exception while executing function: Functions.ServiceBusQueueTriggerCSharp1. mscorlib: Exception has been thrown by the target of an invocation. f-ServiceBusQueueTriggerCSharp1__-1971403142: Cannot complete.
2017-06-27T16:47:57.557 Exception while executing function: Functions.ServiceBusQueueTriggerCSharp1. mscorlib: Exception has been thrown by the target of an invocation. f-ServiceBusQueueTriggerCSharp1__-1971403142: Cannot complete.

Debugging Azure

A quick side note on debugging Azure: there are a number of resources on the web with details of how this should work, and I'll probably write a later post about my own experiences, but it's a pretty flaky experience, and I ended up using trial and error to determine the issue.

Working code

using System;
using System.Threading.Tasks;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
    
    message.To.Add("to.address@hotmail.co.uk");    
    message.Subject = "Message in queue has expired";    
    message.From = new System.Net.Mail.MailAddress("from.address@hotmail.co.uk");
    message.Body = myQueueItem;
        
    System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
    smtp.Port = 587;
    smtp.UseDefaultCredentials = false;
    smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "p@ssw0rd");
    smtp.EnableSsl = true;
    smtp.Send(message);

    log.Info($"End C# ServiceBus queue trigger function processed message: {myQueueItem}");
}

So, the problem was just that I was referencing an unknown variable (messageText) instead of myQueueItem. I'm unsure exactly why I needed to travel to the mountains of Mordor to determine this – a simple error message in the online logs would have sufficed.

The other issue that I faced was a security challenge; however, once I'd persuaded Azure that this really was me, everything sprang into life:

Credit Considerations

Unlike in previous posts, where I've identified the Azure cost to be negligible, functions are the fastest way to use up credit I have found so far – especially functions such as the one I've created here. I left the (non-working) function above active, but failing, all night, and it used up over £40 worth of credit, continually trying, and failing, to process the dead-letter queue… I think the lights might even have dimmed in Redmond for a split second! The moral of the story is: be careful when you're debugging this – you can't just leave at the end of the night with a function that doesn't work, but is still active.

Summary

This concept is extremely compelling. I can have a service bus queue that is processed and monitored by an Azure function. If aliens land and steal the entire office – all the servers, dev PCs and programmers – this function will continue to run. There is obviously a mindset shift here, and it doesn't make sense to move everything into this kind of model, but consider the possibilities. Imagine a system that books holidays: it processes the customer request and adds it to a queue; the aeroplane booking system picks that from the queue and books the ticket on the plane; the car hire system takes the message to book a car; once they're all complete, they add respective messages to say so (but remain agnostic of each other); finally, if any one part of the system fails, an Azure function could sit there monitoring and cancel the whole lot. I've never worked in this kind of industry, so there's a lot that I've probably not considered, but the essence is that you can have active functionality on (even catastrophic) failure – which is a brand new concept.

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus

https://stackoverflow.com/questions/10043219/view-content-of-an-azure-service-bus-queue

Service Bus Explorer:

https://code.msdn.microsoft.com/Service-Bus-Explorer-f2abca5a

http://markheath.net/post/remote-debugging-azure-functions

Sending e-mails:

https://stackoverflow.com/questions/25216202/smtp-live-com-mailbox-unavailable-the-server-response-was-5-7-3-requested-ac

Azure Functions

Azure functions are Microsoft’s answer to “serverless” architecture. The concept behind Serverless Architecture being that you can create service functionality, but you don’t need to worry about a server. Obviously, there is one: it’s not magic; it’s just not your problem.

How?

Let’s start by creating a new Azure function app:

Once created, search “All resources”; you might need to give it a minute or two:

Next, it asks you to pick a function type. In this case, we're going to pick "Custom function":

Azure then displays a plethora of options for creating your function. We’re going to go for “Generic Webhook” (and name it):

A Webhook is an HTTP callback, meaning that you can use it in the same way as you would any other HTTP service.

This creates your function (with some default code):
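For reference, the default code looked something like this at the time (which also explains the first and last properties that we post to it shortly):

#r "Newtonsoft.Json"

using System;
using System.Net;
using Newtonsoft.Json;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    // Read the request body and deserialise it into a dynamic object
    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    if (data.first == null || data.last == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, new
        {
            error = "Please pass first/last properties in the input object"
        });
    }

    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {data.first} {data.last}!"
    });
}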

We’ll leave the default code, and run it (because you can’t go wrong with default code – it always does exactly what you need… assuming what you need is what it does):

The right-hand panel shows the output from the function, which means that the function works; so, we now have a web-based function that works… well… says hello world (ish). How do we call it?

Using the function

The function has an allocated URL:

Given that we have a service and a connection URL, the rest is pretty straightforward. Let's try to connect from a console application:

static void Main(string[] args)
{
    HttpClient client = new HttpClient();
    string url = "https://pcm-test.azurewebsites.net/api/pcm_GenericWebhookCSharp1?code=Kk2397soUoaK7hbxQa6qUSMV2S/AvLCvjn508ujAJMMZiita5TsjkQ==";

    var inputObject = new
    {
        first = "pcm-Test-input-first",
        last = "pcm-Test-input-last"
    };
    string param = JsonConvert.SerializeObject(inputObject);
    HttpContent content = new StringContent(param, Encoding.UTF8, "application/json");

    HttpResponseMessage response = client.PostAsync(url, content).Result;
    string results = response.Content.ReadAsStringAsync().Result;

    Console.WriteLine($"results: {results}");
    Console.ReadLine();
}

When run, this returns:

Conclusion

Let’s think about what we’ve just done here: we have set up a service, connected to that service from a remote source and returned data. Now, let’s think about what we haven’t done: any configuration; that is, other than clicking “Create Function”.

This “serverless” architecture seems to be the nth degree of SOA. If I wish, I can create one of these functions for each of the server activities in my application, they are available to anything with an internet connection. It then becomes Microsoft’s problem if my new website suddenly takes off and millions of people are trying to access it.

References

http://robertmayer.se/2016/04/19/azure-function-app-to-send-emails/

http://www.c-sharpcorner.com/article/azure-functions-create-generic-webhook-trigger/