Category Archives: Azure

Adding to an Existing Azure Blob

In a previous post, I briefly covered the concept of Storage Accounts and Blob Storage; however, there is more to blobs than that simple use case. In this post, I’ll explore creating a blob file from a text stream, and then adding to that file.

As is stated in the post referenced above, Azure provides a facility for storing files in what are known as Azure Blobs.

In order to upload a file to a blob, you need a storage account, and a container. Setting these up is a relatively straightforward process and, again, is covered in the post above.

Our application here will take the form of a simple console app that will prompt the user for some text, and then add it to the file in Azure.

Set-up

Once you’ve set up your console app, you’ll need the Azure Storage NuGet package.

Also, add the connection string for your storage account to the app.config:

<connectionStrings>
    <add name="Storage" connectionString="DefaultEndpointsProtocol=https;AccountName=testblob;AccountKey=wibble/dslkdsjdljdsoicj/rkDL7Ocs+aBuq3hpUnUQ==;EndpointSuffix=core.windows.net"/>
</connectionStrings>
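The snippets in this post assume the classic storage SDK (the WindowsAzure.Storage NuGet package mentioned above); on that assumption, these are the using directives you’ll need (ConfigurationManager also needs a reference to System.Configuration):

using System.Configuration;                 // ConfigurationManager
using Microsoft.WindowsAzure.Storage;       // CloudStorageAccount
using Microsoft.WindowsAzure.Storage.Blob;  // CloudBlobClient, CloudBlockBlob, CloudAppendBlob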

Here’s the basic code for the console app:

static void Main(string[] args)
{
    Console.Write("Please enter text to add to the blob: ");
    string text = Console.ReadLine();
 
    UploadNewText(text);
 
    Console.WriteLine("Done");
    Console.ReadLine();
}

I’ll bet you’re glad I posted that, otherwise you’d have been totally lost. The following snippets are possible implementations of the method UploadNewText().

Uploading to BlockBlob

The following code will upload a file to a blob container:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "test.txt";
string containerString = "mycontainer";
 
using (MemoryStream stream = new MemoryStream())
using (StreamWriter sw = new StreamWriter(stream))
{
    sw.Write(text);
    sw.Flush();
    stream.Position = 0;
 
    CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
    CloudBlobClient client = storage.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference(containerString);
    CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
    blob.UploadFromStream(stream);
}

(note that the name of the container in this code is case sensitive)

If we have a look at the storage account, a text file has, indeed, been created:

New Blob

But, what if we want to add to that? Well, running the same code again will work, but it will replace the existing file. To prove that, I’ve changed the text to “Test data 2” and run it again:

Test Data

So, how do we update the file? One possibility is to download the existing file, append to it, and upload it again; that would look something like this:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "test.txt";
string containerString = "mycontainer";
 
CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
CloudBlobClient client = storage.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference(containerString);
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
 
using (MemoryStream stream = new MemoryStream())
{
    blob.DownloadToStream(stream);
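    // DownloadToStream leaves the stream's position at the end of the existing
    // content, so the text written below ends up appended to that content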
 
    using (StreamWriter sw = new StreamWriter(stream))
    {
        sw.Write(text);
        sw.Flush();
        stream.Position = 0;
 
        blob.UploadFromStream(stream);
    }
}

This obviously means two round trips to the server, which isn’t the best thing in the world. Another possible option is to use the Append Blob…

Azure Append Blob Storage

There is a blob type that allows you to append to it without downloading the existing contents first; for example:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "testAppend.txt";
string containerString = "mycontainer";
 
CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
CloudBlobClient client = storage.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference(containerString);
CloudAppendBlob blob = container.GetAppendBlobReference(fileName);
if (!blob.Exists()) blob.CreateOrReplace();
 
using (MemoryStream stream = new MemoryStream())
using (StreamWriter sw = new StreamWriter(stream))
{
    sw.Write("Test data 4");
    sw.Flush();
    stream.Position = 0;
 
    blob.AppendFromStream(stream);                
}

There are a few things to note here:

  • The reason that I changed the name of the blob is that you can’t append to a BlockBlob (at least not using an AppendBlob); so it has to have been created for the purpose of appending.
  • While UploadFromStream will just create the file if it doesn’t exist, with the AppendBlob, you need to do it explicitly.

PutBlock

The final alternative here is to use PutBlock. This can bridge the gap, by allowing the addition of blocks into an existing block blob. However, you either need to maintain the Block ID list manually, or download the existing block list; here’s an example of creating, or adding to a file using the PutBlock method:

string connection = ConfigurationManager.ConnectionStrings["Storage"].ConnectionString;
string fileName = "test4.txt";
string containerString = "mycontainer";
 
CloudStorageAccount storage = CloudStorageAccount.Parse(connection);
CloudBlobClient client = storage.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference(containerString);
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
 
ShowBlobBlockList(blob);
 
using (MemoryStream stream = new MemoryStream())
using (StreamWriter sw = new StreamWriter(stream))
{
    sw.Write(text);
    sw.Flush();
    stream.Position = 0;
 
    double seconds = (DateTime.Now - new DateTime(2000, 1, 1)).TotalSeconds;
    string blockId = Convert.ToBase64String(
        ASCIIEncoding.ASCII.GetBytes(seconds.ToString()));
 
    Console.WriteLine(blockId);
    //string blockHash = GetMD5HashFromStream(bytes);                
 
    List<string> newList = new List<string>();
    if (blob.Exists())
    {
        IEnumerable<ListBlockItem> blockList = blob.DownloadBlockList();
 
        newList.AddRange(blockList.Select(a => a.Name));
    }
 
    newList.Add(blockId);
 
    blob.PutBlock(blockId, stream, null);
    blob.PutBlockList(newList.ToArray());
}

The code above owes a lot to the advice given on this Stack Overflow question.

In order to avoid conflicts in the Block IDs, I’ve used a count of seconds since an arbitrary date. Obviously, this won’t work in all cases; note, too, that Azure requires all the block IDs in a blob to be the same length, so an ID whose length can change will eventually cause problems. Further, it’s worth noting that the code above still makes two trips to the server (it has to download the block list).

The commented MD5 hash allows you to provide some form of check on the data being valid, should you choose to use it.
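The original GetMD5HashFromStream isn’t shown (and the commented call refers to a bytes variable that doesn’t appear in the snippet); a minimal sketch, working on the stream instead, might look like this:

public static string GetMD5HashFromStream(Stream stream)
{
    long originalPosition = stream.Position;
    stream.Position = 0;

    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        byte[] hash = md5.ComputeHash(stream);
        stream.Position = originalPosition; // leave the stream as we found it
        return Convert.ToBase64String(hash);
    }
}

The result could then be passed as the third argument to PutBlock (in place of the null above).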

What is ShowBlobBlockList(blob)?

The following function will give some details relating to the existing blocks (it is shamelessly plagiarised from here):

public static void ShowBlobBlockList(CloudBlockBlob blockBlob)
{
    if (!blockBlob.Exists()) return;
 
    IEnumerable<ListBlockItem> blockList = blockBlob.DownloadBlockList(BlockListingFilter.All);
    int index = 0;
    foreach (ListBlockItem blockListItem in blockList)
    {
        index++;
        Console.WriteLine("Block# {0}, BlockID: {1}, Size: {2}, Committed: {3}",
            index, blockListItem.Name, blockListItem.Length, blockListItem.Committed);
    }
}

Summary

Despite being an established technology, these methods and techniques are sparsely documented on the web. Obviously, there are Microsoft docs, and they are helpful, but, unfortunately, not exhaustive.

References

https://stackoverflow.com/questions/33088964/append-to-azure-append-blob-using-appendtextasync-results-in-missing-data

https://docs.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs

http://www.c-sharpcorner.com/UploadFile/40e97e/windows-azure-blockblob-putblock-method/

https://docs.microsoft.com/is-is/rest/api/storageservices/put-block

https://www.red-gate.com/simple-talk/cloud/platform-as-a-service/azure-blob-storage-part-4-uploading-large-blobs/

https://stackoverflow.com/questions/46368954/can-putblock-be-used-to-append-to-an-existing-blockblob-in-azure

Azure Recommendations

Azure provides a number of pre-configured machine learning services out of the box. One of these (still in beta at the time of writing) is Recommendations. The idea is that, given a list of items, it will try to work out which you would prefer, based on knowledge about your habits. There’s a lot of information online about this but, briefly, it can work out your preference based on a combination of your past activity and the past activity of others that have shown an interest in the same item.

Obviously, the “items” could be products, films, aardvarks, or sheep; Azure doesn’t know anything about the content of what it’s recommending; if you’ve bought* “A” in the past, and 75% of everyone else that has bought “A” has also bought “B”, then there’s a chance you’ll want “B”. “A” could be an apple, and “B” could be a pair of sunglasses; so, obviously, you need to be careful about the data that you feed it.

Recommendations API

The first thing to note is that the Recommendations API represents an earlier attempt to implement this by Microsoft, and is due to be discontinued early next year (2018). If you try to use this to follow any of the online tutorials then you’ll get into a world of hurt.

Deploy Recommendations

The new method of creating a recommendations service is via a wizard (which, I believe, builds a custom ARM template behind the scenes). This is the start of the deployment, and gives you a screen similar to the following (once you’ve logged in):

As you can see, there’s clearly some re-branding in progress here; anyway, complete the form and create the service.

Another thing that has changed in the new version of this is that the free pricing tier has disappeared:

After a few screens, it starts the deployment process:

The next screen that is displayed shows all the connection strings and keys in one handy reference:

… they are just below this screenshot.

This should create four separate services:

Sample Project

Microsoft provides a sample project that should work out of the box (they actually provide more than one – some of which work better than others). This one uses AutoRest, but there’s another referenced at the bottom of this post.

In this project, open Recommendations.Sample.Program.cs and, at the top of the main function, enter the details that you noted after the creation of your service. If you didn’t note them, then you can still find them. You’ll notice that four separate services were created: AppService, StorageAccount, App Insights and an App Service Plan.

recommendationsEndPointUri

Is found in the URL of the App Service:

apiAdminKey

Is found in the Application Settings of the App Service:

connectionString

Is the connection string of the storage account:

If you run this now, you should find that it will process and score the recommendations:

So, for example, we can see from the results above that people that bought DHF-01159 are recommended to buy DHF-01055 (although it doesn’t seem very convinced).

Footnotes

* The term that Azure uses here is “Purchase”. Different actions have different weighting (configurable), but by default, you would assume that buying something is more important than, for example, clicking on it (“Click” is another action). These actions can mean anything you choose; in the sheep example above, “Purchase” might mean shearing, and “Click” might mean photographing.

References

http://pmichaels.net/2017/08/06/deploying-azure-recommendation-service-using-arm-template/

https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-of-diagnostic-logs

https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-recommendations-ui-intro

https://gallery.cortanaintelligence.com/Tutorial/Recommendations-Solution

https://github.com/Microsoft/Cognitive-Recommendations-Windows

SendGrid – Azure e-mail functionality

In this post, I discussed the prospect of sending an e-mail from an Azure function in order to alert someone that something had gone wrong. In one of the comments, it was suggested that I should look into a third party tool called “SendGrid”, and this post is the result of that investigation.

Azure Configuration

SendGrid is a third party application, and so the first thing you need to do is to create an account:

The free tier covers you for 25,000 e-mails per month. However, you do get a scary warning that, because this isn’t a Microsoft product, it is not covered by Azure credits.

Anyway, click create and, after a while, your new SendGrid account should be created:

You’ll need to get the API Key: to do this, select Manage:

That takes you to https://app.sendgrid.com, where you can select to create an API Key:

Clearly, you wouldn’t want a full access account just to send an e-mail in real life… but Restricted Access has a form that would take longer to fill in, and I can’t be mithered*… so we’ll go with full access for now.

Once you’ve created it, and given it a name, you should have a key (remember that key – don’t write it down, or copy it, you must remember it!).

Code

Create a new Function App, and add the SendGrid NuGet package:

https://www.nuget.org/packages/Sendgrid/

In this case, let’s create an HttpTrigger function (this will fire when a web address is accessed), the body of which needs to look something like this:

        [FunctionName("SendEmail")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, 
            TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // parse query parameter
            var addresses = req.GetQueryNameValuePairs();
            string[] addressArr = addresses
                .Where(a => a.Key == "address")
                .Select(a => a.Value).ToArray();

            if (addressArr.Count() == 0)
            {
                return req.CreateResponse(HttpStatusCode.BadRequest, 
                    "Please pass an address in the query string");
            }
            else
            {
                HttpStatusCode status = await CallSendEmailAsync(
                    "noreply@test.com", addressArr, "test e-mail",
                    "Once more unto the breach, dear friends, once more;\n" +
                    "Or close the wall up with our English dead.   \n" +
                    "In peace there's nothing so becomes a man     \n" +
                    "As modest stillness and humility:             \n" +
                    "But when the blast of war blows in our ears,  \n" +
                    "Then imitate the action of the tiger;         \n" +
                    "Stiffen the sinews, summon up the blood,");
                switch (status)
                {
                    case HttpStatusCode.OK:
                    case HttpStatusCode.Accepted:
                    {
                        return req.CreateResponse(status, "Mail Successfully Sent");
                    }
                    default:
                    {
                        return req.CreateResponse(status, "Unable to send e-mail");
                    }
                }
            }
        }

The CallSendEmailAsync helper method might look like this:

public static async Task<HttpStatusCode> CallSendEmailAsync(string from, string[] recipients, string subject, string body)
{
    EmailAddress fromAddress = new EmailAddress(from);
 
    SendGridMessage message = new SendGridMessage()
    {
        From = fromAddress,
        Subject = subject,
        HtmlContent = body
    };
 
    message.AddTos(recipients.Select(r => { return new EmailAddress(r); }).ToList());
   
    string sendGridApiKey = "AB.keythisisthekey.nkfdhfkjfhkjfd0ei8L9xTyaTCzy_sV5gPJNX-3";
 
    SendGridClient client = new SendGridClient(sendGridApiKey);
    Response response = await client.SendEmailAsync(message);
 
    return response.StatusCode;
}

The key is the string that I said you should remember earlier.
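Hard-coding the key is fine for a demo but, in reality, you’d put it in the Function App’s application settings and read it at runtime. App settings surface as environment variables so, assuming a setting named “SendGridApiKey” (a name I’ve made up here), that would look something like this:

// "SendGridApiKey" is whatever name you gave the setting in the portal
string sendGridApiKey = Environment.GetEnvironmentVariable("SendGridApiKey");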

You can just paste the URL into a browser; note that, because the code reads every query string parameter named “address”, each recipient needs its own address key:

https://url.com?address=my@email.com&address=my-second@email.com

As you will see from the link at the bottom of this post, OK (200) signifies that the message is valid, but not queued to be delivered; and Accepted (202) indicates that the message is valid and queued. My guess is that if you get an OK, it means that the mail has already been delivered.

Footnotes

* It probably took longer to type this out than to do it.

References

https://go.sendgrid.com/

https://sendgrid.com/docs/API_Reference/Web_API_v3/Mail/errors.html

AutoRest

In the world of WCF and SOAP, if you want to consume a WCF service, you simply ask VS to add a service reference, direct it to the WSDL file, and it generates a client for you. Wouldn’t it be nice if you could do the same for a REST service!

It turns out that you can (although, admittedly, the process is not as seamless). The tool of choice for providing this WSDL emulation is called Swagger. This post isn’t about Swagger, and it assumes that you know your Swagger source. Azure has its own flavour of Swagger, called “Azure Swagger” (seriously). Consuming that is the main focus of this post.

AutoRest

AutoRest is the client-side tool that allows you to automatically create proxy classes for your REST interactions, and is available via NuGet… but it doesn’t work at the time of writing this.

Instead, use Node Package Manager (NPM). First, install it, via this link.

Next, install AutoRest. Your friend here is the Package Manager Console:

npm install -g autorest

Update the version (this is, as I understand it, not strictly necessary, as it will install the latest version):

autorest --latest

Finally, check the version that is installed (okay, clearly this isn’t necessary at all, but I like to check that the computer and me both have a synchronised view of events as often as possible):

autorest --list-installed

Next, check where you are:

Then call AutoRest:

AutoRest -Input [SwaggerLocation] -Namespace [Namespace]
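For example, with a hypothetical Swagger location and namespace, that might look like:

AutoRest -Input https://example.com/swagger/v1/swagger.json -Namespace Recommendations.Client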

A note on Namespaces

It’s tempting to be dismissive of the namespace. Remember, though, that the namespace you give will be assigned to, and used by, the generated classes; so if you give a namespace that matches a generated class name (for example, if you’re creating Recommendations, you’ll have a class called “Recommendations”), you’ll get thousands of strange errors that apparently make no sense. Think carefully about the namespace!

Where is the Swagger Location?

Obviously, the answer is: it depends… However, for Azure, most services have it readily available; let’s take Recommendations:

References

https://github.com/Azure/autorest

https://azure.microsoft.com/en-gb/resources/videos/inside-autorest-with-david-justice/

https://www.nuget.org/packages/autorest/

https://dzimchuk.net/generating-clients-for-your-apis-with-autorest/

https://github.com/Azure/autorest/blob/master/README.md#installing-autorest

Function Apps in Azure

With Update 15.3.1 for Visual Studio came the ability to create Function Apps in VS. Previously, writing functions meant coding in the browser, directly on Azure*.

Set-up

The first step is to download and install the latest version of VS, or to update to it using the Visual Studio Installer (at the time of writing, this was 15.3.3 – but, as stated above, it’s 15.3.1 that has the Function App update).

Once this is done, you need to launch the Visual Studio Installer again.

Select the Azure Workload (if you haven’t already):

The Microsoft article, referenced at the bottom of this post, answers the issue of what happens if this doesn’t work on its own; it says:

If for some reason the tools don’t get automatically updated from the gallery…

I’ve now done this twice, on two separate machines and, on both occasions, the tools have not automatically been updated from the gallery (it also sounds like the author of the article doesn’t really know why this happens). Assuming that the reader of this article will suffer the same fate, they should update the Azure gallery extension (if you don’t have to do that then please leave a comment – I’m interested to know if it ever works):

Close everything (including the installer) and this appears:

Finally, we see the new app type:

Function Apps

Once you create a new function app, you get an empty project:

To add a new function, you can right click on the solution (as you would for a new class file) and select new function:

New Function

You then, helpfully, get asked what kind of function you would like:

Function Type

Let’s select Generic WebHook:

Generic Web Hook

We now have some template code, so let’s try and run it:
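For reference, the generated code looks something like this (reproduced from memory, so treat it as approximate; using directives omitted):

[FunctionName("GenericWebHookCSharp")]
public static async Task<object> Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    if (data.first == null || data.last == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, new
        {
            error = "Please pass first/last properties in the input object"
        });
    }

    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {data.first} {data.last}!"
    });
}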

Running it gives this neat little screen that wouldn’t have looked out of place on my BBS in 1995**:

The bottom line gives an address, so we can just type that into a browser:

As you can see, we do get a “WebHook Triggered” message… but things kind of go downhill from there!

There are a couple of reasons for this; the WebHook only deals with a post and, as per the default code, it needs some JSON for the body; let’s use Postman to create a new request:
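The body just needs to be JSON containing the properties that the template code checks for; for example:

{
    "first": "test",
    "last": "user"
}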

This looks much better, and the console tells us that we’re firing:

Publish the App

Okay – so the function works locally, which is impressive (debugging on Azure wasn’t the easiest of things). Now we want to push it to the cloud.

This goes away for a while, compiles the app and then deploys it for us:

Your function app should now be in Azure:

Now you’ll need to find its URL. As already detailed in this article, you get the function URL from here:

If we switch Postman over to the cloud, we get the same result***:

Footnotes

* Actually, this is probably untrue. It was probably possible to write them in VS and publish them. There were a few add-ons knocking about in the VS gallery that claimed to allow just that.

** It was called The Twilight Zone BBS; if I’m being honest, although the ANSI art on it was impressive, it wasn’t my art work.

*** Locally, it wasn’t that fussed about the body format (it could be text), but once it was in the cloud, it insisted on JSON.

References

https://blogs.msdn.microsoft.com/webdev/2017/05/10/azure-function-tools-for-visual-studio-2017/

http://pmichaels.net/2017/07/16/azure-functions/

Creating a Basic Azure Web Job

In this article, I discussed the use of Azure functions; however, Web Jobs perform a similar task. Azure Functions are effectively an abstraction on top of Web Jobs – meaning that, while you have more control when using Web Jobs, there’s a little more to do when writing them.

This article covers the basics of Web Jobs, and has a walk-through for creating a very simple task using one.

Create a new Web Job

Once you create this project, you’ll need to fill in the following values in the app.config:

<configuration>
  <connectionStrings>
    <!-- The format of the connection string is "DefaultEndpointsProtocol=https;AccountName=NAME;AccountKey=KEY" -->
    <!-- For local execution, the value can be set either in this config file or through environment variables -->
    <add name="AzureWebJobsDashboard" connectionString="" />
    <add name="AzureWebJobsStorage" connectionString="" />
  </connectionStrings>
</configuration>

These can both be the same value, but they refer to where Azure stores its data.

AzureWebJobsDashboard

This is the storage account used to store logs.

AzureWebJobsStorage

This is the storage account used to store whatever the application needs to function (for example: queues or tables). In the example below, it’s where the file will go.

Storage accounts can be set-up from the Azure dashboard (more on this later):

A Basic Application

For this example, let’s take a file from a blob storage and parse it, then write out the result in a log. Specifically, we’ll take an XML file, and write the number of nodes into a log; here’s the file:

<test>
    <myNode>
    </myNode>
    <myNode>
    </myNode>
</test>

I think we’ll probably be looking for a figure around 2.

Blob Storage

Before we can do anything with blob storage, we’ll need a new storage area; create a new storage account:

Set the storage kind to “General Storage” (because we’re working with files); other than that, go with your gut.

Uploading

Once you’ve created the account, you’ll need to add a file – otherwise nothing will happen. You can do this in the web portal, or you can do it via a desktop utility that Microsoft provide: Storage Explorer.

I kind of expected this to take me to the web page mentioned… but it doesn’t! You have to navigate there manually:

http://storageexplorer.com

Install it… unless you want to upload your file using the web portal… in which case: don’t.

We can create a new container:

Now, we can see the storage account and any containers:

Now, you can upload a file from here (remember that you can do all this inside the Portal):

Once you’ve created this, go back and update the storage connection string (described above). You may also want to repeat the process for a dashboard storage area (or, as stated above, they can be the same).

Programmatically Downloading

Now that we have a file in the container, it can be downloaded via the WebJob; here’s a function that will download a file:

        public static async Task<string> GetFileContents(string connectionString, string containerString, string fileName)
        {
            CloudStorageAccount storage = CloudStorageAccount.Parse(connectionString);
            CloudBlobClient client = storage.CreateCloudBlobClient();
            CloudBlobContainer container = client.GetContainerReference(containerString);
            CloudBlob blob = container.GetBlobReference(fileName);

            using (MemoryStream ms = new MemoryStream())
            {
                await blob.DownloadToStreamAsync(ms);
                ms.Position = 0;

                using (StreamReader sr = new StreamReader(ms))
                {
                    string contents = sr.ReadToEnd();
                    return contents;
                }
            }
        }

The code to call this is here (note the commented out commands from the default WebJob Template):

        static void Main()
        {
            Console.WriteLine("Starting");

            var config = new JobHostConfiguration();

            if (config.IsDevelopment)
            {
                config.UseDevelopmentSettings();
            }

            //var host = new JobHost();

            string fileContents = AzureHelpers.GetFileContents(config.StorageConnectionString, "testblob", "test.xml").Result;
            Console.WriteLine(fileContents);

            // The following code ensures that the WebJob will be running continuously
            //host.RunAndBlock();

            Console.WriteLine("Done");
        }

Although this works (sort of – it doesn’t check for new files, and it would need to be run on a scheduled basis – “On Demand” in Azure terms), you don’t need it (at least not for jobs that react to files being uploaded to storage containers). WebJobs provide this functionality out of the box! There are a number of decorators that you can use for various purposes:

  • string
  • TextReader
  • Stream
  • ICloudBlob
  • CloudBlockBlob
  • CloudPageBlob
  • CloudBlobContainer
  • CloudBlobDirectory
  • IEnumerable<CloudBlockBlob>
  • IEnumerable<CloudPageBlob>

Here, we’ll use a BlobTrigger and accept a string. Moreover, doing it this way makes the writing to the log much easier, as there’s injection of sorts (at least I’m assuming that’s what it’s doing). Here’s what the complete solution looks like in the new paradigm:

        public static void ProcessFile([BlobTrigger("testblob/{name}")] string fileContents, TextWriter log)
        {            
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.LoadXml(fileContents);            
            log.WriteLine($"Node count: {xmlDoc.FirstChild.ChildNodes.Count}");
        }

The key thing to notice here is that the function is static and public (the class it’s in needs to be public, too – even if that’s the Program class). The WebJob framework uses reflection to work out which functions it needs to run.

The other point to note is that I’m receiving the parameter as a string – the list above details the alternatives; for example, if you wanted to delete the blob afterwards, you’d probably want to use an ICloudBlob or something similar.
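As a sketch of that idea (hypothetical code, binding to ICloudBlob so that the blob itself is available):

        public static void ProcessAndDeleteFile([BlobTrigger("testblob/{name}")] ICloudBlob blob, TextWriter log)
        {
            log.WriteLine($"Processing blob: {blob.Name}");

            // ... process the contents here ...

            blob.Delete(); // remove the blob once it has been processed
        }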

Anyway, it works:

The log file

Remember the storage area that we specified for the dashboard earlier? You should now see some new containers created in that storage area:

This has created a number of directories, but the one that we’re interested in is “output-logs” in the “azure-webjobs-hosts” container:

And here’s the log itself:

References

https://docs.microsoft.com/en-us/azure/app-service-web/web-sites-create-web-jobs

https://stackoverflow.com/questions/36610952/azure-webjobs-vs-azure-functions-how-to-choose

https://stackoverflow.com/questions/27580264/where-do-i-get-the-azurewebjobsdashboard-connection-string-information

http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx

https://stackoverflow.com/questions/24286214/where-are-azure-webjobs-blobinput-and-bloboutput-classes

https://docs.microsoft.com/en-us/azure/app-service-web/websites-dotnet-webjobs-sdk-storage-blobs-how-to

Deploying an Azure Recommendation Service Using an ARM Template

Azure provides a number of AI services out of the box. The recommendation service is one of these, and it’s part of the Azure Cognitive Services.

Why?

Deploying a new service to Azure is quite straightforward; for recommendations, you navigate to the portal and select a new service:

Then you select the various options one by one, and finally, you create the resource.

But, what if you want to create this in development, and then in test, and then again in production, or what if you want to deploy it again multiple times? Although it’s straightforward, putting data in this kind of form is prone to error – and it’s time consuming.

ARM Templates

Azure allows you to create a template, and to create your resource based on that. There are a number of ways to do this; ultimately, it’s just a JSON document, so you could open up notepad and just type one.

Here’s how I created it initially:

Create a new resource group:

However, this doesn’t seem to give you too much out of the box (there are templates, but recommendations isn’t one of them):

Fortunately, you can reverse engineer the deployment that you’ve already made:

Downloading this gives you everything you need to re-deploy:

Running the template

So, now you’ve got a JSON file, how do you tell Azure what to do? PowerShell seems to be Microsoft’s answer of choice (as it is for so many things these days).

You’ll need to change the execution policy first:

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Unrestricted

Then run the script:
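The exported bundle includes a deploy.ps1; at the time, calling it looked something like the following (the parameter names can vary between export versions, so check the script itself):

.\deploy.ps1 -subscriptionId "<subscription id>" -resourceGroupName "pcm-dev" -deploymentName "testDeployment"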

Success

When it works, you’ll get something like this:

And here’s the service:

Errors

It would be a gross exaggeration to say this worked straight away for me; here’s the errors that I encountered, and how I resolved them.

Resource Group Name is null

New-AzureRmResourceGroupDeployment : 18:23:28 - Error: Code=InvalidDeploymentParameterValue; Message=The value of deployment parameter 'accounts_TestRecommendations_name' is null. Please specify the value or use the parameter reference. See https://aka.ms/arm-deploy/#parameter-file for details.
At C:\Users\Paul\Downloads\ExportedTemplate-pcm-dev\deploy.ps1:104 char:5
+ New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGr …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdlet

Resolution

This is caused because the parameter is set to null by default. Change parameters.json:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "accounts_TestRecommendations_name": {
            "value": "testRecommendations1"
        }
    }
}

No connection

Login-AzureRmAccount : The browser based authentication dialog failed to complete. Reason: The server or proxy was not found.
At C:\Users\Paul\Downloads\ExportedTemplate-pcm-dev\deploy.ps1:71 char:1
+ Login-AzureRmAccount;
+ ~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Add-AzureRmAccount], AadAuthenticationFailedException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.AddAzureRMAccountCommand

Resolution

This is caused by not having a connection to Azure… so the resolution is to connect.

Invalid parameter value

C:\Users\Paul\Downloads\ExportedTemplate-pcm-dev\deploy.ps1 : Cannot retrieve the dynamic parameters for the cmdlet.
Error parsing boolean value. Path 'parameters.accounts_TestRecommendations_name.value', line 6, position 22.
At line:1 char:1
+ .\deploy.ps1
+ ~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [deploy.ps1], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,deploy.ps1

Resolution

In my first attempt to resolve the first error, I specified a name without quotes; i.e.:

            "value": testRecommendations1

This seems to cause Azure to consider this a boolean; the fix is pretty straightforward once you’ve worked out what it’s actually saying:

            "value": "testRecommendations1"

Error

New-AzureRmResourceGroupDeployment : 07:58:51 - Resource Microsoft.CognitiveServices/accounts 'testRecommendations1'
failed with message '{
  "error": {
    "code": "CanNotCreateMultipleFreeAccounts",
    "message": "Operation failed. Only one free account is allowed for account type 'Recommendations'."
  }
}'
At C:\Users\Paul\Downloads\ExportedTemplate-pcm-dev\deploy.ps1:104 char:5
+ New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGr …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdlet

Resolution

This was caused because my account would only allow me to have a single recommendations service at any one time; so the fix is to delete the existing recommendations account:

References

https://blogs.endjin.com/2015/07/using-azure-resource-manager-and-powershell-dsc-to-create-and-provision-a-vm/

https://blogs.endjin.com/2016/01/azure-resource-manager-authentication-from-a-powershell-script/

Using Azure Functions to Send an E-mail Alert from a Service Bus

In this post, I discussed creating an Azure service bus that sends an e-mail as an action once a message has expired; and in this post, I covered Azure functions and setting a basic one up.

These two pieces of functionality seem to be crying out to be together. After all, if your functionality to send an e-mail is in the cloud, you don’t have to worry about your server being down (which, if your message has expired, is a real possibility).

Create the Azure Function

The first thing to do is to create the Azure function to send an e-mail. Remember that we’ll be hooking into the service bus, and so we’ll create the function a little differently.

The first few steps are the same, though:

The new function is here:

We’ll create a custom function again:

Although this looks familiar from the last post, the next part does differ slightly. This time, we’ll set up a Service Bus Trigger:

This requires the connection string to your service bus…

As you can see above, the service bus connection is blank, and there are no possible entries… onto App Settings:

App Settings

On the App Settings tab, you can configure settings that pertain to your Azure Function App. Select “Manage App Settings”; here we can set up a connection string:

Now, we should be able to see that from the Function:

Does it work?

What does this function do out of the box?

Well, having populated the queue with 50 messages that time out after 30 seconds, the function kicked in and started logging that it was picking up messages after 30 seconds – so that’s a promising sign!

The messages are processed and removed from the dead letter queue. This process happens so quickly that it’s easy to interpret it as a bug (as I did) and conclude that messages are not being dead-lettered. However, as we can see from the function logs, they are.

This did, however, leave me with a concern that the messages were being disposed of before they had been successfully processed. To check this, I changed the function slightly:
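The change isn’t shown here, but it amounted to throwing an exception before the message could be completed; something along these lines:

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    // deliberately fail, so that the message is not removed from the dead-letter queue
    throw new Exception("Deliberate failure - checking that messages survive");
}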

So, it crashes correctly:

And here, safe and sound, are 50 freshly dead-lettered messages:

Function Code

Now we have a function, we need to make it send an e-mail… so we’ll need some code. Let’s start with what we created here.


using System;
using System.Threading.Tasks;
using System.Net.Mail;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
    message.To.Add("to.address@hotmail.co.uk");
    message.Subject = "Message in queue has expired";
    message.From = new System.Net.Mail.MailAddress("from.address@hotmail.co.uk");
    message.Body = messageText;
    System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
    smtp.Port = 587;
    smtp.UseDefaultCredentials = false;
    smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "p@ssw0rd");
    smtp.EnableSsl = true;
    smtp.Send(message);

    log.Info($"End C# ServiceBus queue trigger function processed message: {myQueueItem}");
}


This doesn’t work:

2017-06-27T16:47:56.928 Function started (Id=1188dbdb-4963-4e55-af5c-4be1f71a1ca5)
2017-06-27T16:47:56.928 Start C# ServiceBus queue trigger function processed message: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA32
2017-06-27T16:47:56.928 Function completed (Failure, Id=1188dbdb-4963-4e55-af5c-4be1f71a1ca5, Duration=0ms)
2017-06-27T16:47:57.147 Exception while executing function: Functions.ServiceBusQueueTriggerCSharp1. mscorlib: Exception has been thrown by the target of an invocation. f-ServiceBusQueueTriggerCSharp1__-1971403142: Cannot complete.
2017-06-27T16:47:57.557 Exception while executing function: Functions.ServiceBusQueueTriggerCSharp1. mscorlib: Exception has been thrown by the target of an invocation. f-ServiceBusQueueTriggerCSharp1__-1971403142: Cannot complete.

Debugging Azure

A quick side note on debugging Azure. There are a number of resources with details of how this should work on the web, and I’ll probably have a later post of my own experiences, but it’s a pretty flaky experience, and I ended up using trial and error to determine the issue.

Working code

using System;
using System.Threading.Tasks;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Start C# ServiceBus queue trigger function processed message: {myQueueItem}");

    System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
    
    message.To.Add("to.address@hotmail.co.uk");    
    message.Subject = "Message in queue has expired";    
    message.From = new System.Net.Mail.MailAddress("from.address@hotmail.co.uk");
    message.Body = myQueueItem;
        
    System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
    smtp.Port = 587;
    smtp.UseDefaultCredentials = false;
    smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "p@ssw0rd");
    smtp.EnableSsl = true;
    smtp.Send(message);

    log.Info($"End C# ServiceBus queue trigger function processed message: {myQueueItem}");
}

So, the problem was just that I was referencing an unknown variable (messageText). I’m unsure exactly why I needed to travel to the mountains of Mordor to determine this – a simple error message in the online text would have sufficed.

The other issue that I faced was a security challenge; however, once I’d persuaded Azure that this really was me, everything sprang into life:

Credit Considerations

Unlike in previous posts, where I’ve identified the Azure cost to be negligible, functions are the fastest way to use up credit I have found so far – especially functions such as the one I’ve created here. I left the (non-working) function above active, but failing, all night, and it used up over £40 worth of credit, continually trying, and failing, to process the dead-letter queue… I think the lights might even have dimmed in Redmond for a split second! The moral of the story is: be careful when you’re debugging this – you can’t just leave at the end of the night with a function that doesn’t work, but is still active.

Summary

This concept is extremely compelling. I can have a service bus queue that is processed and monitored by an Azure function. If aliens land and steal the entire office, all the servers, dev PCs and programmers, this function will continue to run. There is obviously a mindset shift here, and it doesn’t make sense to move everything into this kind of model, but consider the possibilities; imagine a system that books holidays: it processes the customer request and adds it to a queue; the aeroplane booking system picks that from the queue and books the ticket on the plane, the car hire system takes the message to book a car, once they’re all complete they add respective messages to say so (but remain agnostic of each other), finally, if any one part of the system fails, an Azure function could sit there monitoring and cancel the whole lot. I’ve never worked in this kind of industry, so there’s a lot that I’ve probably not considered, but the essence is that you can have active functionality on (even catastrophic) failure – which is a brand new concept.

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus

https://stackoverflow.com/questions/10043219/view-content-of-an-azure-service-bus-queue

Service Bus Explorer:

https://code.msdn.microsoft.com/Service-Bus-Explorer-f2abca5a

http://markheath.net/post/remote-debugging-azure-functions

Sending e-mails:

https://stackoverflow.com/questions/25216202/smtp-live-com-mailbox-unavailable-the-server-response-was-5-7-3-requested-ac

Azure Functions

Azure functions are Microsoft’s answer to “serverless” architecture. The concept behind serverless architecture is that you can create service functionality without needing to worry about a server. Obviously, there is one: it’s not magic; it’s just not your problem.

How?

Let’s start by creating a new Azure function app:

Once created, search “All resources”; you might need to give it a minute or two:

Next, it asks you to pick function type. In this case, we’re going to pick “Custom function”:

Azure then displays a plethora of options for creating your function. We’re going to go for “Generic Webhook” (and name it):

A webhook is an HTTP callback, meaning that you can use it in the same way as you would any other HTTP service.

This creates your function (with some default code):

We’ll leave the default code, and run it (because you can’t go wrong with default code – it always does exactly what you need… assuming what you need is what it does):

The right-hand panel shows the output from the function, which means that the function works; so, we now have a web-based function that works… well… says hello world (ish). How do we call it?

Using the function

The function has an allocated URL:

Given that we have a service, and a connection URL; the rest is pretty straightforward. Let’s try to connect from a console application:

        static void Main(string[] args)
        {
            HttpClient client = new HttpClient();
            string url = "https://pcm-test.azurewebsites.net/api/pcm_GenericWebhookCSharp1?code=Kk2397soUoaK7hbxQa6qUSMV2S/AvLCvjn508ujAJMMZiita5TsjkQ==";

            var inputObject = new
            {
                first = "pcm-Test-input-first",
                last = "pcm-Test-input-last"
            };
            string param = JsonConvert.SerializeObject(inputObject);
            HttpContent content = new StringContent(param, Encoding.UTF8, "application/json");

            HttpResponseMessage response = client.PostAsync(url, content).Result;
            string results = response.Content.ReadAsStringAsync().Result;

            Console.WriteLine($"results: {results}");
            Console.ReadLine();
        }

When run, this returns:

Conclusion

Let’s think about what we’ve just done here: we have set up a service, connected to that service from a remote source and returned data. Now, let’s think about what we haven’t done: any configuration; that is, other than clicking “Create Function”.

This “serverless” architecture seems to be the nth degree of SOA. If I wish, I can create one of these functions for each of the server activities in my application, and they are available to anything with an internet connection. It then becomes Microsoft’s problem if my new website suddenly takes off and millions of people are trying to access it.

References

http://robertmayer.se/2016/04/19/azure-function-app-to-send-emails/

http://www.c-sharpcorner.com/article/azure-functions-create-generic-webhook-trigger/

Azure Service Bus – Send an e-mail on Message Timeout

A message queue has, in its architecture, two main points of failure; the first is the situation where a message is added to a queue, but never read (or at least not read within a specified period of time); this is called a Dead Letter, and it is the subject of this post. The second is where the message is corrupt, or it breaks the reading logic in some way; that is known as a Poison Message.

There are a number of reasons that a message might not get read in the specified time: the service reading and processing the messages might not be keeping up with the supply; it might have crashed; or the network connection might have failed.

One possible thing to do at this stage is to have a process that automatically notifies someone that a message has ended up in the dead letter queue.

Step One – specify a timeout

Here’s how you would specify a timeout on the message specifically:

           BrokeredMessage message = new BrokeredMessage(messageBody)
            {
                MessageId = id,
                TimeToLive = new TimeSpan(0, 5, 0)
            };

Or, you can create a default on the queue from the QueueDescription (typically, this would be done when you initially create the queue):

                QueueDescription qd = new QueueDescription("TestQueue")
                {
                    DefaultMessageTimeToLive = new TimeSpan(0, 5, 0)
                };
                nm.CreateQueue(qd);

Should these values differ, the shortest time will be taken.

What happens to the message by default?

I’ve added a message to the queue using the default timeout of 5 minutes; here it is happily sitting in the queue:

Looking at the properties of the queue, we can determine that the “TimeToLive” is, indeed, 5 minutes:

In addition, you can see that, by default, the flag telling Service Bus to move the message to a dead letter queue is not checked. This means that the message will not be moved to the dead letter queue.

5 Minutes later:

Nothing has happened to this queue, except time passing. The message has now been discarded. It seems an odd behaviour; however, as with ReadAndDelete Locks there may be reasons that this behaviour is required.

Step Two – Dead Letters

If you want to actually do something with the expired message, the key is a concept called “Dead Lettering”. The following code will direct the Service Bus to put the offending message into the “Dead Letter Queue”:


                QueueDescription qd = new QueueDescription("TestQueue")
                {
                    DefaultMessageTimeToLive = new TimeSpan(0, 5, 0),
                    EnableDeadLetteringOnMessageExpiration = true
                };
                nm.CreateQueue(qd);

Here’s the result for the same test:

Step Three – Doing something with this…

Okay – so the message hasn’t been processed, and it’s now sat in a queue specially designed for that kind of thing, so what can we do with it? One possible thing is to create a piece of software that monitors this queue. This is an adaptation of the code that I originally created here:

        static void Main(string[] args)
        {
            System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
            sw.Start();

            if (!InitialiseClient())
            {
                Console.WriteLine("Unable to initialise client");
            }
            else
            {
                while (true)
                {
                    string message = ReadMessage("TestQueue/$DeadLetterQueue");

                    if (string.IsNullOrWhiteSpace(message)) break;
                    Console.WriteLine($"{DateTime.Now}: Message received: {message}");
                }
            }

            sw.Stop();
            Console.WriteLine($"Done ({sw.Elapsed.TotalSeconds}) seconds");
            Console.ReadLine();
        }

        private static bool InitialiseClient()
        {
            Uri uri = ServiceManagementHelper.GetServiceUri();
            TokenProvider tokenProvider = ServiceManagementHelper.GetTokenProvider(uri);

            NamespaceManager nm = new NamespaceManager(uri, tokenProvider);
            return nm.QueueExists("TestQueue");
        }

        private static string ReadMessage(string queueName)
        {
            QueueClient client = QueueManagementHelper.GetQueueClient(queueName, true);

            BrokeredMessage message = client.Receive();
            if (message == null) return string.Empty;
            string messageBody = message.GetBody<string>();

            //message.Complete();

            return messageBody;
        }

If this was all that we had to monitor the queue, then somebody’s job would need to be to watch this application. That may make sense, depending on the nature of the business; however, we could simply notify the person in question that there’s a problem. Now, if only the internet had a concept of an offline messaging facility that works something akin to the postal service, only faster…

        static void Main(string[] args)
        {
            System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
            sw.Start();

            if (!InitialiseClient())
            {
                Console.WriteLine("Unable to initialise client");
            }
            else
            {
                while (true)
                {
                    string message = ReadMessage("TestQueue/$DeadLetterQueue");

                    if (string.IsNullOrWhiteSpace(message)) break;
                    Console.WriteLine($"{DateTime.Now}: Message received: {message}");

                    Console.WriteLine($"{DateTime.Now}: Send e-mail");
                    SendEmail(message);
                }
            }

            sw.Stop();
            Console.WriteLine($"Done ({sw.Elapsed.TotalSeconds}) seconds");
            Console.ReadLine();
        }

        private static void SendEmail(string messageText)
        {
            System.Net.Mail.MailMessage message = new System.Net.Mail.MailMessage();
            message.To.Add("notification.address@hotmail.co.uk");
            message.Subject = "Message in queue has expired";
            message.From = new System.Net.Mail.MailAddress("my.address@hotmail.co.uk");
            message.Body = messageText;
            System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.live.com");
            smtp.Port = 587;
            smtp.UseDefaultCredentials = false;
            smtp.Credentials = new System.Net.NetworkCredential("my.address@hotmail.co.uk", "passw0rd");
            smtp.EnableSsl = true;
            smtp.Send(message);
        }

In order to prevent a torrent of mails, you might want to put a delay in this code, or even maintain some kind of list so that you only send one mail per day.
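As a minimal sketch of the one-mail-per-day idea (the state here is in memory, so it resets whenever the console app restarts):

        private static DateTime _lastEmailSent = DateTime.MinValue;

        private static void SendEmailThrottled(string messageText)
        {
            // only notify once per day, however many messages expire
            if (DateTime.Now - _lastEmailSent < TimeSpan.FromDays(1)) return;

            SendEmail(messageText);
            _lastEmailSent = DateTime.Now;
        }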

References

https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.queuedescription.enabledeadletteringonmessageexpiration?view=azureservicebus-4.0.0#Microsoft_ServiceBus_Messaging_QueueDescription_EnableDeadLetteringOnMessageExpiration

https://www.codit.eu/blog/2015/01/automatically-expire-messages-in-azure-service-bus-how-it-works/

https://stackoverflow.com/questions/9851319/how-to-add-smtp-hotmail-account-to-send-mail