Category Archives: C#

Working with Multiple Cloud Providers – Part 3 – Linking Azure and GCP

This is the third and final post in a short series on linking up Azure with GCP (for Christmas). In the first post, I set up a basic Azure function that updated some data in table storage, and in the second post, I configured the GCP link from PubSub into BigQuery.

In this post, we’ll square this off by adapting the Azure function to post a message directly to PubSub; then we’ll call the Azure function with Santa’s data, and watch it appear in BigQuery. At least, that was my plan – but Microsoft had other ideas.

It turns out that Azure functions have a dependency on Newtonsoft Json 9.0.1, and the GCP client libraries require 10+. So instead of being a ten-minute job on Boxing Day to link the two, it turned into a mammoth task. Obviously, I spent the first few hours searching for a way around this – surely other people have faced this, and there’s a redirect, setting, or way of banging the keyboard that makes it work? Turns out not.

The next idea was to experiment with contacting the Google server directly, as is described here. Unfortunately, you still need the Auth libraries.

Finally, I swapped out the function for a WebJob. WebJobs give you a little more flexibility, and have no hard dependencies. So, on with the show (albeit a little more involved than expected).

WebJob

In this post I described how to create a basic WebJob. Here, we’re going to do something similar. In our case, we’re going to listen for an Azure Service Bus Message, and then update the Azure Storage table (as described in the previous post), and call out to GCP to publish a message to PubSub.

Handling a Service Bus Message

We weren’t originally going to take this approach, but I found that WebJobs play much nicer with a Service Bus message than with trying to get them to fire on a specific endpoint. In terms of scalability, adding a queue in the middle can only be a good thing. We’ll square off the contactable endpoint at the end with a function that simply converts a call to the endpoint into a message on the queue. Here’s what the WebJob Program looks like:

public static void ProcessQueueMessage(
    [ServiceBusTrigger("localsantaqueue")] string message,
    TextWriter log,
    [Table("Delivery")] ICollector<TableItem> outputTable)
{
    Console.WriteLine("test");
 
    log.WriteLine(message);
 
    // parse query parameter
    TableItem item = Newtonsoft.Json.JsonConvert.DeserializeObject<TableItem>(message);
    if (string.IsNullOrWhiteSpace(item.PartitionKey)) item.PartitionKey = item.childName.First().ToString();
    if (string.IsNullOrWhiteSpace(item.RowKey)) item.RowKey = item.childName;
 
    outputTable.Add(item);
 
    GCPHelper.AddMessageToPubSub(item).GetAwaiter().GetResult();
    
    log.WriteLine("DeliveryComplete Finished");
 
}

Effectively, this is the same logic as the function (obviously, we now have the GCPHelper, and we’ll come to that in a minute). First, here’s the code for the TableItem model:


[JsonObject(MemberSerialization.OptIn)]
public class TableItem : TableEntity
{
    [JsonProperty]
    public string childName { get; set; }
 
    [JsonProperty]
    public string present { get; set; }
}

As you can see, we need to decorate the members with specific serialisation instructions. The reason is that this model is used by both GCP (which only needs the two properties declared here) and Azure (which also needs the inherited TableEntity properties).

GCPHelper

As described here, you’ll need to install the client package for GCP into the WebJob project (rather than the Azure Function App that we created in post one of this series):

Install-Package Google.Cloud.PubSub.V1 -Pre

Here’s the helper code that I mentioned:

public static class GCPHelper
{
    public static async Task AddMessageToPubSub(TableItem toSend)
    {
        string jsonMsg = Newtonsoft.Json.JsonConvert.SerializeObject(toSend);
        
        Environment.SetEnvironmentVariable(
            "GOOGLE_APPLICATION_CREDENTIALS",
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Test-Project-8d8d83hs4hd.json"));
        GrpcEnvironment.SetLogger(new ConsoleLogger());

        PublisherClient publisher = PublisherClient.Create();
        string projectId = "test-project-123456";
        TopicName topicName = new TopicName(projectId, "test");
        SimplePublisher simplePublisher = 
            await SimplePublisher.CreateAsync(topicName);
        string messageId = 
            await simplePublisher.PublishAsync(jsonMsg);
        await simplePublisher.ShutdownAsync(TimeSpan.FromSeconds(15));
    }
 
}

I detailed in this post how to create a credentials file; you’ll need to do that to allow the WebJob to be authorised. The Json file referenced above was created using that process.

Azure Config

You’ll need to create an Azure Service Bus queue (I’ve called mine localsantaqueue):

I would also download the Service Bus Explorer (I’ll be using it later for testing).

GCP Config

We already have a Dataflow, a PubSub topic and a BigQuery table, so GCP should require no further configuration, except to ensure the permissions are correct.

The Service Account user (which I give more details of here) needs to have PubSub permissions. For now, we’ll make it an editor, although in this instance it probably only needs to publish:

Test

We can do a quick test using the Service Bus Explorer and publish a message to the queue:
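Alternatively, you can push the test message from code instead of Service Bus Explorer. Here’s a minimal sketch, assuming the older WindowsAzure.ServiceBus package (Microsoft.ServiceBus.Messaging) that this generation of WebJobs pairs with, and a connection string copied from the portal; the name and payload values are just made up for the test:

using Microsoft.ServiceBus.Messaging;
using Newtonsoft.Json;

public static class QueueTestSender
{
    public static void SendTestMessage(string serviceBusConnectionString)
    {
        // Shape the payload to match the TableItem model that the WebJob deserialises
        string json = JsonConvert.SerializeObject(new { childName = "Dave", present = "Lego" });

        QueueClient client = QueueClient.CreateFromConnectionString(
            serviceBusConnectionString, "localsantaqueue");
        client.Send(new BrokeredMessage(json));
        client.Close();
    }
}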

The ultimate test is that we can then see this in the BigQuery Table:

Lastly, the Function

This won’t be a completely function-free post. The last step is to create a function that adds a message to the queue:

[FunctionName("Function1")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")]HttpRequestMessage req,             
    TraceWriter log,
    [ServiceBus("localsantaqueue")] ICollector<string> queue)
{
    log.Info("C# HTTP trigger function processed a request.");
    var parameters = req.GetQueryNameValuePairs();
    string childName = parameters.First(a => a.Key == "childName").Value;
    string present = parameters.First(a => a.Key == "present").Value;
    string json = $"{{ 'childName': '{childName}', 'present': '{present}' }}";
    queue.Add(json);
    

    return req.CreateResponse(HttpStatusCode.OK);
}

So now we have an endpoint for our imaginary Xamarin app to call into.

Summary

Both GCP and Azure are relatively immature platforms for this kind of interaction. The GCP client libraries seem to be missing functionality (and GCP is still heavily weighted away from .NET). The Azure libraries (especially Functions) seem to be in a pickle, too – with strange dependencies that make it very difficult to communicate outside of Azure. As a result, this task (which should have taken an hour or so) took a great deal of time, most of it unnecessarily.

Having said that, it is clearly possible to link the two systems, if a little long-winded.

References

https://blog.falafel.com/rest-google-cloud-pubsub-with-oauth/

https://github.com/Azure/azure-functions-vs-build-sdk/issues/107

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus

https://stackoverflow.com/questions/48092003/adding-to-a-queue-using-an-azure-function-in-c-sharp/48092276#48092276

Working with Multiple Cloud Providers – Part 1 – Azure Function

Regular readers (if there are such things to this blog) may have noticed that I’ve recently been writing a lot about two main cloud providers. I won’t link to all the articles, but if you’re interested, a quick search for either Azure or Google Cloud Platform will yield several results.

Since it’s Christmas, I thought I’d do something a bit different and try to combine them. This isn’t completely frivolous; both have advantages and disadvantages: GCP is very geared towards big data, whereas the Azure Service Fabric provides a lot of functionality that might fit well with a much smaller LOB app.

So, what if we had the following scenario:

Santa has to deliver presents to every child in the world in one night. Santa is only one man* and Google tells me there are 1.9B children in the world, so he contracts out a series of delivery drivers. That works out at around 79M deliveries every hour; let’s assume that each delivery driver can work 24 hours**. If each driver can manage, say, 100 deliveries per hour, that means we need around 790,000 drivers. Every delivery driver has an app that links to their depot; recording deliveries, schedules, etc.
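Just as a back-of-the-envelope check of those numbers (nothing more than a sketch):

// 1.9B children, delivered over 24 hours, at ~100 deliveries per driver per hour
const double children = 1_900_000_000;
const double hoursAvailable = 24;
const double deliveriesPerDriverPerHour = 100;

double deliveriesPerHour = children / hoursAvailable;                  // ~79M deliveries / hour
double driversNeeded = deliveriesPerHour / deliveriesPerDriverPerHour; // ~790,000 drivers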

That would be a good app to write in, say, Xamarin, and maybe have an Azure service running it; here’s the obligatory box diagram:

The service might talk to the service bus, might control stock, send e-mails – all kinds of LOB jobs. Now, I’m not saying for a second that Azure can’t cope with this, but what if we suddenly want all of these instances to feed metrics into a single data store? There are 190*** countries in the world; if each has a depot, then there are ~416K messages / hour going into each Azure service, but 79M / hour going into a single DB. Because it’s Christmas, let’s assume that Azure can’t cope with this, or let’s say that GCP is a little cheaper at this scale; or that we have some Hadoop jobs that we’d like to run on the data. In theory, we can link these systems; which might look something like this:

So, we have multiple instances of the Azure architecture, and they all feed into a single GCP service.

Disclaimer

At no point during this post will I attempt to publish 79M records / hour to GCP BigQuery. Neither will any Xamarin code be written or demonstrated – you have to use your imagination for that bit.

Proof of Concept

Given the disclaimer I’ve just made, calling this a proof of concept seems a little disingenuous; but let’s imagine that we know that the volumes aren’t a problem and concentrate on how to link these together.

Azure Service

Let’s start with the Azure Service. We’ll create an Azure function that accepts a HTTP message, updates a DB and then posts a message to Google PubSub.

Storage

For the purpose of this post, let’s store our individual instance data in Azure Table Storage. I might come back at a later date and work out how and whether it would make sense to use CosmosDB instead.

We’ll set up a new table called Delivery:
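If you’d rather create the table from code than through the portal or Storage Explorer, something along these lines should do it – a sketch using the WindowsAzure.Storage package, with the connection string being whatever your storage account gives you:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class TableSetup
{
    public static void CreateDeliveryTable(string storageConnectionString)
    {
        // Create the Delivery table if it isn't already there
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        CloudTableClient tableClient = account.CreateCloudTableClient();
        CloudTable table = tableClient.GetTableReference("Delivery");
        table.CreateIfNotExists();
    }
}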

Azure Function

Now we have somewhere to store the data, let’s create an Azure Function App that updates it. In this example, we’ll create a new Function App from VS:

In order to test this locally, change local.settings.json to point to your storage location described above.
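For reference, the relevant part looks something like this – the connection name has to match the Connection property on the Table binding in the code below, and the values here are placeholders:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>",
    "santa_azure_table_storage": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
  }
}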

And here’s the code to update the table:


    public static class DeliveryComplete
    {
        [FunctionName("DeliveryComplete")]
        public static HttpResponseMessage Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, 
            TraceWriter log,            
            [Table("Delivery", Connection = "santa_azure_table_storage")] ICollector<TableItem> outputTable)
        {
            log.Info("C# HTTP trigger function processed a request.");
 
            // parse query parameter
            string childName = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "childName", true) == 0)
                .Value;
 
            string present = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "present", true) == 0)
                .Value;            
 
            var item = new TableItem()
            {
                childName = childName,
                present = present,                
                RowKey = childName,
                PartitionKey = childName.First().ToString()                
            };
 
            outputTable.Add(item);            
 
            return req.CreateResponse(HttpStatusCode.OK);
        }
 
        public class TableItem : TableEntity
        {
            public string childName { get; set; }
            public string present { get; set; }
        }
    }

Testing

There are two ways to test this: the first is to just press F5, which will launch the function as a local service that you can hit with Postman or similar; the alternative is to deploy to the cloud. If you choose the latter, your local.settings.json will not come with you, so you’ll need to add an app setting:

Remember to save this setting, otherwise, you’ll get an error saying that it can’t find your setting, and you won’t be able to work out why – ask me how I know!

Now, if you run a test …
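For a local run, that’s a POST to the function’s endpoint (the trigger above only allows POST); assuming the functions host is running on its default port, something like this (the values are obviously made up):

http://localhost:7071/api/DeliveryComplete?childName=Dave&present=Lego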

You should be able to see your table updated (shown here using Storage Explorer):

Summary

We now have a working Azure function that updates a storage table with some basic information. In the next post, we’ll create a GCP service that pipes all this information into BigQuery and then link the two systems.

Footnotes

* Remember, all the guys in Santa suits are just helpers.
** That brandy you leave out really hits the spot!
*** I just Googled this – it seems a bit low to me, too.

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings#manage-app-service-settings

https://anthonychu.ca/post/azure-functions-update-delete-table-storage/

https://stackoverflow.com/questions/44961482/how-to-specify-output-bindings-of-azure-function-from-visual-studio-2017-preview

Short Walks – Using JoinableTaskFactory

While attending DDD North this year, I attended a talk on avoiding deadlocks in asynchronous programming. During this talk, I was introduced to the JoinableTaskFactory.

This became strangely useful very quickly, when I encountered a problem similar to the one described here. There are a couple of solutions to this question, but the one with the least code churn is to simply make the code synchronous; however, if you do that by simply adding

.GetAwaiter().GetResult()

to the end of the async calls, you’re very likely to end up with a deadlock.

One possible solution is to wrap the call using the JoinableTaskFactory, in the following way:

// JoinableTaskFactory and JoinableTaskContext live in the Microsoft.VisualStudio.Threading NuGet package
var jtf = new JoinableTaskFactory(new JoinableTaskContext());
var result = jtf.Run<DataResult>(() => _myClass.GetDataAsync());

This allows the task to return on the same synchronisation context without causing a deadlock.

References

https://docs.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.threading.joinabletaskfactory

https://stackoverflow.com/questions/33913836/how-to-render-a-partial-view-asynchronously

Asynchronous Debugging

Everyone who has spent time debugging errors in code that has multiple threads knows the pain of pressing F10 and seeing the cursor jump to a completely different part of the system (that is, everyone who has ever tried to).

There are a few tools in VS2017 that make this process slightly easier; and this post attempts to provide a brief summary. Obviously the examples in this post are massively contrived.

Errors

Let’s start with an error occurring inside a parallel loop. Here’s some code that will cause the error:

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
 
    Parallel.For(1, 10, (i) => RunProcess(i));
 
    Console.ReadLine();
}
 
static void RunProcess(int i)
{
    Task.Delay(500).GetAwaiter().GetResult();
 
    Console.WriteLine($"Running {i}");
 
    if (i == 3) throw new Exception("error");
}

For some reason, I get an error when a few of these threads have started. I need a tool that tells me some details about the local variables in the threads specifically. Enter the Parallel Watch Window:

Figure 1 – Launch Parallel Watch Window

This gives me a familiar interface, and tells me which thread I’m currently on:

Figure 2 – Parallel Watch Window

However, what I really want to see is the data local to the thread; what if I put “i” in the “Add Watch” cell:

Figure 3 – Add a watch

As you can see, I have a horizontal list of watch expressions, so I can monitor variables in multiple threads at a time.

Flagging a thread

We know there’s an issue with one of these threads, so one possibility is to flag that thread:

Figure 4 – Flagging a thread

Then you can select to show only flagged threads:

Figure 5 – Filter flagged threads

Freezing non-relevant threads

The flags help to only trace the threads that you care about, but if you want to only run the threads that you care about, you can freeze the other threads:

Figure 6 – Freeze Thread

Once you’ve frozen a thread, a small pause icon appears, and that thread will stop:

Figure 7 – Frozen Thread

In order to freeze other threads, simply highlight all the relevant threads (Ctrl-A) and select Freeze.

It’s worth remembering that you can’t freeze a thread that doesn’t exist yet (so your breakpoint in a Parallel.For loop might only show half the threads).

Manual thread hopping

By using freeze, you can stop the debugger from jumping between threads. You can then manually control this process by simply selecting a thread and choosing “Switch To Frame”:

Figure 8 – Switch to Frame

You can switch to a frozen frame, but as soon as you try to progress, you’ll flip back to the first non-frozen frame (unless you thaw it). The consequence of this is that it’s possible to switch to a frozen frame, freeze all other frames and then press F10 – your program will then stop dead.

Stack Trace

In a single-threaded application (and, equally, in a multi-threaded one), you can always view the stack trace of a given line of executing code. There is also a Parallel Stacks window:

Figure 9 – Parallel Stacks

Selecting any given method will give us the active threads, and allow switching:

Figure 10 – Active Threads

Parallel Stack Trace – Task View

The above view gives you a view of the created threads for your program; but most of the time, you won’t care which threads are created – only the tasks that you’ve spawned (they are not necessarily a 1–1 relationship). You can simply switch the view in this window to show Tasks instead:

Figure 11 – Task View

Tasks & Threads Windows

There is a tool that allows you to view all active, blocked and scheduled tasks:

Figure 12 – Tasks Window

This allows you to freeze an entire task, switch to a given task, and Freeze All But This:

Figure 13 – Freeze All But This

There is an equivalent window for Threads. It is broadly the same idea; however, it does have one feature that the Tasks window does not, and that is the ability to rename a thread:

Figure 14 – Rename a Thread

Flags

The other killer feature both of these windows have is the flag feature. Simply flag a thread, switch to it, and then select “Show Only Flagged Threads” (little flag icon). If you now remove the breakpoints, you can step through only your thread or task!

Breakpoints

So, what do you do when you have a breakpoint that you only want to fire for a single thread? Helpfully, the breakpoints window has a filter feature:

Figure 15 – Filter breakpoints on thread Id
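The filter expression supports fields such as ThreadId, ThreadName, ProcessId and ProcessName (combined with & and ||), so a condition like ThreadId = 1234 restricts the breakpoint to the one thread you care about – the Id itself comes from the Threads window at the time.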

References

https://msdn.microsoft.com/en-us/library/dd554943.aspx

https://stackoverflow.com/questions/5304752/how-to-debug-a-single-thread-in-visual-studio

Short Walks – Instantiating an Object Without calling the Constructor

One of the things that caught my attention at DDD North was the mention of a way to instantiate an object without calling its constructor.

Disclaimer

Typically, classes have code in their constructors that is necessary for their functionality, so you may find that doing this causes your program to fall over.

System.Runtime.Serialization

The title of the namespace is probably the first thing that betrays the fact that you shouldn’t be doing this; but we’re already halfway down the rabbit hole!

Here’s some code that creates an instance of a class using reflection in the normal way:

    static void Main(string[] args)
    {
        var test = Activator.CreateInstance<MyTestClass>();
        test.MyMethod();

        Console.WriteLine("Hello World!");
        Console.ReadLine();
    }

    public class MyTestClass
    {
        public MyTestClass()
        {
            Console.WriteLine("MyTestClass Initialise");
        }

        public string test1 { get; set; }

        public void MyMethod()
        {
            Console.WriteLine("Test MyMethod.");
        }
    }

The output is:
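MyTestClass Initialise
Test MyMethod.
Hello World!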

And here’s the code that circumvents the constructor:

        static void Main(string[] args)
        {
            var test2 = FormatterServices.GetUninitializedObject(typeof(MyTestClass)) as MyTestClass;
            test2.MyMethod();

            Console.WriteLine("Hello World!");
            Console.ReadLine();
        }

And we haven’t invoked the constructor:
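Test MyMethod.
Hello World!

Note that the “MyTestClass Initialise” line is missing this time.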

Short Walks – Running an Extension Method on a Null Item

I came across this issue recently, and realised that I didn’t fully understand extension methods. My previous understanding was that an extension method was simply added to the original class (possibly in the same manner that weavers work); however, a construct similar to the following code changed my opinion:

class Program
{
    static void Main(string[] args)
    {
        var myList = GetList();            
        var newList = myList.Where(
            a => a.IsKosher());
        var evaluateList = newList.ToList();
 
        foreach(var a in evaluateList)
        {
            Console.WriteLine(a.Testing);
        }
    }
 
    static IEnumerable<TestClass> GetList()
    {
        return new List<TestClass>()
        {
            new TestClass() {Testing = "123"},
            null
        };
    }
}
 
public class TestClass
{
    public string Testing { get; set; }
}
 
public static class ExtensionTest
{
    public static bool IsKosher(this TestClass testClass)
    {
        return (!string.IsNullOrWhiteSpace(testClass.Testing));
    }
}

As you can see from the code, GetList() returns a null class in the collection. If you run this code, you’ll find that it crashes inside the extension method, because testClass is null.
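If you want the extension method to tolerate being called on a null reference, the fix is simply to guard for it inside the method. A small sketch – whether a null entry counts as “not kosher” is a design decision, so treat this as one option rather than the answer:

public static bool IsKosher(this TestClass testClass)
{
    // The receiver can legitimately be null here, so check before dereferencing it
    if (testClass == null) return false;

    return !string.IsNullOrWhiteSpace(testClass.Testing);
}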

A note on Linq

If you’re investigating this in the wild, you might find it particularly difficult to track down because of the way that Linq works. Even though the call to the extension method is on the line above, the code doesn’t actually run until you use the result (in this case, via ToList()).

New understanding

As I now understand it, extension methods are simply a nice syntactic way to call a static method. That is, had I simply declared my IsKosher method as a standard static method, it would behave exactly the same. To verify this, let’s have a look at the IL; here’s the IL for the extension method above:

IL Code for extension method

And here’s the IL for the same function as a standard static method:

IL code for static method

The only difference is the line at the top of the extension method calling the ExtensionAttribute constructor.

References

https://stackoverflow.com/questions/847209/in-c-what-happens-when-you-call-an-extension-method-on-a-null-object

SendGrid – Azure e-mail functionality

In this post, I discussed the prospect of sending an e-mail from an Azure function in order to alert someone that something had gone wrong. In one of the comments, it was suggested that I should look into a third party tool called “SendGrid”, and this post is the result of that investigation.

Azure Configuration

SendGrid is a third party application, and so the first thing you need to do is to create an account:

The free tier covers you for 25,000 e-mails per month. However, you do get a scary warning that, because this isn’t a Microsoft product, it is not covered by Azure credits.

Anyway, click create and, after a while, your new SendGrid account should be created:

You’ll need to get the API Key: to do this, select Manage:

That takes you to https://app.sendgrid.com, where you can select to create an API Key:

Clearly, you wouldn’t want a full access account to just send an e-mail in real life… but Restricted Access has a form that would take longer to fill in, and I can’t be mythered*… so we’ll go with full access for now.

Once you’ve created it, and given it a name, you should have a key (remember that key – don’t write it down, or copy it, you must remember it!).

Code

Create a new Function App, and add the SendGrid NuGet package:

https://www.nuget.org/packages/Sendgrid/

In this case, let’s create an HttpTrigger function (this will fire when a web address is accessed), the body of which needs to look something like this:

        [FunctionName("SendEmail")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequestMessage req, 
            TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            // parse query parameter
            var addresses = req.GetQueryNameValuePairs();
            string[] addressArr = addresses
                .Where(a => a.Key == "address")
                .Select(a => a.Value).ToArray();

            if (addressArr.Count() == 0)
            {
                return req.CreateResponse(HttpStatusCode.BadRequest, 
                    "Please pass an address in the query string");
            }
            else
            {
                HttpStatusCode status = await CallSendEmailAsync(
                    "noreply@test.com", addressArr, "test e-mail",
                    "Once more unto the breach, dear friends, once more;\n" +
                    "Or close the wall up with our English dead.   \n" +
                    "In peace there's nothing so becomes a man     \n" +
                    "As modest stillness and humility:             \n" +
                    "But when the blast of war blows in our ears,  \n" +
                    "Then imitate the action of the tiger;         \n" +
                    "Stiffen the sinews, summon up the blood,");
                switch (status)
                {
                    case HttpStatusCode.OK:
                    case HttpStatusCode.Accepted:
                    {
                        return req.CreateResponse(status, "Mail Successfully Sent");
                    }
                    default:
                    {
                        return req.CreateResponse(status, "Unable to send e-mail");
                    }
                }
            }
        }

The `CallSendEmailAsync` helper method might look like this:

public static async Task<HttpStatusCode> CallSendEmailAsync(string from, string[] recipients, string subject, string body)
{
    EmailAddress fromAddress = new EmailAddress(from);
 
    SendGridMessage message = new SendGridMessage()
    {
        From = fromAddress,
        Subject = subject,
        HtmlContent = body
    };
 
    message.AddTos(recipients.Select(r => { return new EmailAddress(r); }).ToList());
   
    string sendGridApiKey = "AB.keythisisthekey.nkfdhfkjfhkjfd0ei8L9xTyaTCzy_sV5gPJNX-3";
 
    SendGridClient client = new SendGridClient(sendGridApiKey);
    Response response = await client.SendEmailAsync(message);
 
    return response.StatusCode;
}

The key is the string that I said you should remember earlier.
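Hard-coding the key is fine for a throwaway demo, but in practice you’d put it in an application setting and read it at runtime. A quick sketch, assuming a setting named SendGridApiKey (app settings surface as environment variables in a Function App):

string sendGridApiKey = Environment.GetEnvironmentVariable("SendGridApiKey");
SendGridClient client = new SendGridClient(sendGridApiKey);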

You can just paste the URL into a browser, passing the e-mail addresses in this format:

https://url.com?address=my@email.com&address=my-second@email.com

As you will see from the link at the bottom of this post, OK (200) signifies that the message is valid, but not queued to be delivered; and Accepted (202) indicates that the message is valid and queued. My guess is that if you get an OK, it means that the mail has already been delivered.

Footnotes

* It probably took longer to type this out than to do it.

References

https://go.sendgrid.com/

https://sendgrid.com/docs/API_Reference/Web_API_v3/Mail/errors.html

Short Walks – Using CompilerService Arguments in an Interface

Until today, I thought that the following code would work:

class Program
{
    static void Main(string[] args)
    {
        ITest test = new Test();
        test.Log("testing");
        Console.ReadLine();
    }
}
 
interface ITest
{
    void Log(string text, string function = "");
}
 
class Test : ITest
{
    public void Log(string text, [CallerMemberName] string function = "")
    {
        Console.WriteLine($"{function} : text");
    }
}

And, by work, I mean output something along the lines of:

Main : testing

However; it actually outputs:

: testing

CompilerServices attributes need to be on the interface, and not on the implementation:

class Program
{
    static void Main(string[] args)
    {
        ITest test = new Test();
        test.Log("testing");
        Console.ReadLine();
    }
}
 
interface ITest
{
    void Log(string text, [CallerMemberName] string function = "");
}
 
class Test : ITest
{
    public void Log(string text, string function = "")
    {
        Console.WriteLine($"{function} : text");
    }
}

Why?

When you think about it, it does kind of make sense. Because you’re calling against the interface, the compiler-injected value needs to be there; if you took the interface out of the equation, then the attribute would need to be on the class.

You live and learn!

Upgrade to C# 7.1

Async Main (C# 7.1)

Another new feature in C# 7.1 is the ability to make a console app’s Main method async. Have you ever written a test console app to call an async function? For example, what will this do?

static void Main(string[] args)
{
    MyAsyncFunc();
 
    Console.WriteLine("done");
    
}
 
static async Task MyAsyncFunc()
{
    await Task.Delay(1000);
}

I’m pretty sure that I’ve been asked a question similar to this during an interview, and probably asked the question myself when interviewing others. The way around it in a console app previously was:


static void Main(string[] args)
{
    MyAsyncFunc().GetAwaiter().GetResult();
 
    Console.WriteLine("done");
    
}

However, in C# 7.1, you can do this:


static async Task Main(string[] args)
{
    await MyAsyncFunc();
 
    Console.WriteLine("done");
    
}

Upgrading the Project

Unlike other new features of 7.1, this feature doesn’t afford you the ability to “Control Dot” it. If you try to do this in C# 6, for example, it just won’t compile:

To upgrade, go to the Build tab of the project properties, click Advanced, and set the language version:
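If you’re happy to edit the csproj by hand instead, setting the LangVersion property does the same job – a quick sketch:

<PropertyGroup>
  <LangVersion>7.1</LangVersion>
</PropertyGroup>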

References

https://github.com/dotnet/roslyn/issues/1695

Default Literals in C# 7.1

One of the new features added to the latest* version of C# is the “default” literal. What this means is that you can now use the default keyword on its own, without specifying the type – the compiler infers it from the context. For example, if you wanted to create a new integer and assign it its default value, you would previously write something like this:

int i = default(int);

But, surely C# knows you want a default int? In fact, it does, because if you type:


int i = default(long);

Then it won’t compile. Think of how much you could accomplish if you didn’t have to type those extra five characters! That’s where the default literal comes in:

Default Literal
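The literal isn’t limited to int – it takes on whatever type the context requires. A few quick examples (a sketch; the comments show the resulting values):

string s = default;      // null - the default for any reference type
DateTime d = default;    // 01/01/0001 00:00:00
int? n = default;        // null - the default for a nullable value type

// It also reads nicely in generic code:
static T GetDefaultValue<T>() => default;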

You can also use the literal in comparison statements:

static void Main(string[] args)
{
    int i = default;
 
    Console.WriteLine(i);
 
    for (i = 0; i <= 3; i++)
    {
        if (i == default)
        {
            Console.WriteLine("i is default");
        }
        else
        {
            Console.WriteLine("i NOT default");
        }
    }
}

Output

IL

What’s happening behind the scenes? The following code:

static void Main(string[] args)
{
    int i = default(int);
 
    Console.WriteLine(i);
    Console.ReadLine();
}

Produces the IL:


.method private hidebysig static void  Main(string[] args) cil managed
{
  .entrypoint
  // Code size       17 (0x11)
  .maxstack  1
  .locals init ([0] int32 i)
  IL_0000:  nop
  IL_0001:  ldc.i4.0
  IL_0002:  stloc.0
  IL_0003:  ldloc.0
  IL_0004:  call       void [mscorlib]System.Console::WriteLine(int32)
  IL_0009:  nop
  IL_000a:  call       string [mscorlib]System.Console::ReadLine()
  IL_000f:  pop
  IL_0010:  ret
} // end of method Program::Main

And the code using the new default literal:

static void Main(string[] args)
{
    int i = default;

    Console.WriteLine(i);
    Console.ReadLine();
}

The IL looks very familiar:


.method private hidebysig static void  Main(string[] args) cil managed
{
  .entrypoint
  // Code size       17 (0x11)
  .maxstack  1
  .locals init ([0] int32 i)
  IL_0000:  nop
  IL_0001:  ldc.i4.0
  IL_0002:  stloc.0
  IL_0003:  ldloc.0
  IL_0004:  call       void [mscorlib]System.Console::WriteLine(int32)
  IL_0009:  nop
  IL_000a:  call       string [mscorlib]System.Console::ReadLine()
  IL_000f:  pop
  IL_0010:  ret
} // end of method Program::Main

Footnotes

* C# 7.1 – Latest at the time of writing

References

https://github.com/dotnet/csharplang/blob/master/proposals/target-typed-default.md