
NDC London 2018

As usual with my posts, the target audience is ultimately me. In this case, I’m documenting the talks that I’ve seen so that I can remember to come back and look at some of the technology in more depth. I spent the first two days at a workshop, building a web app in Asp.Net Core 2.0…

Asp.Net Core 2.0 Workshop

These were two intense days with David Fowler and Damian Edwards, where we created a conference app (suspiciously similar to the NDC app) from the ground up.

Notable things that were introduced were HTML tag helpers, the authentication and authorisation features of .Net Core 2.0, and the ability to get the app up and running in Azure very quickly.

Day One

Keynote – What is Programming Anyway? – Felienne Hermans

This was mainly relating to learning, and how we learn and teach, how we treat each other as developers, and the nature of programming in general. Oddly, these themes came up again several times during the conference, so it clearly either struck a chord, or it’s something that’s on everyone’s mind at this time.

Sondheim, Seurat and Software: finding art in code – Jon Skeet

Okay, so let’s start with: this is definitely not the kind of talk I would normally go to; however, it was Jon Skeet, so I suppose I thought he’d just talk about C# for an hour and this was just a clever title. There was C# in there – and NodaTime – but not much. It was mainly a talk about the nature of programming. It had the same sort of vibe as the keynote: what is programming, what is good programming (or, more accurately, elegant programming). At points throughout the talk, Jon suddenly burst into song; so, all in all, one of the more surreal talks I’ve seen.

Authorization is hard! Implementing Authorization in Web Applications and APIs – Brock Allen & Dominick Baier

This was sort of part two of a talk on IdentityServer. They discussed a new open source project that Microsoft have released that allows you to control authorisation: you can configure a policy, and within that policy you can have roles and features. What this means (as far as I could tell – and I need to have a play) is that, out of the box, you can state that only people with a specific role are able to access, say, an API function; or that only people whose roles include a specific feature are able to access it.

The example given was a medical environment: a nurse, a doctor and a patient; whilst they all live in the same system, only the nurse and doctor are able to prescribe medication, and it is then possible to configure the policy such that the nurse is able to prescribe less than the doctor.
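To make that concrete, here’s a minimal sketch of what policy-based authorisation looks like in Asp.Net Core itself – I’m not claiming this is the exact project they demonstrated, and the policy, role and route names are all invented for illustration:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        services.AddAuthorization(options =>
        {
            // Only doctors and nurses may prescribe at all...
            options.AddPolicy("CanPrescribe", policy =>
                policy.RequireRole("Doctor", "Nurse"));

            // ...but only doctors may prescribe controlled drugs.
            options.AddPolicy("CanPrescribeControlledDrugs", policy =>
                policy.RequireRole("Doctor"));
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Authentication scheme configuration is omitted for brevity;
        // the point here is the policy definitions above.
        app.UseMvc();
    }
}

[Route("api/[controller]")]
public class PrescriptionsController : Controller
{
    // The policy, not the role, is named at the point of use; the mapping
    // from roles (or features) to policies lives in one place above.
    [Authorize(Policy = "CanPrescribe")]
    [HttpPost]
    public IActionResult Create()
    {
        // ... create the prescription ...
        return Ok();
    }
}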

I’m Pwned. You’re Pwned. We’re All Pwned – Troy Hunt

This was the first of two talks I saw by Troy. This one was on security; although, oddly, the second was not. He did what he normally does, which is to start tapping around the internet, showing just how susceptible everyone is to an attack.

He also mentioned that the passwords that he keeps in his database are available to be queried; I believe there’s an API endpoint, too. So the suggestion was that, instead of the usual “Your password must be 30 paragraphs long with a dollar sign, a hash and a semi-colon, and you have to change it every five minutes” restriction on password entry, it would be better to simply ensure that the password doesn’t exist in that database.
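For what it’s worth, a check against that database might look something like the following. As far as I know the service exposes a “range” endpoint that takes only the first five characters of the SHA-1 hash, so the full password (or even its full hash) never leaves your machine – but treat the endpoint details here as my assumption rather than something from the talk:

using System;
using System.Linq;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

public static class PwnedPasswordChecker
{
    private static readonly HttpClient Http = new HttpClient();

    // Returns true if the password appears in the breached-password list.
    // Only the first five characters of the SHA-1 hash are sent to the API.
    public static async Task<bool> IsPwnedAsync(string password)
    {
        string hash;
        using (var sha1 = SHA1.Create())
        {
            var bytes = sha1.ComputeHash(Encoding.UTF8.GetBytes(password));
            hash = string.Concat(bytes.Select(b => b.ToString("X2")));
        }

        var prefix = hash.Substring(0, 5);
        var suffix = hash.Substring(5);

        var response = await Http.GetStringAsync(
            $"https://api.pwnedpasswords.com/range/{prefix}");

        // Each line of the response is "<hash suffix>:<breach count>".
        return response
            .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
            .Any(line => line.StartsWith(suffix, StringComparison.OrdinalIgnoreCase));
    }
}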

Compositional UIs – the Microservices Last Mile – Jimmy Bogard

The basic premise here is that, whilst many places have a partitioned, microservice-architected back end, most front ends are still a single application that effectively glues all the services together. His argument was that the next step is to think about ways to split the front end up. The examples he gave included Amazon, so this isn’t a problem that most people will realistically have to solve, but it’s certainly interesting; especially his suggestion that you could shape the model by introducing a kind of message bus architecture in the front end: each separate part of the system is polled and, in turn, “asked” whether it has anything to add to the current request; that part of the system is then responsible for communicating with its own service.
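This is entirely my own interpretation rather than anything Jimmy showed, but the “ask each part of the system whether it has anything to add” idea might be modelled something like this (all the names are hypothetical):

using System.Collections.Generic;
using System.Threading.Tasks;

public class PageRequest
{
    public string Path { get; set; }
    public IDictionary<string, string> Parameters { get; } =
        new Dictionary<string, string>();
}

// Each contributor talks only to its own back-end service.
public interface IViewContributor
{
    Task<IDictionary<string, object>> ContributeAsync(PageRequest request);
}

public class PageComposer
{
    private readonly IEnumerable<IViewContributor> _contributors;

    public PageComposer(IEnumerable<IViewContributor> contributors) =>
        _contributors = contributors;

    // "Polls" each part of the system and merges whatever it returns
    // into a single model for the page.
    public async Task<IDictionary<string, object>> ComposeAsync(PageRequest request)
    {
        var model = new Dictionary<string, object>();
        foreach (var contributor in _contributors)
        {
            foreach (var pair in await contributor.ContributeAsync(request))
            {
                model[pair.Key] = pair.Value;
            }
        }
        return model;
    }
}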

C# 7, 7.1 and 7.2 – Jon Skeet and Bill Wagner

This was actually two talks, but they kind of ended up as one single, two-hour talk on all the new C# features. I have previously written about some of the new features of C# 7+. However, there were a few bits that I either overlooked or just missed: pattern matching being one that I overlooked. The concept of deconstruction was also mentioned: I need to research this more.
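For my own reference, here’s a quick sketch of the two features I need to go back over – pattern matching and deconstruction:

using System;

public class Circle
{
    public double Radius { get; }
    public Circle(double radius) => Radius = radius;

    // Deconstruction: allows "var (r, area) = circle;"
    public void Deconstruct(out double radius, out double area)
    {
        radius = Radius;
        area = Math.PI * Radius * Radius;
    }
}

public static class Program
{
    public static void Main()
    {
        object shape = new Circle(2.0);

        // Pattern matching with "is" introduces a new variable in the match.
        if (shape is Circle c && c.Radius > 1.0)
        {
            Console.WriteLine($"Large circle, radius {c.Radius}");
        }

        // Pattern matching in a switch, including a type pattern with a
        // "when" clause and a null pattern.
        switch (shape)
        {
            case Circle small when small.Radius < 1.0:
                Console.WriteLine("Small circle");
                break;
            case Circle any:
                Console.WriteLine("Some other circle");
                break;
            case null:
                Console.WriteLine("Nothing at all");
                break;
        }

        // Deconstruction into separate variables.
        var (radius, area) = new Circle(3.0);
        Console.WriteLine($"Radius {radius}, area {area}");
    }
}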

Day Two

Building a Raspberry Pi Kubernetes Cluster and running .Net Core – Scott Hanselman & Alex Ellis

This was a fascinating talk where Alex had effectively built a mini-cloud set-up using a tower of Raspberry Pis (IIRC six of them), with a piece of open source software called OpenFaaS to orchestrate them.

This is a particularly exciting area of growth in technology: the fact that you can buy a fully functional machine for around £20 – £30 and then chain several together to provide redundancy. The demonstration was a series of flashing lights; they pulled the cable out of one of the Pis, and the software spotted this and moved the process across to another device.

An Opinionated Approach to Asp.Net Core – Scott Allen

In this talk, Scott presented a series of suggestions for code layout and architecture. There were a lot of ideas; obviously, these all work well for Scott, and there was a lot in there that made sense; for example, he suggested mirroring the file layout that MS have used in their open source projects.

How to win with Automation and Influence People – Gwen Diagram

Gwen told the story of her time at Sky, and how she dealt with various challenges that arose from disparate technologies and personality traits within her testing team. She frequently referred back to the Dale Carnegie book “How to Win Friends and Influence People” – which presumably inspired the talk title.

Hack Your Career – Troy Hunt

It’s strange listening to Troy talk about something that isn’t security related. He basically gave a rundown of how he ended up in the position that he’s in, and the challenges that lie therein.

HTTP: History & Performance – Ana Balica

This was basically a review of the HTTP standards from the early days of the modern internet, to now. Scott Hanselman touched on a similar theme later on, which was that it helps to understand where technology has come from in order to understand why it is like it is.

GitHub Beyond your Browser – Phil Haack

Phil demonstrated some new features of the GitHub client (which is written in Electron). He also demonstrated a new feature of GitHub that allows you to work with a third party on the same code (a little like the feature that VS have introduced recently).

.Net Rocks Live with Jon Skeet and Bill Wagner – Carl Franklin & Richard Campbell

I suppose if you’re reading this and you don’t know what .Net Rocks is then you should probably stop – or go and listen to an episode and then come back. The interview was based around C#, and the new features. You should look out for the episode and listen to it!

Keynote – The Modern Cloud – Scott Guthrie

Obviously, if you ask Scott to talk about the cloud, he’s going to focus on a specific vendor. I’ll leave this, and come back to it in Scott’s later talk on a similar subject.

Web Apps can’t really do *that* can they? – Steve Sanderson

Steve covered some new areas of web technology here; specifically: Service Workers, WebAssembly, Credential Management and Payment Requests.

The highlight of this talk was when he demonstrated the use of Blazor, which basically allows you to write C# and run it in the browser in place of JavaScript.

The Hello World Show Live with Scott Hanselman, Troy Hunt, Felienne and Jon Skeet

I’d never heard of The Hello World Show before. To make matters worse, it is not the only YouTube programme with that name. Now that I’ve heard of it, I’ll definitely be watching some of the back catalogue.

I think the highlight of the show was Scott’s talk – which pretty much had me in stitches.

Tips & Tricks with Azure – Scott Guthrie

This is the talk that I referred to above. Scott described a series of useful features of Azure that many people weren’t aware of. For example, the Azure Advisor, which gives tailored recommendations for things like security settings, cost management, etc.

Other tips included the Security Centre, Hybrid Use Rights (reduced cost for a VM if you own the Windows licence) and Cost Management.

Serverless – the brief past, the bewildering present, and the beautiful (?) future – Mike Roberts

Mike has worked with AWS for a while now, and imparted some of the experience that he had, gave a little history of how it all started, and talked about where it might be going.

Why I’m Not Leaving .Net – Mark Rendle

Mark introduced a series of tools and tricks in response to every reason he could think of that people gave for leaving .Net.

Amongst the useful information that he gave was a sort of ORM tool he’d written called Beeline. Basically, if all you’re doing with your ORM is reading from the DB and then serialising the result to JSON, then this does that for you, but without populating a series of .Net classes first.
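I haven’t looked at Beeline’s API, so this isn’t it; but the underlying idea – stream the data reader straight into a JSON writer rather than hydrating objects – looks something like this in plain ADO.NET and Json.Net:

using System.Data.SqlClient;
using System.IO;
using Newtonsoft.Json;

public static class DbToJson
{
    // Streams the results of a query out as a JSON array, without ever
    // materialising a .Net object per row.
    public static void WriteQueryAsJson(string connectionString, string sql, TextWriter output)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            using (var json = new JsonTextWriter(output))
            {
                json.WriteStartArray();
                while (reader.Read())
                {
                    json.WriteStartObject();
                    for (var i = 0; i < reader.FieldCount; i++)
                    {
                        json.WritePropertyName(reader.GetName(i));
                        json.WriteValue(reader.IsDBNull(i) ? null : reader.GetValue(i));
                    }
                    json.WriteEndObject();
                }
                json.WriteEndArray();
            }
        }
    }
}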

He also talked about CoreRT, which compiles .Net code ahead of time to native machine code. There’s a long way to go with it, but the idea is that you can produce an executable that will run without a separately installed runtime.

Serverless Computing – A Paradigm Shift

In the beginning

When I first started programming (on the ZX Spectrum), you would typically write a program such as this (apologies if it’s syntactically incorrect, but I don’t have a Speccy interpreter to hand):

10 PRINT "Hello World"
20 GOTO 10

You could type that into a computer at Tandy’s and chuckle as the shop assistants tried to work out how to stop the program (sometimes, you might not even use “Hello World”, but something more profane).

However, no matter how long it took them to work out how to stop the program, they only paid for the electricity the computer used while it was on. Further, there would only ever be a finite and, presumably (I never tried), predictable number of “Hello World” messages produced in an hour.

The N-Tier Revolution

Fast forward a few years, and everyone’s talking about N-Tier computing. We’re writing programs that run on servers. Some of those servers are big and expensive, but pretty much the same statements are true. No matter how big and complex the program that you run on the server, it’s your server (well, it probably belongs to a company that you work for in some capacity). For example, if you have a poorly written SQL Server stored procedure that scans an entire table, the same two statements still hold: no matter how long it takes to run, the price for execution is consistent, and the amount of time it takes to run is predictable (although, you may decide that if it’s slow, speeding it up might be a better use of your time than calculating exactly how slow it is).

Using Other People’s Computers

And now we come to cloud computing… or do we? The chronology is a little off here, and that’s mainly because everyone keeps forgetting what cloud computing actually is: you’re renting time on somebody else’s computer. If I were twenty years older, I might have started this post by saying that this was, in a manner of speaking, what was done back in the 70s and 80s. But I’m not, so we’ll jump back to the mid-to-late 90s and SETI. Anyone who had a computer back in those days of dial-up connections and 14.4K modems will remember that SETI (the Search for Extra-Terrestrial Intelligence) distributed a program for everyone to run on their computer instead of toaster screensavers*.

Wait – isn’t that what cloud computing is?

Yes – but SETI did it in reverse. You would dial up, download a chunk of data, and your machine would process it as a screensaver.

Everyone was happy to do that for free, because we all want to find aliens. But what if there had been a cost associated with each byte of data processed? Clearly something similar was in the minds of the people at Amazon when they started with this idea.

Back to the Paradigm Shift

So, the title and gist of this post is that the two statements that have held right through from programs written on the ZX Spectrum, to programs written in Turbo C and Pascal, to programs written in C# running on a dedicated server, have now changed. So, here are the four big changes brought about by the cloud**:

1. Development and Debugging

If you write a program, you can no longer be sure of the cost or the execution time. This isn’t a scare post: both are broadly predictable; but the paradigm has now changed. If you’re in the middle of writing an Azure function, or a GCP BigQuery query, and it doesn’t work, you can’t just close your laptop and go for dinner while you have a think, because while you do, nodes are lighting up all over the world trying to complete your task. The lights are dimming in Seattle while your Azure function crashes again and again.

2. Scale and Accessibility

The second big change is the way that your code is scaled. For example, you might be used to parallelising slow code so that you can make use of all available threads; however, in our new world, if you do that, you may actually be making it harder for your cloud platform of choice to scale your code.

Because you pay per minute of computing time (typically this is more expensive than storage), code that is unnecessarily slow or inefficient may not cause your system to slow down – what it will probably do is cost you more money to run at the same speed.

In some cases, you might find it’s more efficient to do the opposite of what you have thus far believed to be accepted industry best practice. For example, it may prove more cost efficient to de-normalise data that you need to access.

3. Logging

This isn’t exactly new: you should always have been logging in your programs – it’s just common sense. However, there’s a new emphasis here: you’re not running this on your own server (you’re not even running it on a customer’s server) – it’s somebody else’s. That means that if it crashes, there’s a limited amount of investigation that you can do. As a result, you need to log profusely.
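As a rough illustration of what “log profusely” means in practice, an Azure Function might look something like this (this is the v2-style C# model as I understand it; the function and log message names are invented):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OrderFunction
{
    [FunctionName("CreateOrder")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // Log the inputs and every significant decision - if this crashes,
        // the logs are most of the investigation you get.
        log.LogInformation("CreateOrder called from {Ip}",
            req.HttpContext.Connection.RemoteIpAddress);

        var body = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Payload length: {Length}", body.Length);

        // ... process the order ...

        log.LogInformation("CreateOrder completed");
        return new OkResult();
    }
}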

4. Microservices and Message Busses

IMHO, there are two reasons why the microservice architecture has become so popular in the cloud world. The first is that it’s easy: spinning up a new Azure function endpoint takes minutes.

Secondly, it’s more scalable. Microservices make your code more scalable because it’s easy for the cloud provider to instantiate two instances of your program, and then three, and then a million. If your program does one small thing, then only that small thing needs to be instantiated. If your program does twenty different things, it can still scale, but it’ll cost more, because it will require more processing power.

Finally, instead of simply calling the service that you need, there is now the option to place a message on a queue; apart from separating your program into areas of definable responsibility, this means that, when your cloud provider of choice scales your service out, any of the instances can pick up the message and deal with it.
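As an example – using Azure Storage Queues here purely because they’re simple to show; any queueing technology would do, and the queue and type names are made up:

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public class OrderQueuePublisher
{
    private readonly string _connectionString;

    public OrderQueuePublisher(string connectionString) =>
        _connectionString = connectionString;

    // Rather than calling the order service directly, drop a message on a
    // queue; whichever instance the platform has scaled out picks it up.
    public async Task PublishAsync(object order)
    {
        var account = CloudStorageAccount.Parse(_connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("orders");

        await queue.CreateIfNotExistsAsync();
        await queue.AddMessageAsync(
            new CloudQueueMessage(JsonConvert.SerializeObject(order)));
    }
}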

Summary

So, what’s the conclusion: is cloud computing good, or bad? In five years’ time, the idea that someone in your organisation will know the spec of the machine your business-critical application is running on seems a little far-fetched. The days of having a trusted server that has a load of hugely important stuff on it – nobody really knows what – and that has been running since 1973, are numbered.

Obviously, there’s a price to pay for everything. In the case of the cloud, it’s complexity – not of the code, and not really of the combined system, but in the sense that you are typically introducing dozens of moving parts. If you decide to segregate your database, you might find that you have several databases, tens or even hundreds of independent processes and endpoints, and you could even spread all of that across multiple cloud providers. It’s not that hard to lose track of where these pieces are living and what they are doing.

Footnotes

* If you don’t get this reference then you’re probably under 30.

** Clearly this is an opinion piece.

Irreducible Complexity in Agile

What is irreducible complexity?

A common critique levelled at the agile philosophy is that of irreducible complexity. It is very much like the argument of the same name often levelled against evolution, which typically goes like this:

If evolution predicts gradual changes, then what use is half an eye?

This article isn’t about evolution, and I’ll leave the rebuttal of that to people like Richard Dawkins (if you’re interested, read The Blind Watchmaker – it’s the book that made me start writing this post). The following is a paraphrase of a concern that I’ve heard a few times:

Using the principle of gradual improvement, what if I ask for a table? I can split the story up into four separate “leg stories” and a “table top story”, but I can’t release a table with two legs.

Oddly, the concern often uses the same metaphor; so maybe there’s an article out there that I’ve missed!

A table with two legs

Certainly the above statement is true (as far as it goes), but let’s forget about tables (as we can’t create one using software – or at least not without a 3D printer). What about the following requirement:

As a sales order entry clerk
I want a new sales order entry client
So that my efficiency can be improved

Clearly we can’t have “half of a sales order entry system” any more than we can have half a table.

The first thing here is that we need some answers. Forget about the system that’s been asked for, for a minute, and consider why it is being asked for. *

So that my efficiency can be improved

What are they using now, and why is it not efficient? Since there is no-one to answer this hypothetical question, I will do so myself (after going to speak with the imaginary clerk):

The sales order clerk is currently using an existing piece of software, written sometime in the eighties. It’s character based (1. Place sales order, 2. Find sales order, etc.).

I’ve just watched him (his name is Brian) create a hypothetical sales order. It took around five minutes, and most of the time was taken up looking up product codes on a sheet of A4 that has them printed on it.

Do we really need the table?

The first thing to ask here is: what exactly is the minimum viable product (MVP)? That is, what does the client actually need? Do they really need a new sales order system, or do they just need a quick look-up for the product codes?

Let me just clarify what I’m saying here: I am not saying that when a client comes and asks for a table, they are told they can have a stool with a plank of wood lashed to it, and they can like it or lump it. We are at a point now (and have been for some time) in IT where we can build anything, but it takes time; and, at this moment (2017), development time is likely more expensive than any other single part of the IT infrastructure. Furthermore, it should always be kept in mind that, during the software development process, requirements change.

But we really need the table

Whether it’s an internal or external client, they are well within their rights to ask for a complete product, and that can be quoted for and produced. The process under waterfall would be that a Business Requirement document would be produced, followed by a functional specification, which details how the stated business requirement would be solved, and finally, the software would be written to address the functional specification.

There’s a lot of room for misinterpretation here. The person writing the code can not only be a different person to the one talking to the customer, but can be three or four layers removed. Even if you ask the client to sign off documents at every stage of the process, you have to assume that they have read and understood what you have placed in front of them. There are essentially three things that can go wrong here:
1. The requirements are misunderstood, or wrongly stated, the stated functional specification doesn’t fulfil the business requirement, the code doesn’t correctly represent the functional specification, or one or more of these documents is misleading; in summary, the people producing the software get it wrong somewhere.
2. The client changes their requirement due to budget constraints, or due to changing business requirements, or external factors, or because they realise that they don’t need what has been requested anymore, or need something different, or extra; in summary: the client has changed the scope.
3. The required software is more, or less complex than initially anticipated, a new software development tool is introduced; in summary, the quoted price for the job becomes invalid.

These are not mutually exclusive: all three can happen at the same time. (The same is true for the table: the blueprints can be wrong, the carpenter can fail to follow them correctly, and the person asking for the table can realise that they need a bigger or smaller one).

What does this mean for the sales order system? What if we identified that, rather than replacing the sales order system entirely, we could start by replacing the product look-up; maybe we’d raise a few user stories that asked for a simple application or web page with a look-up facility. We would identify all the parts of the system that need replacing, but we would order them, and we would arrange to show the client the results at regular intervals.

Again, let’s reduce that complexity further:

– Create a simple CSV text document that contains all the product codes and descriptions.
– Create a RESTful call that accepts a string and searches the product descriptions in the database (in our case, the CSV) – see the sketch after this list.
– Create a console app that calls this.
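A minimal sketch of the look-up service referred to above might be something like this, assuming an Asp.Net Core project and a products.csv file containing one “code,description” pair per line (all names here are invented for illustration):

using System;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ProductsController : Controller
{
    // Expected format: one "CODE123,Description of the product" per line.
    private const string CsvPath = "products.csv";

    // GET api/products?description=widget
    [HttpGet]
    public IActionResult Find(string description)
    {
        if (string.IsNullOrWhiteSpace(description))
            return BadRequest("A description to search for is required.");

        var matches = System.IO.File.ReadAllLines(CsvPath)
            .Select(line => line.Split(new[] { ',' }, 2))
            .Where(parts => parts.Length == 2 &&
                parts[1].IndexOf(description, StringComparison.OrdinalIgnoreCase) >= 0)
            .Select(parts => new { Code = parts[0], Description = parts[1] })
            .ToList();

        return Ok(matches);
    }
}

The console app from the third story then just needs to call this endpoint and print the results.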

Now the clerk has more than he had before! It doesn’t look much better, but he isn’t trying to find a product on a sheet of paper anymore… so the next story might be:

– Create a webpage that allows the entry of a product description and returns a product code

The point being that, at every stage, the product is getting slightly better. We can show this to the clerk, and after a few days development, he can identify any issues. We can have something that provides tangible benefit in days.

This takes more time, though!

Yes – yes it does. If you compare it to the waterfall process on paper, I think it probably does take more time, especially if you start getting embroiled in all the scrum ceremonies. But let’s consider what that on-paper comparison assumes:

1. The business requirement document is right, first time.
2. The functional specification is right, first time.
3. The software is right, first time.

Because the three points above will never** be true, they get iterated. I actually read an interesting article some time ago that argued that, for this reason, waterfall doesn’t even exist as a system. Unfortunately, I couldn’t find a link to it.

So, whilst on paper it might seem time-hungry, consider the following scenario: you complete the product look-up page, and the client gets a big order; they need a website to enable customers to place orders directly themselves. The requirements have changed. Thinking a few months down the line, should the website work, the old sales order system may no longer be required at all. In the old way of a detailed spec, sign-off and months of development, they would effectively have to cancel the order and lose the time spent thus far. Most clients these days will appreciate that, by seeing the software in an iterative development cycle, they have opportunities to change the scope that they didn’t previously have.

It’s worth remembering what the agile manifesto actually says – because it’s not a big long document. It basically says that you should talk to people – your colleagues, your clients and their customers – whoever you can: show the software as often as you possibly can; talk to the client and check that they still have the same requirements; release as often as possible and check it’s really improving things.

Agile doesn’t always work, though

There are some times when this approach absolutely doesn’t work. For example, when NASA builds a new spaceship, it’s unlikely that the people who program the guidance system will be using an iterative approach to development; whilst a guidance system that sends the ship in broadly the right direction may (or may not) be better than none at all, $100B worth of kit needs a complete product. It also has a long shelf life.

However, if you consider the principle of agile, rather than the specific implementations, such as scrum, you can take some of the advice (even into NASA***).

Summary

In summary, and to blatantly plagiarise Richard Dawkins, half an eye is useful, and so is half a sales order system, and so is half a table; the key thing is to ensure that it’s the correct half, and it fulfils some of the requirements.

Footnotes

*I realise that this requirement for such a large and complex system would not be sufficient, but it serves to illustrate a point.

**Okay, I can’t prove this, but I’ve never worked on a project where these have all been right first time. It is, of course, theoretically possible.

***I don’t, and never have, worked for NASA, so I actually have no idea what the processes are. I know nothing, whatsoever, about space travel, either – this is entirely based on guesswork and conjecture. For all I know, they launch the thing, and there’s a guy sat in Houston with a joystick, controlling it.

References

http://agilemanifesto.org/

http://www.scrumguides.org/docs/scrumguide/v2016/2016-Scrum-Guide-US.pdf

http://www.first55tech.com/agile-requirements-blog/irreducible-complexity