Thursday, 29 June 2017

Azure Failover and Resilience

Azure provides a highly resilient hosting platform, with significant built-in redundancy within each data centre, as well as more than 30 data centres across the world.

When first coming to Azure, it can be hard to understand what resilience you get automatically and what you might have to set up yourself.

This post provides a high-level overview of the principles. It is intended as an introduction to help you ask the right questions.

The usual starting point for a system is to host it in a single data centre. Azure is highly resilient even within a single data centre and, in addition, all the data is continually backed up to a secondary data centre.

In the case of a complete failure of a data centre, the data can be restored to another data centre. This is not the same as automatic failover to another data centre: to get the data restored in the other data centre and the system back up and running, you will have to do it yourself; Azure will (for the most part) not do this for you. How much work that involves depends on how much preparation has been done, which is primarily a business decision based on risk and cost.

Any conversation about failover is complicated by the fact that a system consists of different components, which can fail independently, have different failure probabilities and impacts, and require different failover strategies.

Before going into the details, it is important to understand that even the most basic setup in Azure has a very high level of resilience, with each individual component typically having a guaranteed uptime of 99.95% or more. At the same time, data is continually backed up to a secondary data centre. In other words, even the most basic Azure setup has a level of resilience that is difficult and expensive to achieve with on-premises hosting.

In this post “failover” will refer to failing over between data centres.

Resilience in a single data centre

Azure Data Centres are built in a modular way, meaning that each data centre can be thought of as many smaller data centres built next to each other. This means that your data and system will be physically spread over different parts of the data centre, in turn meaning that even if an entire part of the data centre fails, you are unlikely to notice.

Physically, all Azure data centres have multiple redundant power grid connections, multiple redundant internet connections, redundant backup generators, batteries and so on and so forth.

As a general rule, any data you save in Azure, in databases, to disk or to other types of storage, is written to three different locations inside that one data centre and a single copy is written to another remote data centre as a backup. For example the London data centre backs up to Cardiff and the Amsterdam data centre backs up to Dublin etc.

Azure App Service has some built-in resilience, so even with only a single compute node in your app hosting plan, you are pretty well protected from outages. With Cloud Services, you must ensure that you have at least two instances running at all times to ensure resilience. With Virtual Machines, you are much more on your own, though there are a number of things you can configure, such as Availability Sets. As a very general rule, to make reliability easier, avoid VMs and use one of the managed options when you can.

When you want to be able to fail over to another data centre, there are several options available to you. I have grouped them here under “Cold”, “Warm” and “Hot”. These are just convenience labels and may not correlate to other people’s definitions.

Cold

A Cold failover is what you get by default.

Your data is automatically backed up to another data centre. In the case of a failure of the primary data centre, you can go to the other data centre, set up your systems again, deploy everything and restore your database. Of course, the more automated your deployment is, the easier this will be.

You should be aware that while you can manually trigger a restore of SQL and CosmosDB databases, you cannot yourself trigger a “restore” of anything you put into Azure Storage. Microsoft has to do that by changing DNS entries and their SLA on that is up to 48 hours, last time I checked. There are things you can do to improve this, such as using read-access geo-redundant storage, but you will need to develop specifically to take advantage of that. Often, though, the data in Azure Storage is secondary to the main system and you may be able to live without that data for a day or two.

The exact frequency of database backups depends on the chosen database but is generally four hours or less.

Warm

A Warm failover allows faster failover to a secondary data centre, but still requires manual intervention.

In order to reduce the time it takes to move to a secondary data centre, it is possible to prepare the infrastructure and have detailed plans in place. You can do this by configuring the whole system in the secondary data centre without deploying anything to it; many services let you define the infrastructure without deploying any code. Similarly, you can deploy VMs and then de-allocate them. An alternative is to create an ARM template, which will allow you to quickly create a whole environment in Azure.

You should also write plans and scripts so you can quickly restore the databases and direct traffic to the other data centre, and test those plans periodically.

Finally, you should make sure your DNS is set up with a short enough TTL that you can quickly move traffic to the new websites.

Document storage, such as files, may in the Warm scenario still take up to 48 hours to be made available in the other data centre.

Hot

A Hot failover will automatically fail over to a secondary data centre if the primary data centre fails, in whole or in part.

In practice, there are many different components to a system and it usually makes sense to only have hot failover in place for some of the components. A bespoke cost/benefit exercise should be carried out where hot failover is desired.

The primary things to consider are:

Web site failover

It is possible to deploy the front-end web servers to more than one data centre and use Traffic Manager to automatically direct traffic between the two. This works with most kinds of web hosting you can do in Azure. It means that if the websites in one data centre fail, requests will automatically be served by the other data centre, usually within a minute. The main costs are in paying for the extra server(s) and the added complexity in every deployment.

Database failover

Azure offers hot failover for both Azure SQL and CosmosDB, the two main databases. With this failover, Azure will dynamically fail over to a secondary data centre in case of a failure and/or serve requests from both data centres. The mechanisms used by SQL Azure and CosmosDB are fundamentally different and will require different approaches.

 As a general rule, you have to pay for two copies of your database and you may have to use a more expensive service tier.

In the case of CosmosDB, you may need to consider consistency levels and adapt the system to deal with them.
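
As a rough sketch of what that can look like with the DocumentDB .NET SDK of the time (the endpoint, key and regions below are placeholders, not taken from a real system): you tell the client which regions to prefer, and reads automatically move to the next region in the list if the first one becomes unavailable.
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Prefer the primary region, fall back to the secondary if it is unavailable
var policy = new ConnectionPolicy();
policy.PreferredLocations.Add(LocationNames.WestEurope);   // primary
policy.PreferredLocations.Add(LocationNames.NorthEurope);  // failover

var client = new DocumentClient(
    new Uri("https://myaccount.documents.azure.com:443/"),
    "<auth key>",
    policy,
    ConsistencyLevel.Session); // a weaker level than Strong is part of what makes multi-region reads practical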

Storage failover

It is common to store files and certain other types of data in Azure Storage. By default, data is backed up to another data centre (though this can be disabled when not required). However, Azure is in control of enabling access to the backup in case of a failure and the SLA is up to 48 hours. In many cases, this is acceptable as the loss of access to historical files may be considered a service degradation rather than a failure.

Where required, Azure does provide ways to have direct access to a read-only copy in the secondary data centre. This can be utilised to build a very high level of resilience, but it requires explicit programming in your software to take advantage of it.
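
As a hedged sketch of the sort of explicit programming involved, using the .NET storage SDK and assuming the account is configured as read-access geo-redundant (RA-GRS); the connection string, container and blob names are placeholders:
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.RetryPolicies;

// Ask the client to fall back to the secondary, read-only endpoint if the primary region is down
var account = CloudStorageAccount.Parse("<RA-GRS connection string>");
var blobClient = account.CreateCloudBlobClient();
blobClient.DefaultRequestOptions.LocationMode = LocationMode.PrimaryThenSecondary;

var blob = blobClient.GetContainerReference("documents")
                     .GetBlockBlobReference("readme.txt");
string content = await blob.DownloadTextAsync(); // served from the secondary while the primary is unavailable

Writes still have to go to the primary, so this only keeps reads working during an outage.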

Queue failover

In an effort to increase resilience and scalability, it is common to use queues in systems; rather than do something straight away, the system puts a message on a queue and a background job then processes it. This design has many benefits, including automatic retrying, resilience to external systems being down and significant scale benefits, as sudden peaks in demand just cause queues to get longer for a little while.
This does, however, mean that the queues can be a single point of failure; if the queue service fails, you can no longer enqueue messages.

From NewOrbit’s many years of working with Azure, it is clear that Microsoft are very aware of the crucial importance queues play in many systems and they have worked very hard to make them extremely resilient; Despite very extensive usage, NewOrbit has never experienced a failure with “storage queues” and has only experienced an issue with “service bus queues” on a single occasion in 2013.

It is possible to implement failover for queues and NewOrbit has done that before. There are different approaches that can be taken and Service Bus Queues have some native support for failover, though it does require programming to take full advantage of it.
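
As a very simplified illustration of one possible approach with storage queues (this is a sketch, not necessarily how NewOrbit implemented it): write to a queue in the primary region and fall back to a queue in a second region if that fails, with the consumer reading from both.
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public async Task EnqueueWithFallbackAsync(CloudQueue primary, CloudQueue secondary, string messageText)
{
    var message = new CloudQueueMessage(messageText);
    try
    {
        await primary.AddMessageAsync(message);
    }
    catch (StorageException)
    {
        // The primary region (or its storage account) is unavailable, so use the secondary region instead
        await secondary.AddMessageAsync(message);
    }
}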

Other items

There are many other items that can be used in Azure, including virtual machines. For most of these items, a bespoke failover strategy is required to achieve hot failover.

More SLA details

The individual SLAs for all Azure services can be found at https://azure.microsoft.com/en-gb/support/legal/sla/
If you need to report on your overall SLA, it is important to understand how to combine them. If you have, say, an Azure App Service with 99.95% SLA and an Azure SQL database with a 99.99% SLA then the overall SLA for both to be up is (99.95% x 99.99%) = 99.94%. This obviously compounds with more components.

On the other hand, adding a hot failover App Service in another data centre using Traffic Manager means you now have a better than 99.95% expected SLA for the App Service component. However, calculating the actual SLA is not practical due to the presence of “systemic risk”; There is one risk of a single data centre going down and a separate risk of a worldwide outage of Azure App Services.
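
The back-of-the-envelope arithmetic, which assumes independent failures and therefore ignores the systemic risk just mentioned, looks like this:
// Components in series: everything must be up at the same time
double appService = 0.9995;
double sqlDatabase = 0.9999;
double serial = appService * sqlDatabase;                   // ~0.9994, i.e. 99.94%

// Two App Service deployments behind Traffic Manager: only one needs to be up
double parallel = 1 - (1 - appService) * (1 - appService);  // ~0.9999998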

Help?

If you have a quick question, ping me on twitter.

If you want more extensive advice and guidance, my company NewOrbit offers help to other companies who are moving to Azure. We have been building systems on Azure since 2011 and are a Microsoft Cloud Gold Partner.

Monday, 30 January 2017

Combine documents with other data in Azure Search

Azure Search has built-in support for indexing documents in blob storage, which works great. But what if you want to combine the documents with other data, for example if you are building a recruitment system and want to search on, say, location and CV content at the same time?

TL;DR: Indexers in Azure Search use a "create or update" approach; as long as your different indexers use the same id, they can all write data to the same document in the index.

Terminology in Azure Search

(deliberately simplified and made Azure Search specific)

An index in Azure Search is more like a table in an RDBMS or a collection in a document database than a traditional database index. It has a specific "schema" (the index definition) that specifies the structure of the documents stored in the index.

A document is a single entity in Azure Search, think "record" or (indeed) document in a document database. It has an id that is unique within that index.

An indexer is something that takes data from a source, like Blob storage, SQL or something else, re-formats it and writes it to documents in an index in Azure Search.

A data source is a definition in Azure Search of somewhere that an indexer can read data from. It's sort of like a connection string.

Scenario

The specific scenario I have is the need to build a database of candidates for a recruitment system. I want to be able to search the contents of CVs, but I also want to be able to filter and use faceted search on various bits of data about the candidate themselves, such as location, qualification and so on.

Azure Search provides built-in functionality to read the contents from a range of files stored in Azure Blob Storage, so that sounded pretty ideal; just upload the CVs to blob storage, point Azure Search at it and be done. Of course, that would not combine it with the other candidate data. Initially I thought about several options:
  • I could extract the content of the files myself and write code to combine it with candidate data in code. But extracting content out of files is not a trivial problem.
  • I could let Azure Search index the files, then write code to extract the content back out of Azure Search and write it back to another index. But that seemed like a very long-winded solution.
  • I asked Microsoft if I could somehow access the content-extraction feature directly. But you can't.

The solution

Liam Cavanagh gave me the outline solution with this statement;
Leverage Blob Storage with Data Store (such as Azure SQL or DocumentDB):  If you store your files in Azure Blob storage, and your structured data in something like Azure SQL or DocumentDB, you can use both the Azure Search Blob Indexer as well as the Azure Search SQL or DocumentDB Indexer together.  The only trick is to make sure that the unique key that is defined for each document matches between Blob and the structured data store.  For example, if you choose to use the Base64 Encoded filename as the key, then you would need to make sure that the matching content in your structured datastore also contains this value.  That way the Indexer which does something called MergeOrUpdate, will first take a row from say the Blob Storage and insert it as a new row in Azure Search and then it will bring in the data from the structured data store and update the existing row with the new fields.

With a bit more help from Liam I put together the sample solution outlined below. I should also mention that in this solution I am using the built-in tools in Azure Search to read data directly from SQL and Blob storage. If you wanted to use code to manually write to the index instead of letting Azure Search read from SQL then you can do that too, of course. Or read from some of the other supported data sources, or combine it all or...

Implementation

You'll need to create an Azure Search account and whatever data sources you want to use. You can do a number of the steps directly in the Azure Portal, though not all of them. For the ones you can't I will show the appropriate JSON payload you need to send. It is ridiculously easy with something like Postman. You just need to make sure you add two headers to your requests;
  • Content-Type : application/json
  • api-key : [an admin key for your Azure Search instance]
In summary, this is what we are going to build:



Create the Index

You can create a new index directly in the Azure Portal, which is what I did. It has a very nice editor that makes it quite easy. I ended up with something looking like this;

Please note the "content" field; When Azure Search indexes files, it will place the content of those files in the content field. Id is there by default and we need that, so leave it alone, but the remaining fields can be whatever you like. I put a name in there for the Candidate Name and the last two fields are some fields I added so I could experiment with faceted search.

Create the data sources

Next we need to tell Azure Search where it can get the data - we need to create the Data Sources.
You can't create a Data Source on its own in the Portal; there is an option to "import data" but that combines setting up the data source with the index and the indexer, so we can't use it for our purposes.
Time to start posting JSON (see above).

POST these to https://yoursearchservice.search.windows.net/datasources?api-version=2016-09-01 and replace the XXX with valid connection strings.
{
    "name" : "blobcvs",
    "type" : "azureblob",
    "credentials" : { "connectionString" : "XXX" },
    "container" : { "name" : "cvs" }
}

{
    "name" : "candidates",
    "type" : "azuresql",
    "credentials" : { "connectionString" : "XXX" },
    "container" : { "name" : "Candidates" }
} 
This tells Azure Search how to access your data.

Create the indexers

POST these to https://yoursearchservice.search.windows.net/indexers?api-version=2016-09-01
{
    "name" : "candidateindexer",
    "dataSourceName" : "candidates",
    "targetIndexName" : "candidates",
    "fieldMappings" : [ { "sourceFieldName" : "Thingiemajics", 
                          "mappingFunction" : { "name" : "jsonArrayToStringCollection" } 
                      } ]
}
This tells Azure Search to take the data in the SQL database specified in the SQL data source and create a document in Azure Search for each row. Azure Search will automatically match fields with the same names; I've got an Id field as well as Name, Type and Thingiemajics columns in SQL. The only one that is a bit special is Thingiemajics; I'm storing an array of tag values in that field in SQL in the format ["red", "white", "blue"] and putting that mapping function in there tells Azure Search to make them individual tags that can be filtered individually. See the docs for more details.

Before I create the indexer for the files, let me just take a little detour. If you remember, the original statement was that for this to work, the two different indexers need to use the same id for data about the same candidate. The SQL indexer in my example uses the database ID of the candidate, so we need to ensure that when Azure Search indexes the CV for a candidate it uses the same id. By default Azure Search will use the filename, which is obviously no good in this situation. The way I solved this was to add a custom metadata property to the blob when I uploaded it to Azure Blob Storage, something like this:

// blob is a CloudBlockBlob and identifier is the candidate's database id
using (var fileStream = System.IO.File.OpenRead(file))
{
   await blob.UploadFromStreamAsync(fileStream);
}
// Store the candidate id as custom metadata so the blob indexer can map it to the index id
blob.Metadata.Add("mykey", identifier);
await blob.SetMetadataAsync();
Here I have called it "mykey", but it could be called anything.

On to the indexer, which is created with this:
{
    "name" : "cvindexer",
    "dataSourceName" : "blobcvs",
    "targetIndexName" : "candidates",
    "fieldMappings" : [ { "sourceFieldName" : "mykey", "targetFieldName" : "id" } ],
    "parameters" : { "configuration" : { "failOnUnsupportedContentType" : false } }
}
The most important thing here is that I tell Azure Search to take the metadata property "mykey" and map it to the "id" field in my index. That's all that is required to ensure the contents of the CV ends up in the same search document as the other Candidate information.
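
To show the end result, here is a hedged sketch of querying the combined index with the .Net SDK of the time (Microsoft.Azure.Search); the service name, query key, filter value and field names are assumptions based on the example above:
using System;
using System.Collections.Generic;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var searchClient = new SearchIndexClient("yoursearchservice", "candidates",
                                         new SearchCredentials("<query key>"));

// Full-text search across the CV content, combined with a filter and facet on the SQL-sourced Type field
var results = searchClient.Documents.Search("project manager",
    new SearchParameters
    {
        Filter = "Type eq 'Permanent'",        // 'Permanent' is just an example value
        Facets = new List<string> { "Type" }
    });

foreach (var result in results.Results)
{
    Console.WriteLine($"{result.Document["Name"]} (score: {result.Score})");
}

This only works if the Type field was marked as filterable and facetable in the index definition.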

Notes

In my solution here I added an ID property to the blob meta data. I could, of course, have gone the other way instead and added the blob reference to the SQL data and then remapped that reference to "id" in the SQL indexer. Both approaches are equally valid, both have pros and cons, including the need to manually maintain uniqueness of the ID property and the need to base64 encode filenames.

You should also note that this works because the different data sources do not share any field names; if the two indexers both return the same field, one will win and overwrite the data from the other.

Friday, 6 January 2017

How many ways can I host a web app in Azure?

I got talking to a colleague about how many ways there are to host web apps in Azure and even managed to surprise myself by just how many ways I could think of. It inspired me to compile this list, which is just off the top of my head. I'm sure there are more - if you can think of other ways, please leave a comment.

For the purposes of this, I am defining a web app as something that has a user-facing UI and some server-side functionality. I have listed a few additional options at the bottom if you only need one of those things.

I put this list together real quick like and the description of each service is just a very quick summary, mainly just from memory so please do not view this as a definitive or highly accurate document; It's mainly just a fun exercise. That said, do please point out any factual errors in the comments so I can correct them.

Web Apps

This is the default option and probably what you should choose if in doubt. It's getting a lot of love from Microsoft at the moment and is constantly getting new features. You get fail-over and auto-scaling by default, plans from free to expensive and it is available in both Windows and Linux flavours (in preview). It's very easy to get started with and supports pretty much anything from static pages to complex deployment processes, staging slots and even a built-in basic CD pipeline.
The main thing to be aware of is that you don't get admin access to the server, so if you need to, say, customise which SSL protocols are available or you need to install fonts, you are out of luck. From experience, it is very rare that you need this, though. 

Cloud Services

This is the original Platform-as-a-Service option in Azure. It doesn't get much love these days, but I am still fond of it for those few situations where I need more than Web Apps can give me. Essentially, you provide Azure with an application package and Azure will take care of deploying that to one or more servers for you. You do get full admin access to the servers so you can do what you like - as long as you script it as part of your package. Patching, fail-over, load-balancing, auto-scaling, health monitoring etc is all taken care of for you.

Functions

This isn't really meant for doing a proper web app, but you can write a collection of functions that acts as an API or return some HTML etc, so you could certainly do it if you really wanted. Understand me right, though, Functions are brilliant for what they are meant to do, even if building whole web apps isn't it. That said, if you just need a couple of simple APIs to support a front-end app then it is certainly something you should consider.

Service Fabric

Not very well known, but Azure provides a pretty advanced micro-services framework that you can use for building sophisticated, large-scale applications. It supports both a service model and a basic Actor model out of the box. It's got a pretty high base cost relative to other Azure services due to the minimum number of nodes you have to use, but if you have a need for lots of scale then you should definitely look at this. Azure uses it to power a lot of their own architecture, including both SQL Azure and DocumentDb. 

Virtual Machine

You can, of course, deploy good old-fashioned virtual machines and run your web app on them. You are then responsible for patching and some level of maintenance yourself. You can define images so Azure can do auto scaling for you as well. I personally try to avoid using VMs as far as I can as I don't like to be responsible for patching and maintenance etc - yet, I still have about 40 of them :).

Container Services / Docker

If you are one of those cool kids who like Docker, you are in luck. Azure has native support for Docker and supports DC/OS, Docker Swarm and Kubernetes out of the box.


So, how many was that? Six, I think, though I probably shouldn't count the Functions one :)

Related

As if all the above wasn't enough, there are a couple of other technologies that can also be used to deal with web sites or APIs.

Blob Storage / CDN

Blob storage is Azure's file storage system (okay, it's more than that, but let that suffice for now). You can share a container publicly and put html, js, css and whatever other files you like in it. I quite often use it for static assets, though in theory you could host a whole website in there. Azure CDN can sit on top of blob storage and gives you geo-replication of the files. You can map custom domains to blob storage as well. Of course, this is all static so I couldn't quite include it in my list above. 

Traffic Manager

Traffic Manager doesn't host anything, but it is worth understanding where it fits in. Pretty much all of the above options include a load balancer (automatic, you don't really have to worry about it) and fail-over within a data centre. If you need fail-over between data centres or want to route traffic to servers close to your users, you can use Traffic Manager. It works at the DNS level and is used to direct user requests for a given domain name to the nearest available data centre. 

API Management

Not strongly related to web app hosting, but I just thought I'd mention it; Basically, if you develop an API and you want to give external users access to it you probably want to do things like controlling who can use it (maybe so you can bill them), rate limiting, authentication and so on. Azure API management deals with all that as a service that just sits in front of your naked API so you don't have to write it all yourself.

Takeaway

When I started this list, I really didn't expect it would be this long. I think it's great there is so much choice and I know that each option has its own set of strengths and weaknesses. We use most of the technologies listed here on different projects and for different reasons, and I'm very happy that I can choose. At the same time, I think it is probably quite tough for someone new to Azure to even get started on figuring out which of the many options are right for their particular scenario. And I have only dealt with web app hosting here, not the multitude of other things you can do.

I just meant for this post to be a fun little exercise but, having written it, I should mention that at NewOrbit we have recently started helping other companies move to the cloud and sharing our years of Azure experience with them. 

If you have any questions or want to talk more, ping me on twitter or add a comment below.

Tuesday, 6 December 2016

Find docs with no PartitionKey in Azure DocumentDb

When you are using Partitioned Collections in Azure DocumentDb you need to specify a Partition Key on each Document. At least, I thought you did. But, it turns out that you actually can save documents without a partitionkey. But if you do, you'll have a hard time retrieving or deleting them - until you meet Undefined.Value.
Note: This post is written for C#, I am not sure about the equivalent for other languages.


Details

If you create a Partitioned Collection in Azure DocumentDb you probably think that every document you save must have a partitionkey property and probably also that it must have a value. In this post I am dealing with the situation where you don't have a partition key property on your document at all, not the situation where you have one but you set it to null or an empty string.

For example, if you have created your collection with code similar to this;
var docCollection = new DocumentCollection()
{
   Id = this.collectionName
};
docCollection.PartitionKey.Paths.Add("/partitionKey");
await docClient.CreateDocumentCollectionAsync(
                    UriFactory.CreateDatabaseUri(this.dbName), 
                    docCollection);
and you then try to save an instance of a class that looks like this:
public class MyItem
{
    [JsonProperty("id")]
    public string Id { get; set; }

    public string SomeValue { get; set; }
}
then you may expect to get an error. But, in fact, it will save just fine as I found out to my detriment after a major refactoring.
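
For illustration, a save along these lines (using the database and collection created above) completes without an error:
await docClient.CreateDocumentAsync(
                    UriFactory.CreateDocumentCollectionUri(this.dbName, this.collectionName),
                    new MyItem { Id = "1", SomeValue = "Hello" });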

Now that you have that item in the database you will find it hard to retrieve it and even harder to delete it - until you meet your new friend Undefined.Value.

How to read the document:

// Passing Undefined.Value as the partition key targets documents that have no partition key property at all
MyItem item = (dynamic)client.ReadDocumentAsync(
                           UriFactory.CreateDocumentUri(DbName, CollectionName, id),
                           new RequestOptions() {
                             PartitionKey = new PartitionKey(Undefined.Value)
                           })
                         .Result.Resource;

How to delete the document:

client.DeleteDocumentAsync(
          UriFactory.CreateDocumentUri(DbName, CollectionName, id), 
          new RequestOptions() { PartitionKey = new PartitionKey(Undefined.Value) });
Many thanks to Aravind Ramachandran for telling me about Undefined.Value.





Thursday, 29 September 2016

Find Documents with missing properties in Azure DocumentDb with the .Net SDK

Azure DocumentDb stores documents as JSON. One of the effects of this is that sometimes you may end up with documents in the database that have missing properties, and it can be quite tricky to search for them with the .Net SDK. This blog post shows a simple approach to doing it.

Background

Most commonly, you would encounter the issue of the missing property when you add a new property to an existing class in your .Net code. There is no automatic method of adding this new property to all the existing entries in the database, short of re-saving them all.

Alternatively, you can explicitly configure Json.Net to not store properties that have null values like this:
JsonConvert.DefaultSettings = () => 
  new JsonSerializerSettings
       {
          NullValueHandling = NullValueHandling.Ignore
       };

You can use this configuration option to test the behaviour I am describing here or to save space in the database.

For example, imagine you have a class called MyItem looking like this:
public class MyItem
{
    [JsonProperty("id")]
    public string Id { get; set; }
    public string SomeValue { get; set; }
}

If you have an item where SomeValue is null, by default that will be serialised and stored in DocumentDb like this:
{
  "id" : "1",
  "SomeValue" : null
}

However, if you configure Json.Net to not store null values (or the SomeValue field was added to your .Net code after you stored this item in DocumentDb) it will look like this in the database:
{
  "id" : "1",
}


Selecting missing properties with SQL

According to the documentation you can use SQL to select missing properties like this:
SELECT f.lastName ?? f.surname AS familyName
FROM Families f
You can then extrapolate from that example to, for example, select items where a property is missing.

Finding items with null or missing with the .Net SDK

Imagine you have added the SomeValue property to the MyItem class after you had already saved some items. Further, sometimes you store a null in the SomeValue property. Or you have configured Json.Net to ignore null values. And now you want to find all the items where SomeValue is either missing or null.
You might try this: 
var query1 = client.CreateDocumentQuery<MyItem>(collectionUrl)
              .Where(i => i.SomeValue == null);
But you will find that this will not actually return any results - at least, it won't return any documents where SomeValue is not present at all. However, this odd-looking statement will work:
var query2 = client.CreateDocumentQuery<MyItem>(collectionUrl)
              .Where(i => (i.SomeValue ?? null) == null);

It is using the null-coalescing operator to make DocumentDb return a null value for the property if it does not exist, which we can then compare to null.

I have tested this with versions 1.6, 1.8 and 1.10 of the SDK, but I would advise you to put an integration test in your code if you are going to rely on it, just in case the behaviour changes in the future. You'll probably also want to put a comment wherever you use this syntax, as R# is quite keen to tell you that you should get rid of the "?? null" part.
Finally, I have not done any performance testing on this, but I suspect DocumentDb won't be able to use any indexes to execute this query; it will probably have to evaluate each document in a scan, so use it with caution.

A full sample

If you want to try this out, there is a full example here: https://gist.github.com/flytzen/b8dbcb2079a66b1c8a67005ef5198052

Friday, 26 August 2016

Auto publish Azure Web Jobs with ASP.Net Core RTM

This is an update of my original post on how to do this with ASP.Net Core RC1
At the time of this writing, there is no tooling to auto publish Azure Web Jobs with an ASP.Net Core website. You can do it with old-style websites, but not yet with the new ones. The tooling is highly likely to appear eventually, but as of right now you have to take a few steps to set it up yourself, which is what I describe in this post. For clarity, what I am describing works only with source control deploy (git etc), not with publishing from Visual Studio.

This information will eventually become obsolete, probably around the RTM of the ASP.Net Core Visual Studio tooling. If that has happened and I have not updated the post, please post a comment and I'll get on it.
Note that for very simple, self-contained web jobs, there may be a simpler approach (see http://stackoverflow.com/a/33291097/11534 for some thoughts). In this post I am catering for the scenario where your webjob references another project in your solution - though it'll work just as well if it doesn't.

In summary

You have to use a custom deployment script and insert an action to publish your webjob to the path that Azure looks for webjobs in: App_Data/Jobs/Continuous for continuous jobs. Azure will automatically detect content in that folder and assume it's a webjob, so all we really have to do is make sure our webjob is copied there. And yes, it will happily overwrite a running webjob; the Kudu functionality handles that somehow.

The reason we are publishing, using dotnet publish, is to ensure we get all the dependencies. It won't re-compile so it's just a copy operation.
If you had a very simple webjob that was self-contained, you could just copy the files, as long as you added a run.cmd (see link above).

In detail

Prepare a site
  1. Set up a solution with an ASP.Net Core web site and a webjob written as an ASP.Net Core console app.
  2. Set up source control deploy to an Azure website. It shouldn't matter which type.
  3. Wait for the initial deploy of the site. 

Set up a custom deployment script

In this example, we will download the deployment script that Azure has created. There are ways to also do this with the Azure CLI, but the generated script is not quite the same at this point in time - which may or may not matter.
  1. Get the auto generated script
    1. Log in to the web app console at https://[yoursite].scm.azurewebsites.net
    2. Go to Tools - Download deployment script
    3. Unzip the downloaded zip file to the root of your solution folder
  2. Modify your deploy.cmd file in the following ways
    1. Add a line like this near the top - just to make it easier to check that your custom script is being used (whatever you put after echo will be output in the log file)
      echo CUSTOM SCRIPT Start
      
    2. Near the middle of the file you will find a series of steps that are numbered. One of the steps will look like this;
      NOTE: The indented lines will be on a single line, the line breaks are only added here for readability
      :: 2. Build and publish
      call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\MySolution.sln" 
             /nologo 
             /verbosity:m 
             /p:deployOnBuild=True;
                AutoParameterizationWebConfigConnectionStrings=false;
                Configuration=Release;
                UseSharedCompilation=false;
                publishUrl="%DEPLOYMENT_TEMP%" 
             %SCM_BUILD_ARGS%
      
      
    3. Below that insert these lines (updating the path with the actual path to your webjob in the solution)
      NOTE: The indented lines need to be put on a single line, the line breaks are only here for readability
      :: 2.1 Publish webjobs
      echo STARTING TO PUBLISH WEBJOBS
      echo DEPLOYMENT_TEMP is %DEPLOYMENT_TEMP%
      call :ExecuteCmd dotnet publish 
             "%DEPLOYMENT_SOURCE%\src\MyWebJob\project.json" 
               -o "%DEPLOYMENT_TEMP%\App_Data\Jobs\Continuous\MyWebJob" 
             -c Release
      
      If you have more than one web job, just add individual publish lines for each one

Check that it worked

Push your changes and wait for Azure to deploy, then look at the webjobs in the Azure portal. You should see your job there. In case you don't, have a look at the publish log file to see if there are any errors in there.

Thursday, 25 August 2016

InfoSec with SQL Azure

I've been using SQL Azure since 2011 and it's been a journey. One of the big problems I used to have was passing security audits from some of our clients; we deal with a lot of highly sensitive data, so we are under a lot of scrutiny to make sure it is protected.

A lot of what you have to comply with when you start going into the areas of ISO27001 and PCI compliance is not just about technical security in the way we normally think about it; it's just as much about people and processes. In my experience from filling in dozens of security questionnaires, I believe that SQL Azure can tick all the boxes, as long as you switch on the right features. Just to be clear, I'm not required to comply with PCI so can't vouch for that, but I do have to comply with a number of ISO27001 and Data Protection requirements.

Box ticking

Most security audits will ask you if your data is "encrypted at rest". If you understand how Azure really works and where  that requirement originally comes from, you'll know that this is a meaningless requirement when you're in Azure. But, just try to convince a security auditor of that. So go ahead and switch on Transparent Data Encryption  and you can tick that box on the questionnaire.
Incidentally, Azure also recently enabled you to do something similar for Blob storage - again mainly so you can tick the box.

Intrusion detection

SQL Azure has a neat Threat Detection facility to monitor the use of your database and alert you to anomalous behaviour. I've seen it detect potential SQL injection attacks and allegedly it will detect "unusual" behaviour, though I have yet to actually see that (thankfully :)).
This is  a useful feature to potentially detect both external and internal attacks (see below).
You have nothing to lose by switching it on.

Internal attacks

As a developer it's easy to think that once you pass penetration testing, your job is done. But, for the organisation, that's only half the battle. A proper security audit will look just as much at how you are preventing and detecting what they sometimes call "Internal Data Leakage". Basically, they are worried that people inside your organisation may access the data and leak it. This may be deliberate, it may be a result of social engineering or it may be that an external agency decided to hack a person who works for you. It's the modern equivalent of sneaking in through the kitchen entrance instead of taking a battering ram to the front gates. It's hardly surprising there is a lot of focus on this, given that most of the big attacks recently seem to have come from some kind of inside job.
You may well trust everyone in your organisation not to want to deliberately betray you and to be sensible about not having their laptops hacked - but an auditor won't, so you may as well buckle up and do the right thing. SQL Azure makes it surprisingly easy, though there are a number of manual steps that could be made a lot easier with some better tooling, especially if you have many databases.

Step 1 - Give users individual logins

Your application probably uses a master username and password to log in to SQL. It's tempting to just give that to the people who need to go and look stuff up in the database, including developers, devops, support etc. But this means you have no way of knowing which actual person did what. Worse, when someone leaves the organisation you may not find it that easy to change the application's SQL Password.

You can create individual users in SQL Azure to give to the people who need access, and I urge you to do so. You may want to give users read-only access while you are at it. The tooling is lacking, in that you have to create the users using T-SQL from something like SQL Server Management Studio or similar; you can't manage the users in the Azure Portal, which is quite an oversight IMO.
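
As a rough sketch of the T-SQL involved, wrapped in ADO.NET so it can be scripted (the user name and password are placeholders; you can equally run the same two statements from SQL Server Management Studio):
using System.Data.SqlClient;

// Connect to the target database (not master) with an admin login
using (var connection = new SqlConnection("<admin connection string for the database>"))
{
    connection.Open();
    using (var command = connection.CreateCommand())
    {
        // Create a contained database user and give them read-only access
        command.CommandText = @"
            CREATE USER [jane.doe] WITH PASSWORD = '<a strong password>';
            ALTER ROLE db_datareader ADD MEMBER [jane.doe];";
        command.ExecuteNonQuery();
    }
}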

If you use Azure AD for authentication you can add Azure AD users to the SQL Databases. This has the benefit that user access will be automatically revoked from all databases when the user's AD Account is disabled - probably when they leave the organisation. It also means that, as of August 2016, you can use 2 Factor Authentication with the SQL logins - just download the latest version of SQL Server Management Studio 2016 and choose the Universal authentication method.
At NewOrbit, all user access to Azure and SQL  Azure is controlled through Azure AD, through our Office 365 subscription and all users are required to have 2FA enabled.

Step 2 - hide the sign-in details for the application

Once you have given users individual logins, you'll need to change the password the application uses to log in to SQL. And you need to ensure it's not available to developers or others. You can do this by specifying the connection string in the portal, if you use web apps. However, there are two other methods you may want to consider:

  • You can let your Application itself authenticate to SQL with Azure AD using a certificate - see the same link as above about using Azure AD with SQL Azure. This way there literally is no password and as long as you delete the certificate after uploading it, there is no way for anyone to log in to SQL as the application.
  • You could store the SQL password in Azure Key Vault.


Step 3 - audit all the things

SQL Azure has an Auditing facility whereby you can audit all queries made against the database. Given that you have assigned an individual user name to each of your people, you can (and should) audit exactly which queries each of them has run - and Azure will even flag up queries that return an unusually large number of records. This is gold dust for complying with InfoSec requirements about preventing internal data leakage.

Step 4 - use data masking

SQL Azure has a facility to "mask" certain columns, such as email addresses, phone numbers and so on. I think the original design of this feature was as an aid to applications, but actually it's really useful in complying with InfoSec requirements about limiting access to sensitive data: you can set it up so your application has normal access to all the data, but the logins your users have will automatically mask sensitive data. This means your users can write queries as normal and even see if data exists, but sensitive details that they don't need for troubleshooting will be masked in the output.
Do understand that this feature is not completely tamper proof; you can still search on the underlying data, so with patience you can triangulate your way to the real data - but then that will show up in the audit logs.

Conclusion

When properly configured, I believe SQL Azure offers a very compelling InfoSec story, one that should be able to satisfy most security audits you are likely to encounter.