Logging to Table Storage is a very common requirement when you’re working on Azure projects, and there are various ways to do it. You can set up log4net to use the Trace log that is available in the .NET Framework and then configure Windows Azure Diagnostics to periodically transfer the logs to Table Storage. Another way is to log to an EventLog and, again, have Windows Azure Diagnostics transfer those logs on a scheduled interval. A third way is to log to regular log files and have those files transferred on a scheduled interval. These solutions have been covered on various other blogs.

However, there is some added complexity, as the process of logging to Table Storage differs a bit depending on which platform you’re deploying to. For example, on an Azure Website you cannot use the EventLog, because you don’t have the rights to write to a custom source. Creating an event source of your own requires administrator privileges, which means using WebRole.cs if you’re deploying a WebRole, or a custom start-up script to do it for you. And I’ve run into more situations like this that require some special tweaking.
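To illustrate the WebRole.cs approach mentioned above, here is a minimal sketch of creating a custom event source during role start-up. It assumes the role runs elevated (via `<Runtime executionContext="elevated" />` in ServiceDefinition.csdef), and the source name "MyApplication" is just a placeholder:

```csharp
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Creating an event source requires administrator privileges, which is
        // why this has to happen here (elevated) rather than in the web app itself.
        if (!EventLog.SourceExists("MyApplication"))
        {
            EventLog.CreateEventSource("MyApplication", "Application");
        }
        return base.OnStart();
    }
}
```

Without the elevated execution context, `CreateEventSource` will throw a `SecurityException` at start-up.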

There is an existing AzureAppender floating around, but it is based on one of the solutions described in the first paragraph and unfortunately does not work on Azure Websites. It does a great job of simplifying logging to Table Storage, but it is not foolproof.

Writing my own TableStorageAppender

To have one solution that fits all, I’ve written a TableStorageAppender that works on most, if not all, platforms. It has been tested on a WebRole, a WorkerRole, an Azure Website and even regular desktop applications.

It works by implementing the log4net IAppender interface. Whenever something is logged, the appender converts the log statement to an internal object (WadTableEntity) and adds it to a queue. The queue temporarily stores the objects, because we don’t want to block the thread executing the log statement. On a scheduled interval the queue is emptied and the items are written to the specified storage table. This happens on a separate thread, so application performance is not impacted, and smart locking of the queue ensures your application can continue logging while a transfer is in progress. And because you can specify which table to use, multiple applications can log to the same storage account.
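The queue-and-timer pattern described above can be sketched roughly as follows. This is a simplified illustration, not the actual Ms.Azure.Logging implementation: the real appender uses WadTableEntity and more careful locking, and `WriteBatchToTableStorage` here is a hypothetical stand-in for the storage write:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using log4net.Appender;
using log4net.Core;

public class QueueingAppenderSketch : IAppender
{
    private readonly ConcurrentQueue<LoggingEvent> _queue = new ConcurrentQueue<LoggingEvent>();
    private readonly Timer _transferTimer;

    public QueueingAppenderSketch(TimeSpan transferInterval)
    {
        // Empty the queue on a background thread so logging never blocks the caller.
        _transferTimer = new Timer(_ => Flush(), null, transferInterval, transferInterval);
    }

    public string Name { get; set; }

    public void DoAppend(LoggingEvent loggingEvent)
    {
        // Cheap operation: just enqueue; the actual write happens on the timer thread.
        _queue.Enqueue(loggingEvent);
    }

    private void Flush()
    {
        var batch = new List<LoggingEvent>();
        LoggingEvent item;
        while (_queue.TryDequeue(out item))
            batch.Add(item);

        if (batch.Count > 0)
            WriteBatchToTableStorage(batch); // hypothetical: persist the batch to Table Storage
    }

    public void Close()
    {
        _transferTimer.Dispose();
        Flush(); // write any remaining events before shutting down
    }

    private void WriteBatchToTableStorage(List<LoggingEvent> batch)
    {
        // Placeholder for the Table Storage write.
    }
}
```

Using `ConcurrentQueue<T>` instead of explicit locks is one way to get the "smart locking" behaviour: enqueues and dequeues can safely overlap, so a transfer in progress never blocks new log statements.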

The format used is compatible with the default Windows Azure Diagnostics (WAD) format, so any tools you use to read the logs from storage will still work.

How to use
 
Add a reference to Ms.Azure.Logging by installing the Ms.Azure.Logging NuGet package. This package will also install log4net if you don’t already have it, and adds the System.Configuration reference:

Install-Package Ms.Azure.Logging

Manually add the Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.StorageClient references if you don’t have them already. They are not enforced by NuGet, but they are required.

Somewhere during your application start-up, preferably the first thing you do, add the following code:

var credentials = new StorageCredentialsAccountAndKey("account", "key");
LoggingHelper.InitializeAzureTableLogging(credentials, logLevel: Level.All);
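Once the appender is initialized, you log through the standard log4net API as usual; the appender picks everything up transparently. A minimal sketch (the `OrderService` class is hypothetical, purely for illustration):

```csharp
using System;
using log4net;

public class OrderService
{
    // Standard log4net logger; no Table Storage specifics needed here.
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderService));

    public void Process()
    {
        Log.Info("Processing started");
        try
        {
            // ... actual work ...
        }
        catch (Exception ex)
        {
            // The exception details end up in Table Storage along with the message.
            Log.Error("Processing failed", ex);
        }
    }
}
```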

And when your application closes, such as in Application_End for a web application or in a try/finally for a Windows Forms desktop application, make sure you flush the final log statements:

LoggingHelper.FlushAppenders();
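For a web application, a natural place for this call is Global.asax. A sketch, assuming initialization happened in Application_Start:

```csharp
using System;
using System.Web;
using Ms.Azure.Logging; // assumed namespace of the LoggingHelper class

public class Global : HttpApplication
{
    protected void Application_End(object sender, EventArgs e)
    {
        // Push any queued log statements to Table Storage before the AppDomain unloads.
        LoggingHelper.FlushAppenders();
    }
}
```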

That’s all. This uses some sensible defaults: a 1-minute transfer interval, a 15-minute log marker interval, a Debug threshold and a default layout.

Check the source code and readme at the Ms.Azure.Logging github page for more examples.
