Monday, 20 April 2015

When is a repository not a repository?

I think the answer to this question is “most of the time”, at least that’s the case judging by a lot of the code I’ve seen lately.

For example, during a recent recruitment campaign we’ve been setting a fairly open technical test for the candidates, a test that required them to write some code to solve a problem. A number of candidates chose to expose data using something called a ‘repository’ but in almost every case the so-called repository didn’t look like a repository to me at all.

Does the following look like an interface for a repository to you?

public interface ISeatRepository
{
    IEnumerable<Seat> GetAllSeats();

    IEnumerable<Seat> GetSeatsForVenue(Venue venue);

    IEnumerable<Seat> GetSeatsForCustomer(Customer customer);
}

This is typical of the kind of thing I have been seeing.

My concern is that the term ‘repository’ is being applied to anything that exposes data with no real understanding of the original pattern. This is at best misleading and in most cases demonstrates a lack of understanding of the pattern and/or a laziness to use a more appropriate architecture.

Defining the Repository Pattern

Martin Fowler’s web site describes a repository in the following way:

“A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.” [1]

I like this definition. The take-away point is that the repository “Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects.” Note, ‘a collection-like interface’.

To me the pattern is quite clear: the repository should look like a collection. For example:

public interface IRepository<T>
{
    void Add(T instance);
    void Remove(T instance);
    IEnumerable<T> FindAll();
    T Find(int id);
    IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
}

There are many alternative approaches to repositories but the one above serves well enough as an example.
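
To make the collection-like shape concrete, here is a minimal in-memory implementation of the interface above. It is purely illustrative – a real repository would delegate to an ORM or to hand-written data mapping code – but it shows how clients end up treating the repository like a simple collection of domain objects.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

/// <summary>
/// An illustrative in-memory repository. It exists only to demonstrate the
/// collection-like shape of the interface; a real implementation would
/// encapsulate the data mapping layer.
/// </summary>
/// <typeparam name="T">The domain type managed by the repository</typeparam>
public class InMemoryRepository<T> : IRepository<T>
{
    /// <summary>The backing store, keyed by identifier.</summary>
    private readonly Dictionary<int, T> items = new Dictionary<int, T>();

    /// <summary>A function that extracts the identifier from an instance.</summary>
    private readonly Func<T, int> idSelector;

    public InMemoryRepository(Func<T, int> idSelector)
    {
        this.idSelector = idSelector;
    }

    public void Add(T instance)
    {
        this.items[this.idSelector(instance)] = instance;
    }

    public void Remove(T instance)
    {
        this.items.Remove(this.idSelector(instance));
    }

    public IEnumerable<T> FindAll()
    {
        return this.items.Values.ToList();
    }

    public T Find(int id)
    {
        T instance;
        return this.items.TryGetValue(id, out instance) ? instance : default(T);
    }

    public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
    {
        // Compiling the expression is fine for an in-memory store; an ORM-backed
        // repository would pass the expression to its query provider instead.
        return this.items.Values.Where(predicate.Compile()).ToList();
    }
}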

One of the key drivers behind the repository pattern is that it isolates domain objects from details of the database access code. This can be a benefit or a drawback depending on your requirements. If for example you need tight control over the SQL then a repository may not be the best way to go; the pattern is designed to hide that sort of detail from client code.

Eric Evans in his book Domain-Driven Design, Tackling Complexity in the Heart of Software talks extensively about repositories. In DDD the repository is a key concept to be used in combination with Aggregates or Aggregate Roots. In fact, repositories should be limited to aggregate roots – you don’t have repositories for every domain type in your system! A discussion of those concepts is beyond the scope of this post but I’d encourage the reader to investigate further.

Evans provides this definition:

“For each type of object that needs global access, create an object that can provide the illusion of an in-memory collection of all objects of that type. Set up access through a well-known global interface. Provide methods to add and remove objects, which will encapsulate the actual insertion or removal of data in the data store. Provide methods that select objects based on some criteria and return fully instantiated objects or collections of objects whose attribute values meet the criteria, thereby encapsulating the actual storage and query technology.” [2]

If you are interested, Vaughn Vernon goes into some detail about repositories, including collection-oriented repositories, in his book Implementing Domain-Driven Design.

Repositories - how and when

Use repositories if you want to present clients with a simple model for obtaining persistent objects and managing their life cycle. Be aware that using this pattern will provide a bland interface to the data – that’s the whole point. If you need to write optimised SQL then this probably isn’t the way to go.

In my experience Object Relational Mappers (ORMs) like Entity Framework are a much less comfortable fit for repositories, but the combination is still possible.

Note that if you use repositories you will probably need an alternative method of managing transactionality. The Unit of Work pattern is commonly used in combination with repositories to solve this problem. Rob Conery has an interesting article on why Repositories On Top UnitOfWork Are Not a Good Idea, but I think a good part of the issues there are caused by Entity Framework and the need to pass DbContext objects around.
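
For orientation, the shape of a typical unit of work abstraction is something like the following. This is only a sketch and the member names are invented here, not taken from any particular library; the idea is that repositories register changes and the unit of work commits them as a single transaction. Disposing the unit of work without calling Commit rolls the changes back, which keeps transaction management out of the repositories themselves.

using System;

/// <summary>
/// An illustrative unit of work. Repositories obtained from it share a single
/// transaction which is committed by calling <see cref="Commit"/>.
/// (Member names are invented for this sketch.)
/// </summary>
public interface IUnitOfWork : IDisposable
{
    /// <summary>Gets a repository for the given aggregate root type.</summary>
    IRepository<T> Repository<T>() where T : class;

    /// <summary>Commits all changes made during this unit of work.</summary>
    void Commit();
}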

One category of data access technology that I don’t think is a good fit with repositories is the micro-ORMs (e.g. PetaPoco, Dapper). These tend to have APIs that sit quite close to the SQL and just don’t fit comfortably behind a collection-like interface. If you use these technologies and your data access API doesn’t have a collection-like interface then don’t call your data access classes repositories!
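
For illustration, data access with a micro-ORM such as Dapper tends to look like the sketch below, where the SQL is front and centre (the table, column names and Seat properties are invented for the example):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

using Dapper;

/// <summary>
/// A query class using Dapper. The hand-written SQL is the whole point of a
/// micro-ORM, which is exactly why it doesn't sit comfortably behind a
/// collection-like repository interface.
/// </summary>
public class SeatQueries
{
    /// <summary>The connection string.</summary>
    private readonly string connectionString;

    public SeatQueries(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public IEnumerable<Seat> GetSeatsForVenue(int venueId)
    {
        using (var connection = new SqlConnection(this.connectionString))
        {
            // Illustrative table and column names.
            return connection.Query<Seat>(
                "SELECT Id, Row, Number FROM Seat WHERE VenueId = @VenueId",
                new { VenueId = venueId }).ToList();
        }
    }
}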

An alternative

An alternative that I like is to implement Commands and Queries. Having seen the potential benefits of Command Query Responsibility Segregation (CQRS), originally put forward by Udi Dahan, I find the basic concept of separating operations that modify data (Commands) from operations that fetch data (Queries) very attractive. Now CQRS is quite an involved architecture, but it’s really an extension of Command Query Separation, a principle that’s been around for years.

Note that:

  • Queries: Return a result and do not change the observable state of the system (are free of side effects).
  • Commands: Change the state of a system but do not return a value.


Jimmy Bogard discusses a similar approach in his post Favor query objects over repositories.

By providing some simple interfaces commands and queries can be easily defined. Firstly queries:

/// <summary>
/// This interface encapsulates queries - read-only operations against a backing store.
/// </summary>
/// <typeparam name="TParameters">The query parameters to use</typeparam>
/// <typeparam name="TResult">The type of result to return</typeparam>
public interface IQuery<TParameters, TResult>
{
    /// <summary>
    /// Execute the query against the backing store.
    /// </summary>
    /// <param name="parameters">The query parameters to use</param>
    /// <returns>The result of the query</returns>
    TResult Execute(TParameters parameters);
}

/// <summary>
/// This interface encapsulates queries - read-only operations against a backing store.
/// </summary>
/// <typeparam name="TResult">The type of result to return</typeparam>
public interface IQuery<TResult>
{
    /// <summary>
    /// Execute the query against the backing store.
    /// </summary>
    /// <returns>The result of the query</returns>
    TResult Execute();
}

And now commands:

/// <summary>
/// An interface for commands. Commands write to a backing store.
/// </summary>
/// <remarks>
/// Commands do not return values because it is expected that the operation will succeed. If it fails an exception will be thrown.
/// </remarks>
public interface ICommand
{
    /// <summary>
    /// Execute the command.
    /// </summary>
    void Execute();
}

/// <summary>
/// An interface for commands with parameters. Commands write to a backing store.
/// </summary>
/// <remarks>
/// Commands do not return values because it is expected that the operation will succeed. If it fails an exception will be thrown.
/// You should create classes to represent your <typeparamref name="TParameters"/>, simple POCOs with properties for the
/// individual parameters to be passed into the <see cref="Execute"/> method.
/// </remarks>
/// <typeparam name="TParameters">The type of parameters to use</typeparam>
public interface ICommand<TParameters>
{
    /// <summary>
    /// Execute the command using the given parameters.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    void Execute(TParameters parameters);
}

/// <summary>
/// An interface for commands with parameters. Commands write to a backing store.
/// </summary>
/// <remarks>
/// Commands do not return values because it is expected that the operation will succeed. If it fails an exception will be thrown.
/// You should create classes to represent your <typeparamref name="TParameters"/>, simple POCOs with properties for the
/// individual parameters to be passed into the <see cref="Execute"/> method.
/// </remarks>
/// <typeparam name="TParameters">The type of parameters to use</typeparam>
/// <typeparam name="TResult">The type of result to return</typeparam>
public interface ICommand<TParameters, TResult>
{
    /// <summary>
    /// Execute the command using the given parameters.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    /// <returns>The result</returns>
    TResult Execute(TParameters parameters);
}

Interfaces such as these allow transactionality to be implemented on the Execute() methods. For example a base class could be created that automatically has transactions applied to an abstract Execute() method using an attribute (I have done this using Spring.Net which provides a TransactionAttribute).
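
As a rough sketch of the idea – using System.Transactions here rather than Spring.Net, though the attribute-driven approach works along the same lines – a transactional command base class might look like this:

using System.Transactions;

/// <summary>
/// An illustrative base class that wraps the work of a command in a transaction.
/// Derived classes implement <see cref="ExecuteCore"/>; Spring.Net's
/// TransactionAttribute achieves a similar effect declaratively.
/// </summary>
/// <typeparam name="TParameters">The type of parameters to use</typeparam>
public abstract class TransactionalCommand<TParameters> : ICommand<TParameters>
{
    /// <summary>
    /// Execute the command inside a transaction using the given parameters.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    public void Execute(TParameters parameters)
    {
        using (var scope = new TransactionScope())
        {
            this.ExecuteCore(parameters);
            scope.Complete();
        }
    }

    /// <summary>
    /// Implemented by derived commands to carry out the actual work.
    /// </summary>
    /// <param name="parameters">The parameters to use</param>
    protected abstract void ExecuteCore(TParameters parameters);
}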

A simple command might look like this:

/// <summary>
/// A command for saving a <see cref="User"/>.
/// </summary>
public class SaveUserCommand : ICommand<User>
{
    /// <summary>
    /// Execute the command using the given parameters.
    /// </summary>
    /// <param name="user">The user to save</param>
    public void Execute(User user)
    {
        // Save the user using your technology of choice...
    }
}
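
And a corresponding query – again just a sketch, with the search criterion invented for the example – might look like this:

/// <summary>
/// A query for finding a <see cref="User"/> by e-mail address.
/// </summary>
public class FindUserByEmailQuery : IQuery<string, User>
{
    /// <summary>
    /// Execute the query against the backing store.
    /// </summary>
    /// <param name="email">The e-mail address to search for</param>
    /// <returns>The matching user, or null if no user was found</returns>
    public User Execute(string email)
    {
        // Fetch the user using your technology of choice...
        return null;
    }
}
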
One thing I have noticed about this approach is that you quickly build up a library of commands and queries which can become difficult to manage and error prone. They all start to look the same. A case of buyer beware.

Wrapping up

If you are writing a data access API and it doesn’t have a collection-like interface then don’t call it a repository. Don’t use the name ‘repository’ as a crutch for poor, lazy design. There are alternatives.

References

[1] Repository, by Edward Hieatt and Rob Mee, http://martinfowler.com/eaaCatalog/repository.html
[2] Evans, Domain-Driven Design, Tackling Complexity in the Heart of Software, pp. 151-152

Sunday, 19 April 2015

Could Not Detect Test Framework error when running ApprovalTests from an automated build

I was making some changes to an automated continuous integration build in TeamCity when I encountered an unexpected error. I had a suite of unit tests using the ApprovalTests framework but when the build was triggered all of the tests failed, tests which ran perfectly from within Visual Studio.

Investigation in TeamCity revealed the following error details (the same for each test):

Test(s) failed. System.Exception : 
Could Not Detect Test Framework

Either:
1) Optimizer Inlined Test Methods

Solutions:
a) Add [MethodImpl(MethodImplOptions.NoInlining)]
b) Set Build->Opitmize Code to False 
   & Build->Advanced->DebugInfo to Full

or
2) Approvals is not set up to use your test framework.
It currently supports [NUnit, MsTest, MbUnit, xUnit.net, xUnit.extensions, Machine.Specifications (MSpec)]

Solution:
To add one use ApprovalTests.Namers.StackTraceParsers.StackTraceParser.AddParser() method to add implementation of ApprovalTests.Namers.StackTraceParsers.IStackTraceParser with support for your testing framework.
To learn how to implement one see http://blog.approvaltests.com/2012/01/creating-namers.html
   at ApprovalTests.Namers.StackTraceParsers.StackTraceParser.Parse(StackTrace stackTrace)
   at ApprovalTests.Namers.UnitTestFrameworkNamer..ctor()
   at ApprovalTests.Approvals.<.cctor>b__c()
   at ApprovalTests.Approvals.GetDefaultNamer()
   at ApprovalTests.Approvals.Verify(IApprovalWriter writer)
   at ApprovalTests.Approvals.VerifyFile(String receivedFilePath)
   at ApprovalTests.Approvals.Verify(FileInfo receivedFilePath)

 


The first point to note is that the build was set to run using the Release configuration.


The fix requires changes to the Visual Studio project settings for the test project.
Open the test project settings for the Release configuration. On the Build tab uncheck Optimize code and click Advanced.


In the advanced settings dialog set Debug Info to full.


OK the dialog and save the project. Commit changes to source control and trigger the build again.

That should do it!

The error is caused by the absence of stack trace information when the source is built using the Release configuration. ApprovalTests infers the name of the approval file by walking the stack trace to get the test method name. No stack trace, no inferred method name, and therefore ApprovalTests can’t find the approval file.
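
If you would rather not change the Release build settings, the error message suggests an alternative: stop the optimiser inlining the test methods. A minimal sketch of that approach using NUnit (the test itself is invented for illustration):

using System.Runtime.CompilerServices;

using ApprovalTests;
using NUnit.Framework;

[TestFixture]
public class ExampleApprovalTests
{
    // NoInlining keeps this method on the stack trace in an optimised build,
    // so ApprovalTests can still work out the approval file name from it.
    [Test]
    [MethodImpl(MethodImplOptions.NoInlining)]
    public void Output_matches_approved_file()
    {
        Approvals.Verify("some output");
    }
}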

Sunday, 12 April 2015

Comparing .Net statsd clients

This post follows on from 2 previous posts: Running statsd on Windows and Testing Log4Net.Async with statsd and Graphite. In those posts I set up a virtual Linux box which hosted statsd and Graphite before experimenting with log4Net to compare performance when using different appenders. In this post I’m going to look at some .Net statsd clients to see how they compare with regards to performance and usability.

Now, the performance tests I describe below aren’t particularly scientific given my machine is running a lot of applications - including the virtual machine hosting statsd and Graphite. I ran each of the tests a number of times and the elapsed times I provide below are rough averages and should be taken as indicative of performance only. This is ‘finger in the air’ testing and any conclusions should be taken with a pinch of salt.

Statsd C# Client

I created a really simple console application to send a counter to statsd from inside a ‘for’ loop which ran 100,000 iterations. For the first version I used the Statsd C# Client which I used in my previous posts.

namespace StatsD.Client.Test
{
    using System;

    using StatsdClient;

    /// <summary>
    /// The main program class.
    /// </summary>
    class Program
    {
        /// <summary>The stopwatch.</summary>
        private static System.Diagnostics.Stopwatch stopwatch;

        /// <summary>
        /// Defines the entry point of the application.
        /// </summary>
        /// <param name="args">The arguments.</param>
        static void Main(string[] args)
        {
            Configure();
            Start();

            for (int i = 0; i < 100000; i++)
            {
                Metrics.Counter("test-counter");
            }
            
            Stop();
        }

        /// <summary>
        /// Creates and starts the stopwatch.
        /// </summary>
        private static void Start()
        {
            stopwatch = System.Diagnostics.Stopwatch.StartNew();
        }

        /// <summary>
        /// Stops the stopwatch and outputs the elapsed time.
        /// </summary>
        private static void Stop()
        {
            stopwatch.Stop();
            Console.WriteLine("Elapsed time: {0}", stopwatch.ElapsedMilliseconds);
            Console.ReadKey();
        }

        /// <summary>
        /// Configure the <c>statsd</c> client.
        /// </summary>
        private static void Configure()
        {
            var metricsConfig = new MetricsConfig
            {
                StatsdServerName = "localhost",
                StatsdServerPort = 8125
            };

            Metrics.Configure(metricsConfig);   
        }
    }
}

Running this took 591 milliseconds.

From a usability point of view the API is quite nice, offering the static Metrics class used above or a more direct Statsd class which you have to instantiate. The nice thing about the static Metrics class is that you can use it from within a ‘using’ block.

using (Metrics.StartTimer("stat-name"))
{
  DoMagic();
}

However, I’m rather cautious of using static classes as they tend to lead to code that violates the open/closed principle and can make testing awkward. I’m inclined to wrap frameworks in my own service classes that implement interfaces I can easily mock in unit tests.
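
For example, a thin wrapper along these lines – the interface and class names are mine, not part of the library – keeps the static class at arm’s length:

using System;

using StatsdClient;

/// <summary>
/// A thin wrapper around the static Metrics class so that client code can depend
/// on an interface which is easy to mock in unit tests. (Illustrative only.)
/// </summary>
public interface IMetricsService
{
    /// <summary>Increments a counter with the given name.</summary>
    void Counter(string statName);

    /// <summary>Starts a timer that stops when the returned value is disposed.</summary>
    IDisposable StartTimer(string statName);
}

/// <summary>The Statsd C# Client backed implementation.</summary>
public class StatsdMetricsService : IMetricsService
{
    public void Counter(string statName)
    {
        Metrics.Counter(statName);
    }

    public IDisposable StartTimer(string statName)
    {
        return Metrics.StartTimer(statName);
    }
}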

NStatsD.Client

For the next test I used the NStatsD.Client by Rob Bihun. There haven’t been any commits to the GitHub repo for 7 months or so. This package adds a bit of configuration to the App.config file so no need for configuration in code.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="statsD" type="NStatsD.StatsDConfigurationSection, NStatsD.Client" />
  </configSections>
  <startup> 
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1" />
  </startup>
  <statsD>
    <server host="localhost" port="8125" />
  </statsD>
</configuration>

Running the NStatsD.Client version took 1031 milliseconds. Interesting.

The API is ‘no frills’ but gets the job done. It doesn’t use static classes but rather employs the singleton pattern.

NStatsD.Client.Current.Increment("increment");
NStatsD.Client.Current.Increment("increment", 0.5); // Optional Sample Rate included on all methods
NStatsD.Client.Current.Decrement("decrement");
NStatsD.Client.Current.Timing("timing", 2345);
NStatsD.Client.Current.Gauge("gauge", 45);

NStatsD.HighPerformance

Next I tried the NStatsD.HighPerformance package. This is actually a fork of the NStatsD.Client and was created by Aaron Stannard, one of the driving forces behind Akka.Net, a really interesting framework for creating distributed systems using the Actor model.

Aaron has a very interesting blog post on the Tradeoffs in High Performance Software which talks about his fork of the NStatsD.Client. The big change is in the private SendToStatsD method of the Client class. Originally it looked like this:

private void SendToStatsD(Dictionary<string, string> sampledData, AsyncCallback callback)
{
	var prefix = Config.Prefix;
	var encoding = new System.Text.ASCIIEncoding();

	foreach (var stat in sampledData.Keys)
	{
		var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
		var sendData = encoding.GetBytes(stringToSend);
		_client.BeginSend(sendData, sendData.Length, callback, null);
	}
}

By simply changing the async BeginSend call to an inline Send he got an improvement in performance of around 40%.

private void SendToStatsD(Dictionary<string, string> sampledData, AsyncCallback callback)
{
	var prefix = Config.Prefix;
	foreach (var stat in sampledData.Keys)
	{
		var stringToSend = string.Format("{0}{1}:{2}", prefix, stat, sampledData[stat]);
		var sendData = Encoding.ASCII.GetBytes(stringToSend);
		UdpClient.Send(sendData, sendData.Length);
	}
}

There hasn’t been any activity on the GitHub repo for over 9 months so I guess this is considered done.

Running the NStatsD.HighPerformance version resulted in an elapsed time of about 691 milliseconds. Not surprisingly the API is identical to NStatsD.Client.

Graphite

The next package I tried was Graphite, a project inspired by MiniProfiler. This project seems to have a larger scope than some of the others I’ve looked at and provides hooks for Elmah and WCF. With this package you can send metrics to statsd or directly to Graphite (the graphing framework). I note that there haven’t been any commits to the GitHub repo for a year or so.

Running the Graphite version resulted in an elapsed time of about 608 milliseconds.

From an API point of view Graphite looks quite nice. Configuration is done through App.config again and the API provides for ‘using’ statements for timers (you can see the MiniProfiler influence here). For timers there’s a special Step method that basically measures the time taken for the Dispose method to be called.

using (MetricsPipe.Current.Step("duration"))
{
    // Do some work here...
}

Wrapping up

As my old Dad used to say, ‘you pays your money and you takes your choice’. That seems to be the case here. There isn’t a massive difference in performance between any of them. One thing you might like to take into consideration is the user base.

Judging by the number of NuGet downloads and performance the Statsd C# Client comes out on top.

Package                  Elapsed time (ms)  NuGet downloads  Description
Statsd C# Client         591                9830             Allows ‘using’ statements for timers and has static methods for metrics.
NStatsD.Client           1031               1635             Configuration in App.config. Singleton pattern for sending metrics. No ‘using’ statements for timers.
NStatsD.HighPerformance  691                295              Same as NStatsD.Client.
Graphite                 608                930              Configuration in App.config. API allows calls direct to Graphite. Includes ‘using’ statement for timers.

Sunday, 5 April 2015

Testing Log4Net.Async with statsd and Graphite

Example code: You can get the example code for this post on BitBucket.

Carrying on from my previous post, Running statsd on Windows, I thought it would be interesting to try to gather some application metrics and send them to statsd. I’ve also been meaning to have a look at the Log4Net.Async library for a while and thought I might be able to combine the two objectives: having a look at Log4Net.Async, and experimenting with statsd metrics to see how the Log4Net.Async appenders compare with standard log4Net ones.

The first question is, how to get metrics out of an application and into statsd? Not surprisingly someone has beaten me to it – quite a few people have actually – and there are a number of libraries out there. I chose the Statsd C# Client library which seems to have had a good number of downloads on NuGet.

 

I have a simple logging library of my own that can use log4Net. So, I decided to use my library to create a simple application to write to a log file in a loop and to time how long it took to do a large number of writes.

Getting the NuGet packages

The first step was to create a console application and to add Log4Net.Async and the Statsd C# Client NuGet packages.


The code

I started by creating a simple interface for a logging service. This allowed me to create separate implementations that could be assigned separate log4Net loggers and subsequently separate appenders.

namespace Logging.StatsD
{
    /// <summary>
    /// Classes implementing this interface will write to the log.
    /// </summary>
    public interface ILoggingService
    {
        /// <summary>Does the logging.</summary>
        /// <param name="numberOfTimes">The number of times to log.</param>
        void DoLogging(int numberOfTimes);
    }
}

I created 3 implementations of the ILoggingService:

  • SynchronousLoggingService
    • Logged messages from this class would be handled by a standard log4Net RollingFileAppender
  • AsynchronousLoggingService
    • Logged messages from this class would be handled by one of the Log4Net.Async appenders.
  • AsyncAwaitLoggingService
    • Just out of interest I used async await – somewhat naively – to wrap calls to the logger in a task.

Here’s an example of one of the logging services, the SynchronousLoggingService:

namespace Logging.StatsD
{
    using Andy.French.Logging;

    using StatsdClient;

    /// <summary>
    /// This class will write to the log synchronously.
    /// </summary>
    public class SynchronousLoggingService : ILoggingService
    {
        /// <summary>The logger.</summary>
        private ILogger logger;

        /// <summary>
        /// Initialises a new instance of the <see cref="SynchronousLoggingService"/> class.
        /// </summary>
        /// <param name="loggerFactory">The logger factory.</param>
        public SynchronousLoggingService(ILoggerFactory loggerFactory)
        {
            this.logger = loggerFactory.CreateInstance(typeof(SynchronousLoggingService));
        }

        /// <summary>
        /// Does the logging.
        /// </summary>
        /// <param name="numberOfTimes">The number of times to log.</param>
        public void DoLogging(int numberOfTimes)
        {
            using (Metrics.StartTimer("time-to-log-sync"))
            {
                for (int i = 0; i < numberOfTimes; i++)
                {
                    this.logger.Debug("This is test message {0} from the SynchronousLoggingService.", i);
                }
            }
        }
    }
}

You can see that the DoLogging method simply writes a Debug message to the log in a ‘for’ loop. The most interesting part is the call to Metrics.StartTimer. This is a call to the Statsd C# Client library and it creates a timer around the code enclosed by the using statement. Note the name of the metric (time-to-log-sync). Similar code was added to the other logging services but with each one using a unique metric name.
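
For comparison, the naive async/await version wrapped each logging call in a task. The class below is a reconstruction to show the idea rather than the actual source (which is in the BitBucket repository), so treat the details, including the metric name, as illustrative.

namespace Logging.StatsD
{
    using System.Threading.Tasks;

    using Andy.French.Logging;

    using StatsdClient;

    /// <summary>
    /// This class wraps each logging call in a task. A reconstruction for
    /// illustration only, not the original source.
    /// </summary>
    public class AsyncAwaitLoggingService : ILoggingService
    {
        /// <summary>The logger.</summary>
        private ILogger logger;

        /// <summary>
        /// Initialises a new instance of the <see cref="AsyncAwaitLoggingService"/> class.
        /// </summary>
        /// <param name="loggerFactory">The logger factory.</param>
        public AsyncAwaitLoggingService(ILoggerFactory loggerFactory)
        {
            this.logger = loggerFactory.CreateInstance(typeof(AsyncAwaitLoggingService));
        }

        /// <summary>
        /// Does the logging.
        /// </summary>
        /// <param name="numberOfTimes">The number of times to log.</param>
        public void DoLogging(int numberOfTimes)
        {
            using (Metrics.StartTimer("time-to-log-async-await"))
            {
                this.DoLoggingAsync(numberOfTimes).Wait();
            }
        }

        /// <summary>
        /// Wraps each logging call in a task.
        /// </summary>
        /// <param name="numberOfTimes">The number of times to log.</param>
        private async Task DoLoggingAsync(int numberOfTimes)
        {
            for (int i = 0; i < numberOfTimes; i++)
            {
                var messageNumber = i;
                await Task.Run(() => this.logger.Debug("This is test message {0} from the AsyncAwaitLoggingService.", messageNumber));
            }
        }
    }
}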

So far we only have half of the story. What remains is the need to configure the Statsd C# Client library before it can successfully send messages to statsd. That was done in the main program.

namespace Logging.StatsD
{
    using System;

    using Andy.French.Configuration.Service;
    
    using Ninject;

    using StatsdClient;

    /// <summary>
    /// The main program.
    /// </summary>
    public class Program
    {
        /// <summary>The configuration service.</summary>
        private static IConfigurationService configurationService;

        /// <summary>The synchronous logging service.</summary>
        private static ILoggingService synchronousLoggingService;

        /// <summary>The asynchronous logging service.</summary>
        private static ILoggingService asynchronousLoggingService;

        /// <summary>The async await logging service.</summary>
        private static ILoggingService asyncAwaitLoggingService;

        /// <summary>
        /// Defines the entry point of the application.
        /// </summary>
        /// <param name="args">The arguments.</param>
        public static void Main(string[] args)
        {
            try
            {
                ConfigureMetrics();
                ResolveDependencies();

                var numberOfTimes = configurationService.GetInt("Number.Of.Times");

                asynchronousLoggingService.DoLogging(numberOfTimes);
                synchronousLoggingService.DoLogging(numberOfTimes);
                asyncAwaitLoggingService.DoLogging(numberOfTimes);
            }
            catch (Exception ex)
            {
                Console.WriteLine("An error occurred: {0}", ex.Message);
            }
        }

        /// <summary>
        /// Configures the <c>statsd</c> client.
        /// </summary>
        private static void ConfigureMetrics()
        {
            var metricsConfig = new MetricsConfig
            {
                StatsdServerName = "localhost",
                StatsdServerPort = 8125
            };

            Metrics.Configure(metricsConfig);
        }

        /// <summary>
        /// Resolves the dependencies.
        /// </summary>
        private static void ResolveDependencies()
        {
            var kernel = new StandardKernel(new ProgramModule());
            configurationService = kernel.Get<IConfigurationService>();
            synchronousLoggingService = kernel.Get<ILoggingService>("Synchronous");
            asynchronousLoggingService = kernel.Get<ILoggingService>("Asynchronous");
            asyncAwaitLoggingService = kernel.Get<ILoggingService>("AsyncAwait");
        }
    }
}

The Statsd C# Client library is configured in the call to the ConfigureMetrics method. Here we specify the statsd server name and port by creating a new instance of MetricsConfig and passing it to the static Metrics.Configure method.

Note – To find the statsd port I went back into the Chef cookbooks created when I set up the virtual machine. In <statsd-graphite-vm repository directory>\cookbooks\statsd\attributes there’s a default.rb file. The default.rb file contains the settings used to set up statsd in the virtual machine, including the port number.

default["statsd"]["dir"]                          = "/usr/share/statsd"
default["statsd"]["conf_dir"]                     = "/etc/statsd"
default["statsd"]["repository"]                   = "https://github.com/etsy/statsd.git"
default["statsd"]["reference"]                    = "master"
default["statsd"]["flush_interval"]               = 10000
default["statsd"]["percent_threshold"]            = 90
default["statsd"]["address"]                      = "0.0.0.0"
default["statsd"]["port"]                         = 8125
default["statsd"]["mgmt_address"]                 = "0.0.0.0"
default["statsd"]["mgmt_port"]                    = 8126
default["statsd"]["graphite_host"]                = "127.0.0.1"
default["statsd"]["graphite_port"]                = 2003
default["statsd"]["graphite_role"]                = "graphite_server"
default["statsd"]["graphite_query"]               = "roles:#{node['statsd']['graphite_role']} AND chef_environment:#{node.chef_environment}"
default["statsd"]["delete_idle_stats"]            = false
default["statsd"]["delete_timers"]                = false
default["statsd"]["delete_gauges"]                = false
default["statsd"]["delete_sets"]                  = false
default["statsd"]["delete_counters"]              = false
default["statsd"]["username"]                     = "statsd"
default["statsd"]["debug"]                        = false
default["statsd"]["dump_messages"]                = false

# Graphite storage config
default["statsd"]["graphite"]["legacy_namespace"] = true
default["statsd"]["graphite"]["global_prefix"]    = "stats"
default["statsd"]["graphite"]["global_suffix"]    = ""
default["statsd"]["graphite"]["prefix_counter"]   = "counters"
default["statsd"]["graphite"]["prefix_timer"]     = "timers"
default["statsd"]["graphite"]["prefix_gauge"]     = "gauges"
default["statsd"]["graphite"]["prefix_set"]       = "sets"

# nodejs
default["statsd"]["nodejs_bin"] = "#{node["nodejs"]["dir"]}/bin/node"

Configuring log4Net

At this point I had the code to write to a log but I still needed to configure log4Net. The log4Net configuration was all in App.config.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  
  <log4net>
    <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="log.txt" />
      <immediateFlush value="true" />
      <appendToFile value="true" />
      <maximumFileSize value="100MB" />
      <maxSizeRollBackups value="2" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
      </layout>
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
    </appender>

    <appender name="AsyncForwarder" type="Log4Net.Async.AsyncForwardingAppender,Log4Net.Async">
      <appender-ref ref="RollingFileAppender" />
    </appender>

    <logger name="Logging.StatsD.AsynchronousLoggingService">
      <level value="DEBUG" />
      <appender-ref ref="AsyncForwarder" />
      <!--<appender-ref ref="AsyncRollingFileAppender" />-->
    </logger>

    <logger name="Logging.StatsD.SynchronousLoggingService">
      <level value="DEBUG" />
      <appender-ref ref="RollingFileAppender" />
    </logger>

    <logger name="Logging.StatsD.AsyncAwaitLoggingService">
      <level value="DEBUG" />
      <appender-ref ref="RollingFileAppender" />
    </logger>
    
  </log4net>

  <appSettings>
    <add key="Number.Of.Times" value="10000" />
  </appSettings>
  
  <startup> 
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1" />
  </startup>
</configuration>

There are 3 loggers defined, one for each of the logging service classes I created. Each logger is configured to use one of the two appenders declared at the top of the log4net section:

  • RollingFileAppender – a standard log4Net rolling file appender
  • AsyncForwarder – A Log4Net.Async appender. This appender uses a ring buffer and it is possible for message loss to take place. More on that later.

 

A note about Log4Net.Async appenders

It pays to read documentation and that’s certainly true of the Log4Net.Async library. In my test application the logger created for the AsynchronousLoggingService is set to use the AsyncForwarder appender. This appender – one of the Log4Net.Async appenders - provides high performance at the expense of potential lost messages.

And lost messages is what I got. When running the example I saw occasional log entries like this:

2015-04-05 10:15:13,886 [AsyncForwarder Forwarding Appender Thread] ERROR AsyncForwardingAppender - Buffer overflow. 13 logging events have been lost in the last 30 seconds. [BufferSize: 1000]

This isn’t an error; it’s by design.

“This appender utilizes a ring buffer with a a default limit of 1000 messages. If the head of the buffer overtakes the tail, message loss occurs. This behavior prioritizes application performance over logging fidelity. The current implementation uses a 10ms polling (sleep) period on the background thread.” - https://github.com/cjbhaines/Log4Net.Async

If you need lossless messaging another option is to use the ParallelForwardingAppender.

“This appender utilizes System.Collections.Concurrent.BlockingCollection(T) and other facets of the Task Parallel Library (TPL) to implement a lossless message queue. This implementation does not use polling but rather waits until new messages are available to append. This results in less CPU overhead for queue polling and appending starts without having to wait on sleep expiration.” - https://github.com/cjbhaines/Log4Net.Async
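
In configuration terms the swap is small – a different forwarding appender wrapping the same RollingFileAppender. Something like the following, based on the configuration shown earlier (a sketch; check the Log4Net.Async documentation for the exact options):

<appender name="ParallelForwarder" type="Log4Net.Async.ParallelForwardingAppender,Log4Net.Async">
  <appender-ref ref="RollingFileAppender" />
</appender>

<logger name="Logging.StatsD.AsynchronousLoggingService">
  <level value="DEBUG" />
  <appender-ref ref="ParallelForwarder" />
</logger>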

Trying things out

I ran the application a number of times over the course of a few minutes and then viewed the Graphite UI in my browser (to get to the Graphite UI I went to http://localhost:8888 as described in my previous post). By expanding the Graphite > stats > timers folder on the Graphite page I was able to see the 3 metrics I added to the application. That confirmed that the metrics were in fact being forwarded to statsd. Cool!


By expanding each of the timers I could see various values which I could select and add to the graph. I chose the ‘upper’ value for each of the 3 timers.


Once the values had been chosen for each of the metrics the results appeared in the graph. I ran the test application approximately once a minute over a 10 minute period so to make things clearer I configured the graph to only show data for the last 10 minutes. I also changed the line type to be a connected line.


Now I have to say I was blown away by what I saw. The version of the logging service that was using the Log4Net.Async forwarding appender was blisteringly fast. Check out the graph for yourself.


I’m a bit suspicious so I did some additional tests using a Stopwatch in the code but the results were confirmed. The down-side was lost messages (roughly between 8 and 20 per run).

My naive async await implementation certainly wasn’t any faster – it was actually slower - although I could see that logging work was being assigned to different threads.

As an experiment I switched to a different Log4Net.Async appender, the ParallelForwardingAppender, which guarantees message delivery and tried again.


Not as fast as the AsyncForwardingAppender but still much faster than the standard log4Net appender.

Wrapping up

So there we have it. We’ve managed to get metrics from an application into statsd using the Statsd C# Client library and we’ve seen that the Log4Net.Async appenders offer performance gains over the standard log4Net equivalents. We’ve also managed to view the metrics in Graphite.

Result! Glass of wine anybody?

Saturday, 4 April 2015

Running statsd on Windows

We are going to be using statsd (or possibly Heka) to consume and analyse performance metrics for our applications. Being a curious soul I wanted to start playing with statsd on a Windows 8.1 machine at home but quickly discovered that the best approach is to go with Linux.

There are quite a few projects knocking around on GitHub to help build virtual machines with statsd and Graphite – a graphing solution that allows time series data to be visualised – but I chose José Padilla’s statsd-graphite-vm as a starting point although I had to customise things a little because I didn’t have all the necessary infrastructure in place.

José’s solution makes use of several technologies:

  • VirtualBox - an x86 and AMD64/Intel64 virtualization product
  • Vagrant – orchestrates the provisioning of the virtual environment
  • Chef – an automation platform for building virtualised environments

 

At the end of the process I ended up with a virtual Linux box running 32-bit Ubuntu complete with statsd and Graphite.

Cloning the statsd-graphite-vm project

The first step was to clone the statsd-graphite-vm project from GitHub. I chose to use SourceTree because it’s nice.


Installing VirtualBox

OK, I love Chocolatey. I was late to the game here but I have to say if you’re not using it already you really should. There’s no messing around finding the latest version of something or worrying about 32 or 64-bit distributions. You just ask Chocolatey to install a package by name and away it goes.

So, the next thing to do here is to install VirtualBox. Time to fire up Chocolatey.


That’s it. VirtualBox is installed. VirtualBox is described in the following terms:

“VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.” - https://www.virtualbox.org/

Basically, VirtualBox gives us a virtual machine into which we can load an operating system and other software, a process which is greatly simplified by using Vagrant.

Installing Vagrant

The next piece of the puzzle is Vagrant. Chocolatey to the rescue again.


So what part is Vagrant playing? Vagrant is described thus:

“Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.” - https://docs.vagrantup.com/v2/why-vagrant/

And:

“If you're a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent environment, without sacrificing any of the tools you're used to working with (editors, browsers, debuggers, etc.). Once you or someone else creates a single Vagrantfile, you just need to vagrant up and everything is installed and configured for you to work.” - https://docs.vagrantup.com/v2/why-vagrant/

If you look inside the statsd-graphite-vm folder you’ll find a Vagrantfile. This file is read by Vagrant and is used to define the type of virtual machine we want, including instructions to run Chef to set up software. Chef is being used to create a ‘recipe’ of all the ingredients we need included in the new virtual machine.

My customisation - Getting the Chef cookbooks

Now, the statsd-graphite-vm solution uses Chef to configure all the bits and pieces that go in to building the virtual machine. In fact, the statsd-graphite-vm project suggests you use librarian-chef to automatically download all the Chef cookbooks required to build the VM by running ‘librarian-chef install’ as the first step in the process of provisioning the new virtual machine. This really simplifies things – or at least it should - but after a lot of time spent installing Ruby and getting librarian-chef up-and-running I was still having problems (e.g. SSL certificate issues).

Life’s too short so I abandoned that approach and elected to download the cookbooks manually. To do that I created a ‘cookbooks’ folder under statsd-graphite-vm, hit the interweb and Googled with Bing to find the necessary cookbooks and put them in the cookbooks folder myself. A bit of a pain but it worked.

If you choose to do the same the full set of cookbooks looks like this:


If you’re looking for Chef cookbooks a great place to start is the Chef Supermarket. An alternative for some of the cookbooks used here is Hector Castro’s contributions on GitHub. I used a variety of locations which I’ve listed here:

 

Firing up the VM

At this point things are just about ready to go but I found that I needed to change a port number in the Vagrantfile to avoid a conflict. I changed the forwarded port for guest port 80 to be 8888 rather than the default 8080.

# -*- mode: ruby -*-
# vi: set ft=ruby :
#^syntax detection

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = 'precise32'
  config.vm.box_url = 'http://files.vagrantup.com/precise32.box'

  # Forward a port from the guest to the host, which allows for outside
  # computers to access the VM, whereas host only networking does not.
  config.vm.network 'forwarded_port', guest: 80, host: 8888
  config.vm.network 'forwarded_port', guest: 8125, host: 8125, protocol: 'udp'
  config.vm.network 'forwarded_port', guest: 8126, host: 8126

  config.vm.provision 'chef_solo' do |chef|
    chef.add_recipe 'apt'
    chef.add_recipe 'graphite'
    chef.add_recipe 'nodejs'
    chef.add_recipe 'statsd'
  end

end

You can also see in the Vagrantfile that the build is based on a ‘box’ called precise32 (in this case a 32-bit Ubuntu 12.04 build). What is a box?

“Instead of building a virtual machine from scratch, which would be a slow and tedious process, Vagrant uses a base image to quickly clone a virtual machine. These base images are known as boxes in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.” - http://docs.vagrantup.com/v2/getting-started/boxes.html

Precise32 is a standard Ubuntu 12.04 LTS 32-bit box from HashiCorp's Atlas box catalog.

Once all the above had been done I was able to start the VM by issuing a simple Vagrant command. I opened a command prompt and changed to the statsd-graphite-vm folder before running the following:

vagrant up
vagrant ssh

And we are in!


So, let’s see what we get if we browse http://localhost:8888 in a web browser.


Boom shakalaka, rude boy!