Thursday, 27 January 2011

Breaking dependencies on specific DI containers

Warning: before going any further you probably want to have a look at the Service Locator anti-pattern. The code featured in this post uses Service Locator, a pattern that some regard as an anti-pattern that should be avoided.

I’ve been working on a WCF service that uses Unity for dependency resolution. Everything has been working but I’ve been unhappy about the tight dependency on Unity itself that I had introduced into my code. I recalled that there is a service locator library knocking around that defines a common interface that IoC containers can adopt.

“The Common Service Locator library contains a shared interface for service location which application and framework developers can reference. The library provides an abstraction over IoC containers and service locators. Using the library allows an application to indirectly access the capabilities without relying on hard references. The hope is that using this library, third-party applications and frameworks can begin to leverage IoC/Service Location without tying themselves down to a specific implementation.” - http://commonservicelocator.codeplex.com/

The Common Service Locator library provides a simple interface for service location:

public interface IServiceLocator : IServiceProvider
{
    object GetInstance(Type serviceType);
    object GetInstance(Type serviceType, string key);
    IEnumerable<object> GetAllInstances(Type serviceType);
    TService GetInstance<TService>();
    TService GetInstance<TService>(string key);
    IEnumerable<TService> GetAllInstances<TService>();
}

It turns out that the library is supported by Unity (as well as a bunch of other IoC implementations). The Common Service Locator library site links to an adapter for Unity but peeking around I found the UnityServiceLocator class in the Microsoft.Practices.Unity assembly.
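
To illustrate, here’s a minimal sketch of wrapping a Unity container in a UnityServiceLocator and resolving through the shared interface (IMyService and MyService are hypothetical types; IServiceLocator lives in the Microsoft.Practices.ServiceLocation namespace):

var container = new UnityContainer();
container.RegisterType<IMyService, MyService>();

// UnityServiceLocator simply delegates IServiceLocator calls to the wrapped container.
IServiceLocator locator = new UnityServiceLocator(container);
var myService = locator.GetInstance<IMyService>();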

I have now been able to replace all references to IUnityContainer with IServiceLocator (i.e. breaking the tight dependency on Unity). In the case of the WCF service all I needed to do was create a ServiceHostFactory implementation that passes an instance of UnityServiceLocator around rather than an instance of UnityContainer.

public class UnityServiceLocatorServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        var unityContainer = new UnityContainer();
        unityContainer.LoadConfiguration();
        var unityServiceLocator = new UnityServiceLocator(unityContainer);
        return new ServiceLocatorServiceHost(serviceType, unityServiceLocator, baseAddresses);
    }  
}
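
To wire the factory up, reference it from the service’s .svc file (the service and assembly names here are hypothetical):

<%@ ServiceHost Service="MyApp.EnquiryService" Factory="MyApp.Unity.UnityServiceLocatorServiceHostFactory" %>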

The UnityServiceLocatorServiceHostFactory is the only class that has a tight dependency on Unity and can be farmed off into a separate Unity assembly. All other classes, including the service host implementation, only need to deal with IServiceLocator:

public class ServiceLocatorServiceHost : ServiceHost
{
    private IServiceLocator _serviceLocator;
        
    public ServiceLocatorServiceHost(IServiceLocator serviceLocator) : base()
    {
        _serviceLocator = serviceLocator;
    }

    public ServiceLocatorServiceHost(Type serviceType, IServiceLocator serviceLocator, params Uri[] baseAddresses)
        : base(serviceType, baseAddresses)
    {
        _serviceLocator = serviceLocator;
    }

    protected override void OnOpening()
    {
        if (Description.Behaviors.Find<ServiceLocatorServiceBehavior>() == null)
        {
            Description.Behaviors.Add(new ServiceLocatorServiceBehavior(_serviceLocator));
        }

        base.OnOpening();
    }
}

The ServiceLocatorServiceBehavior adds a service locator instance provider to the endpoint dispatcher, something like this:

public class ServiceLocatorServiceBehavior : IServiceBehavior
{   
    private readonly IServiceLocator _serviceLocator;
	
    public ServiceLocatorServiceBehavior(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        // Nothing to see here. Move along...
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        // Nothing to see here. Move along...
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                string contractName = endpointDispatcher.ContractName;
                ServiceEndpoint serviceEndpoint = serviceDescription.Endpoints.FirstOrDefault(e => e.Contract.Name == contractName);
                endpointDispatcher.DispatchRuntime.InstanceProvider = new ServiceLocatorInstanceProvider(_serviceLocator, serviceEndpoint.Contract.ContractType);
            }
        }
    }
}

And finally the ServiceLocatorInstanceProvider uses the service locator to resolve dependencies, something like this:

public class ServiceLocatorInstanceProvider : IInstanceProvider
{
    private readonly IServiceLocator _serviceLocator;
    private readonly Type _contractType;

    public ServiceLocatorInstanceProvider(IServiceLocator serviceLocator, Type contractType)
    {
        this._serviceLocator = serviceLocator;
        this._contractType = contractType;
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        return _serviceLocator.GetInstance(_contractType);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
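        // Nothing to see here. Move along...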
    }
}

Thursday, 20 January 2011

Some notes on ArcGIS and associated technologies

I’m getting started with ArcGIS so I need to keep some notes. NB: This post is just an aide-mémoire for me as I get started so nothing is covered in any detail.

What is Esri?

Esri is a company providing Geographic Information System (GIS) software and geodatabase management applications. They are based in California and have about 30% of the GIS software market (see http://en.wikipedia.org/wiki/Esri).

What is the APDM?

APDM (ArcGIS Pipeline Data Model) is an open standard for storing geographical data associated with pipelines:

"The ArcGIS Pipeline Data Model is designed for storing information pertaining to features found in gathering and transmission pipelines, particularly gas and liquid systems. The APDM was expressly designed for implementation as an ESRI geodatabase for use with ESRI's ArcGIS and ArcSDE® products. A geodatabase is an object-relational construct for storing and managing geographic data as features within an industry-standard relational database management system (RDBMS)." - http://www.apdm.net/

What is ArcSDE?

"ArcSDE technology is a core component of ArcGIS Server. It manages spatial data in a relational database management system (RDBMS) and enables it to be accessed by ArcGIS clients." - http://www.esri.com/software/arcgis/arcsde/index.html

"ArcSDE technology serves as the gateway between GIS clients and the RDBMS. It enables you to easily store, access, and manage spatial data within an RDBMS package…

ArcSDE technology is critical when you need to manage long transactions and versioned-based workflows such as

* Support for multiuser editing environments
* Distributed editing
* Federated replicas managed across many RDBMS architectures
* Managing historical archives

The responsibility for defining the specific RDBMS schema used to represent geographic data and for application logic is retained in ArcGIS, which provides the behavior, integrity, and utility of the underlying records.” - http://www.esri.com/software/arcgis/geodatabase/storage-in-an-rdbms.html

What is a geodatabase?

“The geodatabase is the common data storage and management framework for ArcGIS. It combines "geo" (spatial data) with "database" (data repository) to create a central data repository for spatial data storage and management.” - http://www.esri.com/software/arcgis/geodatabase/index.html

Basic terms and concepts

There are four fundamental types upon which geographic representations in a GIS are based:

  • Features (collections of points, lines, and polygons)
    • Representations of things located on or near the surface of the earth.
    • Can be natural (rivers, vegetation, etc.).
    • Can be constructions (roads, pipelines, buildings, etc.).
    • Can be subdivisions of land (counties, political divisions, land parcels, etc.).
    • Most commonly represented as points, lines, and polygons.
  • Attributes (descriptive attributes of features)
    • Managed in tables based on simple relational database concepts.
  • Imagery
    • Imagery is managed as a raster data type composed of cells organized in a grid of rows and columns.
    • In addition to the map projection, the coordinate system for a raster dataset includes its cell size and a reference coordinate (usually the upper left or lower left corner of the grid).
    • These properties enable a raster dataset to be described by a series of cell values starting in the upper left row.
    • Each cell location can be located using the reference coordinate, the cell size, and the number of rows and columns.
  • Continuous surfaces (such as elevation)
    • A surface describes an occurrence that has a value for every point on the earth.
    • Surface elevation is a continuous layer of values for ground elevation above mean sea level.
    • Other surface type examples include rainfall, pollution concentration, and sub-surface representations of geological formations.

See the ArcGIS Desktop Help file for further details.

GIS data structures

Features, rasters, attributes, and surfaces are managed using three primary GIS data structures:

  • Feature classes
  • Attribute tables
  • Raster datasets

Map Layer Type                          GIS Dataset
Features (points, lines, and polygons)  Feature classes
Attributes                              Tables
Imagery                                 Raster datasets
Surfaces                                Feature classes, raster datasets, or TINs (see below)

Both features and rasters can be used to provide a number of alternative surface representations:

  • Feature classes (such as contours)
  • Raster-based elevation datasets
  • TINs built from XYZ points and 3D line feature classes

In a GIS, datasets hold data about a particular feature collection (for example, roads) that is geographically referenced to the earth's surface. A dataset is a collection of homogeneous features. Most datasets are collections of simple geographic elements.

Users work with geographic data in two fundamental ways:

  • As datasets (homogeneous collections of features, rasters, or attributes)
  • As individual elements (e.g. individual features, rasters, and attribute values) contained within each dataset

Datasets are:

  • The primary inputs and outputs for geoprocessing.
  • The primary means for data sharing.

See also

There’s some good basic information on GIS systems on the Ordnance Survey website: http://www.ordnancesurvey.co.uk/oswebsite/gisfiles/index.html

Tuesday, 18 January 2011

Log on as a batch job in Windows Server 2008

Problem

When creating a scheduled task on Windows Server 2008 I needed to assign a local user to run the task. For this to work the user must be given “Log on as a batch job” privileges.

Solution

1. Administrative Tools > Local Security Policy
2. Security Settings > Local Policies > User Rights Assignment
3. Find and double-click on the “Log on as a batch job” policy.
4. Add User or Group…
5. Add the user and click OK.
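
If you need to script this (for repeatable server builds, say), the same right can be assigned with secedit. This is a rough sketch rather than a tested recipe; the internal name for the right is SeBatchLogonRight:

rem Export the current user rights assignments to an editable template
secedit /export /cfg rights.inf /areas USER_RIGHTS

rem Edit rights.inf, appending the account (DOMAIN\user or *SID) to the
rem SeBatchLogonRight = ... line, then apply the updated template:
secedit /configure /db secedit.sdb /cfg rights.inf /areas USER_RIGHTS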

Multiple X.509 certificates found

Problem

I was configuring a WCF service to use SSL and had created and installed a self-signed certificate. The WCF service configuration looked something like this:

<serviceBehaviors>
  <behavior name="EnquirySubmissionServiceBehavior">
    <serviceMetadata httpsGetEnabled="true" />
    <serviceDebug includeExceptionDetailInFaults="true" />
    <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="SqlRoleProvider" />
    <serviceCredentials>
      <serviceCertificate findValue="CertificateNameHere" storeLocation="LocalMachine" storeName="My" x509FindType="FindBySubjectName" />
    </serviceCredentials>
  </behavior>
</serviceBehaviors>

When trying to access the service metadata in a browser I received an error stating that multiple X.509 certificates had been found using the given search criteria.
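
This makes sense once you see how the certificate is found: WCF uses the X509Certificate2Collection.Find method, and a FindBySubjectName search is a substring match against the subject, so several certificates can match. A minimal sketch of the equivalent lookup (using the placeholder subject name from the config above, and the System.Security.Cryptography.X509Certificates namespace):

var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);

// FindBySubjectName is a substring match, so "CertificateNameHere" will also
// match "CertificateNameHere2", old copies of the certificate, and so on.
X509Certificate2Collection matches = store.Certificates.Find(
    X509FindType.FindBySubjectName, "CertificateNameHere", false);

Console.WriteLine("Found {0} matching certificate(s)", matches.Count);
store.Close();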

Solution

The solution was to change the configuration to use an alternative method to find the certificate. In this case I used FindByThumbprint and provided the certificate thumbprint. To obtain the thumbprint do the following:

1. Start > Run > mmc
2. File > Add/Remove snap in…
3. Find and add Certificates (local machine).
4. Find the certificate and double-click on it.
5. In the pop-up dialog scroll to Thumbprint and click on it to view the value.
6. Copy the thumbprint value and remove spaces.
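
Alternatively, certutil will list the local machine’s personal store at the command line, including each certificate’s SHA1 hash (i.e. the thumbprint):

certutil -store My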

I then changed the WCF service configuration to look something like this:

<serviceBehaviors>
  <behavior name="EnquirySubmissionServiceBehavior">
    <serviceMetadata httpsGetEnabled="true" />
    <serviceDebug includeExceptionDetailInFaults="true" />
    <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="SqlRoleProvider" />
    <serviceCredentials>
      <serviceCertificate findValue="46677f6006fb15fe64e5f394d1d99c22f3729155" storeLocation="LocalMachine" storeName="My" x509FindType="FindByThumbprint" />
    </serviceCredentials>
  </behavior>
</serviceBehaviors>

Enable 32-bit application pools on IIS7

The problem

I ran into a situation where a WCF service had references to several third-party components. One of those components in turn referenced some ancient 16-bit code. The WCF service was hosted in IIS7 on a 64-bit machine and used Unity to resolve dependencies.

As Unity tried to resolve dependencies I kept getting an error that looked a bit like this:

Unexpected exception thrown by call to <service type here>: Resolution of the dependency
failed, type = "<service contract interface here>", name = "(none)".
Exception occurred while: Calling constructor <type that failed construction>.
Exception is: FileNotFoundException - Could not load file or assembly '<assembly name here>,
Version=1.0.0.0, Culture=neutral, PublicKeyToken=<key here>' or one of its dependencies.
The system cannot find the file specified.

The solution

The solution was to configure the application pool that the service was using to allow 32-bit assemblies.

1. View application pools and select the appropriate pool from the list.
2. Choose ‘Advanced settings…’ from the menu on the right.

3. Set ‘Enable 32-Bit Applications’ to True.
4. Click OK.
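
The same setting can be changed from the command line with appcmd (substitute the name of your application pool):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true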

Friday, 14 January 2011

IIS7 and WCF MIME types

Problem

I needed to deploy a WCF service to IIS7 running on Windows Server 2008. The service was accessed via a .svc file but upon calling it from a browser an error was reported stating that the .svc extension was not recognised.

Solution

The solution was to reregister the WCF Service Model with IIS by running ServiceModelReg.exe:

"%windir%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -r -y

The flags used were:

Flag  Description
-r    Re-registers this version of WCF and updates scriptmaps at the IIS metabase root and for all scriptmaps under the root. Existing scriptmaps are upgraded to this version regardless of the original versions.
-y    Do not ask for confirmation before registering or re-registering components.

After reregistering the WCF Service Model the .svc extension was correctly recognised. Note that IIS7 did not require a restart.

Full details of ServiceModelReg.exe and all its flags can be found here: ServiceModel Registration Tool (ServiceModelReg.exe)

Friday, 14 January 2011

Generating temp SSL certificates for development

Update 01/03/2014

If you’re using IIS 7 there is a quick way to create self-signed certificates. Details can be found in this post: Create a self-signed certificate for development in IIS 7.

If you still want to know about the manual method read on.

Original post

I needed to generate an SSL certificate for testing a WCF service which needed to be secure. Not wanting (or having the budget for) a real SSL certificate I elected to generate my own. The following batch file contains the main ingredients:

@echo off

echo Step 1 - Creating a self-signed root authority certificate and export the private key.
echo You will be prompted to provide a password to protect the private key.
echo The password is required when creating a certificate signed by the root certificate.
echo ===================================================================================
makecert -n "CN=RootTempCA" -r -sv RootTempCA.pvk RootTempCA.cer

echo Step 2 - Create a new certificate signed by a root authority certificate
echo ========================================================================
makecert -sk domain.to.secure -iv RootTempCA.pvk -n "CN=domain.to.secure" -ic RootTempCA.cer -sr localmachine -ss my -sky exchange -pe

The domain.to.secure should be replaced to match the environment (this could be localhost, the machine name, whatever you need). Step 2 should install the certificate into the certificate store – no need to do it manually.

The makecert flags used above break down as follows:

Flag                Step  Description
-n subjectname      1, 2  Specifies the subject name. The convention is to prefix the subject name with "CN = " for "Common Name".
-r                  1     Specifies that the certificate will be self-signed.
-sv privateKeyFile  1     Specifies the file that contains the private key container.
-sk subjectKey      2     The location of the subject's key container that holds the private key. If a key container does not exist, one is created. If neither of the -sk or -sv options is used, a key container called JoeSoft is created by default.
-iv issuerKeyFile   2     Specifies the issuer's private key file.
-ic issuerCertFile  2     Specifies the location of the issuer's certificate.
-sr location        2     Specifies the subject's certificate store location. location can be either currentuser (the default) or localmachine.
-ss store           2     Specifies the subject's certificate store name that stores the output certificate.
-sky keytype        2     Specifies the subject's key type, which must be one of the following: signature (which indicates that the key is used for a digital signature), exchange (which indicates that the key is used for key encryption and key exchange), or an integer that represents a provider type. By default, you can pass 1 for an exchange key or 2 for a signature key.
-pe                 2     Marks the generated private key as exportable. This allows the private key to be included in the certificate.

Because this process creates a self-signed certificate, if you access the service from a remote machine you will likely run into problems because the certificate was issued by an unknown Certification Authority. To get around this you need to import the root certificate into the trusted root certificate store on the client machine. I find it best to import the certificate using a Personal Information Exchange (.pfx) file. To create the .pfx run the following:

pvk2pfx.exe -pvk RootTempCA.pvk -spc RootTempCA.cer -pfx RootTempCA.pfx -po password_here
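
If the client machine only needs to trust the root certificate (no private key is required for that), I believe certutil can import the .cer straight into the Trusted Root store as an alternative:

certutil -addstore Root RootTempCA.cer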

Thursday, 13 January 2011

Creating shortcuts to Remote Desktop to a specific machine

I like using Launchy to open and close applications. This includes opening Remote Desktop but it’s nice to be able to jump straight to a Remote Desktop session on a particular machine by simply typing the machine name into Launchy.

The way I do this is:

  1. Create a folder under C:\Documents and Settings\<username>\Start Menu\Programs\ called Shortcuts.
  2. Add this folder to the Launchy catalog using the Launchy options dialog.
  3. Create a new shortcut in the new folder.
  4. For the ‘Location of the item’ type %windir%\system32\MSTSC.EXE /v:MACHINE_NAME where MACHINE_NAME is replaced with the name of the machine you want to Remote Desktop to.
  5. For the name of the shortcut use the machine name.
  6. Save the shortcut and get Launchy to rescan its catalog.

Now, by typing the machine name into Launchy you can immediately open a Remote Desktop session to the machine.

NUnit tests being ignored

Problem

I created a test assembly and added NUnit test fixtures to it. Everything was proceeding nicely until I added an App.config file to the test project. Thereafter, I could not run any of the tests with the ReSharper test runner, which reported that each test was being ignored.

I also found it was impossible to add the test assembly to the Gallio test runner.

Solution

The problem was that I had accidentally created a badly formatted App.config file (I had forgotten to wrap <add/> elements in an <appSettings/> element). Correcting this error allowed all tests to run normally. The incorrectly formatted App.config file looked something like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <add key="SomeKey" value="SomeValue" />
</configuration>

The corrected version looked something like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="SomeKey" value="SomeValue" />
  </appSettings>
</configuration>

Friday, 7 January 2011

Useful NUnit attributes

I can never remember the various NUnit attributes I find useful (other than the basics like TestFixture, Test, SetUp, TearDown and ExpectedException) so here’s a quick aide-mémoire (NB: this is not a complete list by any means):

Attribute                Description
CombinatorialAttribute   Used on a test to specify that NUnit should generate test cases for all possible combinations of the individual data items provided.
ExplicitAttribute        Causes a test or test fixture to be ignored unless it is explicitly selected for running.
RandomAttribute          Used to specify a set of random values to be provided for an individual parameter of a parameterized test method.
RangeAttribute           Used to specify a range of values to be provided for an individual parameter of a parameterized test method.
SequentialAttribute      Used on a test to specify that NUnit should generate test cases by selecting individual data items provided for the parameters of the test, without generating additional combinations.
TestCaseAttribute        Serves the dual purpose of marking a method with parameters as a test method and providing inline data to be used when invoking that method.
TestCaseSourceAttribute  Used on a parameterized test method to identify the property, method or field that will provide the required arguments.
ValuesAttribute          Used to specify a set of values to be provided for an individual parameter of a parameterized test method.
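
As a quick illustration of how some of these fit together, here’s a sketch using NUnit 2.5 syntax (the test itself is hypothetical):

[Test, Combinatorial]
public void MixTest([Values(1, 2, 3)] int input, [Values("a", "b")] string channel)
{
    // [Combinatorial] generates a test case for each of the six (input, channel)
    // pairings; [Sequential] would instead pair the items up positionally.
    Assert.IsNotNull(channel);
}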

Row testing with NUnit

In times past there was an extension to NUnit that facilitated row testing (i.e. allowing a test case to be executed multiple times, passing different values in on each run). The extension is no longer required; you can use the [TestCase] attribute instead:

[TestCase("wrongusername", "password")]
[TestCase("username", "wrongpassword")]
[TestCase("worngusername", "wrongpassword")]
[ExpectedException(typeof(SecurityTokenException))]
public void Validate_WithInvalidUsernameOrPassword_ThrowsException(string username, string password)
{
    var validator = new SimpleUsernameValidator();
    validator.Validate(username, password);
    Assert.Fail();
}

Note that you can also specify an expected result if the method under test returns a value*:

[TestCase(12, 3, Result = 4)]
[TestCase(12, 2, Result = 6)]
[TestCase(12, 4, Result = 3)]
public int DivideTest(int n, int d)
{
    return n / d;
}

There is also a [TestCaseSource] attribute that can be used to identify a property, method or field that will supply the test case arguments**:

[Test, TestCaseSource("DivideCases")]
public void DivideTest(int n, int d, int q)
{
    Assert.AreEqual( q, n / d );
}

static object[] DivideCases =
{
    new object[] { 12, 3, 4 },
    new object[] { 12, 2, 6 },
    new object[] { 12, 4, 3 } 
};

References

* TestCaseAttribute (NUnit 2.5)
** TestCaseSourceAttribute (NUnit 2.5)

Wednesday, 5 January 2011

Basic unit test coverage reporting with PartCover

I’m fed up with having to guess at how much of my code is covered by unit tests and there’s no budget for NCover. So, after a bit of hunting, I came across PartCover, an open source code coverage tool for .NET (https://github.com/sawilde/partcover.net4). As I’ve just started using it I thought I’d jot down a few notes on the basics of getting a coverage report working.

The first step is creating a settings file. This is an XML file containing the settings PartCover will use to produce a coverage report.

<PartCoverSettings>
  <Target>C:\path_to_nunit\nunit-console-x86.exe</Target>
  <TargetWorkDir>C:\path_to_source\EnquirySubmissionServiceCoreTests\bin\Debug</TargetWorkDir>
  <TargetArgs>EnquirySubmissionServiceCoreTests.dll</TargetArgs>
  <Rule>+[*]*</Rule>
  <Rule>-[EnquirySubmissionServiceCore]EnquirySubmissionServiceCore.Some.Namespace*</Rule>
  <Rule>-[log4net*]*</Rule>
  <Rule>-[Moq*]*</Rule>
  <Rule>-[nunit*]*</Rule>
  <Rule>-[EnquirySubmissionServiceCoreTests*]*</Rule>
</PartCoverSettings>

You need to provide a path to NUnit, a working directory (the directory containing the assembly to check), the target assembly and a set of rules. The rules are essentially regular expressions that tell PartCover what to include in and exclude from the report. My rules say include everything (+[*]*) and then exclude the assemblies I don’t want in the report.

Note line 6 of the above settings file: it is possible to exclude specific namespaces within an assembly, so you can keep code that you don’t want tested for coverage out of the report. In fact, you can even exclude specific classes in the same way.
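
For example, a rule like this (the class name is hypothetical) excludes a single class:

<Rule>-[EnquirySubmissionServiceCore]EnquirySubmissionServiceCore.Some.Namespace.SomeClass</Rule>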

I saved the settings file into the PartCover installation directory and then ran PartCover.exe:

partcover.exe --settings settings.xml --output output.xml

This runs the PartCover with the settings file and creates an XML file containing the output (in this case named output.xml). The output file can be opened with PartCover.Browser.exe to view the results:

That’s it for a basic setup and first report generation. Looks like I’ve got a few unit tests to write…