Thursday, 26 December 2013

The Bounded Context in Domain Driven Design (DDD)

I have used NHibernate for some time now but have recently started to re-engage with Microsoft’s Entity Framework. I came across Julie Lerman’s course Entity Framework in the Enterprise on Pluralsight. This is an interesting course and deals with Repositories and the Unit of Work pattern, two concepts I have used extensively with NHibernate. But Julie also deals with a DDD concept – the Bounded Context. It’s been a while since I’ve dealt with DDD so I thought I’d spend some time re-familiarising myself with the Bounded Context.

Firstly, let’s see what Eric Evans had to say about Bounded Contexts in his book, “Domain-Driven Design, Tackling Complexity in the Heart of Software”.

“Multiple models coexist on big projects, and this works fine in many cases. Different models apply in different contexts.” [1]

“Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confused. It is often unclear in what context a model should not be applied.” [2]

“A model applies in a context. The context may be a certain part of the code, or the work of a particular team. For a model invented in a brainstorming session, the context could be limited to that particular conversation.” [2]

“A BOUNDED CONTEXT delimits the applicability of a particular model so that team members have a clear and shared understanding of what has to be consistent and how it relates to other CONTEXTS. Within that CONTEXT, work to keep the model logically unified, but do not worry about applicability outside those bounds. In other CONTEXTS, other models apply, with differences in terminology, in concepts and in rules, and in dialects of the UBIQUITOUS LANGUAGE. By drawing an explicit boundary, you can keep the model pure, and therefore potent, where it is applicable. At the same time, you avoid confusion when shifting your attention to other CONTEXTS. Integration across the boundaries necessarily will involve some translation, which you can analyze explicitly.” [3]

This makes perfect sense. You can see how the concept of a user, for example, would have different meanings to different departments within a business. If we tried to model a single user with all the attributes required to satisfy every department we might end up with a bloated and confused user entity. By separating the model into different bounded contexts – one per department, for example – and defining the user separately within each context, we end up with more sharply focused and less confusing models.

The question that might remain is: how do we keep different bounded contexts synchronised? For example, if a user is created in one department of our hypothetical business application, how would we also create it in another at the same time? This could be accomplished using Domain Events, with some kind of correlation ID identifying the same entity across contexts.
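To make the idea concrete, here is a minimal sketch (not from the course, and with entirely hypothetical names) of a domain event carrying a correlation ID, so that each bounded context can create its own differently-shaped user when a user is created elsewhere:

```csharp
using System;
using System.Collections.Generic;

// Event raised by the context in which the user was created.
public class UserCreated
{
    public Guid CorrelationId { get; set; } // shared ID linking the user across contexts
    public string Name { get; set; }
}

// A trivial in-memory publisher; a real system might use a message bus instead.
public static class DomainEvents
{
    private static readonly List<Action<UserCreated>> Handlers = new List<Action<UserCreated>>();

    public static void Register(Action<UserCreated> handler)
    {
        Handlers.Add(handler);
    }

    public static void Raise(UserCreated evt)
    {
        foreach (var handler in Handlers)
        {
            handler(evt);
        }
    }
}
```

Another context (say, Billing) would call `DomainEvents.Register(...)` with a handler that creates its own user record keyed by `CorrelationId`, and the originating context would call `DomainEvents.Raise(...)` after saving its user.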

 


References

[1] Eric Evans, “Domain-Driven Design, Tackling Complexity in the Heart of Software”, ISBN 0-321-12521-5, p335.

[2] Eric Evans, “Domain-Driven Design, Tackling Complexity in the Heart of Software”, ISBN 0-321-12521-5, p336.

[3] Eric Evans, “Domain-Driven Design, Tackling Complexity in the Heart of Software”, ISBN 0-321-12521-5, p336-337.

Saturday, 21 December 2013

SQL Server indexing basics – a bit of revision

This post is really just for me. You may get something from it but don’t expect very much! It’s been a while since I visited SQL Server indexing fundamentals so I thought I’d do a bit of revision; what follows is really just my notes.

Pages

“The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes whole data pages.” [1]

  • The page size is 8 KB
  • This means databases have 128 pages per megabyte
  • Each page begins with a 96-byte header including:
    • Page number
    • Page type
    • Amount of free space on the page
    • Allocation unit ID of the object that owns the page

 

http://i.technet.microsoft.com/dynimg/IC147464.gif

 

Extents

“Extents are the basic unit in which space is managed. An extent is eight physically contiguous pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte.” [1]

 

Heaps

“A heap is a table without a clustered index.” [2]

“If a table is a heap and does not have any nonclustered indexes, then the entire table must be examined (a table scan) to find any row.” [2]

  • Do not use a heap when
    • the data is frequently returned in a sorted order
    • the data is frequently grouped together
    • ranges of data are frequently queried from the table
    • there are no nonclustered indexes and the table is large
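As a quick sketch (table name hypothetical), a heap is simply what you get when you create a table with no clustered index, and heaps can be identified by their `index_id = 0` row in `sys.indexes`:

```sql
-- A table with no clustered index is stored as a heap.
CREATE TABLE dbo.AuditLog
(
    LoggedAt DATETIME2 NOT NULL,
    Message  NVARCHAR(400) NOT NULL
);

-- List the user tables stored as heaps (index_id 0 = heap; 1 = clustered index).
SELECT OBJECT_NAME(object_id) AS HeapTable
FROM sys.indexes
WHERE index_id = 0
  AND OBJECTPROPERTY(object_id, 'IsUserTable') = 1;
```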

 

Indexes

“An index is an on-disk structure associated with a table or view that speeds retrieval of rows from the table or view. An index contains keys built from one or more columns in the table or view. These keys are stored in a structure (B-tree) that enables SQL Server to find the row or rows associated with the key values quickly and efficiently.” [3]

 

Clustered Indexes

  • Clustered indexes sort and store the data rows in the table or view based on their key values. These are the columns included in the index definition. There can be only one clustered index per table, because the data rows themselves can be sorted in only one order. [3]

  • The only time the data rows in a table are stored in sorted order is when the table contains a clustered index. When a table has a clustered index, the table is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap. [3]

 

  • Clustered indexes
    • have a row in sys.partitions (with index_id = 1)
    • determine the physical order of data
  • Consider using when
    • There are a large number of distinct values in a column
    • On columns that are
      • frequently accessed
      • frequently searched for a range of values
    • Queries return very large result sets
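A minimal example (hypothetical table and column names) of creating a clustered index on a column that is distinct and frequently searched by range:

```sql
CREATE TABLE dbo.Orders
(
    OrderId   INT   NOT NULL,
    OrderDate DATE  NOT NULL,
    Total     MONEY NOT NULL
);

-- The data rows themselves will now be stored in OrderId order,
-- so a range query on OrderId can read a contiguous set of pages.
CREATE CLUSTERED INDEX IX_Orders_OrderId ON dbo.Orders (OrderId);

-- e.g. SELECT OrderId, Total FROM dbo.Orders WHERE OrderId BETWEEN 100 AND 200;
```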

 

Nonclustered indexes

  • Nonclustered indexes
    • do not affect the physical order of the data rows
    • have a structure separate from the data rows
    • contain the nonclustered index key values and each key value entry has a pointer to the data row that contains the key value
  • The pointer from an index row in a nonclustered index to a data row is called a row locator
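For example (assuming a hypothetical dbo.Orders table with OrderDate and Total columns), a nonclustered index is a separate structure whose key entries carry a row locator back to the data row – the RID for a heap, or the clustered index key for a clustered table:

```sql
-- Support lookups by OrderDate without reordering the data rows.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (Total); -- included column means the query below can be answered
                 -- from the index alone, with no lookup to the data row

-- e.g. SELECT OrderDate, Total FROM dbo.Orders WHERE OrderDate = '2013-12-01';
```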

 

Primary keys

  • A primary key automatically creates a clustered index, except when
    • a nonclustered primary key has been explicitly specified
    • a clustered index already exists

“Indexes are automatically created when PRIMARY KEY and UNIQUE constraints are defined on table columns. For example, when you create a table and identify a particular column to be the primary key, the Database Engine automatically creates a PRIMARY KEY constraint and index on that column.” [3]

 

Unique indexes (unique constraints)

  • A unique index is automatically created when you create a PRIMARY KEY or UNIQUE constraint. [4]
  • You can also create a unique index independently of a constraint, or when you create an indexed view.

 

Filtered indexes

“A filtered index is an optimized nonclustered index especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance as well as reduce index maintenance and storage costs compared with full-table indexes.” [5]

  • Use a filtered index when your queries frequently select from a well-defined subset of the rows in a table.
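A short sketch (assuming a hypothetical dbo.Orders table with a Status column), where most queries only touch open orders:

```sql
-- Only rows matching the filter predicate are indexed, so the index is
-- smaller and cheaper to maintain than a full-table index.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate)
WHERE Status = 'Open';

-- A query whose WHERE clause matches the predicate can use the smaller index:
-- SELECT OrderDate FROM dbo.Orders WHERE Status = 'Open' AND OrderDate >= '2013-12-01';
```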

 

Index Maintenance

Reorganise

  • Physically reorganises the leaf nodes of the index only

Rebuild

  • Drops the existing index and recreates it

Tips

  • Avoid DBCC SHRINKDATABASE because it increases index fragmentation
  • Put the clustered index on a column that is unique and ever-increasing in value to avoid fragmentation
  • Check index fragmentation and then
    • If fragmentation is between 5% and 30%, reorganise the index
    • If fragmentation is greater than 30%, rebuild the index [6]
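The check-then-maintain routine above looks something like this (table and index names hypothetical):

```sql
-- Check average fragmentation for each index on the table.
SELECT i.name, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                    NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Between 5% and 30% fragmented: reorganise (defragments the leaf level only).
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- Over 30% fragmented: rebuild (drops and recreates the index).
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
```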

 

Beware

  • Too many indexes on a table can reduce performance
    • The optimiser has more candidate indexes to evaluate when building an execution plan
    • Queries may use the wrong – less efficient – index
    • Every insert, update and delete has more indexes to maintain
  • Duplicate indexes offer no advantage and add overhead during inserts, updates and deletions
  • Unused indexes also add overhead during inserts, updates and deletions

 

References

[1] Understanding Pages and Extents – SQL 2008 documentation

[2] Heaps (Tables without Clustered Indexes) – SQL 2012 documentation

[3] Clustered and Nonclustered Indexes Described – SQL Server 2012 documentation

[4] Create Unique Indexes - SQL Server 2012 documentation

[5] Create Filtered Indexes - SQL Server 2012 documentation

[6] Reorganize and Rebuild Indexes - SQL Server 2012 documentation

Thursday, 19 December 2013

The Accidental DBA and the SQL Server Maintenance Solution

I recently came across 2 fantastic resources for understanding and implementing the essentials of maintaining SQL Server databases.

The Accidental DBA

As a software developer you often encounter situations where you become the DBA whether you want to or not. It is not uncommon to be working in an environment where there is no dedicated DBA but the applications under development rely on SQL Server as a back-end data store.

So, what do you need to know to keep the SQL Server database up-and-running and to prevent queries from slowing to a standstill? Well, it turns out there is an excellent series of blog posts on www.sqlskills.com called the Accidental DBA, offering 30 days of top tips. You can find the series here:

 

SQL Server Maintenance Solution

Something referred to by the Accidental DBA series is the SQL Server Maintenance Solution put forward by Ola Hallengren. This awesome set of scripts lets you run backups, perform integrity checks, and carry out index and statistics maintenance on all editions of Microsoft SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, and SQL Server 2012.

 

UPDATE 24/12/2013 – Replacement for sp_helpindex

Also on sqlskills.com, Kimberly Tripp has a replacement for the standard sp_helpindex stored procedure.

Sunday, 17 November 2013

Action selectors and action filters in ASP.Net MVC

At the time of writing MVC is in version 5.0.

Firstly, the details of what an action is are outside the scope of this post but in essence an action is a public method on a controller class that the framework invokes in response to an incoming request. 

Action selectors

Action selectors are attributes that can be applied to action methods and are used to influence which action method gets invoked in response to a request.

For example the ActionName attribute can be used to change the name used to invoke an action method. In the following code snippet the Index() action method will be invoked with the name “List” rather than “Index” in the URL – in fact “Index” would be invalid. Note also that the view, if there is one, must be called “List” and not “Index” unless you use an overloaded version of the View() method that takes the name of a view as a parameter.

[ActionName("List")]
public ActionResult Index()
{
    return View("Index");
}

 

Action verb selectors are used when we want to control the selection of the action method based on the request’s HTTP verb – for example, to define which method responds to an HTTP GET and which responds to an HTTP POST:

[AcceptVerbs(HttpVerbs.Get)]
public ActionResult Index()
{
    return View();
}

 

Note that there are some shortcut attributes that do the same thing: [HttpGet] and [HttpPost]. See the previous post Why use the MVC AcceptVerbs attribute?

 

Action filters

Action filters apply pre and post processing logic to an action method and can modify the result. Action filters are typically used to apply cross-cutting concerns, logic that you want to apply to multiple methods but don’t want to duplicate code across controllers. Caching, validation and authorisation are examples of the type of cross-cutting concerns that action filters can be used to implement.

Action filters can be applied to individual methods or to the controller itself. When applied at the controller level the filter will apply to all action methods in that controller.

In ASP.NET MVC there are basically 4 different types of filter:

  • Authorization filters – implement the IAuthorizationFilter interface.
  • Action filters – implement the IActionFilter interface.
  • Result filters – implement the IResultFilter interface.
  • Exception filters – implement the IExceptionFilter interface.

 

public class HomeController : Controller
{
    // OutputCache is an example of an action filter: it caches the
    // action's output for the specified duration (in seconds).
    [OutputCache(Duration = 10)]
    public ActionResult Index()
    {
        return View();
    }
}

 

See Filtering in ASP.NET MVC.

A note about global filters

Filters can be applied globally, that is to every request that is processed by any controller in your application. You can register global filters in the FilterConfig class located under the App_Start folder of your MVC project.

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
    }
}

 

Applying a global filter might be a good way of implementing logging. Simply create a new filter attribute that extends ActionFilterAttribute and register it in FilterConfig. Note that there is a default location for filters in the MVC application template – under the Filters folder.

public class LogAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Do logging here
        base.OnActionExecuted(filterContext);
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Do logging here
        base.OnActionExecuting(filterContext);
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        // Do logging here
        base.OnResultExecuted(filterContext);
    }

    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        // Do logging here
        base.OnResultExecuting(filterContext);
    }
}

 

Summary

  • Action selectors are implemented as attributes and influence what action methods are selected for invocation in response to an incoming request.
  • Action filters allow pre and post processing logic to be applied to an action method. 

Saturday, 16 November 2013

ASP.Net MVC ActionResult return type

This is just a quick aide-mémoire for me as I pick up some MVC code again. At the time of writing MVC is in version 5.

OK, action methods can return the following ActionResult types:

 

Action Result – Helper Method – Description

ViewResult – View – Renders a view as a Web page.

PartialViewResult – PartialView – Renders a partial view, which defines a section of a view that can be rendered inside another view.

RedirectResult – Redirect – Redirects to another action method by using its URL.

RedirectToRouteResult – RedirectToAction, RedirectToRoute – Redirects to another action method.

ContentResult – Content – Returns a user-defined content type.

JsonResult – Json – Returns a serialized JSON object.

JavaScriptResult – JavaScript – Returns a script that can be executed on the client.

FileResult – File – Returns binary output to write to the response.

EmptyResult – (None) – Represents a return value that is used if the action method must return a null result (void).

 

See Controllers and Action Methods in ASP.NET MVC Applications for the origin of this table.
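A few of these result types in action (a hypothetical controller, assuming the usual System.Web.Mvc references):

```csharp
using System.Web.Mvc;

public class DemoController : Controller
{
    // ViewResult via the View() helper.
    public ActionResult Page()
    {
        return View();
    }

    // JsonResult via the Json() helper; AllowGet permits GET requests.
    public ActionResult AsJson()
    {
        return Json(new { Name = "example" }, JsonRequestBehavior.AllowGet);
    }

    // ContentResult via the Content() helper, with an explicit content type.
    public ActionResult AsText()
    {
        return Content("plain text", "text/plain");
    }

    // RedirectToRouteResult via the RedirectToAction() helper.
    public ActionResult Elsewhere()
    {
        return RedirectToAction("Page");
    }
}
```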

Friday, 8 November 2013

ArcGIS Runtime SDK for WPF – Tips and tricks when using the LocalServer

Usual disclaimer here: these are my notes to help me understand the problem and get a workable model of what’s going on into my head.

I’ve been leading the development of an application using the ArcGIS Runtime SDK for WPF version 10.1.1.0, an application designed to run on tablet devices running Windows 7 or 8. It’s essentially a desktop application optimised for touch screen use. The application uses MVVM supported by Caliburn.Micro.

The application is task driven, meaning it supports field workers in the completion of tasks allocated to them. In order to present the user with lists of tasks to be completed it is necessary for the application to query the local geodatabase via the ESRI APIs during a start-up initialisation phase. To our dismay this phase was taking a very long time, up to several minutes on some tablet devices.

A thorough investigation into potential bottlenecks provided insight into the use of the LocalServer, the local service classes (e.g. LocalFeatureService, LocalMapService, LocalGeometryService, etc.) and the ‘plain’ service classes (e.g. FeatureService, GeometryService).

What follows are the results of some lessons learned.

 

Tip 1 - Understanding the LocalServer, local service classes and the plain service classes

The LocalServer

The LocalServer is not too big a deal here. It’s just like having a web server running locally, a server that can be used to host REST services just like a conventional ArcGIS server. The first task is to get the server running. Now this can happen automatically:

“It is not necessary to work with the LocalServer directly. Instead, creating and starting the individual LocalService classes (LocalMapService, LocalFeatureService, LocalGeocodeService, LocalGeometryService, and LocalGeoprocessingService) will start the LocalServer if it is not already running.” [1]

I prefer to take control of this and start the server myself using InitializeAsync [1]. In this application I used a Caliburn.Micro IResult along these lines:

 

public class InitialiseLocalServerResult : IResult
{
    public event EventHandler<ResultCompletionEventArgs> Completed;

    public void Execute(ActionExecutionContext context)
    {
        LocalServer.InitializeAsync(ServerInitialised);
    }

    private void ServerInitialised()
    {
        OnCompleted();
    }

    private void OnCompleted()
    {
        var handler = Completed;
        if (handler != null)
        {
            handler(this, new ResultCompletionEventArgs());
        }
    } 
}

 

So the LocalServer is just a local web server used to host REST services for you.

Local service classes

You create services on your LocalServer using the local service classes, which can be found in the ESRI.ArcGIS.Client.Local namespace. There are a number of local service classes available such as:

  • LocalMapService – provides access to maps, features, and attribute data contained within a Map Package.
  • LocalFeatureService – forms the basis for feature editing. The feature service is just a map service with the Feature Access capability enabled.
  • LocalGeometryService – a special type of service not based on any specific geographic resource such as a Map Package; instead it provides access to geometric operations.
  • LocalGeocodeService – address and postcode searching etc.
  • LocalGeoprocessingService – a local geoprocessing service hosted by the runtime local server.

 

Again I like to take control of when and how the local services are created on the LocalServer and in this application I used an IResult something like this:

 

public class StartLocalFeatureServiceResult : IResult<LocalFeatureService>
{
    private readonly string _mapPackagePath;

    public StartLocalFeatureServiceResult(string mapPackagePath)
    {
        _mapPackagePath = mapPackagePath;
    }

    public event EventHandler<ResultCompletionEventArgs> Completed;

    public LocalFeatureService Result { get; private set; }

    public bool HasError
    {        
        get { return Error != null; }
    }

    public Exception Error { get; set; }

    public void Execute(ActionExecutionContext context)
    {
        var service = new LocalFeatureService(_mapPackagePath);
        service.StartAsync(ServiceStarted);
    }

    private void ServiceStarted(LocalService service)
    {
        if (service.Error == null)
        {
            Result = (LocalFeatureService)service;
        }
        else
        {
            Error = service.Error;
        }

        OnCompleted();
    }

    private void OnCompleted()
    {
        var handler = Completed;
        if (handler != null)
        {
            handler(this, new ResultCompletionEventArgs());
        }
    }
}

 

If you are not using Caliburn.Micro the key point is that instantiating the LocalFeatureService and calling StartAsync on it causes a service to be started on the LocalServer. The use of map packages is outside the scope of this post but essentially the package contains the data to be exposed by the service.

Really useful properties on the local service classes are the URLs. You can use these in conjunction with the layer classes (e.g. FeatureLayer) or classes such as QueryTask.

So the local service classes encapsulate a REST service to be instantiated on your LocalServer.

Plain service classes

I think the biggest confusion arises over the difference between the local service classes (e.g. LocalFeatureService) and the plain service classes (e.g. FeatureService).

In fact the difference is quite straightforward. Whereas the local service classes are used to create REST services on your LocalServer, the plain service classes are clients used to connect to and query the services running on the LocalServer. It’s as simple as that. The plain service classes are found in the ESRI.ArcGIS.Client namespace.

 

Tip 2 – Log the local server URLs when you start the LocalServer

I always write the URL of the LocalServer to a log file somewhere. If you grab that URL and paste it into your browser you will see a server management page that shows you exactly what services are running. Very useful if you want to make sure you are not starting more services than you need. Simply log LocalServer.Url and LocalServer.AdminUrl.

You’ll get URLs like this: http://127.0.0.1:50000/glMxtq/arcgis/rest/services. Note there’s a randomly generated string in there (‘glMxtq’ in this case). That’s different every time you start the server so you need to grab the new URL each time.

 


Figure 1 – A services directory page for a running LocalServer

 

Watch out for duplicate services. Maybe you don’t need them and your code can be changed to only start those services you actually need.

 

Tip 3 – If you’ve already got a LocalFeatureService maybe you don’t need a LocalMapService too

In my application I needed a LocalMapService for some activities and a LocalFeatureService for others so I naively created an instance of each pointing at the same map package. However, when I checked the server using the services directory (see Tip 2 above) I found that in addition to the feature service there were 2 map services running, not one.

It seems that if you create a local feature service it also creates a local map service for you. In fact the LocalFeatureService class inherits from LocalMapService, so you get 2 URLs from an instance of LocalFeatureService: UrlFeatureService from LocalFeatureService and UrlMapService inherited from LocalMapService.

If you need to access a map package via a LocalFeatureService and a LocalMapService just create a LocalFeatureService and you’ll get both.

 

Tip 4 – Beware of using code like this

 

LocalMapService localMapService = new LocalMapService(@"Path to ArcGIS map package");
localMapService.StartAsync(delegateService =>
{
    IdentifyTask identifyTask = new IdentifyTask();
    identifyTask.Url = localMapService.UrlMapService;
});

This example comes from the ArcGIS Runtime SDK for WPF documentation. It’s not wrong but it could lead you down the wrong path. It would be all too easy to add code like this to a class and call it many times. Remember, each time you call StartAsync you will create a new instance of the service on the server which you probably don’t want and don’t need.

I prefer to create my local services in separate operations and maintain references to them that I can use later (see the StartLocalFeatureServiceResult result in Tip 1 above). That way I only have the minimum number of services running on the LocalServer. You will see from the example that it’s only the UrlMapService property that’s important. As a rule I’ll create a service for each package I need to access, keep a reference to that service somewhere and access the URL property as and when I need it.

 

References

[1] – ArcGIS Runtime SDK for WPF API reference


Thursday, 24 October 2013

Fixing broken music downloads in iTunes 11.1.1.11

There are few things more infuriating than eagerly purchasing the latest album by your favourite band from iTunes, transferring to your device of choice and heading off for a trip only to find one or more of the tracks have been truncated due to an incomplete download. This seems to happen to me a lot but maybe I’m just unlucky.

Anyway, how to fix the problem? Well firstly, there doesn’t seem to be an obvious way of doing this in iTunes, at least not in version 11.1.1.11. In the past you could visit the store, locate the track in question and choose to purchase the track again. Having figured out that you had already purchased the item iTunes would respond by asking if you would like to download again. You’d just confirm and all would be well.

This doesn’t seem to be possible in iTunes 11.1.1.11 because if you go to the store all you can do is play the track. No good if the track is broken! It won’t download again and the track will remain broken.

 


Figure 1 - No option to download or purchase again in the store.

 

The solution

Now that all your music is in the cloud it seems reasonable that you should be able to get at it again. This is how I do it.

Step 1. Find the affected file on your disk drive and delete it.

Step 2. Start iTunes, navigate to the affected track and try to play it. iTunes will respond by saying it can’t find the track and invites you to locate the file. In the image below the track Blood Drive was truncated so I have deleted it from my disk drive.

 


Figure 2 – Prompted to locate the missing file.

 

Step 3. Cancel the dialog and the cloud icon will reappear to the right of the track.

 


Figure 3 – The cloud icon should now be available.

 

Step 4. Click the cloud icon to download the file again.

Sunday, 2 June 2013

Andy’s list of JavaScript frameworks

Too many JavaScript frameworks. Too little time. This is a list of frameworks to help me keep track. It’s not meant to be exhaustive but contains the frameworks I’m coming across. For a fuller list why not try www.jsdb.io.

Framework – Description – URL

H5F – A JavaScript library that allows you to use the HTML5 Forms chapter’s new input types, attributes and constraint validation API in non-supporting browsers. – https://github.com/ryanseddon/H5F
Angular JS – From Google. Somewhat similar to Knockout. – http://angularjs.org/
Backbone – Backbone.js gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing API over a RESTful JSON interface. – http://backbonejs.org/
Bootstrap – Sleek, intuitive, and powerful front-end framework for faster and easier web development. Not just JavaScript; includes HTML and CSS. – http://twitter.github.io/bootstrap
Breeze – Breeze is a JavaScript library that helps you manage data in rich client applications. If you store data in a database, query and save those data as complex object graphs, and share these graphs across multiple screens of your JavaScript client, Breeze is for you. – http://www.breezejs.com/
Durandal – Durandal is a cross-device, cross-platform client framework written in JavaScript and designed to make Single Page Applications (SPAs) easy to create and maintain. – http://durandaljs.com/
Font Awesome – The iconic font designed for Bootstrap. Font Awesome gives you scalable vector icons that can instantly be customized – size, color, drop shadow, and anything that can be done with the power of CSS. – http://fortawesome.github.io/Font-Awesome/
jQuery Mobile – A unified, HTML5-based user interface system for all popular mobile device platforms, built on the rock-solid jQuery and jQuery UI foundation. Its lightweight code is built with progressive enhancement, and has a flexible, easily themeable design. – http://jquerymobile.com/
jQuery UI – jQuery UI is a curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library. – http://jqueryui.com/
jsRender – Next-generation jQuery Templates, optimized for high-performance pure string-based rendering, without DOM or jQuery dependency. – https://github.com/BorisMoore/jsrender
Knockout – Knockout is a JavaScript library that helps you to create rich, responsive display and editor user interfaces with a clean underlying data model. MVVM! – http://knockoutjs.com/
Moment – A 5.5kb JavaScript date library for parsing, validating, manipulating, and formatting dates. – http://momentjs.com/
RequireJS – RequireJS is a JavaScript file and module loader. It is optimized for in-browser use, but it can be used in other JavaScript environments, like Rhino and Node. Using a modular script loader like RequireJS will improve the speed and quality of your code. – http://requirejs.org/
Toastr – Simple JavaScript toast notifications. – https://github.com/CodeSeven/toastr
Sammy – Sammy.js is a tiny JavaScript framework developed to ease the pain and provide a basic structure for developing JavaScript applications. Routing! – http://sammyjs.org/
Underscore – Underscore is a utility-belt library for JavaScript that provides a lot of the functional programming support that you would expect in Prototype.js (or Ruby), but without extending any of the built-in JavaScript objects. It’s the tie to go along with jQuery’s tux, and Backbone.js’s suspenders. – http://underscorejs.org/

Wednesday, 29 May 2013

Problem running a Windows service with Topshelf and Spring.Net

Problem

I had written an application using Spring.Net for dependency injection – and some of the other features it provides – and Topshelf. This meant the application could be written as a console application and then installed and run as a Windows service using Topshelf’s handy ‘install’ command line parameter.
I was using XML files to configure Spring.Net, which turned out to be significant.

The application worked sweet as a nut as a console application and installed successfully as a Windows service. However, when I tried to run the Windows service using net start all I got was “The service is not responding to the control function”.




Solution

In the app.config file I had a spring configuration section that referenced external files for the spring.context:

<spring>
    <context>
      <resource uri="file://Config/SpringContext.xml" />
      <resource uri="file://Config/SpringDataAccess.xml" />
      <resource uri="file://Config/SpringVelocity.xml" />
    </context>
    <parsers>
      <parser type="Spring.Data.Config.DatabaseNamespaceParser, Spring.Data" />
      <parser type="Spring.Transaction.Config.TxNamespaceParser, Spring.Data" />
      <parser type="Spring.Aop.Config.AopNamespaceParser, Spring.Aop" />
    </parsers>
</spring>

The XML configuration files were set to “Copy always” and had been copied into the application directory correctly.

Poking around in the Event Viewer I spotted an interesting error log message. Essentially it said “Exception: Error creating context 'spring.root': Could not find file 'C:\Windows\system32\Config\SpringContext.xml'.”

That was weird because C:\Windows\system32 is not where I had put the application, but it is evidently the working directory the Windows service runs with, so the relative file:// resource URIs were being resolved there rather than against the application directory.

A quick solution was to reconfigure the application to use embedded resources for the configuration files:

<spring>
    <context>
      <resource uri="assembly://Assembly.Name.Here/Namespace.Here/Config.SpringContext.xml" />
      <resource uri="assembly://Assembly.Name.Here/Namespace.Here/Config.SpringDataAccess.xml" />
      <resource uri="assembly://Assembly.Name.Here/Namespace.Here/Config.SpringVelocity.xml" />
    </context>
    <parsers>
      <parser type="Spring.Data.Config.DatabaseNamespaceParser, Spring.Data" />
      <parser type="Spring.Transaction.Config.TxNamespaceParser, Spring.Data" />
      <parser type="Spring.Aop.Config.AopNamespaceParser, Spring.Aop" />
    </parsers>
</spring>

The service now started correctly.

Sunday, 5 May 2013

JavaScript functions

JavaScript functions can be declared in a number of ways.

Basic declaration

function write(message) {
    var div = document.getElementById('message');
    var para = document.createElement("p");
    var node = document.createTextNode(message);
    para.appendChild(node);
    div.appendChild(para);
}

In the example above the function called write is declared and can be called later in JavaScript (e.g. write("My message here")).

Assigned function

Functions can be assigned to variables. There are basically two ways to do this: 1) name the function and assign it to a variable, or 2) leave the function unnamed and assign it to a variable.

var writeFunc = function write(message) {
    var div = document.getElementById('message');
    var para = document.createElement("p");
    var node = document.createTextNode(message);
    para.appendChild(node);
    div.appendChild(para);
};

In the example above the function is named (write) and assigned to a variable (writeFunc). The function must now be called via the variable name (e.g. writeFunc("My message here")); the name write is only in scope inside the function body itself. Note the semi-colon after the last curly brace: the assignment is an expression statement, not a declaration.

var writeFunc = function (message) {
    var div = document.getElementById('message');
    var para = document.createElement("p");
    var node = document.createTextNode(message);
    para.appendChild(node);
    div.appendChild(para);
};

In the example above the function is not named (it’s an anonymous function) but is still assigned to a variable as before.
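One practical difference between the two forms: with a named function expression the name is available inside the function body, which is handy for recursion, while remaining invisible outside it (in spec-compliant engines). A small sketch, where factorial and fact are illustrative names:

```javascript
var factorial = function fact(n) {
    // The name 'fact' is only visible inside the function body,
    // which makes self-reference possible.
    return n <= 1 ? 1 : n * fact(n - 1);
};

console.log(factorial(5)); // 120
console.log(typeof fact);  // "undefined" - the name does not leak out
```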

Anonymous function immediately invoked

(function (message) {
    var div = document.getElementById('message');
    var para = document.createElement("p");
    var node = document.createTextNode(message);
    para.appendChild(node);
    div.appendChild(para);
})("This is a message.");

In the example above the function is anonymous. However, because it is wrapped in parentheses and immediately followed by an argument list (see the last line) it is invoked straight away.
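Immediately invoked functions are often used to create a private scope. A sketch of that pattern (counter and count are illustrative names; console.log stands in for the DOM-based write helper above):

```javascript
var counter = (function () {
    var count = 0; // private: only reachable from the returned function
    return function () {
        count += 1;
        return count;
    };
})();

console.log(counter()); // 1
console.log(counter()); // 2
```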

Function overloading

Function overloading doesn’t work the same way in JavaScript as it does in C#. Declaring functions with the same name but different parameters doesn’t result in overloaded functions; instead the last function declared overwrites the earlier ones. Note that if you call a function and pass in too many arguments the extra arguments are simply ignored, and any missing arguments are undefined.

Don’t forget that objects are passed by reference (strictly, the reference itself is copied, so mutations are visible to the caller but reassigning the parameter is not) while primitive types are passed by value.
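A quick sketch of that distinction (mutate, o and n are illustrative names):

```javascript
function mutate(obj, num) {
    obj.value = 42; // mutation through the reference: visible to the caller
    num = 42;       // primitives are copied: invisible to the caller
}

var o = { value: 1 };
var n = 1;
mutate(o, n);
console.log(o.value); // 42
console.log(n);       // 1
```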

Use an object to hold arbitrary values

One option is to add an object parameter as the last argument to a function. This object can be used as a bag into which you can put whatever parameters you want.

function functionTest(param1, param2, options) {
    write("param1: " + param1);
    write("param2: " + param2);
    write("options.opt1: " + options.opt1);
    write("options.opt2: " + options.opt2);
}

window.onload = function () {
    functionTest("This is param1", "This is param2", { opt1: "This is option1", opt2: "This is option2" });
};
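When the options bag itself may be omitted, a common guard is to default it to an empty object and then fall back per option. A sketch (greetUser and greeting are illustrative names):

```javascript
function greetUser(name, options) {
    options = options || {};                    // tolerate a missing options bag
    var greeting = options.greeting || "Hello"; // per-option default
    return greeting + ", " + name;
}

console.log(greetUser("Ann"));                     // "Hello, Ann"
console.log(greetUser("Ann", { greeting: "Hi" })); // "Hi, Ann"
```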

Use the ‘arguments’ object

Another option is to use the arguments object. This is described as:

“An Array-like object corresponding to the arguments passed to a function.” [1]

function functionTest() {
    write("arguments[0]: " + arguments[0]);
    write("arguments[1]: " + arguments[1]);
    write("arguments[2]: " + arguments[2]);
}

window.onload = function () {
    functionTest("This is arguments[0]", "This is arguments[1]", "This is arguments[2]");
};
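Because arguments captures every argument passed, it also supports variadic functions. A sketch (joinAll is an illustrative name) that skips the first named parameter:

```javascript
function joinAll(separator) {
    var parts = [];
    // arguments is array-like rather than a real Array, so index it
    // manually; start at 1 to skip the named separator parameter.
    for (var i = 1; i < arguments.length; i++) {
        parts.push(arguments[i]);
    }
    return parts.join(separator);
}

console.log(joinAll("-", "a", "b", "c")); // "a-b-c"
```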

 

References

[1] https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Functions_and_function_scope/arguments

Thursday, 18 April 2013

Fixing WSDL addresses in WCF 3.5 hosted on IIS 7 and using SSL

Problem

I needed to host a WCF 3.5 web service on IIS 7 (running on Windows Server 2008) using SSL. In fact the service was configured to use wsHttpBinding with TransportWithMessageCredential. The service was running on a public facing web server with a registered domain name.

The problem was the usual one: having navigated to the .svc file in a browser, the service description page showed the WSDL URL using the machine name, not the domain name. When viewing the generated WSDL the service location also used the machine name.

Solution

The web site bindings had been set up in IIS for HTTP and HTTPS. It appears that when the HTTPS binding is set up the host name is not set, and you cannot set it using the Internet Information Services (IIS) Manager, so you have to do it by other means.

 


To add the host name to the HTTPS binding do the following:

  1. Open the applicationHost.config file located in C:\Windows\System32\inetsrv\config.
  2. Locate the section for the website in question (you can see the web site name in the Internet Information Services (IIS) Manager).
  3. Find the https binding and change the bindingInformation attribute by adding the domain name after “:443:”.
  4. Save applicationHost.config and restart IIS.
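For illustration, the edited binding element ends up looking something like this (the site name and host name here are placeholders, not values from my setup):

```xml
<site name="ExampleSite" id="2">
  <bindings>
    <binding protocol="http" bindingInformation="*:80:www.example.com" />
    <binding protocol="https" bindingInformation="*:443:www.example.com" />
  </bindings>
</site>
```

The bindingInformation value has the form IP:port:hostname; the host name portion after “:443:” is the part that is left empty by default.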

 

Now when you view the service .svc file in a browser and the generated WSDL the domain name will appear in place of the machine name.


Wednesday, 30 January 2013

Single file WSDL generation in WCF

Sometimes it is convenient or even necessary (e.g. in some interoperability scenarios) to have WCF generate a single WSDL file without references to external schemas. Luckily, there are some 3rd party libraries available to help out: WCFExtras and WCFExtrasPlus, both available from NuGet. WCFExtrasPlus is based on WCFExtras and is slightly more up-to-date. Note that there seem to be two versions of WCFExtras as well. At the time of writing NuGet gives you WCFExtras 2.0.

Steps

My steps here are based on an IIS hosted WCF service.

Firstly, use NuGet to reference WCFExtrasPlus in the WCF host project (the configuration below references the WCFExtrasPlus assembly).

Secondly, edit your Web.config file to include the following behaviour extension:

<system.serviceModel>
    <extensions>
      <behaviorExtensions>
        <add name="wsdlExtensions" type="WCFExtrasPlus.Wsdl.WsdlExtensionsConfig, WCFExtrasPlus, Version=2.3.0.2, Culture=neutral, PublicKeyToken=f8633fc5451b43fc"/>
      </behaviorExtensions>
    </extensions>
    ... snip ...
</system.serviceModel>

If the version is different you can use something like Telerik JustDecompile to get the correct information to use in the type attribute of the add element.

Next create an endpoint behaviour referencing the new extensions:

<system.serviceModel>
    ... snip ...
    <behaviors>
      <endpointBehaviors>
        <behavior name="SingleFileBehaviour">
          <wsdlExtensions singleFile="true" />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    ... snip ...
</system.serviceModel>

You now need to update your endpoint definitions to use the new behaviour, something like this:

<endpoint address=""
          binding="wsHttpBinding"
          bindingNamespace="http://schemas.example.com/Example"
          bindingConfiguration="wsHttpBindingConfiguration"
          behaviorConfiguration="SingleFileBehaviour"
          contract="Example.ServiceContracts.IExampleService" />

Fixing problems

Most execution problems seem to stem from namespacing issues. In WCFExtras there is a class called WCFExtras.Wsdl.SingleFileExporter that does the work. The first thing it does is check that the number of generated WSDL documents is not greater than 1; if your namespaces are wrongly defined more than one document will be generated and the export will fail. Here’s my checklist to avoid problems:

1. If you define service contracts in a separate interface ensure the ServiceContract attribute has a namespace.

[ServiceContract(Name = "ExampleService", Namespace = "http://schemas.example.com/Example")]

2. In the service class add a ServiceBehavior attribute also with a namespace (failure to do this will result in the service being given the http://tempuri.org namespace). The namespace must match that of the service contract.

[ServiceBehavior(Namespace = "http://schemas.example.com/Example")]

3. Check the binding namespace on the endpoint configuration in Web.config (see above).

4. Check the namespaces on any DataContract or MessageContract attributes.

5. If you are building at x86 you might want to set the project output path to be bin\ to avoid having multiple folders with different copies of the dlls. 


Saturday, 12 January 2013

ReSharper plugins folder on Windows 7

Here’s a quick reminder that applies to ReSharper 5.1 in Visual Studio 2010 on Windows 7.

ReSharper plugins on Windows 7 live in the following location:

C:\Users\<User Name Here>\AppData\Roaming\JetBrains\ReSharper\v5.1\vs10.0\Plugins\