Wednesday, 12 December 2012

NServiceBus 3.3.2 processes and high CPU usage

Problem

I recently ran into a problem with a simple Windows Forms application that I was using to test an NServiceBus endpoint. The application simply allowed me to add messages to the endpoint's input queue without having to invoke a number of other components in the system - useful for development and testing.

The forms application had been written using NServiceBus 2.6 but had been upgraded to NServiceBus 3.3.2. However, when I ran the upgraded version of the forms application it was using over 40% of available CPU. This didn’t happen when using NServiceBus 2.6.

Solution

The issue turned out to be a permissions problem when the forms application tried to access its input queue. The solution was to delete the existing queues and configure the application to run the NServiceBus installers on start-up, which recreates the queues with the correct permissions.

In this case NServiceBus was self-hosted within the forms application, so I invoked the installers when I created the bus, something like this:

var bus = NServiceBus.Configure.With()
              .Log4Net()
              .DefaultBuilder()
              .XmlSerializer()
              .MsmqTransport()
              .UnicastBus()
                .LoadMessageHandlers()
              .DisableTimeoutManager()
              .CreateBus()
              // Run the installers for the Windows environment on start-up;
              // this creates the input queue and sets its permissions.
              .Start(() => Configure.Instance.ForInstallationOn<NServiceBus.Installation.Environments.Windows>().Install());

 

Note that if your process doesn't actually need an input queue - because it only sends messages - you can avoid creating the input queue altogether by using send-only mode:

var bus = NServiceBus.Configure.With()
              .Log4Net()
              .DefaultBuilder()
              .XmlSerializer()
              .MsmqTransport()
              .UnicastBus()
              .DisableTimeoutManager()
              .SendOnly();
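Note too that, as I understand it, a send-only bus supports Send but will throw if you try to subscribe to events or load message handlers - there is no input queue for it to receive on.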

 


Wednesday, 10 October 2012

Using CruiseControl.Net to build branched code from Subversion

WARNING: You will want to treat this post with caution. Subsequent investigation has indicated that the original solution provided is inaccurate. I've left the post in place as a record of my investigation into the Subversion Source Control Block in CCNet. There's an Update at the bottom of the page that describes how CCNet implements autoGetSource and cleanCopy for Subversion, and the probable cause of our issue.
We recently branched some code in Subversion ready to kick off development for the next phase of a project. We employ continuous integration (CI) and use CruiseControl.Net (CCNet) so we thought it would be a simple matter of cloning the existing CCNet projects and modifying the Subversion repository path to point to the new branch rather than the trunk. However, we discovered that the branch builds were actually getting the source from the trunk even though the repository URL was pointing to the new branch. The solution to this problem turned out to be simple but took a bit of head scratching.
We are using CCNet version 1.6.7981.1.
Firstly, our Subversion repository has a structure similar to the following:

[Image: Subversion repository structure]
So, we have a repository root under which are a set of projects. Each project has a branches folder and a tags folder. Branches and tags can contain as many branches and tags as necessary. Each project also has a single trunk for the current working copy.
To build the trunk we had CCNet source control configuration like the following (note that the trunkUrl is pointing to the trunk in our repository):
<sourcecontrol type="filtered">
  <sourceControlProvider type="svn">
    <trunkUrl>http://<server name here>/repository/ProjectName/Trunk</trunkUrl>
    <workingDirectory>C:\SomePathToWorkingDirectory</workingDirectory>
    <executable>c:\svn\bin\svn.exe</executable>
    <username>someusername</username>
    <password>somepassword</password>
  </sourceControlProvider>
  <exclusionFilters>
    <pathFilter>
      <pattern>/SomePathToFilter/*.*</pattern>
    </pathFilter>
  </exclusionFilters>
</sourcecontrol>

To build a branch we had CCNet source control configuration like the following (note that the trunkUrl is now pointing to a branch in our repository):
<sourcecontrol type="filtered">
  <sourceControlProvider type="svn">
    <trunkUrl>http://<server name here>/repository/ProjectName/Branches/v2.0.0.0</trunkUrl>
    <workingDirectory>C:\SomePathToWorkingDirectory</workingDirectory>
    <executable>c:\svn\bin\svn.exe</executable>
    <username>someusername</username>
    <password>somepassword</password>
  </sourceControlProvider>
  <exclusionFilters>
    <pathFilter>
      <pattern>/SomePathToFilter/*.*</pattern>
    </pathFilter>
  </exclusionFilters>
</sourcecontrol>

But this configuration failed as described above; we ended up building the trunk code, not the branch.
The solution turned out to be to use the autoGetSource configuration element of the Subversion Source Control Block [1]. There is limited documentation for this element, but we are told it indicates “whether to retrieve the updates from Subversion for a particular build”.
<sourcecontrol type="filtered">
  <sourceControlProvider type="svn">
    <autoGetSource>true</autoGetSource>
    <trunkUrl>http://<server name here>/repository/ProjectName/Branches/v2.0.0.0</trunkUrl>
    <workingDirectory>C:\SomePathToWorkingDirectory</workingDirectory>
    <executable>c:\svn\bin\svn.exe</executable>
    <username>someusername</username>
    <password>somepassword</password>
  </sourceControlProvider>
  <exclusionFilters>
    <pathFilter>
      <pattern>/SomePathToFilter/*.*</pattern>
    </pathFilter>
  </exclusionFilters>
 </sourcecontrol>
This seems to have solved the problem and the branch builds are now working correctly. However, I'm not altogether sure why this works because the documentation for our version of CCNet indicates that autoGetSource is optional and defaults to 'true'.

Update

Having been confused by this behaviour I've had a look at the CCNet source code for the Subversion Source Control Block (the ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn class) for the version we are using (1.6.7981.1).
Firstly, the AutoGetSource property is set to 'true' in the class constructor and - as far as I can see - it is only referenced in the GetSource(IIntegrationResult result) method. So, it should be 'true' if you don't set it in your CCNet config file.
public override void GetSource(IIntegrationResult result)
{
    result.BuildProgressInformation.SignalStartRunTask("Getting source from SVN");

    if (!AutoGetSource) return;

    if (DoesSvnDirectoryExist(result) && !CleanCopy)
    {
        UpdateSource(result);
    }
    else
    {
        if (CleanCopy)
        {
            if (WorkingDirectory == null)
            {
                DeleteSource(result.WorkingDirectory);
            }
            else
            {
                DeleteSource(WorkingDirectory);
            }
        }
        CheckoutSource(result);
    }
}
Looking at the code above, if you set autoGetSource to 'false' CCNet won't try to check out or update the source from Subversion at all.
Next, if the Subversion directory exists and you haven't set cleanCopy to 'true' in the CCNet config, CCNet will do a Subversion update on the existing code. Otherwise it will do a Subversion checkout, deleting the working directory first if cleanCopy was set to 'true'.
It now seems very unlikely that explicitly setting autoGetSource to 'true' would have had the effect of fixing our problem. It seems much more likely that the first time the build ran it did a checkout against the trunk and not the branch (perhaps because the CCNet config trunkUrl was incorrect at that time). Subsequent builds were therefore doing an update against the trunk. As part of trying to resolve the issue we deleted the working directory (and the .svn directory within it), which would have forced a fresh checkout, and we can assume that the trunkUrl was then correctly pointing to the branch.

References

[1] CruiseControl.NET : Subversion Source Control Block

Monday, 24 September 2012

Classic and integrated application pool modes in IIS 7

You may have noticed that when creating or editing application pools in IIS 7 you can choose between 2 different modes: Classic and Integrated. So what’s the difference?

Firstly, a quick reminder on how to get to the application pools. Crack open the IIS manager and select Application Pools from the connections tree-view on the left. You’ll see a list of application pools which you can select. If you right-click on an application pool and choose “Basic settings…” in the pop-up menu you can change the “Managed pipeline mode” using a drop-down. [2]

 


IIS manager showing basic settings for an application pool

 

Microsoft documentation describes an application pool in the following terms:

“An application pool is a group of one or more URLs that are served by a worker process or a set of worker processes. Application pools set boundaries for the applications they contain, which means that any applications that are running outside a given application pool cannot affect the applications in the application pool.” [1]

It goes on to say:

“The application pool mode affects how the server processes requests for managed code. If a managed application runs in an application pool with integrated mode, the server will use the integrated, request-processing pipelines of IIS and ASP.NET to process the request. However, if a managed application runs in an application pool with classic mode, the server will continue to route requests for managed code through Aspnet_isapi.dll, processing requests the same as if the application was running in IIS 6.0.” [1]

In versions of IIS prior to version 7, ASP.NET integrated with IIS via an ISAPI extension (aspnet_isapi.dll) and an ISAPI filter (aspnet_filter.dll). It therefore exposed its own application and request processing model, which resulted in “ASP.NET components executing entirely inside the ASP.NET ISAPI extension bubble and only for requests mapped to ASP.NET in the IIS script map configuration” [3]. So, in effect, there were 2 pipelines: one for native ISAPI filters and another for managed application components (ASP.NET). This architecture had limitations:

“The major limitation of this model was that services provided by ASP.NET modules and custom ASP.NET application code were not available to non-ASP.NET requests. In addition, ASP.NET modules were unable to affect certain parts of the IIS request processing that occurred before and after the ASP.NET execution path.” [4]

In IIS 7 the ASP.NET runtime was integrated with the core web server, providing a unified request processing pipeline exposed to both native and managed components.

Some benefits of the new architecture include:

  • Allowing services provided by both native and managed modules to apply to all requests, regardless of handler. For example, managed Forms Authentication can be used for all content, including ASP pages, CGIs, and static files.
  • Empowering ASP.NET components to provide functionality that was previously unavailable to them due to their placement in the server pipeline. For example, a managed module providing request rewriting functionality can rewrite the request prior to any server processing, including authentication.
  • A single place to implement, configure, monitor and support server features such as single module and handler mapping configuration, single custom errors configuration, single url authorization configuration. [3]
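As a rough sketch of the first point, here's what a managed module that runs for every request might look like in integrated mode (the class name and header are my own inventions, not from the referenced articles):

using System.Web;

public class StampModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // In integrated mode this event fires for every request IIS serves -
        // static files, ASP pages and CGIs included - not just ASP.NET content.
        context.EndRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            app.Response.AppendHeader("X-Served-By", "StampModule");
        };
    }

    public void Dispose() { }
}

The module would be registered in the <modules> section under <system.webServer> in web.config; in classic mode the same class would only ever see requests routed through aspnet_isapi.dll.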

There’s a nice description of how ASP.NET is integrated with IIS 7 here.

References

Thursday, 20 September 2012

aspnet_regiis.exe error 0x8007000B on Windows 7

Problem

The following error occurred while registering ASP.NET with IIS on Windows 7 using aspnet_regiis.exe -i:

Operation failed with 0x8007000B

An attempt was made to load a program with an incorrect format.


Solution

The solution was to run the 64-bit version of aspnet_regiis.exe located in the Framework64 folder.
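For a .NET 4.0 installation that means something along these lines from an elevated command prompt (the version folder will vary with the framework installed):

%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i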


Saturday, 18 August 2012

Flash player – stutter during audio playback

Problem

Audio stutter occurs when playing back videos using Flash player. The CPU is not overloaded and network bandwidth is available.

Solution

In my case this was caused by Flash Player 11. I don't know what they've done in that version but it simply won't play back videos without audio stutter, regardless of which web browser hosts the plugin.

The solution was to uninstall Flash 11 and install version 10.3. Links to the installers can be found here:

Where can I find direct downloads of Flash Player 10.3 for Windows or Macintosh?

Thursday, 19 July 2012

Enabling failed request tracing in IIS 7 on Windows Server 2008

Make sure failed request tracing (FRT) is installed

You can tell whether FRT is installed by opening the IIS Manager and selecting a web site: if the Failed Request Tracing… option is missing, it isn't installed.

 


Fig 1 - IIS Manager with no Failed Request Tracing… option

 

To enable this feature:

1. Open the Server Manager.

2. Expand Roles and select Web Server (IIS).

 


Fig 2 – The Server Manager

 

3. Scroll down to the Role Services section.

4. Note that the Tracing role service is not yet installed.

 


Fig 3 – Tracing feature not installed.

 

5. Click Add Role Services.

6. Enable Tracing.


Fig 4 – Enable Tracing

 

7. Click Next etc. to install the feature.

8. Close the Server Manager, then reopen the IIS Manager and select a web site.

9. Failed Request Tracing… is now available.

 


Fig 5 – Failed Request Tracing… is now available.

 

10. Click on Failed Request Tracing… and select Enable.

11. Click OK.

 


Fig 6 – Enable Failed Request Tracing…
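Incidentally, if you'd rather script the install than click through the Server Manager, the same role service can be added from the command line. Assuming the role service Id is Web-Http-Tracing, something like:

ServerManagerCmd.exe -install Web-Http-Tracing

On Windows Server 2008 R2 the ServerManager PowerShell module (Import-Module ServerManager; Add-WindowsFeature Web-Http-Tracing) does the same job.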


Thursday, 24 May 2012

Collections in NServiceBus 2.6 messages

I needed to create an NServiceBus message type (an implementation of IMessage) that contained a collection of sub items. My initial thought was to expose the collection as an IEnumerable in order to preserve encapsulation - I didn’t want client code to be able to modify the collection. Here’s an example:

public class MyMessage : IMessage
{
    public int SomeCode { get; set; }

    public IEnumerable<ListItem> Items { get; set; }
}
public class ListItem
{
    public string Key { get; set; }
    
    public string Message { get; set; } 
}

The problem was that the collection was turning up empty at the destination.

This turns out to be a limitation of the NServiceBus XML serializer, which is described as follows:

“NServiceBus has its own custom XML serializer which is capable of handling both classes and interfaces as well as dictionaries and does not use the WCF DataContractSerializer. Binary serialization is done using the standard .net binary serializer.” - http://nservicebus.com/Performance.aspx

However, it seems the XML serializer isn't as fully featured as other XML serializers; it is focussed on addressing problems relating to moving messages around quickly and efficiently.

In this case the solution was to change from using IEnumerable<T> to List<T>. Not too painful really.

public class MyMessage : IMessage
{
    public int SomeCode { get; set; }

    public List<ListItem> Items { get; set; }
}

Note that a number of serialization issues in NServiceBus – including this one - can be addressed by using the NServiceBus binary serializer.
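Switching serializers is a small change in the fluent configuration. Here's a sketch based on self-hosting (I haven't verified every call against 2.6, so treat it as illustrative):

var bus = NServiceBus.Configure.With()
              .DefaultBuilder()
              .BinarySerializer()  // instead of .XmlSerializer()
              .MsmqTransport()
              .UnicastBus()
              .CreateBus()
              .Start();

The trade-off is that binary payloads aren't human-readable in the queues, so I'd only reach for this when the XML serializer genuinely can't cope.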

NServiceBus built-in configurations and profiles

I’ve been using NServiceBus for quite a while now and am really happy with it. I’ve been making good use of the generic host (NServiceBus.Host.exe) to run services based on NServiceBus and really like the simplicity this provides. To get the most out of the generic host I’ve found you need an understanding of 2 aspects of NServiceBus:

  1. Built-in configurations (provided as interfaces your endpoint configuration classes can implement)
  2. Profiles (command line arguments supplied to the generic host)

What’s the difference? As I see it:

  1. Built-in configurations – Set up the nature of the endpoint in code (pretty much baked in). 
  2. Profiles – Can be changed at runtime (when invoking the generic host).

I keep having to look up what the out-of-the-box configurations and profiles are so I thought I’d create this post as an aide-mémoire.

Built-in configurations

The built-in configuration interfaces are described here: http://nservicebus.com/GenericHost.aspx

Firstly there are 3 configuration interfaces:

  1. AsA_Client
  2. AsA_Server
  3. AsA_Publisher

Each of these interfaces makes use of the XmlSerializer, the MsmqTransport, and the UnicastBus but configures them differently:

  • AsA_Client
    • Sets MsmqTransport to be non-transactional
    • Purges its queue of messages on start-up
    • Processes messages using its own permissions, not those of the message sender
  • AsA_Server
    • Sets the MsmqTransport to be transactional
    • Does not purge messages from its queue on startup (making it fault-tolerant)
    • It processes messages under the permissions of the message sender (called impersonation) which prevents elevation of privilege attacks
  • AsA_Publisher
    • Extends AsA_Server
    • Indicates to the infrastructure that a storage for subscription requests is to be set up (see the NServiceBus profiles page).

I have to say I don't like the naming conventions here. When I first saw them I assumed a server would send messages and a client receive them. That's not the case at all. The way I think of things is that a client is a fairly transient endpoint; it doesn't matter if it loses messages that were sent to it before a restart. Servers and publishers are endpoints that need to be more fault tolerant (e.g. a message endpoint in front of a business process).

Profiles

NServiceBus generic host profiles are described here: http://nservicebus.com/Profiles.aspx 

And here: http://nservicebus.com/MoreOnProfiles.aspx

Firstly, there are 2 categories of profile:

  • Environment Profiles
    • Help avoid error-prone manual configuration (e.g. when moving from Development to Production via Integration)
    • Enable easy transition of the system without any code changes
  • Feature Profiles
    • Turn NServiceBus features on and off (e.g. the Distributor, Gateway and timeout manager)

The 3 environmental profiles are:

  • Lite
    • The default profile
    • Used if no explicit profile is defined
    • Configures all the persistence like sagas, subscriptions, timeouts etc to be InMemory
    • Turns the TimeoutManager and Gateway on by default
    • Installers are always invoked (installers were introduced in NServiceBus 3.0)
    • Logging is done to the console
  • Integration
    • Suitable for running your endpoint in integration and QA environments
    • Storage is persistent using queues or RavenDB
    • Features like TimeoutManager and Gateway are now turned off by default
    • Installers are invoked to make deployment easier to automate
    • Logging is done to the console
  • Production
    • Sets your endpoint up for production use
    • All storage is durable and suitable for scale out
    • Installers are not invoked since your endpoint will probably be installed as a windows service and not running with elevated privileges
    • Installers are only run when you install the host
    • Logging is done to a logfile in the runtime directory since again you’re most likely running as a windows service

The feature related profiles are:

  • MultiSite
    • Turns the Gateway on
  • Time
    • Turns the timeout manager on
  • Master
    • Makes the endpoint a “master node endpoint” (it will run the Gateway for multisite interaction, Timeout manager and the Distributor)
    • It will also start a worker that will enlist with the Distributor
    • Can’t be combined with the Worker or Distributor profiles
  • Worker
    • Makes the current endpoint enlist as a worker with its distributor running on the master node
    • Can’t be combined with the Master or Distributor profiles
  • Distributor
    • Starts the endpoint as a distributor only
    • The endpoint won’t do any actual work and only distribute the load among its enlisted workers
    • Can’t be combined with the Master and Worker profiles
  • PerformanceCounters
    • Turns the NServiceBus specific performance counters on

Don’t forget that calling a profile is done as a command line argument to the generic service host and must be qualified with a namespace:

NServiceBus.Host.exe NServiceBus.Lite
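Profiles can be combined by listing more than one on the command line, e.g.:

NServiceBus.Host.exe NServiceBus.Production NServiceBus.PerformanceCounters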

Friday, 9 March 2012

Validation in web services

A colleague recently informed me that doing validation in a web service was a bad idea and should be avoided. I think his reasoning was that in systems requiring massive throughput performing validation at a web service entry point would be a blocking operation that would hold resources and impact negatively on overall system performance. Sorry, but the idea of not validating a request to a web service struck me as stupid. Anyway, I like to get my ducks in a row so I thought I’d take the opportunity to revisit some SOA concerns around validation.

My usual source of reference on matters pertaining to SOA is Thomas Erl. So, I dug out my copy of “SOA – Principles of Service Design” and had a look at what it had to say on the matter.

“Regardless of the extent of indirect coupling a service contract imposes, there will always be the requirement for the consumer program to comply to the data model defined in the technical service contract definitions.

In the case of a Web service, this form of validation coupling refers to the XML schema complex types that represent individual incoming and outgoing messages. Schemas establish data types, constraints, and validation rules based on the size and complexity of the information being exchanged as well as the validation requirements of the service itself.” [1]

So, in the case of a SOAP-based XML Web service the schema specifies the nature and format of the data contained in the request. A Web service will perform validation against the schema and reject the request if it does not comply. Erl goes on to say,

“The extent of validation coupling required by each individual service capability can vary dramatically and is often tied directly to the measure of constraint granularity of service capabilities. As per the validation-related design patterns, each contract can be individually assessed as to the quantity of actual constraints required to increase its longevity.” [2]

When Erl refers to constraint granularity he is talking about “the amount of detail [in] which a particular constraint is expressed” [3]. Erl provides the example of a constraint applied to a product code; you could mandate that it is a string value between 1 and 4 characters (a coarse-grained constraint) or that it is exactly 4 characters, each of which is numeric (a fine-grained constraint).
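In XML schema terms the two options might look something like this (the type name is invented for illustration):

<!-- Coarse-grained: any string of 1 to 4 characters -->
<xs:simpleType name="ProductCode">
  <xs:restriction base="xs:string">
    <xs:minLength value="1"/>
    <xs:maxLength value="4"/>
  </xs:restriction>
</xs:simpleType>

<!-- Fine-grained: exactly 4 characters, each numeric -->
<xs:simpleType name="ProductCode">
  <xs:restriction base="xs:string">
    <xs:pattern value="[0-9]{4}"/>
  </xs:restriction>
</xs:simpleType>

The fine-grained version rejects more bad data at the contract boundary, but any change to the product code format now forces a contract change.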

Erl’s concern about service longevity appears to be that if constraint granularity is high so is the validation coupling. Changes to validation rules could pose a threat to service longevity because they would require a change to the service contract. As Erl puts it, “By reducing the overall quantity of constraints and especially filtering out those more prone to change, the longevity of a service contract can be extended.” [4]

Erl offers the Validation Abstraction pattern to help alleviate this issue.

So, does all of this mean that performing validation in a Web service is a bad idea? No, of course it doesn't. What it means is that reducing the validation detail within the schema can help increase the longevity of service contracts.

However, if you do this you must defer the more detailed validation to the underlying service logic, not ignore it altogether. It is worthy of note that some validation logic may be better suited to being performed within the processing boundary of a business service (i.e. moving the validation logic closer to where specific business processing takes place). Use of a dedicated validation service is also an option, so validation logic can be maintained in a single location and managed separately.

What I take away from this:

  1. Validation rules are expressed for service contracts using schema.
  2. Keeping validation constraints coarse-grained can help extend service contract longevity.
  3. Consideration should be given to moving detailed validation logic into the underlying service logic.
  4. Moving some validation logic within the business process boundary should be considered.
  5. Use of an external validation service is an option.

As for my colleague’s original assertion, I think I’ll stick to performing validation in my Web services as and when required.

References

[1] “SOA – Principles of Service Design”, Thomas Erl, pp190-191

[2] “SOA – Principles of Service Design”, Thomas Erl, p191

[3] “SOA – Principles of Service Design”, Thomas Erl, p117

[4] http://www.soapatterns.org/validation_abstraction.php

 

See also

http://searchsoa.techtarget.com/answer/Validation-abstraction

Friday, 9 March 2012

Login failure when accessing Active Directory

I hate wasting time, especially when it’s because of an obvious error that’s staring me in the face but I just can’t see it. Well, it happened today so I thought I’d make a note so it never happens again. Take the following code:

var principalContext = new PrincipalContext(ContextType.Domain, domain);
var validated = principalContext.ValidateCredentials(userName, password);
var userPrincipal = UserPrincipal.FindByIdentity(principalContext, userName);

My problem was that an exception was being thrown on the last line. “How is this possible?” I asked myself. “The call to ValidateCredentials worked so why the error on FindByIdentity?”

Firstly, the exception:

Exception thrown autenticating the user.
System.DirectoryServices.DirectoryServicesCOMException (0x8007052E): Logon failure: unknown user name or bad password.

   at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
   at System.DirectoryServices.DirectoryEntry.Bind()
   at System.DirectoryServices.DirectoryEntry.get_AdsObject()
   at System.DirectoryServices.PropertyValueCollection.PopulateList()
   at System.DirectoryServices.PropertyValueCollection..ctor(DirectoryEntry entry, String propertyName)
   at System.DirectoryServices.PropertyCollection.get_Item(String propertyName)
   at System.DirectoryServices.AccountManagement.PrincipalContext.DoLDAPDirectoryInitNoContainer()
   at System.DirectoryServices.AccountManagement.PrincipalContext.DoDomainInit()
   at System.DirectoryServices.AccountManagement.PrincipalContext.Initialize()
   at System.DirectoryServices.AccountManagement.PrincipalContext.get_QueryCtx()
   at System.DirectoryServices.AccountManagement.Principal.FindByIdentityWithTypeHelper(PrincipalContext context, Type principalType, Nullable`1 identityType, String identityValue, DateTime refDate)
   at System.DirectoryServices.AccountManagement.Principal.FindByIdentityWithType(PrincipalContext context, Type principalType, String identityValue)
   at System.DirectoryServices.AccountManagement.UserPrincipal.FindByIdentity(PrincipalContext context, String identityValue)
   at GL.AnglianWater.WIRM.Core.Membership.LdapAuthentication.Authenticate(String userName, String password, String domain, String requiredGroup)

Drilling into the exception details in Visual Studio 2010 yielded a bit more information:

ExtendedError = -2146893044
ExtendedErrorMessage = "8009030C: LdapErr: DSID-0C0904DC, comment: AcceptSecurityContext error, data 52e, v1db1"

After scratching my head for an age the solution turned out to be so obvious as to be embarrassing: simply create the PrincipalContext by passing in the username and password too. ValidateCredentials uses the credentials you hand it for that one call, but FindByIdentity binds using the credentials the context was constructed with - in this case none, so it fell back to an identity that couldn't query the domain:

var principalContext = new PrincipalContext(ContextType.Domain, domain, userName, password);
var validated = principalContext.ValidateCredentials(userName, password);
var userPrincipal = UserPrincipal.FindByIdentity(principalContext, userName);
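One small refinement: PrincipalContext implements IDisposable, so it's worth disposing of the context once you've finished with it:

using (var principalContext = new PrincipalContext(ContextType.Domain, domain, userName, password))
{
    var validated = principalContext.ValidateCredentials(userName, password);
    var userPrincipal = UserPrincipal.FindByIdentity(principalContext, userName);
    // ... use userPrincipal while the context is still in scope ...
}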

Doh!

Sunday, 4 March 2012

XML schema for log4Net

Here’s a quick reminder about the XML schema for log4Net. I use this if I choose to put the log4Net configuration in a separate file and want Visual Studio intellisense to work.
Firstly, to get the log4Net configuration into a separate file I do something like this in the application configuration file (e.g. App.config):
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/>
  </configSections>

  <log4net configSource="Config\Log4Net.config"/>
</configuration>
In the Log4Net.config file I make sure the log4Net element is given a reference to the log4Net schema and a namespace:
<log4net xsi:noNamespaceSchemaLocation="http://csharptest.net/downloads/schema/log4net.xsd"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  
  <!-- log4Net configuration omitted -->

</log4net>
Hey presto! We now have intellisense for the log4Net configuration file.
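One thing to remember: log4Net won't actually read any of this until it's told to, typically once at application start-up:

// Reads the log4net section from App.config; the configSource attribute
// redirects the section contents to Config\Log4Net.config.
log4net.Config.XmlConfigurator.Configure();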

Thursday, 1 March 2012

Specifying x86 at the solution level and in CruiseControl.Net

On my latest project I have to ensure all the projects in a solution are built for the x86 platform. To set this up I needed the Configuration Manager in Visual Studio 2010; I also needed to ensure the solution built correctly in CruiseControl.Net.

Visual Studio Configuration Manager

The configuration manager can be found in 2 ways:

  1. Go to Build > Configuration Manager…
  2. Right-click on the solution in the Solution Explorer and select Properties from the context menu. Click the Configuration Manager… button in the solution property pages dialog.

Once the configuration manager has opened you can change the build configuration for the entire solution. Here’s an example before making changes to the build configuration:

[Image: Configuration Manager before changes]

You can now select the appropriate build configuration and platform for each project in the solution. If you change the Active solution platform you may find that the Build check boxes are all unticked. Don’t forget to tick them if you want the projects to build (note that you can get caught out here; if these check boxes are cleared and you subsequently try to Clean or Build at the solution level nothing will happen).

[Image: Configuration Manager after selecting x86]

Sometimes the x86 option is not available in the Platform dropdown next to each project. To create it select the <New…> option and in the resulting New Project Platform dialog box do the following:

  1. Select x86 under New platform:
  2. Under Copy setting from: choose Any CPU
  3. Deselect Create new solution platforms
  4. Click OK

[Image: New Project Platform dialog]

CruiseControl.Net configuration

To modify the CruiseControl.Net project configuration change the MsBuild task as follows:

<msbuild>
    <executable>C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
    <workingDirectory>C:\<your>\<working>\<directory></workingDirectory>
    <projectFile>Your.Solution.File.Name.sln</projectFile>
    <timeout>600</timeout>
    <buildArgs>/p:Configuration=Release /p:Platform=x86</buildArgs>
    <logger>ThoughtWorks.CruiseControl.MsBuild.XmlLogger,C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
</msbuild>

Note the buildArgs element (line 6 of the block above) where we specify the configuration and the platform.
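You can sanity-check the same arguments locally before handing them to CCNet by invoking MSBuild directly:

MSBuild.exe Your.Solution.File.Name.sln /p:Configuration=Release /p:Platform=x86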

Monday, 23 January 2012

Counting lines of code

Not that it means a great deal, but every now and then you get asked for the number of lines of code in a project. Greg D answered a question on StackOverflow about this using PowerShell. I liked his answer a lot because it's so simple. Here's his line of PowerShell code:

(dir -include *.cs,*.xaml -recurse | select-string .).Count
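Note that select-string . matches any line containing at least one character, so this counts non-blank lines (comments included) in .cs and .xaml files.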

Thursday, 12 January 2012

Unable to update Subversion source using TortoiseSVN

I tried to update source code coming from Subversion using TortoiseSVN but got an error. I had tried to update from a folder below the root (the “Implementation” folder in this case).

Error  Working copy ‘C:\source\<project name>\Implementation’ locked.
Error  'C:\source\<project name>' is already locked.

 

[Image: TortoiseSVN error dialog]


Trying the TortoiseSVN > Release Lock... command did nothing in this case because I had not taken a lock out on any file. The solution was to use TortoiseSVN > Cleanup... from the root folder ('C:\source\<project name>' in this case).
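The command-line equivalent, if you have the Subversion client to hand, would be something like:

svn cleanup "C:\source\<project name>"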

[Image: TortoiseSVN Cleanup command]

Wednesday, 11 January 2012

APSDaemon.exe – Apple’s machine killer

I have 2 iPod Nanos that I use all the time and as such have to use iTunes. I’m a Windows user and I have to say that iTunes on Windows sucks. It wouldn’t be so bad if all you got was iTunes but you don’t; you also get a bunch of other services that drag your machine to its knees. In my case they are services I just don’t want, don’t need and wish would go away.

Anyway, APSDaemon.exe is one such piece of nonsense that is officially known as Apple Push. In my case it takes up so much CPU time my machine becomes totally unusable. It’s not the newest machine but it does have an AMD Athlon 64 3500+ processor. APSDaemon.exe will happily sit there chewing on 50% of my CPU. I have also noticed that when APSDaemon.exe is working hard so is Kaspersky Internet Security. Between the 2 of them 100% CPU is utilised. If I kill APSDaemon.exe Kaspersky also settles back down.

APSDaemon.exe seems to be invoked under 2 conditions, both of which must be addressed to stop the thing from starting:

  1. At system startup.
  2. When iTunes is started.

How to stop APSDaemon.exe from starting

  1. Pop open the task manager, find APSDaemon.exe and kill it.
  2. Go to Start > Run and type msconfig.
  3. Go to the Startup tab and find APSDaemon.exe. Uncheck it. This will stop the application from starting when your system does.
  4. Go to C:\Program Files\Common Files\Apple\Apple Application Support and rename APSDaemon.exe to something else (e.g. APSDaemon.I_dont_want_this_to_run). This will stop iTunes from being able to start it.

iTunes will still start and run normally but APSDaemon.exe will not. Result.