Load-Balancing the Build Farm with CruiseControl.NET

§ April 6, 2009 02:25 by beefarino |

Our CI system isn't terribly complicated, thankfully.  It evolved from a batch file on a single machine to a farm of VMs running CruiseControl.NET, fed by a single Subversion repository.  The triggering mechanism hasn't changed during this transition, though: it's a custom beast, consisting of a post-commit hook on our repository that logs revisions into a database, and a custom sourcecontrol task that polls the database for revisions that haven't been built yet.  It works fine and naturally creates a balanced build farm: several VMs can be configured to build the same project, and the database sourcecontrol task prevents more than one VM from building the same revision.

As well as it works, the system has some problems.  First, the post-commit hook relies heavily on SVN properties, which are a pain to maintain and make it impossible to simply "add a project to the build" without a lot of unnecessary configuration.  Moreover, the hook is starting to seriously hold up commits, sometimes by as much as 20 seconds.

Second, and more irritating, the system builds every individual commit.  By that I mean it builds each and every revision committed to the repository - which may not sound like a bad thing, except that it includes revisions that occurred before the project was added to the CI farm, all the way back to the revision that created the project or branch.  I have to manually fudge the database, adding fake build results to prevent those builds from occurring.  It's not hard, but it is a pain in the ass.  And with the team's heavy use of branching, I'm finding myself fudging more and more.

I'm really trying to move the system towards what "everyone else does," by which I mean triggering builds by polling source control for changes.  No more database, no more post-commit hook, no more SVN property configuration - just Subversion and CruiseControl.NET.  It would be easy enough to do: simply change our CC.NET project configurations to use the standard SVN sourcecontrol task.  The problem is that without the database, the farm is no longer automagically load-balancing - every VM in the farm would end up building the same source, which defeats the purpose of the farm.

I figured that I could recover the load-balancing if I had an "edge" server between Subversion and the build farm.  This server could monitor source control and, when necessary, trigger a build on one of the VMs in the build farm.  So instead of each farm server determining when a build should occur, a single edge server makes that decision.

CC.NET ships with the ability to split the build across machines - that is, for a build on one machine (like the edge server) to trigger a build on another machine (like a farm server); however, there is no load-balancing logic available.  So I made some of my own...

Edge Server CC.NET Plugin

The edge server plugin operates on a very simple algorithm:

  1. Get a list of farm servers from configuration;
  2. Determine which of farm servers are not currently building the project;
  3. Fail the build if no farm server is available for the project;
  4. Force a build of the project on the first available farm server.

If all you want is the project source code, here it is: ccnet.edgeserver.zip (607.87 kb)

Take a look at the configuration of the plugin; I think it will make the code easier to digest.

Edge Server Configuration

The edge server consists of little more than a source control trigger and a list of farm servers:

<cruisecontrol>
  <project name="MyProject">
    <triggers>
      <intervalTrigger seconds="10" />
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>svn://sourcecontrol/MyProject/Trunk</trunkUrl>
      <autoGetSource>false</autoGetSource>
    </sourcecontrol>
    <labeller type="lastChangeLabeller" prefix="MyProject_"/>
    <tasks>
      <farmBuild>
        <farmServers>
          <farmServer priority="1" uri="tcp://build-vm-1:21234/CruiseManager.rem" />
          <farmServer priority="2" uri="tcp://build-vm-2:21234/CruiseManager.rem" />
          <farmServer priority="3" uri="tcp://build-vm-3:21234/CruiseManager.rem" />
        </farmServers>
      </farmBuild>
    </tasks>
    <publishers>
      <nullTask />
    </publishers>
  </project>
</cruisecontrol>

When CC.NET is run with this configuration, it will monitor the Subversion repository for changes to the "MyProject" trunk (the <sourcecontrol> element); note that since autoGetSource is false, no checkout will occur.  The edge server never has a working copy of the source.

The load-balancing is configured in the <farmBuild> task; in this example, three farm servers are configured for "MyProject", with build-vm-1 having the highest priority for the build (meaning it will be used first when all three servers are available).  When a change is committed to the repository, the edge server will choose one of these servers based on its availability and priority, and then force it to build the project.

Farm Server Configuration

The farm server is configured just like a normal CC.NET build, with two key differences: first, it has no triggers; second, a remoteProjectLabeller is used to label the build.  Here's a sample configuration, with the mundane build tasks omitted for brevity:

<cruisecontrol>
  <project name="MyProject">
    
    <triggers/>
    <sourcecontrol type="svn">
      <trunkUrl>svn://sourcecontrol/MyProject/Trunk</trunkUrl>
      <autoGetSource>true</autoGetSource>
    </sourcecontrol>
    <labeller type="remoteProjectLabeller">
      <project>MyProject</project>
      <serverUri>tcp://edgeServer:21234/CruiseManager.rem</serverUri>
    </labeller>
    <tasks>
      <!--
              ... 
      -->
    </tasks>
    <publishers>
      <!--
              ... 
      -->
    </publishers>
  </project>
</cruisecontrol> 

Details to note here are:

  • the labeller points to the edge server to obtain the build label; this is necessary because every build on the farm server is forced, so a locally generated label would not include any source revision information;
  • the project name on the farm server matches exactly the project name on the edge server; this is a convention assumed by the plugin.

Source Code Overview

I need a FarmServer type to support the CC.NET configuration layer:

using System;
using System.Collections.Generic;
using System.Text;
using ThoughtWorks.CruiseControl.Core;
using ThoughtWorks.CruiseControl.Remote;
using Exortech.NetReflector;
using ThoughtWorks.CruiseControl.Core.Publishers;
using System.Collections;
using ThoughtWorks.CruiseControl.Core.Util;
namespace CCNET.EdgeServer
{
    [ReflectorType( "farmServer" )]
    public class FarmServer
    {
        [ReflectorProperty( "uri" )]
        public string Uri;
 
        [ReflectorProperty( "priority" )]
        public int Priority;
    }
}

No real surprises here.  Each FarmServer instance holds a URI to a CC.NET farm server and its priority in the balancing algorithm.

The real meat is in the FarmPublisher class:

using System;
using System.Collections.Generic;
using System.Text;
using ThoughtWorks.CruiseControl.Core;
using ThoughtWorks.CruiseControl.Remote;
using Exortech.NetReflector;
using ThoughtWorks.CruiseControl.Core.Publishers;
using System.Collections;
using ThoughtWorks.CruiseControl.Core.Util;
namespace CCNET.EdgeServer
{
    [ReflectorType( "farmBuild" )]
    public class FarmPublisher : ITask
    {
        ICruiseManagerFactory factory;
 
        [ReflectorProperty( "Name", Required = false )]
        public string EnforcerName;
 
        [ReflectorHash( "farmServers", "uri", Required = true )]
        public Hashtable FarmServers;
 
        public FarmPublisher() : this( new RemoteCruiseManagerFactory() ) { }
 
        public FarmPublisher( ICruiseManagerFactory factory )
        {
            this.factory = factory;
            this.EnforcerName = Environment.MachineName;
        }
 
        public void Run( IIntegrationResult result )
        {
            // build a list of available farm servers
            //  based off of the plugin configuration
            Dictionary<int, ICruiseManager> servers = new Dictionary<int, ICruiseManager>();
            FindAvailableFarmServers( result, servers );
 
            if( 0 == servers.Count )
            {
                Log.Info( "No servers are available for this project at this time" );
                result.Status = IntegrationStatus.Failure;
                return;
            }
 
            // sort the available servers by priority
            List<int> keys = new List<int>( servers.Keys );
            keys.Sort();
 
            // force a build on the server with the highest 
            //  priority
            ICruiseManager availableServer = servers[ keys[ 0 ] ];
            Log.Info( "forcing build on server ..." );
            availableServer.ForceBuild( result.ProjectName, EnforcerName );
        }
        ...

FarmPublisher is configured with the list of FarmServer objects from the farmServers element.  The Run method implements the simple load-balancing algorithm:

  1. a list of farm servers available to build the project is constructed (see FindAvailableFarmServers below);
  2. if no server is available to build the project, the edge server reports a build failure;
  3. the list of available farm servers is sorted by priority;
  4. the build is forced on the available farm server with the highest priority.

Determining a list of available farm servers is pretty straightforward:

void FindAvailableFarmServers( IIntegrationResult result, IDictionary<int, ICruiseManager> servers )
{
    // predicate to locate a server that isn't actively building 
    // the current project
    Predicate<ProjectStatus> predicate = delegate( ProjectStatus prj )
    {
        return IsServerAvailableToBuildProject( result, prj );
    };
 
    // check the status of each configured farm server
    foreach( FarmServer server in FarmServers.Values )
    {
        ICruiseManager manager = null;
        try
        {
            manager = ( ICruiseManager )factory.GetCruiseManager( server.Uri );
 
            // get a local copy of server's current project status snapshot
            List<ProjectStatus> projects = new List<ProjectStatus>( manager.GetProjectStatus() );
            if( null != projects.Find( predicate ) )
            {
                // add the farm server to the list of available servers, 
                //  keyed by its configured priority
                servers[ server.Priority ] = manager;
            }
        }
        catch( Exception e )
        {
            Log.Warning( e );
        }
    }
}

Available servers are saved in the servers dictionary, keyed by their configured priority.  The availability of each farm server listed in the task configuration is checked by obtaining the status of the farm server's projects, and passing each one to the IsServerAvailableToBuildProject method:

bool IsServerAvailableToBuildProject( IIntegrationResult result, ProjectStatus prj )
{
    if( null == prj || null == result )
    {
        return false;
    }
    bool status = (          
        // project name must match
        StringComparer.InvariantCultureIgnoreCase.Equals( result.ProjectName, prj.Name ) &&
 
        // integrator status must be "running"
        prj.Status == ProjectIntegratorState.Running &&                
 
        // build activity must be "sleeping"
        prj.Activity.IsSleeping()
    );
    return status;
} 

which simply returns true when:

  • the farm server configuration contains the project,
  • the project's integrator is currently running, and
  • the project isn't currently building.

Download

This code is basically spike-quality at this point.  I fully expect to throw it away in favor of something better (or to get my manager to splurge for TeamCity).  There's still a lot of stuff to do.  E.g., the algorithm assumes that a farm server is capable of building more than one project at a time - that is, a farm server busy building one project is still considered available to build another concurrently.  My assumption is that I'll manage this with the farm server priority configuration.  I'd like to leverage the queuing features available in CC.NET; however, I see no way of querying the queue status of a farm server in the CC.NET API.  But at least I can start disabling the post-commit hook on our repository.

The project contains the code, a test/demo, and just a few basic unit tests; I'll update the download if/when the project matures.  If you use it, or have any suggestions, please let me know in the comments of this post.  Enjoy!

ccnet.edgeserver.zip (607.87 kb)



Delegates and Native API Callbacks

§ March 27, 2009 02:07 by beefarino |

For one of my contracts I'm implementing a .NET layer over a native third-party SDK.  The SDK makes heavy use of function pointers, which means, in terms of p/invoke and interoperability, I've got a lot of delegates flying around.  I recently hit a rather nasty bug in my .NET layer - see if you can figure this one out.

Consider this truncated example derived from the SDK:

typedef void (__stdcall *PCALLBACK)( HANDLE deviceHandle, LONG32 deviceID );
// ...
API_CALL RESULT_TYPE __stdcall Open( HANDLE deviceHandle, PCALLBACK pfnCallback );

In a nutshell, when Open is called, the pfnCallback function pointer is registered, along with the handle.  The Open function returns almost immediately, and the native library invokes the callback function periodically in response to user interaction with a device.

Here is the C# that enables the Open method to be called from .NET:

public delegate void Callback( IntPtr deviceHandle, int deviceID );
// ...
[DllImport( "ExternalLibrary.dll")]
public static extern ResultCode Open( IntPtr deviceHandle, Callback callback );

And here it is in use:

ResultCode result = ExternalLibraryAPI.Open(
    deviceHandle,
    delegate( IntPtr handle, int deviceId )
    {
        // ...
    }
);
VerifyResult( result ); 

The call to Open succeeds, and the callback is invoked whenever the user pushes the appropriate button on the device.  Eventually though, a nasty exception is raised as the user continues to fiddle with the device. 

I will tell you that:

  • there is no context or stack information to the exception;
  • there is not a bug in the third-party SDK;
  • this post has all the information you need to find and fix the problem - i.e., I'm not hiding something from you.

So my questions to you are:

  1. What's the exception?
  2. How do you avoid it?

Leave your answers as comments.  I'll post the answer in a few days.


Automation Framework pt 6: PowerShell Integration for Free

§ March 11, 2009 03:47 by beefarino |

Now that I have a fluent interface hiding a lot of the complexities of my automation framework, I wanted to focus on getting the framework integrated with PowerShell.  My desire is to leverage all the features of PowerShell, including the command pipeline and existing functions.  After folding a few methods into PowerShell, I recognized the general pattern; I came up with a way to package the framework in a PowerShell Module that automagically generates wrapper functions around the fluent interfaces.  So moving forward, as the framework expands I don't need to do anything to get deep PowerShell integration.

I started out using the fluent interfaces directly:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );
$properties = @{ 
    address = "12 Main St";
    city = "Anywheretownvilleton";
    state = 'Texahomasippi';
    zip = '75023';
};
$executor.for( $player ).setProperties( $properties ).deposit( 500 );

which works, but becomes very cumbersome when I want to process multiple players or mix in existing PowerShell commands:

$players | %{
    $executor.for( $_ ).setProperties( $properties ).deposit( 500 );
    $_; # put player back in pipeline
} | export-clixml players.xml; 

What I really want is to make the framework look more like it was designed for PowerShell.  Or perhaps a better way to say it: I want to use PowerShell to drive my system, but I don't want to do a lot of work to get there.  I started tinkering, implementing a few of the methods from the AccountCommands fluent interface to see what it would take to use the methods in a pipeline.  In order to do something like this:

$players | set-properties $properties | 
    new-deposit 500 | 
    export-clixml players.xml; 

I need these functions:

function new-deposit
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [int]
        $amount
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).deposit( $amount );
        $script:accountCommands.Account;
    }
}
function set-properties
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [Hashtable]
        $properties
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).setProperties( $properties );
        $script:accountCommands.Account;        
    }
} 

Once I had a few of these functions under my belt, the pattern became evident.  Each method gets its own PowerShell wrapper function.  Each PowerShell wrapper function can be reduced to a matter of:

  • accepting an Account reference from the pipeline;
  • accepting any parameters needed by the AccountCommands method;
  • creating an AccountCommands instance around the Account reference;
  • calling the method on the AccountCommands instance;
  • returning the Account object back to the pipeline.

It was obvious that these wrappers would consist of mostly boilerplate, and that they could simply be generated if I had a little extra metadata available on the fluent command objects.  I defined three simple attributes to this end (their definitions are sketched after this list):
  • the CommandPipelineAttribute identifies objects as candidates for PowerShell integration;
  • the PipelineInputAttribute marks the property of the object that will be used as pipeline input and output;
  • the CommandBindingAttribute defines the verb-noun name of the PowerShell wrapper function.
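
The definitions themselves are trivial; here's a minimal sketch, assuming Verb and Noun are simple enums (both trimmed to just the values used in these examples):

using System;
namespace PokerRoomShell.Commands.Framework
{
    // assumed enums, trimmed to the values used below
    public enum Verb { Find, New, Set }
    public enum Noun { Player, Deposit, Properties, Pin }

    // flags a fluent command object as a candidate for PowerShell wrapper generation
    [AttributeUsage( AttributeTargets.Class )]
    public class CommandPipelineAttribute : Attribute { }

    // marks the property used as pipeline input and output
    [AttributeUsage( AttributeTargets.Property )]
    public class PipelineInputAttribute : Attribute { }

    // binds a method to a verb-noun PowerShell function name
    [AttributeUsage( AttributeTargets.Method )]
    public class CommandBindingAttribute : Attribute
    {
        public CommandBindingAttribute( Verb verb, Noun noun )
        {
            Verb = verb;
            Noun = noun;
        }
        public Verb Verb { get; private set; }
        public Noun Noun { get; private set; }
    }
}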

The attributes are markers I can place in my fluent command objects to indicate how the object methods should be wrapped in PowerShell:

[CommandPipeline]
public class AccountCommands
{        
    // ...
    [PipelineInput]
    public Account Account
    {
        get;
        set;
    }
    // commands
    [CommandBinding( Verb.Find, Noun.Player )]
    public AccountCommands Lookup()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Player )]
    public AccountCommands Create()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Deposit )]
    public AccountCommands Deposit( decimal amount )
    {
        // ...
    }
    [CommandBinding( Verb.Set, Noun.Properties )]
    public AccountCommands SetProperties( Hashtable properties )
    {
        // ...
    }
    // ...
} 

With these markers, generating PowerShell wrappers is a simple matter of snooping out this metadata and filling in the blanks of function template.  After a few minutes of hacking I had a working function to accomplish the task:

function generate-function
{
    [CmdletBinding()]
    param(
         [Parameter(Position=0)]
         [system.reflection.assembly] $assembly
    )
    process
    {
        # find all types marked with the CommandPipeline attribute
        foreach( $type in ( get-markedTypes $assembly ) )
        {
            # find all methods marked with the CommandBinding attribute
            foreach( $method in ( get-markedMethods $type ) )
            {
                # create a script block wrapping the method
                $method.ScriptBlock = create-wrapperScriptBlock $method;
                $method; # emit the method data (a 'return' here would stop after the first method)
            }                     
        }
    }
}

In a nutshell, generate-function finds all public types marked with the CommandPipelineAttribute, then creates wrapper ScriptBlocks around the methods on those types marked with the CommandBindingAttribute (the details are described below).  I can use this to create the PowerShell wrapper functions dynamically, using the new-item cmdlet against the built-in PowerShell Function provider:

foreach( $script:m in generate-function $assemblyName )
{
    # only create functions that don't exist yet
    # this will allow for command proxies if necessary 
    if( !( test-path $script:m.path ) )
    { 
       ni -path $script:m.path -value ( iex $script:m.ScriptBlock ) -name $script:m.Name;
    }
}

Now when my automation framework expands, I need to do zero work to update the PowerShell layer and get the deep PowerShell integration I want.  Kick ass!

Example Generated Function

Here is a PowerShell session that demonstrates the function generation, and shows what the resulting function looks like:

PS >gi function:set-pin
Get-Item : Cannot find path 'Function:\set-pin' because it does not exist.
At line:1 char:3
+ gi <<<<  function:set-pin
    + CategoryInfo          : ObjectNotFound: (Function:\set-pin:String) [Get-Item], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemCommand
    
PS >generate-function "pokerroomshell.commands" | 
    ? { !( test-path $_.path ) } | 
    % { ni -path $_.Path -value ( iex $_.ScriptBlock ) -name $_.Name }
    
PS >(gi function:set-pin).definition
    [CmdletBinding()]
    param( 
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $user,
        [Parameter(Position=0,Mandatory=$true)]
        [System.String]
        $newPin
    )
process {
        $script:ctx = $executor.for( $user ).ResetPin( $newPin );
        $user;    
}

Gory Details

You really want to see the code?  You asked for it....

Finding types marked with the CommandPipelineAttribute is simple:

# find all types marked with the CommandPipeline attribute
function get-markedTypes( $asm )
{
    $asm.getExportedTypes() |
        ? { $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandPipelineAttribute], $true ) };
}

Finding the methods on those types marked with the CommandBindingAttribute is just as easy; however, to simplify the ScriptBlock template processing, I preprocess each method and build up a little data structure with my necessities:

# find all methods marked with the CommandBinding attribute
function get-markedMethods( $type )
{
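    # note: 'coalesce' (used below) is a small helper not shown in this post;
    #  it appears to return the first non-null item from the pipeline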
    # find the property to use as pipeline input / command output
    $pipelineInput =  $type.GetProperties() | ? { 
        $_.getCustomAttributes( [pokerRoomShell.commands.framework.pipelineInputAttribute], $true )
    } | coalesce;
    # find methods marked with the CommandBinding attribute
    $type.GetMethods() | % { 
        $attr = $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandBindingAttribute], $true ) | coalesce;
        if( $attr )
        {
            # build a hash table of the data needed to define the wrapper function
            @{
                Method = $_;
                Binding = $attr;
                Input = @{ 
                    Name = $pipelineInput.Name;
                    Type = $pipelineInput.propertyType;
                };
                Parameters = $_.GetParameters();
                Name = "" + $attr.verb + "-" + $attr.noun;
                Path = "function:" + $attr.verb + "-" + $attr.noun;                
            };
        }
    }
}

And then comes the real nut: the function that creates the scriptblock; this looks a bit ugly - lots of escaped $'s and evaluation $()'s and here-strings, but it works:

# create a script block wrapping the method 
function create-wrapperScriptBlock( $method )
{   
    $parameterPosition = 0
    
    # collection of parameter declarations
    $params = new-object system.collections.arraylist;
   
   # define the pipeline command input parameter
    $params.add(
@"
        [Parameter(Mandatory=`$true,ValueFromPipeline=`$true)]
        [$($method.input.type.fullName)]
        `$$($method.input.name)
"@
    ) | out-null; # eat the output of add()
   
    #add any parameters required by the method being wrapped
    $params.addRange(
        @( $method.parameters | 
            %{         
@"
        [Parameter(Position=$parameterPosition,Mandatory=`$true)]
        [$($_.ParameterType.fullName)]
        `$$($_.Name)
"@;
            ++$parameterPosition;
            } 
        ) 
    );
   
    # join the $params collection to a single string   
    $params = $params -join ",`n";        
    # define the method call arguments    
    $callArgs = ( $method.parameters | %{      
        "`$$($_.Name)";
        } 
    ) -join ", ";   
# return the wrapper script block as a string
@"
{
    [CmdletBinding()]
    param( 
        $($params.Trim())
    )
    
    process
    {        
        `$script:ctx = `$executor.for( `$$($method.input.name) ).$($method.Method.Name)( $($callArgs.trim()) );
        `$$($method.Input.Name);        
    }
}
"@;
}

There's quite a bit going on in there, but none of it is rocket science.  First, a list of function parameters is built, with the pipeline input parameter loaded first followed by any arguments required by the automation framework method.  This list is joined into a flat string and surrounded by a param() declaration.  A second list of parameters - those that will be passed to the automation framework method - is built and flattened, then wrapped in a call to the actual framework method.

The resulting scriptblock makes a few assumptions, most notably the existence of a global (or module-scoped) $executor variable that is declared like so:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );

But those little static details can be wrapped up in a module. 



Extension Methods for Deferred Log4Net Message Formatting

§ March 9, 2009 16:07 by beefarino |

In a recent post I described some initial impressions of the MS Live Labs Simple Logging Framework (SLF).  One of the things I really like about SLF is the use of lambdas as a way to defer execution of potentially slow or unchecked message formatting code.  I also railed on the idea of creating a completely new effort when log4net is an active open-source project.

So, here I am putting my money where my mouth is: I've created some extension methods for instances of ILog that accept lambdas - now if you use log4net in a C# 3.0+ environment, you can defer message formatting as easily as you can in SLF.  The code is quite simple:

using System;
using log4net;
namespace Log4NetExtensions
{
    public static class Log4NetExtensionMethods
    {
        public static void Debug( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsDebugEnabled )
            {
                log.Debug( formattingCallback() );
            }
        }
        public static void Info( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsInfoEnabled )
            {
                log.Info( formattingCallback() );
            }
        }
        public static void Warn( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsWarnEnabled )
            {
                log.Warn( formattingCallback() );
            }
        }
        public static void Error( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsErrorEnabled )
            {
                log.Error( formattingCallback() );
            }
        }
        public static void Fatal( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsFatalEnabled )
            {
                log.Fatal( formattingCallback() );
            }
        }
     }
} 

Each method of the ILog interface gets its own override in the form of an extension method.  The method accepts a Func<string> that allows you to capture log message formatting in a lambda expression, and hence defer its execution until absolutely necessary (or avoid it altogether).
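
For example, an expensive diagnostic dump can be deferred so that the formatting work only happens when DEBUG logging is enabled - a quick sketch, where BuildExpensiveReport stands in for any hypothetical costly call:

// deferred: BuildExpensiveReport() never executes unless DEBUG is enabled
log.Debug( () => "diagnostic dump: " + BuildExpensiveReport() );

Here are some quick and dirty unit tests to demonstrate the basic use and functionality: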

using System;
using NUnit.Framework;
namespace Log4NetExtensions.Tests
{
    [TestFixture]
    public class Log4NetExtensionTests
    {
        log4net.ILog Log;
        log4net.Appender.MemoryAppender appender;
        [TestFixtureSetUp]
        public void FixtureSetUp()
        {
            Log = log4net.LogManager.GetLogger( System.Reflection.MethodBase.GetCurrentMethod().DeclaringType );
            appender = new log4net.Appender.MemoryAppender 
            {
                Name = "Memory Appender",
                Threshold = log4net.Core.Level.Debug
            };
            log4net.Config.BasicConfigurator.Configure( appender );                        
        }
        [SetUp]
        public void TestSetUp()
        {
            appender.Clear();            
            log4net.Core.LoggingEvent[] events = appender.GetEvents();
            Assert.That( 0 == events.Length, "failed to clear appender of log events" );
            Log.Logger.Repository.Threshold = log4net.Core.Level.Debug;
        }
        [Test]
        public void LogViaLambdaTest()
        {
            Log.Debug( () => "Hello World!" );
            log4net.Core.LoggingEvent[] events = appender.GetEvents();            
            
            Assert.That( 1 == events.Length, "failed to log via lambda" );
            Assert.That( 
                StringComparer.CurrentCulture.Equals( 
                    "Hello World!", 
                    events[ 0 ].RenderedMessage                   
                ),
                "rendered message does not match (via lambda)"
            );
        }
        [Test]
        public void LogWithLocalVariableReference()
        {
            string value = "World!";
            Log.Debug( () => "Hello " + value );
            log4net.Core.LoggingEvent[] events = appender.GetEvents();
            Assert.That( 1 == events.Length, "failed to log with local variable reference" );
            Assert.That(
                StringComparer.CurrentCulture.Equals(
                    "Hello World!",
                    events[ 0 ].RenderedMessage
                ),
                "rendered message does not match (local variable reference)"
            );
        }
        [Test]
        public void LambdaIsNotEvaluatedAtInactiveLogLevel()
        {            
            Log.Logger.Repository.Threshold = log4net.Core.Level.Error;         
            bool evaluated = false;
            Log.Debug( () => 
                { 
                    evaluated = true;
                    return "Hello World!";
                }
            );
            Assert.That( ! evaluated, "lambda was evaluated at inactive log level" );
            Log.Error( () =>
                {
                    evaluated = true;
                    return "Hello World!";
                }
            );
            Assert.That( evaluated, "lambda was not evaluated at active log level" );
        }
    }
}

Play with it, see if you like it.  I will too.  Any suggestions or comments are encouraged.  If there is enough interest I'll see about submitting the extensions to the Apache project.