Automation Framework pt 6: PowerShell Integration for Free

§ March 11, 2009 03:47 by beefarino |

Now that I have a fluent interface hiding a lot of the complexity of my automation framework, I want to focus on getting the framework integrated with PowerShell.  My goal is to leverage all the features of PowerShell, including the command pipeline and existing functions.  After folding a few methods into PowerShell, I recognized the general pattern; I came up with a way to package the framework in a PowerShell module that automagically generates wrapper functions around the fluent interfaces.  So moving forward, as the framework expands, I don't need to do anything to get deep PowerShell integration.

I started out using the fluent interfaces directly:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );
$properties = @{ 
    address = "12 Main St";
    city = "Anywheretownvilleton";
    state = 'Texahomasippi';
    zip = '75023';
};
[PokerRoomShell.Commands.Fluency.Execute]::Using( $context )
    .for( $player )
    .setProperties( $properties )
    .deposit( 500 ); 

which works, but becomes very cumbersome when I want to process multiple players or mix in existing PowerShell commands:

$players | %{
    [PokerRoomShell.Commands.Fluency.Execute]::Using( $context )
        .for( $_ )
        .setProperties( $properties )
        .deposit( 500 );
    $_; # put player back in pipeline
} | export-clixml players.xml; 

What I really want is to make the framework look more like it was designed for PowerShell.  Or perhaps a better way to say it: I want to use PowerShell to drive my system, but I don't want to do a lot of work to get there.  I started tinkering, implementing a few of the methods from the AccountCommands fluent interface to see what it would take to use the methods in a pipeline.  In order to do something like this:

$players | set-properties $properties | 
    new-deposit 500 | 
    export-clixml players.xml; 

I need these functions:

function new-deposit
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [int]
        $amount
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).deposit( $amount );
        $script:accountCommands.Account;
    }
}
function set-properties
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [Hashtable]
        $properties
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).setProperties( $properties );
        $script:accountCommands.Account;        
    }
} 

Once I had a few of these functions under my belt, the pattern became evident.  Each method gets its own PowerShell wrapper function, and each wrapper reduces to a matter of:

  • accepting an Account reference from the pipeline;
  • accepting any parameters needed by the AccountCommands method;
  • creating an AccountCommands instance around the Account reference;
  • calling the method on the AccountCommands instance;
  • returning the Account object back to the pipeline.

It was obvious that these wrappers would consist mostly of boilerplate, and that they could simply be generated if I had a little extra metadata available on the fluent command objects.  I defined three simple attributes to this end:
  • the CommandPipelineAttribute identifies objects as candidates for PowerShell integration;
  • the PipelineInputAttribute marks the property of the object that will be used as pipeline input and output;
  • the CommandBindingAttribute defines the verb-noun name of the PowerShell wrapper function.

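The attribute definitions themselves are nothing fancy - here is a minimal sketch (an approximation, with the Verb and Noun enums trimmed to just the members this post uses):

using System;
namespace PokerRoomShell.Commands.Framework
{
    public enum Verb { Find, New, Set }
    public enum Noun { Player, Deposit, Properties, Pin }

    // marks a class as a candidate for PowerShell wrapper generation
    [AttributeUsage( AttributeTargets.Class )]
    public sealed class CommandPipelineAttribute : Attribute { }

    // marks the property used as pipeline input and output
    [AttributeUsage( AttributeTargets.Property )]
    public sealed class PipelineInputAttribute : Attribute { }

    // binds a method to the verb-noun name of its PowerShell wrapper
    [AttributeUsage( AttributeTargets.Method )]
    public sealed class CommandBindingAttribute : Attribute
    {
        public CommandBindingAttribute( Verb verb, Noun noun )
        {
            Verb = verb;
            Noun = noun;
        }
        public Verb Verb { get; private set; }
        public Noun Noun { get; private set; }
    }
}
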
The attributes are markers I can place in my fluent command objects to indicate how the object methods should be wrapped in PowerShell:

[CommandPipeline]
public class AccountCommands
{        
    // ...
    [PipelineInput]
    public Account Account
    {
        get;
        set;
    }
    // commands
    [CommandBinding( Verb.Find, Noun.Player )]
    public AccountCommands Lookup()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Player )]
    public AccountCommands Create()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Deposit )]
    public AccountCommands Deposit( decimal amount )
    {
        // ...
    }
    [CommandBinding( Verb.Set, Noun.Properties )]
    public AccountCommands SetProperties( Hashtable properties )
    {
        // ...
    }
    // ...
} 

With these markers, generating PowerShell wrappers is a simple matter of snooping out the metadata and filling in the blanks of a function template.  After a few minutes of hacking I had a working function to accomplish the task:

function generate-function
{
    [CmdletBinding()]
    param(
         [Parameter(Position=0)]
         [system.reflection.assembly] $assembly
    )
    process
    {
        # find all types marked with the CommandPipeline attribute
        foreach( $type in ( get-markedTypes $assembly ) )
        {
            # find all methods marked with the CommandBinding attribute
            foreach( $method in ( get-markedMethods $type ) )
            {
                # create a script block wrapping the method
                $method.ScriptBlock = create-wrapperScriptBlock $method;
                $method; # emit the method data - a return here would stop after the first method
            }                     
        }
    }
}

In a nutshell, generate-function finds all public types marked with the CommandPipelineAttribute, then creates wrapper ScriptBlocks around the methods on those types marked with the CommandBindingAttribute (the details are described below).  I can use this to create the PowerShell wrapper functions dynamically, using the new-item cmdlet against the built-in PowerShell Function provider:

foreach( $script:m in generate-function $assemblyName )
{
    # only create functions that don't exist yet
    # this will allow for command proxies if necessary 
    if( !( test-path $script:m.Path ) )
    { 
       ni -path function: -name $script:m.Name -value ( iex $script:m.ScriptBlock );
    }
}

Now when my automation framework expands, I need to do zero work to update the PowerShell layer and get the deep PowerShell integration I want.  Kick ass!

Example Generated Function

Here is a PowerShell session that demonstrates the function generation, and shows what the resulting function looks like:

PS >gi function:set-pin
Get-Item : Cannot find path 'Function:\set-pin' because it does not exist.
At line:1 char:3
+ gi <<<<  function:set-pin
    + CategoryInfo          : ObjectNotFound: (Function:\set-pin:String) [Get-Item], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemCommand
    
PS >generate-function "pokerroomshell.commands" | 
    ? { !( test-path $_.path ) } | 
    % { ni -path function: -name $_.Name -value ( iex $_.ScriptBlock ) }
    
PS >(gi function:set-pin).definition
    [CmdletBinding()]
    param( 
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $user,
        [Parameter(Position=0,Mandatory=$true)]
        [System.String]
        $newPin
    )
process {
        $script:ctx = $executor.for( $user ).ResetPin( $newPin );
        $user;    
}

Gory Details

You really want to see the code?  You asked for it....

Finding types marked with the CommandPipelineAttribute is simple:

# find all types marked with the CommandPipeline attribute
function get-markedTypes( $asm )
{
    $asm.getExportedTypes() |
        ? { $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandPipelineAttribute], $true ) };
}

Finding the methods on those types marked with the CommandBindingAttribute is just as easy; however, to simplify the ScriptBlock template processing, I preprocess each method and build up a little data structure with my necessities:

# find all methods marked with the CommandBinding attribute
function get-markedMethods( $type )
{
    # find the property to use as pipeline input / command output
    $pipelineInput =  $type.GetProperties() | ? { 
        $_.getCustomAttributes( [pokerRoomShell.commands.framework.pipelineInputAttribute], $true )
    } | coalesce;
    # find methods marked with the CommandBinding attribute
    $type.GetMethods() | % { 
        $attr = $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandBindingAttribute], $true ) | coalesce;
    # build a hash table of method data for the scriptblock template
        if( $attr )
        {                      
            # return a hash table of data needed to define the wrapper function
            @{
                Method = $_;
                Binding = $attr;
                Input = @{ 
                    Name = $pipelineInput.Name;
                    Type = $pipelineInput.propertyType;
                };
                Parameters = $_.GetParameters();
                Name = "" + $attr.verb + "-" + $attr.noun;
                Path = "function:" + $attr.verb + "-" + $attr.noun;                
            };
        }
    }
}

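The coalesce filter above isn't a built-in; it's a little helper that grabs the first non-null item from the pipeline.  A minimal sketch of one way to write it:

# emit the first non-null item from the pipeline
function coalesce
{
    begin { $found = $false; }
    process
    {
        if( -not $found -and $null -ne $_ )
        {
            $found = $true;
            $_;
        }
    }
}
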
And then comes the real nut: the function that creates the scriptblock.  It looks a bit ugly - lots of escaped $'s, subexpression $()'s, and here-strings - but it works:

# create a script block wrapping the method 
function create-wrapperScriptBlock( $method )
{   
    $parameterPosition = 0
    
    # collection of parameter declarations
    $params = new-object system.collections.arraylist;
   
   # define the pipeline command input parameter
    $params.add(
@"
        [Parameter(Mandatory=`$true,ValueFromPipeline=`$true)]
        [$($method.input.type.fullName)]
        `$$($method.input.name)
"@
    ) | out-null; # eat the output of add()
   
    #add any parameters required by the method being wrapped
    $params.addRange(
        @( $method.parameters | 
            %{         
@"
        [Parameter(Position=$parameterPosition,Mandatory=`$true)]
        [$($_.ParameterType.fullName)]
        `$$($_.Name)
"@;
            ++$parameterPosition;
            } 
        ) 
    );
   
    # join the $params collection to a single string   
    $params = $params -join ",`n";        
    # define the method call arguments    
    $callArgs = ( $method.parameters | %{      
        "`$$($_.Name)";
        } 
    ) -join ", ";   
# return the wrapper script block as a string
@"
{
    [CmdletBinding()]
    param( 
        $($params.Trim())
    )
    
    process
    {        
        `$script:ctx = `$executor.for( `$$($method.input.name) ).$($method.Method.Name)( $($callArgs.trim()) );
        `$$($method.Input.Name);        
    }
}
"@;
}

There's quite a bit going on in there, but none of it is rocket science.  First, a list of function parameters is built, with the pipeline input parameter loaded first followed by any arguments required by the automation framework method.  This list is joined into a flat string and surrounded by a param() declaration.  A second list of parameters - those that will be passed to the automation framework method - is built and flattened, then wrapped in a call to the actual framework method.

The resulting scriptblock makes a few assumptions, most notably the existence of a global (or module-scoped) $executor variable that is declared like so:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );

But those little static details can be wrapped up in a module. 
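
For instance, a module file - call it PokerRoomShell.psm1 (a hypothetical name; this sketch assumes generate-function and its helpers are defined in the same module) - can do the setup and run the generation at import time:

# PokerRoomShell.psm1 - a sketch of the module wrapper

# module-scoped execution context shared by every generated wrapper
$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );

# generate and register the wrapper functions when the module is imported
$assembly = [PokerRoomShell.Commands.Framework.Account].Assembly;
foreach( $script:m in generate-function $assembly )
{
    # only create functions that don't exist yet
    if( !( test-path $script:m.Path ) )
    {
        ni -path function: -name $script:m.Name -value ( iex $script:m.ScriptBlock ) | out-null;
    }
}

# export the generated verb-noun functions to the caller
export-modulemember -function *-*;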



Extension Methods for Deferred Log4Net Message Formatting

§ March 9, 2009 16:07 by beefarino |

In a recent post I described some initial impressions of the MS Live Labs Simple Logging Framework (SLF).  One of the things I really like about SLF is the use of lambdas as a way to defer execution of potentially slow or unchecked message formatting code.  I also railed on the idea of creating a completely new effort when log4net is an active open-source project.

So, here I am putting my money where my mouth is: I've created some extension methods for instances of ILog that accept lambdas - now if you use log4net in a C# 3.0+ environment, you can defer message formatting as easily as you can in SLF.  The code is quite simple:

using System;
using log4net;
namespace Log4NetExtensions
{
    public static class Log4NetExtensionMethods
    {
        public static void Debug( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsDebugEnabled )
            {
                log.Debug( formattingCallback() );
            }
        }
        public static void Info( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsInfoEnabled )
            {
                log.Info( formattingCallback() );
            }
        }
        public static void Warn( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsWarnEnabled )
            {
                log.Warn( formattingCallback() );
            }
        }
        public static void Error( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsErrorEnabled )
            {
                log.Error( formattingCallback() );
            }
        }
        public static void Fatal( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsFatalEnabled )
            {
                log.Fatal( formattingCallback() );
            }
        }
     }
} 

Each method of the ILog interface gets its own override in the form of an extension method.  The override accepts a Func<string> that allows you to capture log message formatting in a lambda expression, and hence defer its execution until absolutely necessary (or avoid it altogether).
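
For example, an expensive dump of program state costs nothing unless debug logging is actually enabled.  A quick usage sketch (Worker and AVeryExpensiveOperation are placeholders for your own types):

using log4net;
using Log4NetExtensions;

public class Worker
{
    static readonly ILog Log = LogManager.GetLogger( typeof( Worker ) );

    string AVeryExpensiveOperation()
    {
        System.Threading.Thread.Sleep( 5000 ); // stand-in for real work
        return "42";
    }

    public void Process()
    {
        // the lambda body executes only when debug logging is enabled
        Log.Debug( () => "Result: [" + AVeryExpensiveOperation() + "]" );
    }
}

Here are some quick and dirty unit tests to demonstrate the basic use and functionality: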

using System;
using NUnit.Framework;
namespace Log4NetExtensions.Tests
{
    [TestFixture]
    public class Log4NetExtensionTests
    {
        log4net.ILog Log;
        log4net.Appender.MemoryAppender appender;
        [TestFixtureSetUp]
        public void FixtureSetUp()
        {
            Log = log4net.LogManager.GetLogger( System.Reflection.MethodBase.GetCurrentMethod().DeclaringType );
            appender = new log4net.Appender.MemoryAppender 
            {
                Name = "Memory Appender",
                Threshold = log4net.Core.Level.Debug
            };
            log4net.Config.BasicConfigurator.Configure( appender );                        
        }
        [SetUp]
        public void TestSetUp()
        {
            appender.Clear();            
            log4net.Core.LoggingEvent[] events = appender.GetEvents();
            Assert.That( 0 == events.Length, "failed to clear appender of log events" );
            Log.Logger.Repository.Threshold = log4net.Core.Level.Debug;
        }
        [Test]
        public void LogViaLambdaTest()
        {
            Log.Debug( () => "Hello World!" );
            log4net.Core.LoggingEvent[] events = appender.GetEvents();            
            
            Assert.That( 1 == events.Length, "failed to log via lambda" );
            Assert.That( 
                StringComparer.CurrentCulture.Equals( 
                    "Hello World!", 
                    events[ 0 ].RenderedMessage                   
                ),
                "rendered message does not match (via lambda)"
            );
        }
        [Test]
        public void LogWithLocalVariableReference()
        {
            string value = "World!";
            Log.Debug( () => "Hello " + value );
            log4net.Core.LoggingEvent[] events = appender.GetEvents();
            Assert.That( 1 == events.Length, "failed to log with local variable reference" );
            Assert.That(
                StringComparer.CurrentCulture.Equals(
                    "Hello World!",
                    events[ 0 ].RenderedMessage
                ),
                "rendered message does not match (local variable reference)"
            );
        }
        [Test]
        public void LambdaIsNotEvaluatedAtInactiveLogLevel()
        {            
            Log.Logger.Repository.Threshold = log4net.Core.Level.Error;         
            bool evaluated = false;
            Log.Debug( () => 
                { 
                    evaluated = true;
                    return "Hello World!";
                }
            );
            Assert.That( ! evaluated, "lambda was evaluated at inactive log level" );
            Log.Error( () =>
                {
                    evaluated = true;
                    return "Hello World!";
                }
            );
            Assert.That( evaluated, "lambda was not evaluated at active log level" );
        }
    }
}

Play with it, see if you like it.  I will too.  Any suggestions or comments are encouraged.  If there is enough interest I'll see about submitting the extensions to the Apache project.



A Look at the Live Labs Simple Logging Framework

§ March 6, 2009 05:47 by beefarino |

Earlier this week Scott Hanselman tweeted a link to the MS Live Labs Simple Logging Framework (SLF).  I thought I'd check it out and post some comparisons to log4net for posterity.  Note this isn't a performance comparison, just a look at how the frameworks are used.

Spoiler: There are some neat ideas in here.  Check out SLF if you're new to logging, but my guess is that you'll quickly outgrow its capabilities.  If you're already logging, I don't see any reason to switch from more mature frameworks anytime soon.

A Quick Comparison

SLF is remarkably similar to log4net in its design.  SLF uses the concepts of appenders (SLF calls them "logs"), loggers (SLF calls them "loggers" too), and logging levels just like log4net.  In fact, there was virtually no change when porting my "hello world" log4net example to SLF:

using System;
using LiveLabs.Logging;
namespace SimpleLoggingFrameworkExample
{
    class Program
    {
        static void Main( string[] args )
        {
            Logger Log = new Logger( typeof( Program ) );
            Log.Level = LogLevel.Debug;            
            
            Log.Debug( "Hello World!" );  
            Log.Inform( "I'm a simple log4net tutorial." );  
            Log.Warn( "... better be careful ..." );  
            Log.Error( "ruh-roh: an error occurred" );  
            Log.Fatal( "OMG we're dooooooomed!" );
            
            Console.ReadLine();
        }
    }
}

The most annoying change (and the cause of a failed build) was the rename of log4net's ILog.Info method to SLF's Logger.Inform.  Seems silly to me to change that one method name when the rest of the interface is identical....

Simple Formatting Control

SLF defaults to outputting to the process STDERR stream at a log level of Inform.  The Log.Level = LogLevel.Debug line in the example drops the level to Debug so we can see all of the output, which is formatted rather heinously IMO:

2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|Hello World!
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|I'm a simple log4net tutorial.
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|...better be careful ...
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|ruh-roh: an error occurred
2009-03-06T20:49:57|   |No Thread Name|SimpleLoggingFrameworkExample.Program|OMG we're dooooooomed! 

Fortunately SLF provides a straightforward way to alter the message format. Every log object has a MessageFormatter property that can be set to a delegate to format messages for the log. Adding a bit of code to the example:

            Log.Level = LogLevel.Debug;                        
            Log.MessageFormatter = ( msg, log, level ) => String.Format( "{0} [{1}]: {2}", log, level, msg );
            Log.Debug( "Hello World!" );  

makes the output far more readable:

SimpleLoggingFrameworkExample.Program [Debug]: Hello World!
SimpleLoggingFrameworkExample.Program [Inform]: I'm a simple log4net tutorial.
SimpleLoggingFrameworkExample.Program [Warn]: ... better be careful ...
SimpleLoggingFrameworkExample.Program [Error]: ruh-roh: an error occurred
SimpleLoggingFrameworkExample.Program [Fatal]: OMG we're dooooooomed! 

A similar callback exists to format exception objects.  It is certainly easier to achieve this level of formatting control in SLF than it is in log4net, where you have to define custom formatter objects and layout patterns in your configuration. 

Configuration

SLF doesn't require an explicit call to initialize it with a configuration the way log4net does.  As I've pointed out, SLF defaults to logging to STDERR at the Inform logging level.

If you do want some control, SLF does offer a Settings class that provides programmatic access to configure defaults and specifics for any of the loggers you've created:

Settings.Default.DefaultLogLevel = LogLevel.Debug; 

However, your options are limited.  You can set a default log level and a default log for all logger objects, and you can register message and exception formatters.  One glaring omission from the configuration space is the ability to configure default message and exception formatters for all loggers.  Looks like you're stuck setting them individually for now.

Language Feature Support

Another area where SLF improves on log4net is in the use of newer C# language features (log4net is based on the 2.0 version of the language).  Every log method in SLF accepts a lambda expression as an argument:

Log.Debug( () => {
        IDictionary envs = Environment.GetEnvironmentVariables();
        return "Current Environment: " + String.Join( ";", 
            (
                from string k in envs.Keys
                select ( k + "=" + envs[ k ].ToString() )
            ).ToArray() 
        );
    }
);

This is an excellent way to isolate complex logging logic, and to avoid expensive calls to obtain log message formatting arguments when no logging will actually take place; e.g.:

Log.Debug( () => 
    String.Format( "Result: [{0}]", obj.AVeryExpensiveOperation() ) ); 

To achieve the same effect in log4net, you have to explicitly wrap the code in a check of current log level:

if( Log.IsDebugEnabled )
{
    Log.DebugFormat( "Result: [{0}]", obj.AVeryExpensiveOperation() );
}

Personally, I prefer the former.  

Appender Options

This is where log4net beats SLF into a pulp.  log4net ships with a cornucopia of appender options, including the console, files, the event log, the debugger, the .NET trace subsystem, MSMQ, remoting, network endpoints, and various databases.  SLF ships with basically the equivalent of the log4net ConsoleAppender, FileAppender, and RollingFileAppender.

In addition, SLF offers no built-in decoupling of the log formatting from your logging code the way log4net does.  I've shown in this post how easily you can override the message or exception format on an SLF logger object in your code, but consider also that there is no way to change that format without changing your code.  E.g., if you someday decide to move from text to an XML log, you have to touch your code.  The log4net formatting configuration can be isolated in app.config, so you can change it without touching your code.
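
For example, a typical log4net app.config keeps the layout entirely outside the code - a representative configuration (not specific to any project):

<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <log4net>
    <appender name="Console" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <!-- change the message format here; no recompile required -->
        <conversionPattern value="%logger [%level]: %message%newline" />
      </layout>
    </appender>
    <root>
      <level value="DEBUG" />
      <appender-ref ref="Console" />
    </root>
  </log4net>
</configuration>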

Summary

To sum up, the major differences between log4net and SLF are:

  • log4net offers far more appender options and configurability.
  • log4net has a larger user-base and more maturity.
  • SLF provides easier access to simple formatting options for messages and exceptions.
  • SLF supports the latest language features, while log4net does not.

I'll be honest - I like some of the ideas I see in the SLF project, but I really can't figure out the impetus for a new project when a mature and open project already exists.  There is this blurb from the project home page on CodePlex:

Existing logging libraries are either too feature-light or too complex. That means you might end up spending as much time debugging your logging code as you do your application code. Or you may not even know that your “log” to database is even working until it’s too late. 

Yes, SLF makes a few things easier, but on the whole log4net offers far more features for the same effort.  And I can't say I've encountered these difficulties with log4net - I don't consider log4net complex (in comparison to, say, the MS Enterprise Logging Framework), but that's probably because I've been using log4net for years.  So let's say for argument's sake that log4net is overly complex and difficult to use.  The question at the front of my mind is this:

Why start over when you can openly contribute to the log4net project and make it better?

There is nothing earth-shaking in SLF that I see mandating a new project effort.  And by not contributing to log4net and starting over, you're dismissing the features and maturity it has to offer.

[insert "MS hubris!  They don't 'get' the open source thing...."  comment here]



Software License Agreement Survey

§ March 5, 2009 07:52 by beefarino |

I'm wondering how many people actually take the time to read the legalese to which they bind themselves when they install new software.

So please take a moment to fill out this survey.  I'll post the results in a few weeks.