Inaugural Meeting of ALT.NET Charlotte

§ March 29, 2009 15:48 by beefarino |

The Charlotte ALT.NET user group is having its first meeting this Thursday, April 2 at 6:30pm in the CPCC Central Campus IT building. 

If you are at all interested, please register, participate, learn, present, and support the local developer community.  If you are unsure what ALT.NET is all about, check out these sites:

and then register.

Special thanks to unfuddle for sponsorship (and for making awesome hosted project management software). 

If you are interested in sponsoring an event, please drop me a line and I'll get you in touch with the right people.



Delegates and Native API Callbacks

§ March 27, 2009 02:07 by beefarino |

For one of my contracts I'm implementing a .NET layer over a native third-party SDK.  The SDK makes heavy use of function pointers, which means, in terms of p/invoke and interoperability, I've got a lot of delegates flying around.  I recently hit a rather nasty bug in my .NET layer - see if you can figure this one out.

Consider this truncated example derived from the SDK:

typedef void (__stdcall *PCALLBACK)( HANDLE deviceHandle, LONG32 deviceID );
// ...
API_CALL RESULT_TYPE __stdcall Open( HANDLE deviceHandle, PCALLBACK pfnCallback );

In a nutshell, when Open is called, the pfnCallback function pointer is registered, along with the handle.  The Open function returns almost immediately, and the native library invokes the callback function periodically in response to user interaction with a device.

Here is the C# that enables the Open method to be called from .NET:

public delegate void Callback( IntPtr deviceHandle, int deviceID );
// ...
[DllImport( "ExternalLibrary.dll")]
public static extern ResultCode Open( IntPtr deviceHandle, Callback callback );

And here it is in use:

ResultCode result = ExternalLibraryAPI.Open(
    deviceHandle,
    delegate( IntPtr handle, int deviceId )
    {
        // ...
    }
);
VerifyResult( result ); 

The call to Open succeeds, and the callback is invoked whenever the user pushes the appropriate button on the device.  Eventually though, a nasty exception is raised as the user continues to fiddle with the device. 

I will tell you that:

  • there is no context or stack information to the exception;
  • there is not a bug in the third-party SDK;
  • this post has all the information you need to find and fix the problem - i.e., I'm not hiding something from you.

So my questions to you are:

  1. What's the exception?
  2. How do you avoid it?

Leave your answers as comments.  I'll post the answer in a few days.


Automation Framework pt 6: PowerShell Integration for Free

§ March 11, 2009 03:47 by beefarino |

Now that I have a fluent interface hiding a lot of the complexities of my automation framework, I wanted to focus on getting the framework integrated with PowerShell.  My desire is to leverage all the features of PowerShell, including the command pipeline and existing functions.  After folding a few methods into PowerShell, I recognized the general pattern; I came up with a way to package the framework in a PowerShell Module that automagically generates wrapper functions around the fluent interfaces.  So moving forward, as the framework expands I don't need to do anything to get deep PowerShell integration.

I started out using the fluent interfaces directly:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );
$properties = @{ 
    address = "12 Main St";
    city = "Anywheretownvilleton";
    state = 'Texahomasippi';
    zip = '75023';
};
[PokerRoomShell.Commands.Fluency.Execute]::Using( $context )
    .for( $player )
    .setProperties( $properties )
    .deposit( 500 ); 

which works, but becomes very cumbersome when I want to process multiple players or existing PowerShell commands:

$players | %{
    [PokerRoomShell.Commands.Fluency.Execute]::Using( $context )
        .for( $_ )
        .setProperties( $properties )
        .deposit( 500 );
    $_; # put player back in pipeline
} | export-clixml players.xml; 

What I really want is to make the framework look more like it was designed for PowerShell.  Or perhaps a better way to say it: I want to use PowerShell to drive my system, but I don't want to do a lot of work to get there.  I started tinkering, implementing a few of the methods from the AccountCommands fluent interface to see what it would take to use the methods in a pipeline.  In order to do something like this:

$players | set-properties $properties | 
    new-deposit 500 | 
    export-clixml players.xml; 

I need these functions:

function new-deposit
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [int]
        $amount
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).deposit( $amount );
        $script:accountCommands.Account;        
    }
}
function set-properties
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [Hashtable]
        $properties
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).setProperties( $properties );
        $script:accountCommands.Account;        
    }
} 

Once I had a few of these functions under my belt, the pattern became evident.  Each method gets its own PowerShell wrapper function.  Each PowerShell wrapper function can be reduced to a matter of:

  • accepting an Account reference from the pipeline;
  • accepting any parameters needed by the AccountCommands method;
  • creating an AccountCommands instance around the Account reference;
  • calling the method on the AccountCommands instance;
  • returning the Account object back to the pipeline

It was obvious that these wrappers would consist mostly of boilerplate, and that they could simply be generated if I had a little extra metadata available on the fluent command objects.  I defined three simple attributes to this end:
  • the CommandPipelineAttribute identifies objects as candidates for PowerShell integration;
  • the PipelineInputAttribute marks the property of the object that will be used as pipeline input and output;
  • the CommandBindingAttribute defines the verb-noun name of the PowerShell wrapper function.
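
The attribute classes themselves never appear in the post; as a rough sketch (the AttributeUsage targets and the Verb/Noun enums are my assumptions, inferred from the usage shown below), they could be as simple as:

```csharp
using System;

// hypothetical sketch - the actual definitions are not shown in the post
[AttributeUsage( AttributeTargets.Class )]
public sealed class CommandPipelineAttribute : Attribute { }

[AttributeUsage( AttributeTargets.Property )]
public sealed class PipelineInputAttribute : Attribute { }

[AttributeUsage( AttributeTargets.Method )]
public sealed class CommandBindingAttribute : Attribute
{
    public CommandBindingAttribute( Verb verb, Noun noun )
    {
        Verb = verb;
        Noun = noun;
    }

    // read by the generator script as $attr.verb / $attr.noun
    public Verb Verb { get; private set; }
    public Noun Noun { get; private set; }
}

// enum members inferred from usages like [CommandBinding( Verb.New, Noun.Deposit )]
public enum Verb { Find, New, Set }
public enum Noun { Player, Deposit, Properties }
```

All the attributes need to carry is the verb-noun pair; everything else the generator can reflect from the method signature itself.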

The attributes are markers I can place in my fluent command objects to indicate how the object methods should be wrapped in PowerShell:

[CommandPipeline]
public class AccountCommands
{        
    // ...
    [PipelineInput]
    public Account Account
    {
        get;
        set;
    }
    // commands
    [CommandBinding( Verb.Find, Noun.Player )]
    public AccountCommands Lookup()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Player )]
    public AccountCommands Create()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Deposit )]
    public AccountCommands Deposit( decimal amount )
    {
        // ...
    }
    [CommandBinding( Verb.Set, Noun.Properties )]
    public AccountCommands SetProperties( Hashtable properties )
    {
        // ...
    }
    // ...
} 

With these markers, generating PowerShell wrappers is a simple matter of snooping out the metadata and filling in the blanks of a function template.  After a few minutes of hacking I had a working function to accomplish the task:

function generate-function
{
    [CmdletBinding()]
    param(
         [Parameter(Position=0)]
         [system.reflection.assembly] $assembly
    )
    process
    {
        # find all types marked with the CommandPipeline attribute
        foreach( $type in get-markedTypes( $assembly ) )
        {
            # find all methods marked with the CommandBinding attribute
            foreach( $method in ( get-markedMethods $type ) )
            {
                # create a script block wrapping the method
                $method.ScriptBlock = create-wrapperScriptBlock $method;
                $method;
            }                     
        }
    }
}

In a nutshell, generate-function finds all public types marked with the CommandPipelineAttribute, then creates wrapper ScriptBlocks around the methods on those types marked with the CommandBindingAttribute (the details are described below).  I can use this to create the PowerShell wrapper functions dynamically, using the new-item cmdlet against the built-in PowerShell Function provider:

foreach( $script:m in generate-function $assemblyName )
{
    # only create functions that don't exist yet
    # this will allow for command proxies if necessary 
    if( !( test-path $script:m.path ) )
    { 
       ni -path $script:m.path -value ( iex $script:m.ScriptBlock ) -name $script:m.Name;
    }
}

Now when my automation framework expands, I need to do zero work to update the PowerShell layer to get the deep PowerShell integration I want.  Kick ass!

Example Generated Function

Here is a PowerShell session that demonstrates the function generation, and shows what the resulting function looks like:

PS >gi function:set-pin
Get-Item : Cannot find path 'Function:\set-pin' because it does not exist.
At line:1 char:3
+ gi <<<<  function:set-pin
    + CategoryInfo          : ObjectNotFound: (Function:\set-pin:String) [Get-Item], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemCommand
    
PS >generate-function "pokerroomshell.commands" | 
    ? { !( test-path $_.path ) } | 
    % { ni -path $_.Path -value ( iex $_.ScriptBlock ) -name $_.Name }
    
PS >(gi function:set-pin).definition
    [CmdletBinding()]
    param( 
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $user,
        [Parameter(Position=0,Mandatory=$true)]
        [System.String]
        $newPin
    )
process {
        $script:ctx = $executor.for( $user ).ResetPin( $newPin );
        $user;    
}

Gory Details

You really want to see the code?  You asked for it....

Finding types marked with the CommandPipelineAttribute is simple:

# find all types marked with the CommandPipeline attribute
function get-markedTypes( $asm )
{
    $asm.getExportedTypes() |
        ? { $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandPipelineAttribute], $true ) };
}

Finding the methods on those types marked with the CommandBindingAttribute is just as easy; however, to simplify the ScriptBlock template processing, I preprocess each method and build up a little data structure with my necessities:

# find all methods marked with the CommandBinding attribute
function get-markedMethods( $type )
{
    # note: coalesce is a helper filter (not shown here) that presumably
    # returns the first non-null item from the pipeline
    # find the property to use as pipeline input / command output
    $pipelineInput = $type.GetProperties() | ? { 
        $_.getCustomAttributes( [pokerRoomShell.commands.framework.pipelineInputAttribute], $true )
    } | coalesce;
    # find methods marked with the CommandBinding attribute
    $type.GetMethods() | % { 
        $attr = $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandBindingAttribute], $true ) | coalesce;
        if( $attr )
        {                      
            # build a hash table of method data for the scriptblock template
            @{
                Method = $_;
                Binding = $attr;
                Input = @{ 
                    Name = $pipelineInput.Name;
                    Type = $pipelineInput.propertyType;
                };
                Parameters = $_.GetParameters();
                Name = "" + $attr.verb + "-" + $attr.noun;
                Path = "function:" + $attr.verb + "-" + $attr.noun;                
            };
        }
    }
}

And then comes the real nut: the function that creates the scriptblock.  It looks a bit ugly - lots of escaped $'s, $() subexpressions, and here-strings - but it works:

# create a script block wrapping the method 
function create-wrapperScriptBlock( $method )
{   
    $parameterPosition = 0
    
    # collection of parameter declarations
    $params = new-object system.collections.arraylist;
   
   # define the pipeline command input parameter
    $params.add(
@"
        [Parameter(Mandatory=`$true,ValueFromPipeline=`$true)]
        [$($method.input.type.fullName)]
        `$$($method.input.name)
"@
    ) | out-null; # eat the output of add()
   
    #add any parameters required by the method being wrapped
    $params.addRange(
        @( $method.parameters | 
            %{         
@"
        [Parameter(Position=$parameterPosition,Mandatory=`$true)]
        [$($_.ParameterType.fullName)]
        `$$($_.Name)
"@;
            ++$parameterPosition;
            } 
        ) 
    );
   
    # join the $params collection to a single string   
    $params = $params -join ",`n";        
    # define the method call arguments    
    $callArgs = ( $method.parameters | %{      
        "`$$($_.Name)";
        } 
    ) -join ", ";   
# return the wrapper script block as a string
@"
{
    [CmdletBinding()]
    param( 
        $($params.Trim())
    )
    
    process
    {        
        `$script:ctx = `$executor.for( `$$($method.input.name) ).$($method.Method.Name)( $($callArgs.trim()) );
        `$$($method.Input.Name);        
    }
}
"@;
}

There's quite a bit going on in there, but none of it is rocket science.  First, a list of function parameters is built, with the pipeline input parameter loaded first, followed by any arguments required by the automation framework method.  This list is joined into a flat string and surrounded by a param() declaration.  A second list of parameters - those that will be passed to the automation framework method - is built and flattened, then wrapped in a call to the actual framework method.

The resulting scriptblock makes a few assumptions, most notably the existence of a global (or module-scoped) $executor variable that is declared like so:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );

But those little static details can be wrapped up in a module. 



Extension Methods for Deferred Log4Net Message Formatting

§ March 9, 2009 16:07 by beefarino |

In a recent post I described some initial impressions of the MS Live Labs Simple Logging Framework (SLF).  One of the things I really like about SLF is the use of lambdas as a way to defer execution of potentially slow or unchecked message formatting code.  I also railed on the idea of creating a completely new effort when log4net is an active open-source project.

So, here I am putting my money where my mouth is: I've created some extension methods for instances of ILog that accept lambdas - now if you use log4net in a C# 3.0+ environment, you can defer message formatting as easily as you can in SLF.  The code is quite simple:

using System;
using log4net;
namespace Log4NetExtensions
{
    public static class Log4NetExtensionMethods
    {
        public static void Debug( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsDebugEnabled )
            {
                log.Debug( formattingCallback() );
            }
        }
        public static void Info( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsInfoEnabled )
            {
                log.Info( formattingCallback() );
            }
        }
        public static void Warn( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsWarnEnabled )
            {
                log.Warn( formattingCallback() );
            }
        }
        public static void Error( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsErrorEnabled )
            {
                log.Error( formattingCallback() );
            }
        }
        public static void Fatal( this ILog log, Func<string> formattingCallback )
        {
            if( log.IsFatalEnabled )
            {
                log.Fatal( formattingCallback() );
            }
        }
     }
} 
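
To make the payoff concrete, here's a hypothetical call site - BuildExpensiveReport is a stand-in for any formatting work you'd rather not pay for at inactive log levels, not part of the extensions:

```csharp
ILog log = LogManager.GetLogger( typeof( Program ) );

// eager: the string argument - and BuildExpensiveReport() - is always
// evaluated, even when the DEBUG level is disabled
log.Debug( "report: " + BuildExpensiveReport() );

// deferred: the lambda body only runs if IsDebugEnabled is true
log.Debug( () => "report: " + BuildExpensiveReport() );
```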

Each method of the ILog interface gets its own overload in the form of an extension method.  The method accepts a Func<string> that allows you to capture log message formatting in a lambda expression, and hence defer its execution until absolutely necessary (or avoid it altogether).  Here are some quick and dirty unit tests to demonstrate the basic use and functionality:

using System;
using NUnit.Framework;
namespace Log4NetExtensions.Tests
{
    [TestFixture]
    public class Log4NetExtensionTests
    {
        log4net.ILog Log;
        log4net.Appender.MemoryAppender appender;
        [TestFixtureSetUp]
        public void FixtureSetUp()
        {
            Log = log4net.LogManager.GetLogger( System.Reflection.MethodBase.GetCurrentMethod().DeclaringType );
            appender = new log4net.Appender.MemoryAppender 
            {
                Name = "Memory Appender",
                Threshold = log4net.Core.Level.Debug
            };
            log4net.Config.BasicConfigurator.Configure( appender );                        
        }
        [SetUp]
        public void TestSetUp()
        {
            appender.Clear();            
            log4net.Core.LoggingEvent[] events = appender.GetEvents();
            Assert.That( 0 == events.Length, "failed to clear appender of log events" );
            Log.Logger.Repository.Threshold = log4net.Core.Level.Debug;
        }
        [Test]
        public void LogViaLambdaTest()
        {
            Log.Debug( () => "Hello World!" );
            log4net.Core.LoggingEvent[] events = appender.GetEvents();            
            
            Assert.That( 1 == events.Length, "failed to log via lambda" );
            Assert.That( 
                StringComparer.CurrentCulture.Equals( 
                    "Hello World!", 
                    events[ 0 ].RenderedMessage                   
                ),
                "rendered message does not match (via lambda)"
            );
        }
        [Test]
        public void LogWithLocalVariableReference()
        {
            string value = "World!";
            Log.Debug( () => "Hello " + value );
            log4net.Core.LoggingEvent[] events = appender.GetEvents();
            Assert.That( 1 == events.Length, "failed to log with local variable reference" );
            Assert.That(
                StringComparer.CurrentCulture.Equals(
                    "Hello World!",
                    events[ 0 ].RenderedMessage
                ),
                "rendered message does not match (local variable reference)"
            );
        }
        [Test]
        public void LambdaIsNotEvaluatedAtInactiveLogLevel()
        {            
            Log.Logger.Repository.Threshold = log4net.Core.Level.Error;         
            bool evaluated = false;
            Log.Debug( () => 
                { 
                    evaluated = true;
                    return "Hello World!";
                }
            );
            Assert.That( ! evaluated, "lambda was evaluated at inactive log level" );
            Log.Error( () =>
                {
                    evaluated = true;
                    return "Hello World!";
                }
            );
            Assert.That( evaluated, "lambda was not evaluated at active log level" );
        }
    }
}

Play with it, see if you like it.  I will too.  Any suggestions or comments are encouraged.  If there is enough interest I'll see about submitting the extensions to the Apache project.