Automation Framework pt 6: PowerShell Integration for Free

§ March 11, 2009 03:47 by beefarino |

Now that I have a fluent interface hiding a lot of the complexities of my automation framework, I wanted to focus on getting the framework integrated with PowerShell.  My desire is to leverage all the features of PowerShell, including the command pipeline and existing functions.  After folding a few methods into PowerShell, I recognized the general pattern; I came up with a way to package the framework in a PowerShell Module that automagically generates wrapper functions around the fluent interfaces.  So moving forward, as the framework expands I don't need to do anything to get deep PowerShell integration.

I started out using the fluent interfaces directly:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );
$properties = @{ 
    address = "12 Main St";
    city = "Anywheretownvilleton";
    state = 'Texahomasippi';
    zip = '75023';
};
[PokerRoomShell.Commands.Fluency.Execute]::Using( $context )
    .for( $player )
    .setProperties( $properties )
    .deposit( 500 ); 

which works, but becomes very cumbersome when I want to process multiple players or mix in existing PowerShell commands:

$players | %{
    [PokerRoomShell.Commands.Fluency.Execute]::Using( $context )
        .for( $_ )
        .setProperties( $properties )
        .deposit( 500 );
    $_; # put player back in pipeline
} | export-clixml players.xml; 

What I really want is to make the framework look more like it was designed for PowerShell.  Or perhaps a better way to say it: I want to use PowerShell to drive my system, but I don't want to do a lot of work to get there.  I started tinkering, implementing a few of the methods from the AccountCommands fluent interface to see what it would take to use the methods in a pipeline.  In order to do something like this:

$players | set-properties $properties | 
    new-deposit 500 | 
    export-clixml players.xml; 

I need these functions:

function new-deposit
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [int]
        $amount
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).deposit( $amount );
        $script:accountCommands.Account;        
    }
}
function set-properties
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0,ValueFromPipeline=$true,Mandatory=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $account,
        [Parameter(Position=1,Mandatory=$true)]
        [Hashtable]
        $properties
    )
    process
    {        
        $script:accountCommands = $executor.for( $account ).setProperties( $properties );
        $script:accountCommands.Account;        
    }
} 

Once I had a few of these functions under my belt, the pattern became evident.  Each method gets its own PowerShell wrapper function.  Each PowerShell wrapper function can be reduced to a matter of:

  • accepting an Account reference from the pipeline;
  • accepting any parameters needed by the AccountCommands method;
  • creating an AccountCommands instance around the Account reference;
  • calling the method on the AccountCommands instance;
  • returning the Account object back to the pipeline.

It was obvious that these wrappers would consist of mostly boilerplate, and that they could simply be generated if I had a little extra metadata available on the fluent command objects.  I defined three simple attributes to this end:
  • the CommandPipelineAttribute identifies objects as candidates for PowerShell integration;
  • the PipelineInputAttribute marks the property of the object that will be used as pipeline input and output;
  • the CommandBindingAttribute defines the verb-noun name of the PowerShell wrapper function.
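
For reference, the attributes themselves carry almost no logic.  Here's a rough sketch of what they might look like - the real implementations may differ, and the Verb and Noun enums are assumed to already exist elsewhere in the framework:

using System;

[AttributeUsage( AttributeTargets.Class )]
public sealed class CommandPipelineAttribute : Attribute { }

[AttributeUsage( AttributeTargets.Property )]
public sealed class PipelineInputAttribute : Attribute { }

[AttributeUsage( AttributeTargets.Method )]
public sealed class CommandBindingAttribute : Attribute
{
    public CommandBindingAttribute( Verb verb, Noun noun )
    {
        Verb = verb;
        Noun = noun;
    }
    // the verb-noun pair becomes the name of the generated function, e.g. "new-deposit"
    public Verb Verb { get; private set; }
    public Noun Noun { get; private set; }
}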

The attributes are markers I can place in my fluent command objects to indicate how the object methods should be wrapped in PowerShell:

[CommandPipeline]
public class AccountCommands
{        
    // ...
    [PipelineInput]
    public Account Account
    {
        get;
        set;
    }
    // commands
    [CommandBinding( Verb.Find, Noun.Player )]
    public AccountCommands Lookup()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Player )]
    public AccountCommands Create()
    {
        // ...
    }
    [CommandBinding( Verb.New, Noun.Deposit )]
    public AccountCommands Deposit( decimal amount )
    {
        // ...
    }
    [CommandBinding( Verb.Set, Noun.Properties )]
    public AccountCommands SetProperties( Hashtable properties )
    {
        // ...
    }
    // ...
} 

With these markers, generating PowerShell wrappers is a simple matter of snooping out this metadata and filling in the blanks of a function template.  After a few minutes of hacking I had a working function to accomplish the task:

function generate-function
{
    [CmdletBinding()]
    param(
         [Parameter(Position=0)]
         [system.reflection.assembly] $assembly
    )
    process
    {
        # find all types marked with the CommandPipeline attribute
        foreach( $type in get-markedTypes( $assembly ) )
        {
            # find all methods marked with the CommandBinding attribute
            foreach( $method in ( get-markedMethods $type ) )
            {
                # create a script block wrapping the method
                $method.ScriptBlock = create-wrapperScriptBlock $method;
                $method;
            }                     
        }
    }
}

In a nutshell, generate-function finds all public types marked with the CommandPipelineAttribute, then creates wrapper ScriptBlocks around the methods on those types marked with the CommandBindingAttribute (the details are described below).  I can use this to create the PowerShell wrapper functions dynamically, using the new-item cmdlet against the built-in PowerShell Function provider:

foreach( $script:m in generate-function $assemblyName )
{
    # only create functions that don't exist yet
    # this will allow for command proxies if necessary 
    if( !( test-path $script:m.path ) )
    { 
       ni -path $script:m.path -value ( iex $script:m.ScriptBlock ) -name $script:m.Name;
    }
}

Now when my automation framework expands, I need to do zero work on the PowerShell layer to get the deep PowerShell integration I want.  Kick ass!

Example Generated Function

Here is a PowerShell session that demonstrates the function generation, and shows what the resulting function looks like:

PS >gi function:set-pin
Get-Item : Cannot find path 'Function:\set-pin' because it does not exist.
At line:1 char:3
+ gi <<<<  function:set-pin
    + CategoryInfo          : ObjectNotFound: (Function:\set-pin:String) [Get-Item], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemCommand
    
PS >generate-function "pokerroomshell.commands" | 
    ? { !( test-path $_.path ) } | 
    % { ni -path $_.Path -value ( iex $_.ScriptBlock ) -name $_.Name }
    
PS >(gi function:set-pin).definition
    [CmdletBinding()]
    param( 
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [PokerRoomShell.Commands.Framework.Account]
        $user,
        [Parameter(Position=0,Mandatory=$true)]
        [System.String]
        $newPin
    )
process {
        $script:ctx = $executor.for( $user ).ResetPin( $newPin );
        $user;    
}

Gory Details

You really want to see the code?  You asked for it....

Finding types marked with the CommandPipelineAttribute is simple:

# find all types marked with the CommandPipeline attribute
function get-markedTypes( $asm )
{
    $asm.getExportedTypes() |
        ? { $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandPipelineAttribute], $true ) };
}

Finding the methods on those types marked with the CommandBindingAttribute is just as easy; however, to simplify the ScriptBlock template processing, I preprocess each method and build up a little data structure with my necessities (the coalesce used below is just a small helper filter that returns the first non-null object piped to it):

# find all methods marked with the CommandBinding attribute
function get-markedMethods( $type )
{
    # find the property to use as pipeline input / command output
    $pipelineInput =  $type.GetProperties() | ? { 
        $_.getCustomAttributes( [pokerRoomShell.commands.framework.pipelineInputAttribute], $true )
    } | coalesce;
    # find methods marked with the CommandBinding attribute
    $type.GetMethods() | % { 
        $attr = $_.getCustomAttributes( [pokerRoomShell.commands.framework.commandBindingAttribute], $true ) | coalesce;
        # build a hash table of method data for the scriptblock template
        if( $attr )
        {                      
            # return a hash table of data needed to define the wrapper function
            @{
                Method = $_;
                Binding = $attr;
                Input = @{ 
                    Name = $pipelineInput.Name;
                    Type = $pipelineInput.propertyType;
                };
                Parameters = $_.GetParameters();
                Name = "" + $attr.verb + "-" + $attr.noun;
                Path = "function:" + $attr.verb + "-" + $attr.noun;                
            };
        }
    }
}

And then comes the real nut: the function that creates the scriptblock; this looks a bit ugly - lots of escaped $'s and evaluation $()'s and here-strings, but it works:

# create a script block wrapping the method 
function create-wrapperScriptBlock( $method )
{   
    $parameterPosition = 0
    
    # collection of parameter declarations
    $params = new-object system.collections.arraylist;
   
   # define the pipeline command input parameter
    $params.add(
@"
        [Parameter(Mandatory=`$true,ValueFromPipeline=`$true)]
        [$($method.input.type.fullName)]
        `$$($method.input.name)
"@
    ) | out-null; # eat the output of add()
   
    #add any parameters required by the method being wrapped
    $params.addRange(
        @( $method.parameters | 
            %{         
@"
        [Parameter(Position=$parameterPosition,Mandatory=`$true)]
        [$($_.ParameterType.fullName)]
        `$$($_.Name)
"@;
            ++$parameterPosition;
            } 
        ) 
    );
   
    # join the $params collection to a single string   
    $params = $params -join ",`n";        
    # define the method call arguments    
    $callArgs = ( $method.parameters | %{      
        "`$$($_.Name)";
        } 
    ) -join ", ";   
# return the wrapper script block as a string
@"
{
    [CmdletBinding()]
    param( 
        $($params.Trim())
    )
    
    process
    {        
        `$script:ctx = `$executor.for( `$$($method.input.name) ).$($method.Method.Name)( $($callArgs.trim()) );
        `$$($method.Input.Name);        
    }
}
"@;
}

There's quite a bit going on in there, but none of it is rocket science.  First, a list of function parameters is built, with the pipeline input parameter loaded first followed by any arguments required by the automation framework method.  This list is joined into a flat string and surrounded by a param() declaration.  A second list of parameters - those that will be passed to the automation framework method - is built and flattened, then wrapped in a call to the actual framework method.

The resulting scriptblock makes a few assumptions, most notably the existence of a global (or module-scoped) $executor variable that is declared like so:

$context = new-object pokerroomshell.commands.framework.context;
$executor = [PokerRoomShell.Commands.Fluency.Execute]::Using( $context );

But those little static details can be wrapped up in a module. 



Automation Framework pt 4: Sharing State in Commands

§ February 10, 2009 11:51 by beefarino |

We're already getting a lot of use out of the framework, but I'm constantly seeking out ways to make it easier to use and extend. 

There is one particular aspect of the framework code that is leaving a bad taste in my mouth.  After trying a few approaches I've settled on one that I feel is the best option.  Not everyone agrees, and I'd appreciate some alternative approaches.

It has to do with sharing state in a batch of commands.  Consider the following PowerShell script:

new-deposit -Name Stan -Amount 500;  

which, after some magical binding and command-building logic, breaks down into a complex sequence of simple commands (composites and chains are exploded as sub-items):

  1. FindOrCreateUserAccount for identifier "Stan" (chain):
    1. LoadUserAccount for user named "Stan"
    2. CreateUserAccount for user named "Stan" (composite):
      1. AcquireAuthTicket for creating a user account
      2. CreateUserAccountWithTicket for user named "Stan"
  2. MakeDeposit in the amount of $500 to Stan's account (composite):
    1. AcquireAuthTicket for making a deposit
    2. MakeDepositWithTicket to move $500 into Stan's account
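
To make the chain/composite distinction concrete, here's a hedged sketch of the two shapes - a chain succeeds as soon as one child command succeeds, while a composite requires every child to succeed.  These use the ICommandContext-flavored Execute() described a bit further down, and the framework's real implementations no doubt differ:

using System.Collections.Generic;
using System.Linq;

// runs child commands in order until one succeeds (e.g. load the account, else create it)
public class ChainCommand : ICommand
{
    private readonly List<ICommand> commands;
    public ChainCommand( params ICommand[] commands )
    {
        this.commands = new List<ICommand>( commands );
    }
    public bool Execute( ICommandContext context )
    {
        return commands.Any( c => c.Execute( context ) );
    }
}

// runs every child command in order and only succeeds if they all do
public class CompositeCommand : ICommand
{
    private readonly List<ICommand> commands;
    public CompositeCommand( params ICommand[] commands )
    {
        this.commands = new List<ICommand>( commands );
    }
    public bool Execute( ICommandContext context )
    {
        return commands.All( c => c.Execute( context ) );
    }
}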

The PowerShell function translates into a batch of eight command objects to perform the actual work.  The commands need to share some state to accomplish the overall goal - for instance, the FindOrCreateUserAccount command will need to produce a UserAccount object on which the MakeDeposit command can operate.  This is a bit of a conundrum - I want each command object to know only of its own duties, so the FindOrCreateUserAccount command isn't able to directly pass the UserAccount object to the MakeDeposit command.  So how do I get the UserAccount object created by the FindOrCreateUserAccount command to the MakeDeposit command?

I've tried a few approaches.  

Using a Command Context

After completing my first end-to-end use of the framework, I jotted down some concerns, many of which orbit around the need to consolidate access to all of the system services I'm automating.  To address this, I changed the ICommand.Execute() method signature to accept a single parameter of type ICommandContext:

public interface ICommandContext
{    
    IUserService UserService { get; }
    IGameService GameService { get; }
    // ...    
}
public interface ICommand 
{
    bool Execute( ICommandContext context );
} 
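
A concrete context is little more than a property bag over the existing service clients.  As a hedged sketch - CommandContext here is a hypothetical minimal implementation, and the framework's real context class presumably does more:

public class CommandContext : ICommandContext
{
    public CommandContext( IUserService userService, IGameService gameService )
    {
        UserService = userService;
        GameService = gameService;
    }
    public IUserService UserService { get; private set; }
    public IGameService GameService { get; private set; }
}

// userService and gameService are the existing service client instances,
// constructed elsewhere; command is any ICommand
ICommandContext context = new CommandContext( userService, gameService );
bool result = command.Execute( context );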

So now anyone executing a command must supply a command context.  I did this for a few reasons:

  • it allows each command easy access to the various services that comprise the production system without a lot of plumbing code;
  • it gives me a single point of extension for all command types.  E.g., if the system expands to include another service, I can modify ICommandContext without breaking any of the other commands or configuration;
  • it provides an abstraction against which the command objects run.  For example, I can execute commands against a "test" context to verify their behavior, or a "record" context to build up transcripts of system activity;
  • it isolates configuration to a single object, so instead of having to manage a large configuration across dozens of command objects, I only need to focus on configuring one object.  Bonus.

Anyway, a colleague suggested I just add a table of named objects to the command context, something like this:

public interface ICommandContext
{
    IUserService UserService { get; }
    IGameService GameService { get; }
    Dictionary< string, object > Data { get; }
    
    // ...    
} 

The idea is that one command could load an object with a specific name:

public class LoadUserAccountCommand : ICommand
{
    public string UserName { get; set; }
    
    public bool Execute( ICommandContext context )
    {
        UserAccount account = new UserAccount();
       
        // this call populates the user account object 
        context.UserService.GetUserAccount( UserName, account );
       
        context.Data[ "UserAccount" ] = account;
       
        return true;
    }
}

and another could consume it using the same name:

public class MakeDepositWithTicketCommand : ICommand
{
    public decimal Amount { get; set; }    
    public bool Execute( ICommandContext context )
    {
        UserAccount account = (UserAccount) context.Data[ "UserAccount" ];
        
        // make the deposit into the account ..
        context.UserService.Deposit( account.Id, Amount );
       
        return true;
    }
}

I tried this for a little while, and it has some charm in its simplicity, but I'll be blunt: I hate it.  I think it's fine and simple for a hack job, but it will become unmaintainable very quickly:

  • the hashtable hides the inputs of the command - e.g., there is no way to look at a command object and determine what it needs in the way of input to do its job without deciphering code;
  • the sheer number of entries required during a command session could become quite large, and even assuming we use best practices and have an enum of magic Data keys, it becomes difficult to use;
  • along those lines, as the number of entries grows, the names start to lose their simplicity.  "UserAccount" is no longer sufficient, so you have the "NewlyCreatedUserAccount" item, the "CachedUserAccount" item, etc.  Or worse, the team gets lazy and we have "UserAccount", "UserAccount2", etc;
  • this actually creates a high (and, ironically, hidden) level of coupling between commands - e.g., the MakeDepositWithTicketCommand can only work against the "UserAccount" data item, which will limit the scope of commands with which it can operate to those that know to fill the "UserAccount" data item.  

I'm convinced that a general purpose variable stack or hashtable will make the framework too cumbersome to use.  I came up with an alternative that feels better, but still has some ugly parts.

Using Shared Property References

The easiest way to explain this is by example.  In this rewrite of the sample from the previous section, note how both command objects expose an Account property:

public class LoadUserAccountCommand : ICommand
{
    public UserAccount Account { get; set; }
    public string UserName { get; set; }
    public bool Execute( ICommandContext context )
    {
        // this call populates the user account object 
        context.UserService.GetUserAccount( UserName, Account );
       
        return true;
    }
}
public class MakeDepositWithTicketCommand : ICommand
{
    public UserAccount Account { get; set; }
    public decimal Amount { get; set; }
    
    public bool Execute( ICommandContext context )
    {
        // make the deposit into the account ..
        context.UserService.Deposit( Account.Id, Amount );
       
        return true;
    }
}

If both Account properties are set to the same object reference, the commands implicitly share the Account state:

//...
UserAccount account = new UserAccount();
CompositeCommand cmd = new CompositeCommand(
    new LoadUserAccountCommand { UserName = userName, Account = account },
    new MakeDepositWithTicketCommand { Amount = depositAmount, Account = account }
);
bool result = cmd.Execute( context );
// ... 

The LoadUserAccountCommand fills the account data into the object, and the MakeDepositWithTicketCommand uses the object to deposit money.  I like this a lot better than the other solution:

  • the needs of each command are expressed in its public members;
  • each command operates in isolation and there is no hidden coupling - e.g., there is no assumption made by the MakeDepositWithTicketCommand that will prevent it from working with other command objects;
  • it's simple;
  • it feels right.

After using this for a while, I've found a few drawbacks.  First, it adds some extra setup to every command batch.  Second, this mechanism obviously doesn't work for value types; you have to wrap the value in a reference type, which can feel a bit awkward.  I've also been told that the state-sharing mechanism isn't obvious, but I don't agree. 
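
For the value-type case, the wrapper is nothing fancy; something like this hypothetical Shared<T> holder does the job, with both commands exposing a Shared<decimal> property instead of a raw decimal:

// a trivial reference-type holder so commands can share value-type state;
// the producing command sets Value, the consuming command reads it
public class Shared<T>
{
    public T Value { get; set; }
}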

I'd appreciate some feedback on my choices here - is there another approach I haven't considered?



Automation Framework pt 2: End-to-End Example

§ January 27, 2009 00:03 by beefarino |

Having covered the vision and napkin design of an automation framework for our product's core services, it's time for a working end-to-end example.  My goal is to be able to drive one function of our core product: creating a user account.  In addition, I will drive it from both PowerShell and FitNesse to see how well the framework meets the needs from the initial vision.

Getting to Red

I broke ground with this test:

[Test] 
public void CreateUserAccountCommandExecution()
{
    ICommand command = new CreateUserAccountCommand { Name = "joe" }; 
    bool result = command.Execute(); 
    Assert.IsTrue( result ); 
} 

Simple enough - a textbook command pattern; note:

  • an ICommand interface defines the command contract;
  • at the moment, the only member of ICommand is an Execute() method.  It accepts no arguments and returns a boolean to indicate success or failure;
  • CreateUserAccountCommand is a concrete implementation of the ICommand contract;
  • CreateUserAccountCommand has a Name property that identifies the user name.

Getting to Green

First things first - I need the command contract:

public interface ICommand
{ 
    bool Execute(); 
} 

Then I can implement the concrete CreateUserAccountCommand type:

public class CreateUserAccountCommand : ICommand 
{ 
    public string Name { get; set; }  
    public bool Execute() 
    { 
        IUserService clientInterface = new RemoteUserService( "http://beefarino:8089" );  
        Credentials credentials = new Credentials( "user-manager", "password" ); 
        Ticket authTicket = clientInterface.Authenticate( credentials );  
        UserData userProperties = new UserData();  
        userProperties.FirstName = Name; 
        userProperties.LastName = "Smyth"; 
        userProperties.Nickname = Name; 
        userProperties.DateOfBirth = System.DateTime.Now - TimeSpan.FromDays( 365.0d * 22.0d );
         
        string userId = clientInterface.CreateUser( authTicket, userProperties );  
        Ticket userTicket; 
        clientInterface.CreateUserTicket( authTicket, userId,  out userTicket );  
        return null != userTicket;
    } 
} 

I'm not going to discuss this code except to explain that:

  • the logic in the Execute() method performs the minimum amount of activity necessary to create a user account;
  • I'm making assumptions about a lot of the data I need (e.g., the age of the user).  I'm trying to keep the command as simple and unconfigurable as possible, and there are many, many more UserData fields available for account configuration that I'm not using;
  • the command object method does nothing outside of its intended scope: it creates a user account, that's it.

Use it from PowerShell

Now that I have the command working, I want to see it working in PowerShell.  I'm taking a minimalist approach starting out.  Once I implement a few more commands and plug them into PowerShell, I'll see what implementation patterns emerge and replace this approach with something cleaner.  But for now, this mess will do:

[System.Reflection.Assembly]::LoadFrom( 'automation.commands.dll' ); 
function new-useraccount() 
{ 
    param( [string] $name ); 
     
    $cmd = new-object automationcommands.createuseraccountcommand; 
    $cmd.Name = $name;
    $cmd.Execute(); 
}  
new-useraccount -name 'scott'; 

Hmmm ... runs silent, no output ... but looking at the system backend, I can see that it works.  

Use it from FitNesse

I downloaded the latest stable version of FitNesse from http://www.fitnesse.org/ and followed Cory Foy's short tutorial on using it against .NET assemblies (which is still accurate after 3+ years, #bonus) to get things running.  I created a new page and entered the following wikitext and table:

!contents -R2 -g -p -f -h 
!define COMMAND_PATTERN {%m %p} 
!define TEST_RUNNER {dotnet\FitServer.exe} 
!define PATH_SEPARATOR {;} 
!path dotnet\*.dll 
!path C:\dev\spikes\Automation\PokerRoom.Fixtures\bin\Debug\pokerroom.fixtures.dll  
A simple test of the CreateUser command: 
|!-PokerRoom.Fixtures.CreateUserAccount-!| 
|name|created?| 
|phil|true| 
|bob|true| 
|alice|true| 

I hacked up a quick fixture to support the table...

namespace PokerRoom.Fixtures 
{ 
    public class CreateUserAccount : fit.ColumnFixture 
    { 
        public string name { get; set; }  
        public bool created() 
        { 
            ICommand cmd = new CreateUserAccountCommand { Name = name };
    
            return cmd.Execute();
        } 
    } 
} 

... build it, and the FitNesse tests are green ...

After verifying that the users are actually created in the live system using our proprietary tools, I'm satisfied.

Moving Forward

So far so good.  It's very dirty, but it's working.  w00t * 2!

While developing this today I noted a few areas of concern:

  1. In the command object, there are several dependencies that obviously should be injected.  Namely, the IUserService instance and the authority credentials;
  2. These dependencies are only really needed in the Execute() method;
  3. Looking ahead, I know I'm going to have many of these services, and it will be a pain to inject them all for each command instantiation;
  4. Compositing commands into complex behavior will eventually lead to the need to share state between commands.  I have an idea of how to manage this, but I'm concerned it will be cumbersome;
  5. There needs to be some kind of feedback when using the command from PowerShell; not sure where this should live or what it should look like at the moment...
  6. PowerShell will have a lot more to offer if I integrate with it more deeply.  I'll have to think about what this will look like, so as to minimize the amount of custom scripting necessary to run commands while accessing the full PowerShell feature set;
  7. I need to learn a lot more about FitNesse :).  I've already given the elevator speech to a coworker and demonstrated the fixture - he had a lot more questions than I had answers...

My next few posts will detail how I address these and other concerns.  Next post will detail some prefactoring to take care of items 1-4, maybe demonstrate command compositing.



Automation Framework pt 1: Napkin Design

§ January 20, 2009 02:03 by beefarino |

Automating the core components of our product won't be too difficult.  My biggest obstacle at this point is time: with another round of "org chart refactorings" at the office, I've had tech writing added to my list of responsibilities, so my time is scarce.  I want to get a usable and extensible framework to the team as quickly as possible.

The team has done a decent job of piecing apart functional system components into a set of core services and clients.  Almost no logic exists on the clients, and they communicate to the services through a set of relatively firm interfaces, although the transports vary wildly.

At this point, my only area of automation interest is the core components, as they contain the core logic of the product and are most impacted by our recent stability and performance issues.  I want the framework to support the following usage scenarios:

  • scripted system QA testing;
  • acceptance testing of specific features and performance metrics;
  • providing support for realtime load-testing of a production system.

So it needs to be fairly agnostic with regard to input - scripting could be done via PowerShell to take care of a lot of the heavy lifting of defining complicated tests, acceptance testing could be driven by a framework like FitNesse, and load-testing could come from a GUI. 

It'd be a real pain to try and hook up all of those core services to each of those input vectors.  Plus there may be other vectors I haven't considered (ooOOoo - like a DSL created with MGrammar).  An approach I've found very appropriate to this situation is the Command design pattern.

In a nutshell, the Command pattern aims to encapsulate a parameterized request as an object; e.g., a direct service method invocation:

...
service.CreateUser( userName, userType );
...


could be captured as a command object:

...
ICommand cmd = new CreateUserCommand( service, userName, userType );
cmd.Execute();
...


Command objects often support an Execute() semantic, but not always; sometimes Command objects are passed through an Executor object that will perform the action. 

If you're not experienced with this pattern, you may be wondering why you'd want to go through these hoops when you could just call the service directly.  Well, using command objects has a few significant benefits that are not readily apparent:

  1. Command objects provide encapsulation between (in my case) the service and the client; if the service contract changes, only the commands need to change.  If I hard-wired 500 FIT fixtures to the service and it changes in the next build, I'd be crying.
  2. Command objects offer a quick way to persist a set of parameterized operations.  In other words, you can de/serialize command objects, save them to a database or a message queue, etc.  This also makes them highly accessible to multiple input forms, like XML, scripts, and FIT fixtures.
  3. Once you have a few simple commands implemented, you can very quickly piece them together to create more complex behavior.  Again, using some form of object serialization makes this easy and, more important, dynamic - something that a hard-wire approach would not be able to do.
  4. It makes supporting transactions and undo semantics a lot easier.  E.g., a Command could support Execute(), Commit(), and Rollback() methods.
  5. The Command pattern works well with the Composite and Chain of Responsibility patterns, again simplifying the creation of complex commands from simple atomic ones.
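
As a rough sketch of item 4 - purely hypothetical at this point, nothing in the framework requires it yet - a transactional flavor of the contract might look like:

public interface ITransactionalCommand : ICommand
{
    // make the changes from Execute() permanent
    void Commit();
    // undo the changes made during Execute()
    void Rollback();
}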

In short, the Command pattern brings ninja skills to a knife fight.  Revisiting the flow chart above:

Each input vector needs to focus only on creating a set of Command objects representing the actions to be taken, then passing them through a Command Executor that will execute the action against the core system services using the existing service interfaces.
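
The executor itself can start out dead simple.  A hedged sketch, assuming the parameterless Execute() contract shown above (CommandExecutor is a hypothetical name):

using System.Collections.Generic;

public class CommandExecutor
{
    // run each command in order; stop and report failure as soon as one fails
    public bool Execute( IEnumerable<ICommand> commands )
    {
        foreach( ICommand command in commands )
        {
            if( !command.Execute() )
            {
                return false;
            }
        }
        return true;
    }
}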

Example forthcoming...