Automation Framework pt 4: Sharing State in Commands

§ February 10, 2009 11:51 by beefarino |

We're already getting a lot of use out of the framework, but I'm constantly seeking out ways to make it easier to use and extend. 

There is one particular aspect of the framework code that is leaving a bad taste in my mouth.  After trying a few approaches, I've settled on one that I feel is the best option.  Not everyone agrees, though, so I'd appreciate hearing some alternative approaches.

It has to do with sharing state in a batch of commands.  Consider the following powershell script:

new-deposit -Name Stan -Amount 500;  

which, after some magical binding and command-building logic, breaks down into a complex sequence of simple commands (composites and chains are exploded as sub-items):

  1. FindOrCreateUserAccount for identifier "Stan" (chain):
    1. LoadUserAccount for user named "Stan"
    2. CreateUserAccount for user named "Stan" (composite):
      1. AcquireAuthTicket for creating a user account
      2. CreateUserAccountWithTicket for user named "Stan"
  2.  MakeDeposit in the amount of $500 to Stan's account (composite):
    1. AcquireAuthTicket for making a deposit
    2. MakeDepositWithTicket to move $500 into Stan's account

The PowerShell function translates into a batch of eight command objects to perform the actual work.  The commands need to share some state to accomplish the overall goal - for instance, the FindOrCreateUserAccount command will need to produce a UserAccount object on which the MakeDeposit command can operate.  This is a bit of a conundrum - I want each command object to know only of its own duties, so the FindOrCreateUserAccount command isn't able to directly pass the UserAccount object to the MakeDeposit command. So how do I get the UserAccount object created by the FindOrCreateUserAccount command to the MakeDeposit command?
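
For concreteness, here's a rough sketch of how that batch might be assembled using the CompositeCommand and ChainOfCommand classes from the previous post.  The AcquireAuthTicketCommand and CreateUserAccountWithTicketCommand names are just my shorthand for the steps listed above, not actual framework classes - and note that nothing here says how the commands share state, which is exactly the question:

ICommand batch = new CompositeCommand(
    // 1. FindOrCreateUserAccount - a chain: stop at the first success
    new ChainOfCommand(
        new LoadUserAccountCommand { UserName = "Stan" },
        // 1.2 CreateUserAccount - a composite: stop at the first failure
        new CompositeCommand(
            new AcquireAuthTicketCommand(),
            new CreateUserAccountWithTicketCommand { UserName = "Stan" }
        )
    ),
    // 2. MakeDeposit - a composite
    new CompositeCommand(
        new AcquireAuthTicketCommand(),
        new MakeDepositWithTicketCommand { Amount = 500m }
    )
);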

I've tried a few approaches.  

Using a Command Context

After completing my first end-to-end use of the framework, I jotted down some concerns, many of which orbit around the need to consolidate access to all of the system services I'm automating.  To address this, I changed the ICommand.Execute() method signature to accept a single parameter of type ICommandContext:

public interface ICommandContext
{    
    IUserService UserService { get; }
    IGameService GameService { get; }
    // ...    
}
public interface ICommand 
{
    bool Execute( ICommandContext context );
} 

So now anyone executing a command must supply a command context.  I did this for a few reasons:

  • it allows each command easy access to the various services that comprise the production system without a lot of plumbing code;
  • it gives me a single point of extension for all command types.  E.g., if the system expands to include another service, I can modify ICommandContext without breaking any of the other commands or configuration;
  • it provides an abstraction against which the command objects run.  For example, I can execute commands against a "test" context to verify their behavior (see the sketch just after this list), or a "record" context to build up transcripts of system activity;
  • it isolates configuration to a single object, so instead of having to manage a large configuration across dozens of command objects, I only need to focus on configuring one object.  Bonus.
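
To illustrate the third point, here's a minimal sketch of what a "test" context might look like.  FakeUserService and FakeGameService are hypothetical test doubles, not framework code; the point is that a command doesn't know or care which flavor of context it receives:

public class TestCommandContext : ICommandContext
{
    public IUserService UserService { get; private set; }
    public IGameService GameService { get; private set; }

    // ... plus whatever other members ICommandContext ends up exposing

    public TestCommandContext()
    {
        // hand-rolled fakes standing in for the production services
        UserService = new FakeUserService();
        GameService = new FakeGameService();
    }
}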

Anyway, a colleague suggested I just add a table of named objects to the command context, something like this:

public interface ICommandContext
{
    IUserService UserService { get; }
    IGameService GameService { get; }
    Dictionary< string, object > Data { get; }
    
    // ...    
} 

The idea is that one command could load an object with a specific name:

public class LoadUserAccountCommand : ICommand
{
    public string UserName { get; set; }
    
    public bool Execute( ICommandContext context )
    {
        UserAccount account = new UserAccount();
       
        // this call populates the user account object 
        context.UserService.GetUserAccount( UserName, account );
       
        context.Data[ "UserAccount" ] = account;
       
        return true;
    }
}
and another could consume it using the same name:
public class MakeDepositWithTicketCommand : ICommand
{
    public decimal Amount { get; set; }    
    public bool Execute( ICommandContext context )
    {
        UserAccount account = ( UserAccount ) context.Data[ "UserAccount" ];
        
        // make the deposit into the account ..
        context.UserService.Deposit( account.Id, Amount );
       
        return true;
    }
}

I tried this for a little while, and it has some charm in its simplicity, but I'll be blunt: I hate it.  I think it's fine and simple for a hack job, but it will become unmaintainable very quickly:

  • the hashtable hides the inputs of the command - e.g., there is no way to look at a command object and determine what it needs in the way of input to do its job without deciphering code;
  • the sheer number of entries required during a command session could become quite large; even assuming we use best practices and have an enum of magic Data keys, the table becomes difficult to use;
  • along those lines, as the number of entries grows, the names start to lose their simplicity.  "UserAccount" is no longer sufficient, so you have the "NewlyCreatedUserAccount" item, the "CachedUserAccount" item, etc.  Or worse, the team gets lazy and we have "UserAccount", "UserAccount2", etc;
  • this actually creates a high (and, ironically, hidden) level of coupling between commands - e.g., the MakeDepositWithTicketCommand can only work against the "UserAccount" data item, which will limit the scope of commands with which it can operate to those that know to fill the "UserAccount" data item.  

I'm convinced that a general purpose variable stack or hashtable will make the framework too cumbersome to use.  I came up with an alternative that feels better, but still has some ugly parts.

Using Shared Property References

The easiest way to explain this is by example.  In this rewrite of the sample from the previous section, note how both command objects expose an Account property:

public class LoadUserAccountCommand : ICommand
{
    public UserAccount Account { get; set; }
    public string UserName { get; set; }
    public bool Execute( ICommandContext context )
    {
        // this call populates the user account object 
        context.UserService.GetUserAccount( UserName, Account );
       
        return true;
    }
}
public class MakeDepositWithTicketCommand : ICommand
{
    public UserAccount Account { get; set; }
    public decimal Amount { get; set; }
    
    public bool Execute( ICommandContext context )
    {
        // make the deposit into the account ..
        context.UserService.Deposit( Account.Id, Amount );
       
        return true;
    }
}

If both Account properties are set to the same object reference, the commands implicitly share the Account state:

//...
UserAccount account = new UserAccount();
CompositeCommand cmd = new CompositeCommand(
    new LoadUserAccountCommand { UserName = userName, Account = account },
    new MakeDepositWithTicketCommand { Amount = depositAmount, Account = account }
);
bool result = cmd.Execute( context );
// ... 

The LoadUserAccountCommand fills the account data into the object, and the MakeDepositWithTicketCommand uses the object to deposit money.  I like this a lot better than the other solution:

  • the needs of each command are expressed in its public members;
  • each command operates in isolation and there is no hidden coupling - e.g., there is no assumption made by the MakeDepositWithTicketCommand that will prevent it from working with other command objects;
  • it's simple;
  • it feels right;

After using this for a while, I've found a few drawbacks.  First, it adds some extra setup to every command batch.  Second, this mechanism obviously doesn't work for value types; you have to wrap the value in a reference type, which can feel a bit awkward.  I've also been told that the state-sharing mechanism isn't obvious, but I don't agree. 
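
For the value-type drawback, the wrapper can be as small as a single generic holder.  This is just a sketch - Ref<T>, QueryBalanceCommand, and the GetBalance service call are illustrations, not framework code:

// wrap a value type in a reference type so commands can share it,
// the same way they share the UserAccount reference above
public class Ref<T> where T : struct
{
    public T Value { get; set; }
}

public class QueryBalanceCommand : ICommand
{
    public string UserName { get; set; }

    // expected to be wired up by whoever builds the batch, like Account above
    public Ref<decimal> Balance { get; set; }

    public bool Execute( ICommandContext context )
    {
        // GetBalance is an assumed service method for this example
        Balance.Value = context.UserService.GetBalance( UserName );
        return true;
    }
}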

I'd appreciate some feedback on my choices here - is there another approach I haven't considered?



Automation Framework pt 3: Command Composites and Chains

§ February 4, 2009 12:44 by beefarino |

On Monday I gave a quick overview and demo of the automation framework.  I had prepared a handful of command objects to drive features of the system that were simple to implement, but are cumbersome manual operations:

  • creating new user accounts;
  • resetting user credentials;
  • validating user credentials;
  • depositing to and withdrawing from a user's account;
  • querying the user's current balance;

I showed off the commands in an interactive PowerShell session that manipulated a live production system.  The interactive demo culminated in me creating 100 new user accounts on a production system, giving each a $500 deposit, in a one-liner:

@( 0..99 ) | % { "p" + $_ } | new-player | new-deposit 500; 

Using our standard production tools, this would have taken hours to accomplish and left plenty of room for user error; the script ran in under 30 seconds and produced zero defects.

I then re-used the same commands in several FitNesse fixtures, explaining how we could drive a lot of our testing effort using table-driven examples.  The reception was very positive from pigs and chickens alike, which made me a very happy camper.

One of the design aspects of the framework that went over well was that each command object encapsulated an atomic unit of work on our system - simple, a dozen or so lines of effort with clear inputs and outputs and a single goal.  These units could be combined to create complex behavior very quickly by compositing and chaining command objects together.

An Example of Compositing

Let's say you want to encapsulate the task of resetting user credentials, and let's say that process is comprised of the following atomic steps:

  1. Locate the user account from the data store;
  2. Acquire an authorization ticket to manage the user;
  3. Reset the user credentials using the ticket;
  4. Verify the new user credentials.

Some of these steps are dependent on others, but each step represents a reusable unit of work performed by the system.  E.g., step 2 will need to be repeated every time I want to touch a user account, not just when changing user credentials.  It would be good to encapsulate these atomic units as individual command objects so I can reuse them:

public interface ICommand
{
    bool Execute();
}

public class LoadUserAccountCommand : ICommand
{
    public bool Execute()
    {
        // ...
    }
}

public class AcquireAuthorizationTicketCommand : ICommand
{
    public bool Execute()
    {
        // ...
    }
}

public class SetUserCredentialsCommand : ICommand
{
    public bool Execute()   
    {
        // ...
    }
}

public class VerifyUserCredentialsCommand : ICommand
{
    public bool Execute()   
    {
        // ...
    }
}

At the same time, I don't want to have to remember to execute four commands every time I need to perform a simple system task.  So I'll encapsulate that too, using a Composite pattern:

public class CompositeCommand : List< ICommand >, ICommand
{
    public CompositeCommand( params ICommand[] cmds ) : base( cmds )
    {
    }
   
    public bool Execute()
    {
        foreach( var cmd in this )
        {
            if( ! cmd.Execute() )
            {
                // halt on first failed command
                return false;
            }           
        }
        return true;
    }
}

public class ResetUserCredentialsCommand : CompositeCommand
{
    public ResetUserCredentialsCommand()
        : base(
            new LoadUserAccountCommand(),
            new AcquireAuthorizationTicketCommand(),
            new SetUserCredentialsCommand(),
            new VerifyUserCredentialsCommand()
        )
    {
    }
}

In essence, a composite command allows me to treat a sequential list of commands as if it were a single command.  The composite is a list of ICommand objects, and it supports the ICommand contract so it looks like any other command object to the framework.  The implementation of the Execute() method simply iterates over each command in the composite, executing each in turn until a command fails (returns false) or the end of the command collection is reached.

Sidebar: if you want to be a stickler, in a true composite pattern the iteration over the child ICommand objects would not be contingent on the result of the previous command's execution.  That makes this more of a strategy than a composite, methinks.  However, I'm not that kind of pattern dude anymore and I think the intent is enough to call it a composite.  If you're a member of the pattern police and want to bust my chops, please leave a comment.

An Example of Chaining

Take another example: I need an account for a specific user.  That user may or may not exist yet, I don't care - I just want to run my test fixture.  In this case, I have two steps:

  1. Load an existing user account;
  2. Create a new user account.

Unlike the composite, these actions are not meant to be run in sequence - step 2 should only execute if step 1 fails to find an existing user.  In these cases, I use a chain of commands to encapsulate the logic:

public class ChainOfCommand : List< ICommand >, ICommand
{
    public ChainOfCommand( params ICommand[] cmds ) : base( cmds )
    {
    }
    
    public bool Execute()
    {
        foreach( var cmd in this )
        {
            if( cmd.Execute() )
            {
                // halt on first successful command
                return true;
            }            
        }
        return false;
    }
}

public class LoadOrCreateUserAccountCommand : ChainOfCommand
{
    public LoadOrCreateUserAccountCommand()
        : base(
            new LoadUserAccountCommand(),
            new CreateUserAccountCommand()
        )
    {
    }
}

The code is almost identical to the CompositeCommand class.  The key difference is in the ChainOfCommand.Execute() method - where CompositeCommand executes each child command until one fails, ChainOfCommand executes each child command until one succeeds.  For example, when the LoadOrCreateUserAccountCommand is executed, no new user account is created if one can be loaded.

Again, sticklers will point out that this isn't true to a chain of responsibility pattern, and I'm ok with that.  It's just damn useful, whatever you call it.

Seeking Advice

My next post on this spike will be a request for advice, so please tune in ...



Automation Framework pt 2: End-to-End Example

§ January 27, 2009 00:03 by beefarino |

Having covered the vision and napkin design of an automation framework for our product's core services, it's time for a working end-to-end example.  My goal is to be able to drive one function of our core product: creating a user account.  In addition, I will drive it from both PowerShell and FitNesse to see how well the framework meets the needs from the initial vision.

Getting to Red

I broke ground with this test:

[Test] 
public void CreateUserAccountCommandExecution()
{
    ICommand command = new CreateUserAccountCommand { Name = "joe" }; 
    bool result = command.Execute(); 
    Assert.IsTrue( result ); 
} 

Simple enough - a textbook command pattern; note:

  • an ICommand interface defines the command contract;
  • at the moment, the only member of ICommand is an Execute() method.  It accepts no arguments and returns a boolean to indicate success or failure;
  • CreateUserAccountCommand is a concrete implementation of the ICommand contract;
  • CreateUserAccountCommand has a Name property that identifies the user name.

Getting to Green

First things first - I need the command contract:

public interface ICommand
{ 
    bool Execute(); 
} 

Then I can implement the concrete CreateUserAccountCommand type:

public class CreateUserAccountCommand : ICommand 
{ 
    public string Name { get; set; }  
    public bool Execute() 
    { 
        IUserService clientInterface = new RemoteUserService( "http://beefarino:8089" );  
        Credentials credentials = new Credentials( "user-manager", "password" ); 
        Ticket authTicket = clientInterface.Authenticate( credentials );  
        UserData userProperties = new UserData();  
        userProperties.FirstName = Name; 
        userProperties.LastName = "Smyth"; 
        userProperties.Nickname = Name; 
        userProperties.DateOfBirth = System.DateTime.Now - TimeSpan.FromDays( 365.0d * 22.0d );
         
        string userId = clientInterface.CreateUser( authTicket, userProperties );  
        Ticket userTicket; 
        clientInterface.CreateUserTicket( authTicket, userId,  out userTicket );  
        return null != userTicket;
    } 
} 

I'm not going to discuss this code except to explain that:

  • the logic in the Execute() method performs the minimum amount of activity necessary to create a user account;
  • I'm making assumptions about a lot of the data I need (e.g., the age of the user).  I'm trying to keep the command as simple and unconfigurable as possible, and there are many, many more UserData fields available for account configuration that I'm not using;
  • the command object does nothing outside of its intended scope: it creates a user account, that's it.

Use it from PowerShell

Now that I have the command working, I want to see it working in PowerShell.  I'm taking a minimalist approach starting out.  Once I implement a few more commands and plug them into PowerShell, I'll see what implementation patterns emerge and replace this approach with something cleaner.  But for now, this mess will do:

[System.Reflection.Assembly]::LoadFrom( 'automation.commands.dll' ); 
function new-useraccount() 
{ 
    param( [string] $name ); 
     
    $cmd = new-object automationcommands.createuseraccountcommand; 
    $cmd.Name = $name;
    $cmd.Execute(); 
}  
new-useraccount -name 'scott'; 

Hmmm ... runs silent, no output ... but looking at the system backend, I can see that it works.  

Use it from FitNesse

I downloaded the latest stable version of FitNesse from http://www.fitnesse.org/ and followed Cory Foy's short tutorial on using it against .NET assemblies (which is still accurate after 3+ years, #bonus) to get things running.  I created a new page and entered the following wikitext and table:

!contents -R2 -g -p -f -h 
!define COMMAND_PATTERN {%m %p} 
!define TEST_RUNNER {dotnet\FitServer.exe} 
!define PATH_SEPARATOR {;} 
!path dotnet\*.dll 
!path C:\dev\spikes\Automation\PokerRoom.Fixtures\bin\Debug\pokerroom.fixtures.dll  
A simple test of the CreateUser command: 
|!-PokerRoom.Fixtures.CreateUserAccount-!| 
|name|created?| 
|phil|true| 
|bob|true| 
|alice|true| 

I hacked up a quick fixture to support the table...

namespace PokerRoom.Fixtures 
{ 
    public class CreateUserAccount : fit.ColumnFixture 
    { 
        public string name { get; set; }  
        public bool created() 
        { 
            ICommand cmd = new CreateUserAccountCommand { Name = name };
    
            return cmd.Execute();
        } 
    } 
} 

... build it, and the FitNesse tests are green ...

After verifying that the users are actually created in the live system using our proprietary tools, I'm satisfied.

Moving Forward

So far so good.  It's very dirty, but it's working.  w00t * 2!

While developing this today I noted a few areas of concern:

  1. In the command object, there are several dependencies that obviously should be injected - namely, the IUserService instance and the authority credentials (a rough sketch follows this list);
  2. These dependencies are only really needed in the Execute() method;
  3. Looking ahead, I know I'm going to have many of these services, and it will be a pain to inject them all for each command instantiation;
  4. Compositing commands into complex behavior will eventually lead to the need to share state between commands.  I have an idea of how to manage this, but I'm concerned it will be cumbersome;
  5. There needs to be some kind of feedback when using the command from PowerShell; not sure where this should live or what it should look like at the moment...
  6. PowerShell will have a lot more to offer if I integrate with it more deeply.  I'll have to think about what this will look like, so as to minimize the amount of custom scripting necessary to run commands while accessing the full PowerShell feature set;
  7. I need to learn a lot more about FitNesse :).  I've already given the elevator speech to a coworker and demonstrated the fixture - he had a lot more questions than I had answers...

My next few posts will detail how I address these and other concerns.  Next post will detail some prefactoring to take care of items 1-4, maybe demonstrate command compositing.



Automation Framework pt 1: Napkin Design

§ January 20, 2009 02:03 by beefarino |

Automating the core components of our product won't be too difficult.  My biggest obstacle at this point is time: with another round of "org chart refactorings" at the office, I've had tech writing added to my list of responsibilities so my time is scarce.  I want to get a usable and extensible framework to the team as quickly as possible.

The team has done a decent job of piecing apart functional system components into a set of core services and clients.  Almost no logic exists on the clients, and they communicate to the services through a set of relatively firm interfaces, although the transports vary wildly.

At this point, my only area of automation interest is the core components, as they contain the core logic of the product and are most impacted by our recent stability and performance issues.  I want the framework to support the following usage scenarios:

  • scripted system QA testing;
  • acceptance testing of specific features and performance metrics;
  • providing support for realtime load-testing of a production system;

So it needs to be fairly agnostic with regard to input - scripting could be done via PowerShell to take care of a lot of the heavy lifting of defining complicated tests, acceptance testing could be driven by a framework like FitNesse, and load-testing could be run from a GUI. 

It'd be a real pain to try and hook up all of those core services to each of those input vectors.  Plus there may be other vectors I haven't considered (ooOOoo - like a DSL created with MGrammar).  An approach that I've found very appropriate to this situation has been to use the Command design pattern.

In a nutshell, the Command pattern aims to encapsulate a parameterized request as an object; e.g., a direct service method invocation:

...
service.CreateUser( userName, userType );
...


could be captured as a command object:

...
ICommand cmd = new CreateUserCommand( service, userName, userType );
cmd.Execute();
...


Command objects often support an Execute() semantic, but not always; sometimes Command objects are passed through an Executor object that will perform the action. 
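
For what it's worth, the executor variant might look something like the sketch below, using the same napkin-level CreateUser call from the snippets above.  None of these types exist yet - they're purely illustrative:

// the command is pure data; a separate executor performs the action
public class CreateUserCommand
{
    public string UserName { get; set; }
    public string UserType { get; set; }
}

public class CommandExecutor
{
    private readonly IUserService service;

    public CommandExecutor( IUserService service )
    {
        this.service = service;
    }

    public void Execute( CreateUserCommand cmd )
    {
        service.CreateUser( cmd.UserName, cmd.UserType );
    }
}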

If you're not experienced with this pattern, you may be wondering why you'd want to go through these hoops when you could just call the service directly.  Well, using command objects has a few significant benefits that are not readily apparent:

  1. Command objects provide encapsulation between (in my case) the service and the client; if the service contract changes, only the commands need to change.  If I hard-wired 500 FIT fixtures to the service and it changes in the next build, I'd be crying.
  2. Command objects offer a quick way to persist a set of parameterized operations.  In other words, you can de/serialize command objects, save them to a database or a message queue, etc.  This also makes them highly accessible to multiple input forms, like XML, scripts, and FIT fixtures.
  3. Once you have a few simple commands implemented, you can very quickly piece them together to create more complex behavior.  Again, using some form of object serialization makes this easy and, more important, dynamic - something that a hard-wire approach would not be able to do.
  4. It makes supporting transactions and undo semantics a lot easier.  E.g., a Command could support Execute(), Commit(), and Rollback() methods (see the sketch just after this list).
  5. The Command pattern works well with the Composite and Chain of Responsibility patterns, again simplifying the creation of complex commands from simple atomic ones.
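
As a quick illustration of item 4, such a contract might look like this - purely a sketch, not something the framework defines today:

// illustrative only: an ICommand variant with transaction semantics
public interface ITransactionalCommand : ICommand
{
    void Commit();    // make the work performed by Execute() permanent
    void Rollback();  // undo the work performed by Execute()
}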

In short, the Command pattern brings ninja skills to a knife fight.  Revisiting the flow described above:

Each input vector needs to focus only on creating a set of Command objects representing the actions to be taken, then passing them through a Command Executor that will execute the action against the core system services using the existing service interfaces.

Example forthcoming...