Automation Framework pt 5: Hiding Complexity in a Fluent Interface

§ March 2, 2009 11:12 by beefarino |

In my last Automation Framework post I asked for some advice on managing state between commands.  I didn't get any public feedback on the blog, but an old colleague of mine named Jon Lester tinkered with the blogged examples and sent me some of his code.  The nut of his idea was to use extension methods on an accumulator object to separate the domain logic from the testing apparatus.  I liked his approach; it wasn't something that would have popped into my head, and it put a spotlight on a simple and elegant solution to my state-sharing problem.

Before I dive into Jon's idea, take a look at how I'm currently sharing state between my command objects:

//...
UserAccount account = new UserAccount();
CompositeCommand cmd = new CompositeCommand(
    new LoadUserAccountCommand { UserName = userName, Account = account },
    new MakeDepositWithTicketCommand { Amount = depositAmount, Account = account }
);
bool result = cmd.Execute( context );
// ... 

This example represents a single task on the system under test - specifically making a deposit into a user account.  The task effort is distributed across the LoadUserAccountCommand and MakeDepositWithTicketCommand objects, which must share a common Account object in order to accomplish the ultimate goal.

As I described previously, I like this approach okay, especially compared to some of the alternatives I've tried.  It's simple enough to understand, but it still requires some explanation, which is an API FAIL.  And although you can make it work for value and immutable types, it takes an ugly hack.
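The post doesn't show that hack, but the shape of the problem is worth a sketch: value and immutable types are copied on assignment, so two commands can't share one the way they share a mutable Account reference.  One workaround - my guess at the kind of hack in question, and the Shared< T > name is mine, not the framework's - is to box the value in a mutable reference-type holder and hand the holder to both commands:

// a hypothetical mutable holder so two commands can share a value type;
// the producing command writes holder.Value, the consuming command reads it
public class Shared< T >
{
    public T Value { get; set; }
}

It works, but now every command property that would naturally be a decimal or a string has to be declared as Shared< decimal > or Shared< string >, which is exactly the kind of noise worth hiding.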

My colleague's solution was to isolate the shared state from the commands, and expose the units of work in a fluent interface wrapping that state.  I whittled down his approach into a lean and clean interface - here's an example of the end result:

// ...
Execute.Using( context )
    .ForPlayer( account )
    .Deposit( 500m );
// ... 

Same unit of work, but a lot less code and far easier to understand IMO.  

Implementation

The root of the fluency is implemented in the Execute class, which encapsulates a command context (which I describe here):

public class Execute 
{
    IContext context;
    private Execute( IContext ctx )
    {
        context = ctx;
    }
    
    public static Execute Using( IContext context )
    {
        Execute e = new Execute( context );
        return e;
    }
    public AccountCommands ForPlayer( Account account )
    {
        return new AccountCommands( account, Process );
    }
    
    public GameCommands ForGame( Game game )
    {
        return new GameCommands( game, Process );
    }
    
    // run each command against the shared context, halting on the first failure
    bool Process( ICommand[] commands )
    {
        foreach( var command in commands )
        {
            if( ! command.Execute( context ) )
            {
                return false;
            }
        }
        return true;
    }
}

As you can see, the only inroad to this class is the Using method, which returns a new instance of the Execute class initialized with the command context.  The various For*() methods are used to capture the shared state for a set of commands; each returns an object exposing a fluent interface of commands built around that state.  For example, here is some of the AccountCommands class:

public class AccountCommands
{
    Func< ICommand[], bool > processCommands;
    
    public AccountCommands( Account account, Func< ICommand[], bool > callback )
    {
        processCommands = callback;
        this.Account = account;
        
        // make sure the account exists: load it, or create it if the load fails
        processCommands(
            new ICommand[] {
                Chain(
                    new LoadUserAccountCommand { Account = this.Account },
                    new CreateUserAccountCommand { Account = this.Account }
                )
            }
        );
    }
    
    public Account Account { get; set; }
    
    public AccountCommands Deposit( decimal amount )
    {
        processCommands(
            new ICommand[] {
                new MakeDepositWithTicketCommand {
                    Amount = amount,
                    Account = this.Account
                }
            }
        );
        
        return this; 
    }
    
    public AccountCommands Withdraw( decimal amount )
    {
        processCommands(
            new ICommand[] {
                new MakeWithdrawalWithTicketCommand {
                    Amount = amount,
                    Account = this.Account
                }
            }
        );
        
        return this;
    }
    
    public AccountCommands SetProperties( Hashtable properties )
    {
        processCommands(
            new ICommand[] {
                Compose(
                    new LoadAccountPropertiesCommand {
                        Account = this.Account
                    },
                    new SetAccountPropertiesCommand {
                        Properties = properties,
                        Account = this.Account
                    }
                )
            }
        );
        
        return this;
    }    
    
    // etc ...
    ICommand Chain( params ICommand[] commands)
    {
        return new ChainOfResponsibilityCommand(
            commands
        );
    }
    ICommand Compose( params ICommand[] commands )
    {
        return new CompositeCommand(
            commands
        );
    }
}


Items of note:

  • The constructor accepts two arguments: an Account object that represents the state shared by every member of the class, and a Func<> delegate that accepts an array of command objects and returns a boolean;
  • Each public method of the class represents a single task one can perform against an account;
  • Every public method simply composes one or more Command objects, which are passed to the processCommands callback for actual processing - see the usage sketch below.
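To see the payoff, here's the fluent surface exercising several of those tasks in one chained statement (a usage sketch; the context and account variables and the property values mirror the examples above):

// one statement per account: the shared Account state and the underlying
// command composition are hidden behind AccountCommands
Execute.Using( context )
    .ForPlayer( account )
    .Deposit( 500m )
    .Withdraw( 250m )
    .SetProperties( new Hashtable {
        { "Address", "12 Main St." },
        { "Zip", "75023" }
    } );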

Huge Bennies

It still needs some work, but there are many things I like about this approach.  The thing I like most is that the fluent interface hides the complexities of composing command objects with shared state to perform system tasks.  I get all the benefits of the command pattern with minimal hassle.

With a bit of refactoring, I can easily reuse the *Commands objects from this fluent interface to do things besides execute the task.  E.g., perhaps I want to build up a system scenario and persist it, something like this:

var commands = new List< ICommand >();
BuildUp.Into( commands )
    .ForEachAccount
    .Deposit( 500m )
    .SetImage( PlayerImages.Face, testBitmap )
    .SetProperties(
        new {
            Address = "12 Main St.",
            City = "Anywheretownvilleton",
            State = "Texhomasippi",
            Zip = "75023"
        }
    );
// now the commands variable contains the defined 
// command structure and can be re-used against new
// players in the future, persisted to disk, etc.

Another big benefit of this approach is that it gives me a stable binding point for PowerShell - and by that I mean that I can deeply integrate this automation framework with PowerShell, leveraging all of the built-in freebies with virtually no effort.  But that is another post...

 



Automation Framework pt 3: Command Composites and Chains

§ February 4, 2009 12:44 by beefarino |

On Monday I gave a quick overview and demo of the automation framework.  I had prepared a handful of command objects to drive features of the system that are simple to implement but cumbersome to perform manually:

  • creating new user accounts;
  • resetting user credentials;
  • validating user credentials;
  • depositing to and withdrawing from a user's account;
  • querying the user's current balance.

I showed off the commands in an interactive PowerShell session that manipulated a live production system.  The interactive demo culminated in me creating 100 new user accounts on a production system, giving each a $500 deposit, in a one-liner:

@( 0..99 ) | % { "p" + $_ } | new-player | new-deposit 500; 

Using our standard production tools, this would have taken hours to accomplish and left plenty of room for user error; the script ran in under 30 seconds and produced zero defects.

I then re-used the same commands in several FitNesse fixtures, explaining how we could drive a lot of our testing effort using table-driven examples.  The reception was very positive from pigs and chickens alike, which made me a very happy camper.

One of the design aspects of the framework that went over well was that each command object encapsulated an atomic unit of work on our system - simple, a dozen or so lines of code with clear inputs and outputs and a single goal.  These units could be combined to create complex behavior very quickly by compositing and chaining command objects together.

An Example of Compositing

Let's say you want to encapsulate the task of resetting user credentials, and let's say that process comprises the following atomic steps:

  1. Locate the user account from the data store;
  2. Acquire an authorization ticket to manage the user;
  3. Reset the user credentials using the ticket;
  4. Verify the new user credentials.

Some of these steps are dependent on others, but each step represents a reusable unit of work performed by the system.  E.g., step 2 will need to be repeated every time I want to touch a user account, not just when changing user credentials.  It would be good to encapsulate these atomic units as individual command objects so I can reuse them:

public interface ICommand
{
    bool Execute();
}

public class LoadUserAccountCommand : ICommand
{
    public bool Execute()
    {
        // ...
    }
}

public class AcquireAuthorizationTicketCommand : ICommand
{
    public bool Execute()
    {
        // ...
    }
}

public class SetUserCredentialsCommand : ICommand
{
    public bool Execute()   
    {
        // ...
    }
}

public class VerifyUserCredentialsCommand : ICommand
{
    public bool Execute()   
    {
        // ...
    }
}

At the same time, I don't want to have to remember to execute four commands every time I need to perform a simple system task.  So I'll encapsulate that too, using a Composite pattern:

public class CompositeCommand : List< ICommand >, ICommand
{
    public CompositeCommand( params ICommand[] cmds ) : base( cmds )
    {
    }
   
    public bool Execute()
    {
        foreach( var cmd in this )
        {
            if( ! cmd.Execute() )
            {
                // halt on first failed command
                return false;
            }           
        }
        return true;
    }
}

public class ResetUserCredentialsCommand : CompositeCommand
{
    public ResetUserCredentialsCommand()
        : base(
            new LoadUserAccountCommand(),
            new AcquireAuthorizationTicketCommand(),
            new SetUserCredentialsCommand(),
            new VerifyUserCredentialsCommand()
        )
    {
    }
}

In essence, a composite command allows me to treat a sequential list of commands as if it were a single command.  The composite is a list of ICommand objects, and it supports the ICommand contract so it looks like any other command object to the framework.  The implementation of the Execute() method simply iterates over each command in the composite, executing each in turn until a command fails (returns false) or the end of the command collection is reached.
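Because the composite is itself an ICommand, it can be handed to the framework anywhere a single command is expected - a quick usage sketch:

// the derived composite is used exactly like an atomic command
ICommand reset = new ResetUserCredentialsCommand();
bool succeeded = reset.Execute();   // false as soon as any step fails

// composites can also be assembled ad hoc, without a derived class
ICommand verify = new CompositeCommand(
    new LoadUserAccountCommand(),
    new VerifyUserCredentialsCommand()
);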

Sidebar: if you want to get sticky, in a true composite pattern the iteration over the child ICommand objects would not be contingent on the result of the previous command's execution.  That makes this more of a strategy than a composite, methinks.  However, I'm not that kind of pattern dude anymore, and I think the intent is enough to call it a composite.  If you're a member of the pattern police and want to bust my chops, please leave a comment.

An Example of Chaining

Take another example: I need an account for a specific user.  That user may or may not exist yet; I don't care - I just want to run my test fixture.  In this case, I have two steps:

  1. Load an existing user account;
  2. Create a new user account.

Unlike the composite, these actions are not meant to be run in sequence - step 2 should only execute if step 1 fails to find an existing user.  In these cases, I use a chain of commands to encapsulate the logic:

public class ChainOfCommand : List< ICommand >, ICommand
{
    public ChainOfCommand( params ICommand[] cmds ) : base( cmds )
    {
    }
    
    public bool Execute()
    {
        foreach( var cmd in this )
        {
            if( cmd.Execute() )
            {
                // halt on first successful command
                return true;
            }            
        }
        return false;
    }
}

public class LoadOrCreateUserAccountCommand : ChainOfCommand
{
    public LoadOrCreateUserAccountCommand()
        : base(
            new LoadUserAccountCommand(),
            new CreateUserAccountCommand()
        )
    {
    }
}

The code is almost identical to the CompositeCommand class.  The key difference is in the ChainOfCommand.Execute() method - where CompositeCommand executes each child command until one fails, ChainOfCommand executes each child command until one succeeds.  For example, when the LoadOrCreateUserAccountCommand is executed, no new user account is created if one can be loaded.
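And because chains and composites both implement ICommand, they nest freely; for example, a composite can use the load-or-create chain as its first step (a sketch built from the classes above):

// ensure the account exists (load it, or create it if the load fails),
// then set and verify its credentials - a chain nested inside a composite
ICommand prepareAccount = new CompositeCommand(
    new LoadOrCreateUserAccountCommand(),
    new SetUserCredentialsCommand(),
    new VerifyUserCredentialsCommand()
);
bool ok = prepareAccount.Execute();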

Again, sticklers will point out that this isn't true to a chain of responsibility pattern, and I'm ok with that.  It's just damn useful, whatever you call it.

Seeking Advice

My next post on this spike will be a request for advice, so please tune in ...



Confessions of a Design Pattern Junkie

§ April 25, 2008 16:28 by beefarino |

Me: "Hello, my name is Jim, and I'm a pattern junkie."

Group: "Hello. Jim."

Yes, I humbly admit it.  I read this book, that book, and this other one too, and now I'm pattern-tastically pattern-smitten with a pattern-obsession.  I'm that guy on the team - the one who starts the design with strategies and factories and commands and decorators before a lick of code is compiled.  The one who creates decorator and composite base classes for every interface because "I'll prolly need 'em."  The one who, at the end of the project, has produced Faulknerian code for lolcat functionality.  

But I confess: I am not the least bit ashamed.  I acknowledge my approach has been overbearing and self-indulgent. I know I need to change to be a better engineer.  Spending time as Scrum Master has shown me what pattern hysteria looks like from the outside.  It's WTFBBQ smothered in bloat sauce.

But the experience of being a pattern junkie has been irreplaceable.  Patterns are valuable to know, for reasons I'll expound on in a bit, and taking the time to (over-)apply them to real projects has been the best way for me to learn how they work and interact.  My biggest problem is that I want to apply them as much as possible at the design stage of a project; coming to terms with the fact that that's a bad idea has given me the chance to learn something and improve myself.

So, in the words of the good witch: "What have you learned Dorothy?"

First, let's talk about how misusing patterns has inhibited me.

Bad: Using a pattern leads me to using another. 

Using a strategy pattern precipitates the use of decorators and adapters on the strategy.  Using commands leads to the use of composites, iterators, and chain of responsibility.  The complexity of managing the patterns and dependency injection leads to the use of facades, factories, builders, and singletons.  Things become extraordinarily convoluted very quickly.  When I design against patterns a priori - when they don't serve an existing need - the code I have to write explodes, and once it's written, maintaining it becomes a real chore.

Bad: Thinking in patterns makes me lose focus of the problem.

Using patterns makes me itch to break down problems into very atomic units, which is generally good, but I take it to the point of comedy.  Consider this example, which is an actual approach I used because I thought it was a good idea at the time.  I was working on an object that operates on an XML document.  To supply the XML document to my object, I chose to define the IXMLDocumentProvider interface as an abstraction for loading the XML.  Why?  Because I was thinking about patterns and not the problem I was trying to solve.  My logic was roughly this: if I use a strategy to manage the load behavior, the XML source could be a file at runtime and an in-memory document in my unit tests, and I could use a decorator on the strategy to validate an XMLDSIG on the document in production if I needed to.  In the end, all the program needed was the XML, which could easily have been supplied in a constructor or parameter.  There is exactly one implementation of IXMLDocumentProvider in the project, and all it does is hand out the XML document supplied to its constructor.  I filled a non-existent need because I was focusing on the pattern and not the problem.
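For concreteness, the shape of that over-engineering looks something like this (a reconstruction of the idea described above, not the original code; the implementation class name is mine):

using System.Xml;

// the abstraction I thought I needed...
public interface IXMLDocumentProvider
{
    XmlDocument GetDocument();
}

// ...and the only implementation it ever had: hand back the document
// supplied to the constructor. A plain XmlDocument parameter would have
// done the same job.
public class InMemoryXmlDocumentProvider : IXMLDocumentProvider
{
    readonly XmlDocument document;

    public InMemoryXmlDocumentProvider( XmlDocument document )
    {
        this.document = document;
    }

    public XmlDocument GetDocument()
    {
        return document;
    }
}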

It isn't all bad; let's look at how using patterns has helped me.

Good: Using patterns yields testable code.

Using patterns extensively has helped me write highly testable code.  Patterns and dependency injection go together like peanut butter and chocolate.  Having patterns peppered throughout the design, my code is highly decoupled.  Unit testing is a breeze in such a scenario, and unit tests are good.

Good: Using patterns makes complex code understandable.

Patterns isolate concerns.  This makes large codebases more digestible, and it tends to break complex relationships into lots of smaller objects.  I know many people would disagree with me here, but I find it easier to work with 50 small class definitions that a) follow well-understood patterns and b) adhere to the single responsibility principle than 5 classes that have been incrementally expanded to 20,000+ lines of code containing a succotash of concerns.  A coherent class diagram will tell me more about a system than a list of 200+ method names.

Good: Using patterns makes complex systems extensible.

Again, patterns isolate concerns, which makes extending a system very simple once you are familiar with the system design.  For example, adding a decorator is easier, in my opinion, than altering the behavior of an existing class.  Folding new features into a well-designed strategy, command, or visitor pattern is cake.  Patterns help you grow a system by extending it rather than altering it, which is a good idea.
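To make that concrete with the ICommand interface from the automation framework posts above: adding behavior to an existing command is a wrapper class away (a sketch; the logging decorator is my example, not code from those posts):

// adds tracing around any existing command without touching its class
public class LoggingCommandDecorator : ICommand
{
    readonly ICommand inner;

    public LoggingCommandDecorator( ICommand inner )
    {
        this.inner = inner;
    }

    public bool Execute()
    {
        Console.WriteLine( "executing " + inner.GetType().Name );
        bool result = inner.Execute();
        Console.WriteLine( inner.GetType().Name + ( result ? " succeeded" : " failed" ) );
        return result;
    }
}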

My two-step program to better pattern application

I've learned from my mistakes.  I've come to the conclusion that patterns are a tool best applied to existing working and testable code.  My personal commitment is to stop using patterns at the design phase, but continue employing patterns when they make sense.  How will I do this?

My two steps are simple - when I work on a software feature, I promise to do the following:

  1. Design and code the feature to a working state as quickly and simply as possible.  At this phase I promise not to employ patterns a priori, although I may employ Dependency Injection to make testing easier.
  2. Refactor my code to separate concerns, remove duplication, and improve readability.  At this phase, I will employ patterns NOT wherever possible, but only as necessary to meet my goal.  That means I'll pull them in when I need to separate concerns, when I need to untangle spaghetti code, when I need to make the code understandable.  

I'll let you know how the rehab goes.  Until then, there's no place like code ... there's no place like code .... there's no place like code .....