Automation Framework pt 5: Hiding Complexity in a Fluent Interface

§ March 2, 2009 11:12 by beefarino |

In my last Automation Framework post I asked for some advice on managing state between commands.  I didn't get any public feedback on the blog, but an old colleague of mine named Jon Lester tinkered with the blogged examples and sent me some of his code.  The nut of his idea was to use extension methods on an accumulator object to separate the domain logic from the testing apparatus.  I liked his approach; it wasn't something that would have popped into my head, and it put a spotlight on a simple and elegant solution to my state-sharing problem.

Before I dive into Jon's idea, take a look at how I'm currently sharing state between my command objects:

//...
UserAccount account = new UserAccount();
CompositeCommand cmd = new CompositeCommand(
    new LoadUserAccountCommand { UserName = userName, Account = account },
    new MakeDepositWithTicketCommand { Amount = depositAmount, Account = account }
);
bool result = cmd.Execute( context );
// ... 

This example represents a single task on the system under test - specifically making a deposit into a user account.  The task effort is distributed across the LoadUserAccountCommand and MakeDepositWithTicketCommand objects, which must share a common Account object in order to accomplish the ultimate goal.

As I described previously, I like this approach okay, especially compared to some of the alternatives I've tried.  It's simple enough to understand, but it still requires some explanation, which is an API FAIL.  And although you can make it work for value and immutable types, it takes an ugly hack.
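To illustrate the value-type problem (this is my sketch of the issue, not the actual hack from that post): the commands above cooperate by mutating the same Account reference, so a bare decimal or an immutable type can't be shared the same way.  You end up boxing the value in a mutable holder just so one command can see another's writes.  The Shared<T> class and the two commands below are hypothetical, purely for illustration:

// hypothetical holder type, just to show the shape of the workaround
public class Shared< T >
{
    public T Value { get; set; }
}

// ...
// both hypothetical commands receive the same holder; the first writes
// balance.Value, the second reads it
Shared< decimal > balance = new Shared< decimal >();
CompositeCommand cmd = new CompositeCommand(
    new LoadBalanceCommand { Balance = balance },
    new VerifyBalanceCommand { Balance = balance, Expected = 500m }
);
bool result = cmd.Execute( context );
// ...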

My colleague's solution was to isolate the shared state from the commands and expose the units of work in a fluent interface wrapping that state.  I whittled his approach down into a lean and clean interface - here's an example of the end result:

// ...
Execute.Using( context )
    .ForPlayer( account )
    .Deposit( 500m );
// ... 

Same unit of work, but a lot less code and far easier to understand IMO.  

Implementation

The root of the fluency is implemented in the Execute class, which encapsulates a command context (which I describe here):

public class Execute 
{
    IContext context;
    private Execute( IContext ctx )
    {
        context = ctx;
    }
    
    public static Execute Using( IContext context )
    {
        Execute e = new Execute( context );
        return e;
    }
    public AccountCommands ForPlayer( Account account )
    {
        return new AccountCommands( account, Process );
    }
    
    public GameCommands ForGame( Game game )
    {
        return new GameCommands( game, Process );
    }
    
    bool Process( ICommand[] commands )
    {
        foreach( var command in commands )
        {
            if( ! command.Execute( context ) )
            {
                return false;
            }
        }
        return true;
    }
}

As you can see, the only inroad to this class is the Using method, which returns a new instance of the Execute class initialized with the command context.  The various For*() methods are used to capture the shared state for a set of commands.  They each return an object supporting a redundant, fluent interface of commands around that state.  For example, here is some of the AccountCommands class:

public class AccountCommands
{
    Func< ICommand[], bool > processCommands;
    
    public AccountCommands( Account account, Func< ICommand[], bool > callback )
    {
        processCommands = callback;
        this.Account = account;
        
        processCommands( new[] {
            Chain(
                new LoadUserAccountCommand { Account = Account },
                new CreateUserAccountCommand { Account = Account }
            )
        } );
    }
    
    public Account Account { get; set; }
    
    public AccountCommands Deposit( decimal amount )
    {
        processCommands( new ICommand[] {
            new MakeDepositWithTicketCommand {
                Amount = amount,
                Account = Account
            }
        } );
        
        return this; 
    }
    
    public AccountCommands Withdraw( decimal amount )
    {
        processCommands( new ICommand[] {
            new MakeWithdrawalWithTicketCommand {
                Amount = amount,
                Account = Account
            }
        } );
        
        return this;
    }
    
    public AccountCommands SetProperties( Hashtable properties )
    {
        processCommands( new[] {
            Compose(
                new LoadAccountPropertiesCommand {
                    Account = Account
                },
                new SetAccountPropertiesCommand {
                    Properties = properties,
                    Account = Account
                }
            )
        } );
        
        return this;
    }    
    
    // etc ...
    ICommand Chain( params ICommand[] commands)
    {
        return new ChainOfResponsibilityCommand(
            commands
        );
    }
    ICommand Compose( params ICommand[] commands )
    {
        return new CompositeCommand(
            commands
        );
    }
}


Items of note:

  • The constructor accepts two arguments: an Account object that represents the state shared by every member of the class, and a Func<> delegate that accepts an array of command objects and returns a boolean;
  • Each public method of the class represents a single task one can perform against an account;
  • Every public method simply composes one or more command objects, which are passed to the processCommands callback for actual processing (a sketch of the command plumbing these classes assume follows this list).
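For reference, here is roughly the command plumbing the classes above assume.  The real ICommand, CompositeCommand, and ChainOfResponsibilityCommand come from earlier posts in this series; the shapes below are my reconstruction, inferred from how they're used here:

public interface ICommand
{
    // returns false when the command fails
    bool Execute( IContext context );
}

// every child command must succeed, in order
public class CompositeCommand : ICommand
{
    readonly ICommand[] commands;
    
    public CompositeCommand( params ICommand[] commands )
    {
        this.commands = commands;
    }
    
    public bool Execute( IContext context )
    {
        foreach( var command in commands )
        {
            if( ! command.Execute( context ) )
            {
                return false;
            }
        }
        return true;
    }
}

// child commands are tried in order until one succeeds
// (e.g., load the account, or create it if the load fails)
public class ChainOfResponsibilityCommand : ICommand
{
    readonly ICommand[] commands;
    
    public ChainOfResponsibilityCommand( params ICommand[] commands )
    {
        this.commands = commands;
    }
    
    public bool Execute( IContext context )
    {
        foreach( var command in commands )
        {
            if( command.Execute( context ) )
            {
                return true;
            }
        }
        return false;
    }
}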

Huge Bennies

It still needs some work, but there are many things I like about this approach.  The thing I like most is that the fluent interface hides the complexities of composing command objects with shared state to perform system tasks.  I get all the benefits of the command pattern with minimal hassle.

With a bit of refactoring, I can easily reuse the *Commands objects from this fluent interface to do things besides execute the task.  E.g., perhaps I want to build up a system scenario and persist it, something like this:

var commands = new List< ICommand >();
BuildUp.Into( commands )
    .ForEachAccount
    .Deposit( 500m )
    .SetImage( PlayerImages.Face, testBitmap )
    .SetProperties(
        new {
            Address = "12 Main St.",
            City = "Anywheretownvilleton",
            State = "Texhomasippi",
            Zip = "75023"
        }
    );
// now the commands variable contains the defined 
// command structure and can be re-used against new
// players in the future, persisted to disk, etc.
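
Here's a minimal sketch of how that BuildUp entry point could work, assuming the AccountCommands class above stays as-is (BuildUp, Into, and Record are hypothetical names taken from the example, not working code).  The only difference from Execute is the callback: instead of running the composed commands against a context, it records them into the list:

public class BuildUp
{
    IList< ICommand > commands;
    
    private BuildUp( IList< ICommand > commands )
    {
        this.commands = commands;
    }
    
    public static BuildUp Into( IList< ICommand > commands )
    {
        return new BuildUp( commands );
    }
    
    public AccountCommands ForPlayer( Account account )
    {
        // AccountCommands is reused unchanged; only the callback differs
        return new AccountCommands( account, Record );
    }
    
    bool Record( ICommand[] cmds )
    {
        foreach( var command in cmds )
        {
            commands.Add( command );
        }
        
        // nothing is executed, so nothing can fail yet
        return true;
    }
}

The ForEachAccount and SetImage calls in the example above would need their own implementations; ForPlayer is shown here only because it already exists in the fluent interface.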

Another big benefit of this approach is that it gives me a stable binding point for PowerShell - and by that I mean that I can deeply integrate this automation framework with PowerShell, leveraging all of the built-in freebies with virtually no effort.  But that is another post...

 



Why I Hate IServiceProvider

§ February 20, 2009 09:18 by beefarino |

I've worked with a lot of code that uses IServiceProvider as a way to disconnect an object from its dependencies.  I've come to loathe this interface for many reasons and have opted for a systematic pattern of dependency injection.

First reason IServiceProvider sucks: it hides the dependencies of an object while decoupling from them.  What do I mean by that?  Pretend you're using this blackbox component from your code:

public class BlackBoxComponent
{
    public BlackBoxComponent( IServiceProvider services );
    public void DoAllTheWork();
} 

Can you tell what services are going to be requested from the service provider?  Me neither.  Now you need another way to discover what dependencies need to be available to the BlackBoxComponent - documentation, source code, something out-of-band that takes you away from your work at hand.

Compare that with some simple constructor injection:

public class BlackBoxComponent
{
    public BlackBoxComponent( IRepository< Thing > thingRepository, ILogManager logManager );
    public void DoAllTheWork();
}

With this, you know exactly what a BlackBoxComponent needs to do its job just from looking at the constructor.

Second reason IServiceProvider sucks: it adds a lot of code.  Fetching the services is cumbersome at best:

    // ...    
    public BlackBoxComponent( IServiceProvider services )
    {
        thingRepository = ( IRepository< Thing > ) services.GetService( typeof( IRepository< Thing > ) );
        logManager  = ( ILogManager ) services.GetService( typeof( ILogManager ) );
    }
    // ...

Sure you can use some syntactic sugar to work around the typeof'ing and naked casting:

public static class ServiceProviderExtension
{
    public static T GetService< T >( this IServiceProvider serviceProvider )
    {
        return ( T ) serviceProvider.GetService( typeof( T ) );
    }
}

which cleans up the code a bit:

// ...    
public BlackBoxComponent( IServiceProvider services )
{
    thingRepository = services.GetService< IRepository< Thing > >();
    logManager  = services.GetService< ILogManager >();
}
// ...

but you're still stuck having to reach out and grab every dependency you need from the service container - which implies that somewhere, some other piece of code is responsible for filling up that service container:

//...
ServiceContainer services = new ServiceContainer();
services.AddService( 
    typeof( ILogManager ),
    new Log4NetLogManager()
);
services.AddService( 
    typeof( IRepository< Thing > ),
    new ThingRepository()
);
//...

More code to write, all of it unnecessary and obsolete given the state of the art in dependency injection frameworks. 
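For comparison, here's roughly what that wiring collapses to when a container handles the injection.  This sketch uses Unity only because it's a container I happen to know; any mainstream DI framework gives you the same shape:

using Microsoft.Practices.Unity;

// ...
IUnityContainer container = new UnityContainer();
container.RegisterType< ILogManager, Log4NetLogManager >();
container.RegisterType< IRepository< Thing >, ThingRepository >();

// the container inspects BlackBoxComponent's constructor and supplies
// both dependencies; no GetService calls, no manual wiring
BlackBoxComponent component = container.Resolve< BlackBoxComponent >();
component.DoAllTheWork();
// ...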



Lack of Consistency in PowerShell

§ February 16, 2009 02:40 by beefarino |

Y'all know I love powershell.

But I'm getting pretty tired of the lack of consistency in the product.  I'm not speaking of quality here - just about how to get things done.  Case in point: at the moment I'm trying to figure out the new modules feature, which so far hasn't been difficult.  The most annoying thing is that I keep trying to get the list of available modules by typing this:

dir module:

which works for other powershell internals like variables:

dir variable:

and functions:

dir function:

but not for modules.  Why doesn't it work?  Well, those little drive-letter-style monikers need something called a provider to enable them.  There's one built into powershell for variables and functions, but not for modules.

Not a big deal really, but one of the original selling points of powershell was its consistency - files, registry, certificates, etc., all look like a little file system when you work with them.  So the act of adding, removing, moving, and renaming these things always looks the same.  Why should I build up this expectation when its availability is spotty?  And I'm not sure why a provider isn't managing this - modules are stored on the filesystem anyway, in a few specific places, and outside of using them you have all the basic provider operations: create, delete, rename, etc.  Having a provider around them should be a no-brainer.  The fact that one doesn't exist tells me either that it's too much effort (which, having written a few providers, I can say is probably the case) or that it goes against the grain of the powershell "philosophy of use".

Oh well, it's still CTP3, maybe they'll have it in the RTM, right?  Or maybe I just don't "get" when something should have a provider and when it shouldn't.  Am I missing the point, or is this a case of powershell not eating its own dogfood?

Update

About two seconds after posting this I saw this line at the top of the Modules module.psm1 file:

# Create a drive for My Modules
New-PsDrive -Scope Global -Name MyMod -PSProvider FileSystem -Root (($env:PSMODULEPATH -split ";")[0]) 

which meets my general spelunking needs.

*sigh*

foot | mouth;


The Thing about Best Practices

§ February 12, 2009 03:34 by beefarino |

There's been a big stinky vine growing on the interwebs lately, rooted in some comments made by Joel and Jeff during a podcast and aggravated by a recent post on Coding Horror. If you've been under a rock for the past week, a good summary can be found here.  Or go google stackoverflow podcast rant and bathe in it. 

I'm pretty late to the game on this meme, but since my experience and attitude seems to differ a bit from what I'm reading elsewhere I thought I'd share it.

I think I'm an above-average software engineer.  It's something I take seriously, and I spend a great deal of my time learning, experimenting, reading, and seeking out new knowledge in this area.  It's one of the things I would pursue even if I wasn't getting paid to do it.  That said, I routinely choose not to apply good engineering principles on software projects where they yield no benefit.  E.g., I don't do as much TDD or refactoring on spikes when I know the code is going to be thrown away.

I also think I'm a mediocre woodworker, at best.  I enjoy doing it, but I don't feel the same compulsion to learn as much as I can, to hone those skills, sacrifice the time and resources necessary to take those skills to the next level.  I'm content doing it as poorly as I do.  However, two years ago when I was working on the two-story tree house that would be holding my girls eight feet off the ground, you can bet your ass I learned all I could about proper tree house design and construction and applied every bit of that knowledge to my project.

What I'm trying to say is this:

  • you don't always have to apply the best practices, but you can't make that choice without knowing what they are;
  • you don't have to be very experienced to realize what you don't know and when you need to learn it.