The Thing about Best Practices

§ February 12, 2009 03:34 by beefarino |

There's been a big stinky vine growing on the interwebs lately, rooted in some comments made by Joel and Jeff during a podcast and aggravated by a recent post on Coding Horror.  If you've been under a rock for the past week, a good summary can be found here.  Or go google "stackoverflow podcast rant" and bathe in it.

I'm pretty late to the game on this meme, but since my experience and attitude seem to differ a bit from what I'm reading elsewhere, I thought I'd share it.

I think I'm an above-average software engineer.  It's something I take seriously, and I spend a great deal of my time learning, experimenting, reading, and seeking out new knowledge in this area.  It's one of the things I would pursue even if I wasn't getting paid to do it.  That said, I routinely choose not to apply good engineering principles on software projects where doing so yields no benefit.  E.g., I don't do as much TDD or refactoring on spikes when I know the code is going to be thrown away.

I also think I'm a mediocre woodworker, at best.  I enjoy doing it, but I don't feel the same compulsion to learn as much as I can, to hone those skills, or to sacrifice the time and resources necessary to take them to the next level.  I'm content doing it as poorly as I do.  However, two years ago when I was working on the two-story tree house that would be holding my girls eight feet off the ground, you can bet your ass I learned all I could about proper tree house design and construction and applied every bit of that knowledge to my project.

What I'm trying to say is this:

  • you don't always have to apply the best practices, but you can't make that choice without knowing what they are;
  • you don't have to be very experienced to realize what you don't know and when you need to learn it.


Automation Framework pt 3: Command Composites and Chains

§ February 4, 2009 12:44 by beefarino |

On Monday I gave a quick overview and demo of the automation framework.  I had prepared a handful of command objects to drive features of the system that were simple to implement but are cumbersome to perform manually:

  • creating new user accounts;
  • resetting user credentials;
  • validating user credentials;
  • depositing to and withdrawing from a user's account;
  • querying the user's current balance;

I showed off the commands in an interactive PowerShell session that manipulated a live production system.  The interactive demo culminated in me creating 100 new user accounts on a production system, giving each a $500 deposit, in a one-liner:

@( 0..99 ) | % { "p" + $_ } | new-player | new-deposit 500; 

Using our standard production tools, this would have taken hours to accomplish and left plenty of room for user error; the script ran in under 30 seconds and produced zero defects.
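As an aside, here's a rough sketch of how one of those cmdlets can wrap a command object from the framework - the cmdlet class, its parameters, and the CreateUserAccountCommand constructor are illustrative, not the production code:

using System.Management.Automation;

[Cmdlet( VerbsCommon.New, "Player" )]
public class NewPlayerCmdlet : PSCmdlet
{
    [Parameter( Mandatory = true, ValueFromPipeline = true )]
    public string UserName { get; set; }

    protected override void ProcessRecord()
    {
        // wrap each pipeline input in a command object and execute it
        ICommand cmd = new CreateUserAccountCommand( UserName );
        if( cmd.Execute() )
        {
            // emit the user name so downstream cmdlets (like new-deposit) can consume it
            WriteObject( UserName );
        }
    }
}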

I then re-used the same commands in several FitNesse fixtures, explaining how we could drive a lot of our testing effort using table-driven examples.  The reception was very positive from pigs and chickens alike, which made me a very happy camper.
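Here's a minimal sketch of one of those fixtures - the fixture class and the DepositCommand constructor are illustrative, not the real code:

using fit;

public class DepositFixture : ColumnFixture
{
    // bound to table columns by FitNesse
    public string userName;
    public decimal amount;

    // bound to a "deposited?" result column
    public bool deposited()
    {
        ICommand cmd = new DepositCommand( userName, amount );
        return cmd.Execute();
    }
}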

One of the design aspects of the framework that went over well was that each command object encapsulated an atomic unit of work on our system - simple, a dozen or so lines of code with clear inputs and outputs and a single goal.  These units could be combined to create complex behavior very quickly by compositing and chaining command objects together.

An Example of Compositing

Let's say you want to encapsulate the task of resetting user credentials, and let's say that process consists of the following atomic steps:

  1. Locate the user account from the data store;
  2. Acquire an authorization ticket to manage the user;
  3. Reset the user credentials using the ticket;
  4. Verify the new user credentials.

Some of these steps are dependent on others, but each step represents a reusable unit of work performed by the system.  E.g., step 2 will need to be repeated every time I want to touch a user account, not just when changing user credentials.  It would be good to encapsulate these atomic units as individual command objects so I can reuse them:

public interface ICommand
{
    // returns true if the command's unit of work succeeded, false otherwise
    bool Execute();
}

public class LoadUserAccountCommand : ICommand
{
    public bool Execute()
    {
        // ...
    }
}

public class AcquireAuthorizationTicketCommand : ICommand
{
    public bool Execute()
    {
        // ...
    }
}

public class SetUserCredentialsCommand : ICommand
{
    public bool Execute()   
    {
        // ...
    }
}

public class VerifyUserCredentialsCommand : ICommand
{
    public bool Execute()   
    {
        // ...
    }
}

At the same time, I don't want to have to remember to execute four commands every time I need to perform a simple system task.  So I'll encapsulate that too, using a Composite pattern:

public class CompositeCommand : List< ICommand >, ICommand
{
    public CompositeCommand( params ICommand[] cmds ) : base( cmds )
    {
    }
   
    public bool Execute()
    {
        foreach( var cmd in this )
        {
            if( ! cmd.Execute() )
            {
                // halt on first failed command
                return false;
            }           
        }
        return true;
    }
}

public class ResetUserCredentialsCommand : CompositeCommand
{
    public ResetUserCredentialsCommand()
        : base(
            new LoadUserAccountCommand(),
            new AcquireAuthorizationTicketCommand(),
            new SetUserCredentialsCommand(),
            new VerifyUserCredentialsCommand()
        )
    {
    }
}

In essence, a composite command allows me to treat a sequential list of commands as if it were a single command.  The composite is a list of ICommand objects, and it supports the ICommand contract so it looks like any other command object to the framework.  The implementation of the Execute() method simply iterates over each command in the composite, executing each in turn until a command fails (returns false) or the end of the command collection is reached.
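For example, consuming code can run the whole reset without knowing there are four commands under the hood:

...
ICommand cmd = new ResetUserCredentialsCommand();
bool succeeded = cmd.Execute();
...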

Sidebar: if you want to get picky, in a true composite pattern the iteration over the child ICommand objects would not be contingent on the result of the previous command's execution.  That makes this more of a strategy than a composite, methinks.  However, I'm not that kind of pattern dude anymore and I think the intent is enough to call it a composite.  If you're a member of the pattern police and want to bust my chops, please leave a comment.

An Example of Chaining

Take another example: I need an account for a specific user.  That user may or may not exist yet; I don't care - I just want to run my test fixture.  In this case, I have two steps:

  1. Load an existing user account;
  2. Create a new user account.

Unlike the composite, these actions are not meant to be run in sequence - step 2 should execute only if step 1 fails to find an existing user.  In these cases, I use a chain of commands to encapsulate the logic:

public class ChainOfCommand : List< ICommand >, ICommand
{
    public ChainOfCommand( params ICommand[] cmds ) : base( cmds )
    {
    }
    
    public bool Execute()
    {
        foreach( var cmd in this )
        {
            if( cmd.Execute() )
            {
                // halt on first successful command
                return true;
            }            
        }
        return false;
    }
}

public class LoadOrCreateUserAccountCommand : ChainOfCommand
{
    public LoadOrCreateUserAccountCommand()
        : base(
            new LoadUserAccountCommand(),
            new CreateUserAccountCommand()
        )
    {
    }
}

The code is almost identical to the CompositeCommand class.  The key difference is in the ChainOfCommand.Execute() method - where CompositeCommand executes each child command until one fails, ChainOfCommand executes each child command until one succeeds.  For example, when the LoadOrCreateUserAccountCommand is executed, no new user account is created if one can be loaded.
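For example:

...
// creates the account only if an existing one cannot be loaded
ICommand cmd = new LoadOrCreateUserAccountCommand();
bool accountAvailable = cmd.Execute();
...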

Again, sticklers will point out that this isn't true to a chain of responsibility pattern, and I'm ok with that.  It's just damn useful, whatever you call it.

Seeking Advice

My next post on this spike will be a request for advice, so please tune in ...



Automation Framework pt 1: Napkin Design

§ January 20, 2009 02:03 by beefarino |

Automating the core components of our product won't be too difficult.  My biggest obstacle at this point is time: with another round of "org chart refactorings" at the office, I've had tech writing added to my list of responsibilities, so my time is scarce.  I want to get a usable and extensible framework to the team as quickly as possible.

The team has done a decent job of piecing apart functional system components into a set of core services and clients.  Almost no logic exists on the clients, and they communicate with the services through a set of relatively firm interfaces, although the transports vary wildly.

At this point, my only area of automation interest is the core components, as they contain the core logic of the product and are most impacted by our recent stability and performance issues.  I want the framework to support the following usage scenarios:

  • scripted system QA testing;
  • acceptance testing of specific features and performance metrics;
  • real-time load-testing of a production system;

So it needs to be fairly agnostic with regard to input - scripting could be done via PowerShell to take care of a lot of the heavy lifting of defining complicated tests, acceptance testing could be driven by a framework like FitNesse, and load-testing could run through a GUI.

It'd be a real pain to try to hook up all of those core services to each of those input vectors.  Plus there may be other vectors I haven't considered (ooOOoo - like a DSL created with MGrammar).  An approach I've found very appropriate to this situation is the Command design pattern.

In a nutshell, the Command pattern aims to encapsulate a parameterized request as an object; e.g., a direct service method invocation:

...
service.CreateUser( userName, userType );
...


could be captured as a command object:

...
ICommand cmd = new CreateUserCommand( service, userName, userType );
cmd.Execute();
...


Command objects often support an Execute() semantic, but not always; sometimes Command objects are passed through an Executor object that will perform the action. 
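As a quick sketch of that executor idea - the names here are illustrative, not part of the framework, and this version simply delegates to the command, though the executor could just as easily own all of the execution logic itself:

using System;

public interface ICommandExecutor
{
    bool Execute( ICommand command );
}

public class LoggingCommandExecutor : ICommandExecutor
{
    public bool Execute( ICommand command )
    {
        // the executor is a convenient seam for logging, timing, retries, etc.
        Console.WriteLine( "executing {0}", command.GetType().Name );
        return command.Execute();
    }
}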

If you're not experienced with this pattern, you may be wondering why you'd want to go through these hoops when you could just call the service directly.  Well, using command objects has a few significant benefits that are not readily apparent:

  1. Command objects provide encapsulation between (in my case) the service and the client; if the service contract changes, only the commands need to change.  If I hard-wired 500 FIT fixtures to the service and it changed in the next build, I'd be crying.
  2. Command objects offer a quick way to persist a set of parameterized operations.  In other words, you can de/serialize command objects, save them to a database or a message queue, etc.  This also makes them highly accessible to multiple input forms, like XML, scripts, and FIT fixtures.
  3. Once you have a few simple commands implemented, you can very quickly piece them together to create more complex behavior.  Again, using some form of object serialization makes this easy and, more importantly, dynamic - something a hard-wired approach could not do.
  4. It makes supporting transactions and undo semantics a lot easier.  E.g., a Command could support Execute(), Commit(), and Rollback() methods - see the sketch after this list.
  5. The Command pattern works well with the Composite and Chain of Responsibility patterns, again simplifying the creation of complex commands from simple atomic ones.
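
Here's a minimal sketch of what the Commit()/Rollback() contract from item 4 and a simple batch runner could look like - the names are illustrative, not part of the framework:

using System.Collections.Generic;

public interface ITransactionalCommand : ICommand
{
    void Commit();
    void Rollback();
}

public static class CommandTransaction
{
    // run a batch of commands; if any command fails, roll back the ones that ran
    public static bool Execute( IEnumerable< ITransactionalCommand > batch )
    {
        var executed = new List< ITransactionalCommand >();
        foreach( var cmd in batch )
        {
            if( ! cmd.Execute() )
            {
                executed.Reverse();
                executed.ForEach( c => c.Rollback() );
                return false;
            }
            executed.Add( cmd );
        }
        executed.ForEach( c => c.Commit() );
        return true;
    }
}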

In short, the Command pattern brings ninja skills to a knife fight.  Revisiting the flow chart above:

Each input vector needs to focus only on creating a set of Command objects representing the actions to be taken, then passing them through a Command Executor that will execute the action against the core system services using the existing service interfaces.

Example forthcoming...



Automation Framework pt 0: Vision

§ January 14, 2009 16:16 by beefarino |

After spending the last month reacting to some remarkable system failures at a very visible client, I've convinced the CTO to give me some elbow room to come up with a strawman of an automation framework for the core components of our system.  I described my initial goal as being able to drive the brains of our product without having the entire body attached, so we can start automating load- and performance-testing.  I didn't share my secondary goals - to be able to define automated regression tests and user acceptance scenarios that can be run against the system, which I think will do wonders for our feature planning and entomology.

At the moment, doing any kind of testing is a hassle.  Nothing can be automated to behave deterministically; everything is either manual or random behavior (which can be good for burn-in, but doesn't do much for testing scenarios), and doing things manually is too slow to cover much ground past "yep, it starts, ship it!"

The system has the complexity of an enterprise architecture, along with:

  • no standard messaging, communication layer, or service bus - instead we have raw sockets, Remoting, some of it stateless, some of it stateful, some of it persistent, some of it not;
  • numerous pieces of proprietary hardware that are expensive in both dollars and space;
  • deep assumptions about the physical environment, such as every client having a NIC card, to the point that most components won't work outside of the normal production environment;
  • system configuration that is splattered across devices, files, databases, and AD;
  • a codebase that is closed for extension.

So you see, our ability to mock client behavior and bench-bleed the system is pretty crippled.  I don't have time to address all of these things, but I want to knock as many of them out as I can.

I'll post my napkin design in a bit...