Automation Framework pt 2: End-to-End Example

§ January 27, 2009 00:03 by beefarino |

Having covered the vision and napkin design of an automation framework for our product's core services, it's time for a working end-to-end example.  My goal is to be able to drive one function of our core product: creating a user account.  In addition, I will drive it from both PowerShell and FitNesse to see how well the framework meets the needs laid out in the initial vision.

Getting to Red

I broke ground with this test:

[Test] 
public void CreateUserAccountCommandExecution()
{
    ICommand command = new CreateUserAccountCommand { Name = "joe" }; 
    bool result = command.Execute(); 
    Assert.IsTrue( result ); 
} 

Simple enough - a textbook command pattern; note:

  • an ICommand interface defines the command contract;
  • at the moment, the only member of ICommand is an Execute() method.  It accepts no arguments and returns a boolean to indicate success or failure;
  • CreateUserAccountCommand is a concrete implementation of the ICommand contract;
  • CreateUserAccountCommand has a Name property that identifies the user name.

Getting to Green

First things first - I need the command contract:

public interface ICommand
{ 
    bool Execute(); 
} 

Then I can implement the concrete CreateUserAccountCommand type:

public class CreateUserAccountCommand : ICommand 
{ 
    public string Name { get; set; }  
    public bool Execute() 
    { 
        IUserService clientInterface = new RemoteUserService( "http://beefarino:8089" );  
        Credentials credentials = new Credentials( "user-manager", "password" ); 
        Ticket authTicket = clientInterface.Authenticate( credentials );  
        UserData userProperties = new UserData();  
        userProperties.FirstName = Name; 
        userProperties.LastName = "Smyth"; 
        userProperties.Nickname = Name; 
        userProperties.DateOfBirth = System.DateTime.Now - TimeSpan.FromDays( 365.0d * 22.0d );
         
        string userId = clientInterface.CreateUser( authTicket, userProperties );  
        Ticket userTicket; 
        clientInterface.CreateUserTicket( authTicket, userId,  out userTicket );  
        return null != userTicket;
    } 
} 

I'm not going to discuss this code except to explain that:

  • the logic in the Execute() method performs the minimum amount of activity necessary to create a user account;
  • I'm making assumptions about a lot of the data I need (e.g., the age of the user).  I'm trying to keep the command as simple and unconfigurable as possible, and there are many, many more UserData fields available for account configuration that I'm not using;
  • the command's Execute() method does nothing outside of its intended scope: it creates a user account, that's it.

Use it from PowerShell

Now that I have the command working, I want to see it working in PowerShell.  I'm taking a minimalist approach starting out.  Once I implement a few more commands and plug them into PowerShell, I'll see what implementation patterns emerge and replace this approach with something cleaner.  But for now, this mess will do:

[System.Reflection.Assembly]::LoadFrom( 'automation.commands.dll' ); 
function new-useraccount() 
{ 
    param( [string] $name ); 
     
    $cmd = new-object automationcommands.createuseraccountcommand; 
    $cmd.Name = $name;
    $cmd.Execute(); 
}  
new-useraccount -name 'scott'; 

Hmmm ... runs silent, no output ... but looking at the system backend, I can see that it works.  

Use it from FitNesse

I downloaded the latest stable version of FitNesse from http://www.fitnesse.org/ and followed Cory Foy's short tutorial on using it against .NET assemblies (which is still accurate after 3+ years, #bonus) to get things running.  I created a new page and entered the following wikitext and table:

!contents -R2 -g -p -f -h 
!define COMMAND_PATTERN {%m %p} 
!define TEST_RUNNER {dotnet\FitServer.exe} 
!define PATH_SEPARATOR {;} 
!path dotnet\*.dll 
!path C:\dev\spikes\Automation\PokerRoom.Fixtures\bin\Debug\pokerroom.fixtures.dll  
A simple test of the CreateUser command: 
|!-PokerRoom.Fixtures.CreateUserAccount-!| 
|name|created?| 
|phil|true| 
|bob|true| 
|alice|true| 

I hacked up a quick fixture to support the table...

namespace PokerRoom.Fixtures 
{ 
    public class CreateUserAccount : fit.ColumnFixture 
    { 
        public string name { get; set; }  
        public bool created() 
        { 
            ICommand cmd = new CreateUserAccountCommand { Name = name };
    
            return cmd.Execute();
        } 
    } 
} 

... build it, and the FitNesse tests are green ...

After verifying that the users are actually created in the live system using our proprietary tools, I'm satisfied.

Moving Forward

So far so good.  It's very dirty, but it's working.  w00t * 2!

While developing this today I noted a few areas of concern:

  1. In the command object, there are several dependencies that obviously should be injected.  Namely, the IUserService instance and the authority credentials;
  2. These dependencies are only really needed in the Execute() method;
  3. Looking ahead, I know I'm going to have many of these services, and it will be a pain to inject them all for each command instantiation;
  4. Compositing commands into complex behavior will eventually lead to the need to share state between commands.  I have an idea of how to manage this, but I'm concerned it will be cumbersome;
  5. There needs to be some kind of feedback when using the command from PowerShell; not sure where this should live or what it should look like at the moment...
  6. PowerShell will have a lot more to offer if I integrate with it more deeply.  I'll have to think about what this will look like, so as to minimize the amount of custom scripting necessary to run commands while accessing the full PowerShell feature set;
  7. I need to learn a lot more about FitNesse :).  I've already given the elevator speech to a coworker and demonstrated the fixture - he had a lot more questions than I had answers...
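As a down payment on item 1, here's a rough sketch of what the command might look like with its dependencies injected.  This is an assumption about where the prefactoring will land, not the final implementation, and the IUserService, Credentials, and Ticket types below are minimal stand-ins for the real product types:

```csharp
// sketch only: IUserService, Credentials, and Ticket are minimal
// stand-ins for the product types, not the real contracts
public interface IUserService
{
    Ticket Authenticate( Credentials credentials );
}

public class Credentials
{
    public Credentials( string name, string password ) { }
}

public class Ticket { }

public interface ICommand
{
    bool Execute();
}

public class CreateUserAccountCommand : ICommand
{
    private readonly IUserService service;
    private readonly Credentials credentials;

    // the service and credentials are pushed in, not newed up in Execute()
    public CreateUserAccountCommand( IUserService service, Credentials credentials )
    {
        this.service = service;
        this.credentials = credentials;
    }

    public string Name { get; set; }

    public bool Execute()
    {
        Ticket authTicket = service.Authenticate( credentials );
        // ... user-creation calls elided; same logic as the original Execute() ...
        return null != authTicket;
    }
}
```

A nice side effect: since the command no longer knows how to build its own service, a fake IUserService can stand in during tests, so Execute() can be exercised without the live system.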

My next few posts will detail how I address these and other concerns.  Next post will detail some prefactoring to take care of items 1-4, maybe demonstrate command compositing.



Automation Framework pt 1: Napkin Design

§ January 20, 2009 02:03 by beefarino |

Automating the core components of our product won't be too difficult.  My biggest obstacle at this point is time: with another round of "org chart refactorings" at the office, I've had tech writing added to my list of responsibilities so my time is scarce.  I want to get a usable and extensible framework to the team as quickly as possible.

The team has done a decent job of piecing apart functional system components into a set of core services and clients.  Almost no logic exists on the clients, and they communicate with the services through a set of relatively firm interfaces, although the transports vary wildly:

At this point, my only area of automation interest is the core components, as they contain the core logic of the product and are most impacted by our recent stability and performance issues.  I want the framework to support the following usage scenarios:

  • scripted system QA testing;
  • acceptance testing of specific features and performance metrics;
  • providing support for realtime load-testing of a production system;

So it needs to be fairly agnostic with regard to input: scripting via PowerShell to take care of the heavy lifting of defining complicated tests, acceptance testing driven by a framework like FitNesse, and load-testing via a GUI. 

It'd be a real pain to try and hook up all of those core services to each of those input vectors.  Plus there may be other vectors I haven't considered (ooOOoo - like a DSL created with MGrammar).  One approach I've found well-suited to this situation is the Command design pattern.

In a nutshell, the Command pattern aims to encapsulate a parameterized request as an object; e.g., a direct service method invocation:

...
service.CreateUser( userName, userType );
...


could be captured as a command object:

...
ICommand cmd = new CreateUserCommand( service, userName, userType );
cmd.Execute();
...


Command objects often support an Execute() semantic, but not always; sometimes Command objects are passed through an Executor object that will perform the action. 
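A minimal sketch of the executor variant, using the same ICommand contract from my end-to-end example (the error policy here is just an illustration, not a recommendation):

```csharp
using System;

// the command contract from the end-to-end example
public interface ICommand
{
    bool Execute();
}

// sketch of an executor: commands stay simple data-plus-action objects,
// while cross-cutting policy (logging, transactions, error handling)
// lives in one place
public class CommandExecutor
{
    public bool Execute( ICommand command )
    {
        try
        {
            return command.Execute();
        }
        catch( Exception )
        {
            // translate an unexpected exception into a failure result
            return false;
        }
    }
}
```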

If you're not experienced with this pattern, you may be wondering why you'd want to go through these hoops when you could just call the service directly.  Well, using command objects has a few significant benefits that are not readily apparent:

  1. Command objects provide encapsulation between (in my case) the service and the client; if the service contract changes, only the commands need to change.  If I hard-wired 500 FIT fixtures to the service and it changes in the next build, I'd be crying.
  2. Command objects offer a quick way to persist a set of parameterized operations.  In other words, you can de/serialize command objects, save them to a database or a message queue, etc.  This also makes them highly accessible to multiple input forms, like XML, scripts, and FIT fixtures.
  3. Once you have a few simple commands implemented, you can very quickly piece them together to create more complex behavior.  Again, using some form of object serialization makes this easy and, more importantly, dynamic - something a hard-wired approach can't offer.
  4. It makes supporting transactions and undo semantics a lot easier.  E.g., a Command could support Execute(), Commit(), and Rollback() methods.
  5. The Command pattern works well with the Composite and Chain of Responsibility patterns, again simplifying the creation of complex commands from simple atomic ones.
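To make point 5 concrete, here's a sketch of a Composite command built on the same ICommand contract; this is an assumption about how I might write it, not a finished implementation:

```csharp
using System.Collections.Generic;

// the command contract from the end-to-end example
public interface ICommand
{
    bool Execute();
}

// runs its child commands in order, failing fast on the first
// child that reports failure
public class CompositeCommand : ICommand
{
    private readonly List<ICommand> commands = new List<ICommand>();

    public void Add( ICommand command )
    {
        commands.Add( command );
    }

    public bool Execute()
    {
        foreach( ICommand command in commands )
        {
            if( !command.Execute() )
            {
                return false;
            }
        }
        return true;
    }
}
```

Since a CompositeCommand is itself an ICommand, composites can nest, and serializing one captures an entire scripted behavior in a single artifact.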

In short, the Command pattern brings ninja skills to a knife fight.  Revisiting the flow chart above:

Each input vector needs to focus only on creating a set of Command objects representing the actions to be taken, then passing them through a Command Executor that will execute the action against the core system services using the existing service interfaces.

Example forthcoming...



Filling Shoes and Killing Trees

§ January 19, 2009 03:09 by beefarino |

Crap.

The recession has claimed another job, and it looks like I'll be taking over documentation efforts at the office.  Unfortunately, this takes a huge bite out of my time, as we produce roughly 2000 pages of material per release.  So expect some posts about writing good technical documentation along with my latest spike notes.

*Sigh.*

Not the job I would choose by any means, but it has to get done.  Of course, it makes me wonder:

...if our last professional technical writer is dispensable, and I'm the one taking over his responsibilities, where does that put me on the ax-list??

 



Automation Framework pt 0: Vision

§ January 14, 2009 16:16 by beefarino |

After spending the last month reacting to some remarkable system failures at a very visible client, I've convinced the CTO to give me some elbow room to come up with the strawman of an automation framework for the core components of our system.  I described my initial goal to be able to drive the brains of our product without having to have the entire body attached, so we can start automating load- and performance-testing.  I didn't share my secondary goals - to be able to define automated regression tests and user acceptance scenarios that can be run against the system, which I think will do wonders for our feature planning and entomology.

At the moment, doing any kind of testing is a hassle.  Nothing can be automated to behave deterministically, everything is either manual or random behavior (which can be good for burn-in, but doesn't do much for testing scenarios), and doing things manually is too slow to cover much ground past "yep, it starts, ship it!"

The system has the complexity of an enterprise architecture, along with:

  • no standard messaging, communication layer, or service bus - instead we have raw sockets, Remoting, some of it stateless, some of it stateful, some of it persistent, some of it not;
  • numerous pieces of proprietary hardware that are expensive in both dollars and space;
  • deep assumptions about the physical environment, such as every client having a NIC card, to the point that most components won't work outside of the normal production environment;
  • system configuration that is splattered across devices, files, databases, and AD;
  • a codebase that is closed for extension.

So you see, our ability to mock client behavior and bench-bleed the system is pretty crippled.  I don't have time to address all of these things, but I want to knock as many of them out as I can.

I'll post my napkin design in a bit...