Automating the core components of our product won't be too difficult.  My biggest obstacle at this point is time: with another round of "org chart refactorings" at the office, I've had tech writing added to my list of responsibilities.  I want to get a usable and extensible framework to the team as quickly as possible.

The team has done a decent job of piecing apart functional system components into a set of core services and clients.  Almost no logic exists on the clients, and they communicate with the services through a set of relatively firm interfaces, although the transports vary wildly.

At this point, my only area of automation interest is the core components, as they contain the product's core logic and are the most affected by our recent stability and performance issues.  I want the framework to support the following usage scenarios:

  • scripted system QA testing;
  • acceptance testing of specific features and performance metrics;
  • real-time load-testing of a production system.

So it needs to be fairly agnostic with regard to input: scripting via PowerShell to take care of a lot of the heavy lifting of defining complicated tests, acceptance testing driven by a framework like FitNesse, and load-testing through a GUI.

It'd be a real pain to try to hook up every one of those core services to each of those input vectors.  Plus there may be other vectors I haven't considered (ooOOoo - like a DSL created with MGrammar).  An approach I've found very well suited to this situation is the Command design pattern.

In a nutshell, the Command pattern encapsulates a parameterized request as an object.  For example, a direct service method invocation:

...
service.CreateUser( userName, userType );
...


could be captured as a command object:

...
ICommand cmd = new CreateUserCommand( service, userName, userType );
cmd.Execute();
...
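A minimal sketch of what that command might look like; `ICommand`, `IUserService`, and `CreateUserCommand` are illustrative names here, not our actual contracts:

```csharp
// Hypothetical command abstraction -- names are illustrative, not our real contracts.
public interface ICommand
{
    void Execute();
}

// Stand-in for the real core service interface.
public interface IUserService
{
    void CreateUser(string userName, string userType);
}

// Encapsulates one parameterized CreateUser request as an object.
public class CreateUserCommand : ICommand
{
    private readonly IUserService _service;
    private readonly string _userName;
    private readonly string _userType;

    public CreateUserCommand(IUserService service, string userName, string userType)
    {
        _service = service;
        _userName = userName;
        _userType = userType;
    }

    public void Execute()
    {
        _service.CreateUser(_userName, _userType);
    }
}
```

The command captures everything needed to perform the request, so it can be created now and executed later, by someone else.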


Command objects often expose an Execute() method, but not always; sometimes Command objects are passed to a separate Executor object that performs the action.
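In that executor variant, the command carries only the request data and the executor owns the service reference.  A sketch, again with made-up names:

```csharp
// Executor variant: the command is pure data; the executor owns the service reference.

// Stand-in for the real core service interface.
public interface IUserService
{
    void CreateUser(string userName, string userType);
}

// Data-only command: no behavior, just the parameters of the request.
public class CreateUserRequest
{
    public string UserName { get; set; }
    public string UserType { get; set; }
}

public class CommandExecutor
{
    private readonly IUserService _service;

    public CommandExecutor(IUserService service) => _service = service;

    // The executor, not the command, knows how to apply the request to the service.
    public void Execute(CreateUserRequest request)
    {
        _service.CreateUser(request.UserName, request.UserType);
    }
}
```

Keeping the command as pure data makes it trivial to serialize, which pays off below.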

If you're not experienced with this pattern, you may be wondering why you'd want to jump through these hoops when you could just call the service directly.  Well, using command objects has a few significant benefits that are not readily apparent:

  1. Command objects provide encapsulation between (in my case) the service and the client; if the service contract changes, only the commands need to change.  If I had hard-wired 500 FIT fixtures to the service and it changed in the next build, I'd be crying.
  2. Command objects offer a quick way to persist a set of parameterized operations.  In other words, you can serialize and deserialize command objects, save them to a database or a message queue, and so on.  This also makes them highly accessible to multiple input forms, like XML, scripts, and FIT fixtures.
  3. Once you have a few simple commands implemented, you can very quickly piece them together to create more complex behavior.  Again, using some form of object serialization makes this easy and, more importantly, dynamic - something a hard-wired approach could not do.
  4. It makes supporting transactions and undo semantics a lot easier.  E.g., a Command could support Execute(), Commit(), and Rollback() methods.
  5. The Command pattern works well with the Composite and Chain of Responsibility patterns, again simplifying the creation of complex commands from simple atomic ones.
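Points 3-5 combine naturally: a composite command can run its children in order and, if one fails, roll back the ones that already ran.  A sketch, assuming a hypothetical IReversibleCommand contract:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical contract for commands that can undo their work.
public interface IReversibleCommand
{
    void Execute();
    void Rollback();
}

// Composite: runs children in order; on failure, rolls back the ones that ran.
public class CompositeCommand : IReversibleCommand
{
    private readonly List<IReversibleCommand> _children = new List<IReversibleCommand>();
    private readonly Stack<IReversibleCommand> _executed = new Stack<IReversibleCommand>();

    public void Add(IReversibleCommand command) => _children.Add(command);

    public void Execute()
    {
        foreach (var child in _children)
        {
            try
            {
                child.Execute();
                _executed.Push(child);   // remember what succeeded
            }
            catch
            {
                Rollback();              // undo everything that ran so far
                throw;
            }
        }
    }

    public void Rollback()
    {
        while (_executed.Count > 0)
            _executed.Pop().Rollback();  // undo in reverse order
    }
}
```

Since CompositeCommand itself implements the command contract, composites nest inside other composites for free.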

In short, the Command pattern brings ninja skills to a knife fight.  Revisiting the flow chart above:

Each input vector needs to focus only on creating a set of Command objects representing the actions to be taken, then passing them to a Command Executor that executes them against the core system services through the existing service interfaces.
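To make that concrete: an input vector's only job is to produce a command description - say, an XML document - and hand it to a runner that deserializes and executes it.  A sketch using XmlSerializer; the command shape and service interface are assumptions, not our real contracts:

```csharp
using System.IO;
using System.Xml.Serialization;

// Stand-in for the real core service interface.
public interface IUserService
{
    void CreateUser(string userName, string userType);
}

// Data-only command description that any input vector (script, FIT fixture, GUI)
// can produce; the service binding happens at execution time.
public class CreateUserCommandData
{
    public string UserName { get; set; }
    public string UserType { get; set; }
}

public class CommandRunner
{
    private readonly IUserService _service;

    public CommandRunner(IUserService service) => _service = service;

    // Deserialize a command from XML and execute it against the service.
    public void Run(string xml)
    {
        var serializer = new XmlSerializer(typeof(CreateUserCommandData));
        using (var reader = new StringReader(xml))
        {
            var data = (CreateUserCommandData)serializer.Deserialize(reader);
            _service.CreateUser(data.UserName, data.UserType);
        }
    }
}
```

Whether the XML came from a PowerShell script, a FitNesse table, or a load-test GUI, the runner neither knows nor cares.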

Example forthcoming...