Automation Framework pt 1: Napkin Design

§ January 20, 2009 02:03 by beefarino |

Automating the core components of our product won't be too difficult.  My biggest obstacle at this point is time: with another round of "org chart refactorings" at the office, I've had tech writing added to my list of responsibilities, so my time is scarce.  I want to get a usable and extensible framework to the team as quickly as possible.

The team has done a decent job of piecing apart functional system components into a set of core services and clients.  Almost no logic exists on the clients, and they communicate with the services through a set of relatively firm interfaces, although the transports vary wildly.

At this point, my only area of automation interest is the core components, as they contain the core logic of the product and are most impacted by our recent stability and performance issues.  I want the framework to support the following usage scenarios:

  • scripted system QA testing;
  • acceptance testing of specific features and performance metrics;
  • real-time load-testing of a production system.

So it needs to be fairly agnostic with regard to input: scripting could be done via PowerShell to take care of a lot of the heavy lifting of defining complicated tests, acceptance testing could be driven by a framework like FitNesse, and load-testing could go through a GUI.

It'd be a real pain to try to hook up all of those core services to each of those input vectors.  Plus there may be other vectors I haven't considered (ooOOoo - like a DSL created with MGrammar).  One approach I've found well-suited to this situation is the Command design pattern.

In a nutshell, the Command pattern aims to encapsulate a parameterized request as an object; e.g., a direct service method invocation:

...
service.CreateUser( userName, userType );
...


could be captured as a command object:

...
ICommand cmd = new CreateUserCommand( service, userName, userType );
cmd.Execute();
...


Command objects often support an Execute() semantic, but not always; sometimes command objects are passed to an Executor object that performs the action.
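
To make that concrete, here's a minimal sketch of the shape I have in mind.  ICommand, IUserService, CreateUserCommand, and FakeUserService are my own strawman names for illustration - none of this is from the BCL or our product:

```csharp
using System.Collections.Generic;

// Strawman command abstraction - illustrative names only.
public interface ICommand
{
    void Execute();
}

public interface IUserService
{
    void CreateUser( string userName, string userType );
}

// Wraps one parameterized service call as an object.
public class CreateUserCommand : ICommand
{
    private readonly IUserService service;
    private readonly string userName;
    private readonly string userType;

    public CreateUserCommand( IUserService service, string userName, string userType )
    {
        this.service = service;
        this.userName = userName;
        this.userType = userType;
    }

    public void Execute()
    {
        // the command carries its parameters with it until someone
        // decides it's time to hit the service
        service.CreateUser( userName, userType );
    }
}

// Trivial in-memory service, handy for exercising the sketch
// without any real infrastructure attached.
public class FakeUserService : IUserService
{
    public readonly List<string> Created = new List<string>();

    public void CreateUser( string userName, string userType )
    {
        Created.Add( userName + ":" + userType );
    }
}
```

The point is that the call site no longer needs a live service in hand at the moment the request is described - describing the request and executing it become two separate steps.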

If you're not experienced with this pattern, you may be wondering why you'd want to jump through these hoops when you could just call the service directly.  Well, using command objects has a few significant benefits that are not readily apparent:

  1. Command objects provide encapsulation between (in my case) the service and the client; if the service contract changes, only the commands need to change.  If I hard-wired 500 FIT fixtures to the service and it changed in the next build, I'd be crying.
  2. Command objects offer a quick way to persist a set of parameterized operations.  In other words, you can de/serialize command objects, save them to a database or a message queue, etc.  This also makes them highly accessible to multiple input forms, like XML, scripts, and FIT fixtures.
  3. Once you have a few simple commands implemented, you can very quickly piece them together to create more complex behavior.  Again, using some form of object serialization makes this easy and, more importantly, dynamic - something a hard-wired approach could not do.
  4. It makes supporting transactions and undo semantics a lot easier.  E.g., a Command could support Execute(), Commit(), and Rollback() methods.
  5. The Command pattern works well with the Composite and Chain of Responsibility patterns, again simplifying the creation of complex commands from simple atomic ones.
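
As a rough illustration of points 3 and 5, a composite command can expose the same surface as its children (again, hypothetical names - AppendCommand is just a stand-in for some atomic operation):

```csharp
using System.Collections.Generic;

public interface ICommand
{
    void Execute();
}

// Composite: executes its children in order while exposing the same
// ICommand surface, so complex operations compose from atomic ones.
public class CompositeCommand : ICommand
{
    private readonly List<ICommand> commands = new List<ICommand>();

    public void Add( ICommand command )
    {
        commands.Add( command );
    }

    public void Execute()
    {
        foreach( ICommand command in commands )
        {
            command.Execute();
        }
    }
}

// Minimal atomic command used purely for illustration.
public class AppendCommand : ICommand
{
    private readonly List<string> log;
    private readonly string message;

    public AppendCommand( List<string> log, string message )
    {
        this.log = log;
        this.message = message;
    }

    public void Execute()
    {
        log.Add( message );
    }
}
```

Anything that knows how to execute one command can now execute an arbitrarily deep tree of them without knowing the difference.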

In short, the Command pattern brings ninja skills to a knife fight.  Revisiting the flow chart above:

Each input vector needs to focus only on creating a set of Command objects representing the actions to be taken, then passing them through a Command Executor that will execute the action against the core system services using the existing service interfaces.
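
The executor itself can stay dead simple.  Something along these lines is what I'm picturing - the names are mine, and collecting failures instead of aborting is my own guess at what long load-test runs will need:

```csharp
using System;
using System.Collections.Generic;

public interface ICommand
{
    void Execute();
}

// Adapter so scripts and tests can wrap any Action as a command.
public class DelegateCommand : ICommand
{
    private readonly Action action;

    public DelegateCommand( Action action )
    {
        this.action = action;
    }

    public void Execute()
    {
        action();
    }
}

// Strawman executor - the single place where commands touch the live
// services.  A failing command is recorded and the run continues.
public class CommandExecutor
{
    public readonly IList<Exception> Failures = new List<Exception>();

    public void Run( IEnumerable<ICommand> commands )
    {
        foreach( ICommand command in commands )
        {
            try
            {
                command.Execute();
            }
            catch( Exception e )
            {
                Failures.Add( e );
            }
        }
    }
}
```

Each input vector - PowerShell, FitNesse, the GUI - only ever sees ICommand and the executor; none of them need to know a thing about the service transports.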

Example forthcoming...



Filling Shoes and Killing Trees

§ January 19, 2009 03:09 by beefarino |

Crap.

The recession has claimed another job, and it looks like I'll be taking over documentation efforts at the office.  Unfortunately, this takes a huge bite out of my time, as we produce roughly 2000 pages of material per release.  So expect some posts about writing good technical documentation along with my latest spike notes.

*Sigh.*

Not the job I would choose by any means, but it has to get done.  Of course, it makes me wonder:

...if our last professional technical writer is dispensable, and I'm the one taking over his responsibilities, where does that put me on the ax-list??

 



Automation Framework pt 0: Vision

§ January 14, 2009 16:16 by beefarino |

After spending the last month reacting to some remarkable system failures at a very visible client, I've convinced the CTO to give me some elbow room to come up with the strawman of an automation framework for the core components of our system.  I described my initial goal to be able to drive the brains of our product without having to have the entire body attached, so we can start automating load- and performance-testing.  I didn't share my secondary goals - to be able to define automated regression tests and user acceptance scenarios that can be run against the system, which I think will do wonders for our feature planning and entomology.

At the moment, doing any kind of testing is a hassle.  Nothing can be automated to behave deterministically - everything is either manual or randomized (which can be good for burn-in, but doesn't do much for testing scenarios) - and doing things manually is too slow to cover much ground past "yep, it starts, ship it!"

The system has the complexity of an enterprise architecture, along with:

  • no standard messaging, communication layer, or service bus - instead we have raw sockets, Remoting, some of it stateless, some of it stateful, some of it persistent, some of it not;
  • numerous pieces of proprietary hardware that are expensive in both dollars and space;
  • deep assumptions about the physical environment, such as every client having a NIC, to the point that most components won't work outside of the normal production environment;
  • system configuration that is splattered across devices, files, databases, and AD;
  • a codebase that is closed for extension.

So you see, our ability to mock client behavior and bench-bleed the system is pretty crippled.  I don't have time to address all of these things, but I want to knock as many of them out as I can.

I'll post my napkin design in a bit...



There is No Homunculus in your Software

§ January 12, 2009 06:52 by beefarino |

I want to refactor some code to a strategy pattern to isolate some complex authentication procedures, rather than have hard-coded behavior that prevents unit, load, and performance testing.  I just had a very frustrating discussion about it ...  

Developers talk about their code as if it were people all the time.  I do it, you do too.  Most of the time it's harmless, as in this little quip I said this morning:

"So when the service wants to authenticate the client, it will ask whatever authentication strategy is available."

Everyone in the conversation obviously knows that the application isn't capable of wanting or asking, and that these are metaphors to hand-wave over the technicalities of calling on an object.  No harm done; but consider the reply:

"But how would the service know that the client was really authenticated unless it does the work itself?"

Both examples show a kind of anthropomorphism (like when your car won't start because it's upset with you for not changing its oil, or your laptop doesn't like the couch because it won't connect to your wireless network there).  "Wants" and "asks" describe dependencies and actions of the software - in short, the software's behavior.  "Know" attributes a conscious state to the software - that it can somehow tell the difference between doing some work and calling someone else to do the same work.  That's nothing short of a direct demotion of a functional requirement (the service MUST authenticate all clients) into a software implementation (this class MUST contain all authentication code).

There is no homunculus in the software.  There is no little dude sitting in a virtual Cartesian Theater watching the bits fly by and getting upset when things don't happen the way he thinks they should.

Software doesn't know anything.  Software does whatever it is told to do.  If the service contains the authentication code, or if it delegates to a strategy that performs the same action, the behavior is the same.  Given that, if the former is prohibitive to testing, I say do the latter and test my heart out!!
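
For what it's worth, the refactoring I'm pushing for is nothing exotic.  Hypothetical names throughout - this is the shape of the change, not code from our product:

```csharp
using System.Collections.Generic;

// The service delegates authentication to a strategy instead of
// hard-coding it, so tests can swap in a stub.
public interface IAuthenticationStrategy
{
    bool Authenticate( string clientId );
}

public class Service
{
    private readonly IAuthenticationStrategy authentication;

    public Service( IAuthenticationStrategy authentication )
    {
        this.authentication = authentication;
    }

    public bool Connect( string clientId )
    {
        // the requirement still holds - every client is authenticated;
        // the service just delegates the how
        return authentication.Authenticate( clientId );
    }
}

// Stub strategy: makes the service testable without the real
// authentication infrastructure attached.
public class AllowListStrategy : IAuthenticationStrategy
{
    private readonly HashSet<string> allowed;

    public AllowListStrategy( params string[] clients )
    {
        allowed = new HashSet<string>( clients );
    }

    public bool Authenticate( string clientId )
    {
        return allowed.Contains( clientId );
    }
}
```

Same behavior, same requirement satisfied - but now the hard-coded procedure is one strategy among many, and unit, load, and performance tests can finally get in the door.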