The Thing about Best Practices

§ February 12, 2009 03:34 by beefarino

There's been a big stinky vine growing on the interwebs lately, rooted in some comments made by Joel and Jeff during a podcast and aggravated by a recent post on Coding Horror.  If you've been under a rock for the past week, a good summary can be found here.  Or go google stackoverflow podcast rant and bathe in it. 

I'm pretty late to the game on this meme, but since my experience and attitude seem to differ a bit from what I'm reading elsewhere, I thought I'd share it.

I think I'm an above-average software engineer.  It's something I take seriously, and I spend a great deal of my time learning, experimenting, reading, and seeking out new knowledge in this area.  It's one of the things I would pursue even if I wasn't getting paid to do it.  That said, I routinely choose not to apply good engineering principles on software projects where doing so yields no benefit.  E.g., I don't do as much TDD or refactoring on spikes when I know the code is going to be thrown away.  

I also think I'm a mediocre woodworker, at best.  I enjoy doing it, but I don't feel the same compulsion to learn as much as I can, to hone those skills, or to sacrifice the time and resources necessary to take them to the next level.  I'm content doing it as poorly as I do.  However, two years ago, when I was working on the two-story tree house that would be holding my girls eight feet off the ground, you can bet your ass I learned all I could about proper tree house design and construction and applied every bit of that knowledge to my project.

What I'm trying to say is this:

  • you don't always have to apply the best practices, but you can't make that choice without knowing what they are;
  • you don't have to be very experienced to realize what you don't know and when you need to learn it.


Convert-FromHex PowerShell Filter

§ February 11, 2009 09:20 by beefarino

While moving some data around, I found myself in need of a powershell filter to translate a hex string into its byte array equivalent.  I've written this routine many times, but never quite this succinctly:

function convert-fromhex
{
    process
    {
        # strip an optional 0x prefix, split into two-digit pairs, convert each pair to a byte
        $_ -replace '^0x', '' -split "(?<=\G\w{2})(?=\w{2})" | %{ [Convert]::ToByte( $_, 16 ) }
    }
}

My favorite part is the regex used to split the hex string - it matches nothing concrete, only lookarounds.
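
For contrast, here is the longhand version of the same routine in C# - just an illustration of why the one-liner appeals to me, not code lifted from any particular project:

public static byte[] ConvertFromHex( string hex )
{
    // strip an optional 0x prefix
    if( hex.StartsWith( "0x", StringComparison.OrdinalIgnoreCase ) )
    {
        hex = hex.Substring( 2 );
    }
    
    byte[] bytes = new byte[ hex.Length / 2 ];
    for( int i = 0; i < bytes.Length; ++i )
    {
        // parse each pair of hex digits as a single byte
        bytes[ i ] = Convert.ToByte( hex.Substring( i * 2, 2 ), 16 );
    }
    return bytes;
}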

Use it like any other pipeline filter when you have a hex string and want a byte array; e.g.:

PS >"0x1234" | convert-fromhex
18
52

Enjoy!



Automation Framework pt 4: Sharing State in Commands

§ February 10, 2009 11:51 by beefarino

We're already getting a lot of use out of the framework, but I'm constantly seeking out ways to make it easier to use and extend. 

There is one particular aspect of the framework code that is leaving a bad taste in my mouth.  After trying a few approaches, I've settled on one that I feel is the best option.  Not everyone agrees, though, and I'd appreciate hearing some alternative approaches.

It has to do with sharing state in a batch of commands.  Consider the following powershell script:

new-deposit -Name Stan -Amount 500;  

which, after some magical binding and command-building logic, breaks down into a complex sequence of simple commands (composites and chains are exploded as sub-items; a sketch of both behaviors follows the list):

  1. FindOrCreateUserAccount for identifier "Stan" (chain):
    1. LoadUserAccount for user named "Stan"
    2. CreateUserAccount for user named "Stan" (composite):
      1. AcquireAuthTicket for creating a user account
      2. CreateUserAccountWithTicket for user named "Stan"
  2.  MakeDeposit in the amount of $500 to Stan's account (composite):
    1. AcquireAuthTicket for making a deposit
    2. MakeDepositWithTicket to move $500 into Stan's account
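
For concreteness, here is roughly how I think about those two grouping behaviors, written against the ICommand and ICommandContext shapes described in the next section.  CompositeCommand shows up again later in this post; ChainCommand is only my sketch of the find-or-create semantics, not necessarily the framework's actual implementation:

public class CompositeCommand : ICommand
{
    private readonly ICommand[] commands;
    
    public CompositeCommand( params ICommand[] commands )
    {
        this.commands = commands;
    }
    
    // a composite runs every child in order and fails as soon as one child fails
    public bool Execute( ICommandContext context )
    {
        foreach( ICommand command in commands )
        {
            if( ! command.Execute( context ) )
            {
                return false;
            }
        }
        return true;
    }
}
public class ChainCommand : ICommand
{
    private readonly ICommand[] commands;
    
    public ChainCommand( params ICommand[] commands )
    {
        this.commands = commands;
    }
    
    // a chain tries each child in order and stops at the first one that succeeds
    public bool Execute( ICommandContext context )
    {
        foreach( ICommand command in commands )
        {
            if( command.Execute( context ) )
            {
                return true;
            }
        }
        return false;
    }
}

Because chains and composites are themselves ICommands, the whole tree rolls up into a single executable batch.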

The powershell function translates into a batch of eight command objects to perform the actual work.  The commands need to share some state to accomplish the overall goal - for instance, the FindOrCreateUserAccount command will need to produce a UserAccount object on which the MakeDeposit command can operate.  This is a bit of a conundrum - I want each command object to know only of its own duties, so the FindOrCreateUserAccount command isn't able to directly pass the UserAccount object to the MakeDeposit command. So how do I get the UserAccount object created by the FindOrCreateUserAccount command to the MakeDeposit command?

I've tried a few approaches.  

Using a Command Context

After completing my first end-to-end use of the framework, I jotted down some concerns, many of which orbit around the need to consolidate access to all of the system services I'm automating.  To address this, I changed the ICommand.Execute() method signature to accept a single parameter of type ICommandContext:

public interface ICommandContext
{    
    IUserService UserService { get; }
    IGameService GameService { get; }
    // ...    
}
public interface ICommand 
{
    bool Execute( ICommandContext context );
} 

So now anyone executing a command must supply a command context.  I did this for a few reasons:

  • it allows each command easy access to the various services that comprise the production system without a lot of plumbing code;
  • it gives me a single point of extension for all command types.  E.g., if the system expands to include another service, I can modify ICommandContext without breaking any of the other commands or configuration;
  • it provides an abstraction against which the command objects run.  For example, I can execute commands against a "test" context to verify their behavior, or a "record" context to build up transcripts of system activity (see the sketch just after this list);
  • it isolates configuration to a single object, so instead of having to manage a large configuration across dozens of command objects, I only need to focus on configuring one object.  Bonus.
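
As a sketch of the "test" context idea - assuming ICommandContext exposes only the two services shown above - a context that hands commands whatever doubles the test supplies might look like this:

public class TestCommandContext : ICommandContext
{
    private readonly IUserService userService;
    private readonly IGameService gameService;
    
    // the test decides which service doubles the commands run against
    public TestCommandContext( IUserService userService, IGameService gameService )
    {
        this.userService = userService;
        this.gameService = gameService;
    }
    
    public IUserService UserService { get { return userService; } }
    public IGameService GameService { get { return gameService; } }
}

A "record" context could follow the same pattern, handing out decorators that log every service call to build up the transcript.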

Anyway, a colleague suggested I just add a table of named objects to the command context, something like this:

public interface ICommandContext
{
    IUserService UserService { get; }
    IGameService GameService { get; }
    Dictionary< string, object > Data { get; }
    
    // ...    
} 

The idea is that one command could load an object and store it under a specific name:

public class LoadUserAccountCommand : ICommand
{
    public string UserName { get; set; }
    
    public bool Execute( ICommandContext context )
    {
        UserAccount account = new UserAccount();
       
        // this call populates the user account object 
        context.UserService.GetUserAccount( UserName, account );
       
        context.Data[ "UserAccount" ] = account;
       
        return true;
    }
}

and another could consume it using the same name:

public class MakeDepositWithTicketCommand : ICommand
{
    public decimal Amount { get; set; }    
    public bool Execute( ICommandContext context )
    {
        UserAccount account = (UserAccount) context.Data[ "UserAccount" ];
        
        // make the deposit into the account ..
        context.UserService.Deposit( account.Id, Amount );
       
        return true;
    }
}
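
Wiring up a batch with this approach looks innocent enough; CommandContext here stands in for some hypothetical concrete implementation of ICommandContext:

// ...
ICommandContext context = new CommandContext();
CompositeCommand cmd = new CompositeCommand(
    new LoadUserAccountCommand { UserName = userName },
    new MakeDepositWithTicketCommand { Amount = depositAmount }
);
bool result = cmd.Execute( context );
// ...

Note that nothing at this level hints that the two commands communicate through the "UserAccount" key.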

I tried this for a little while; it has some charm in its simplicity, but I'll be blunt: I hate it.  I think it's fine and simple for a hack job, but it will become unmaintainable very quickly:

  • the hashtable hides the inputs of the command - e.g., there is no way to look at a command object and determine what it needs in the way of input to do its job without deciphering code;
  • the number of entries required during a command session could grow quite large, and even if we follow best practices and keep an enum of magic Data keys, it becomes difficult to manage;
  • along those lines, as the number of entries grows, the names start to lose their simplicity.  "UserAccount" is no longer sufficient, so you have the "NewlyCreatedUserAccount" item, the "CachedUserAccount" item, etc.  Or worse, the team gets lazy and we have "UserAccount", "UserAccount2", etc;
  • this actually creates a high (and, ironically, hidden) level of coupling between commands - e.g., the MakeDepositWithTicketCommand can only work against the "UserAccount" data item, which limits the commands it can operate with to those that know to fill that item.  

I'm convinced that a general purpose variable stack or hashtable will make the framework too cumbersome to use.  I came up with an alternative that feels better, but still has some ugly parts.

Using Shared Property References

The easiest way to explain this is by example.  In this rewrite of the sample from the previous section, note how both command objects expose an Account property:

public class LoadUserAccountCommand : ICommand
{
    public UserAccount Account { get; set; }
    public string UserName { get; set; }
    public bool Execute( ICommandContext context )
    {
        // this call populates the user account object 
        context.UserService.GetUserAccount( UserName, Account );
       
        return true;
    }
}
public class MakeDepositWithTicketCommand : ICommand
{
    public UserAccount Account { get; set; }
    public decimal Amount { get; set; }
    
    public bool Execute( ICommandContext context )
    {
        // make the deposit into the account ..
        context.UserService.Deposit( Account.Id, Amount );
       
        return true;
    }
}

If both Account properties are set to the same object reference, the commands implicitly share the Account state:

//...
UserAccount account = new UserAccount();
CompositeCommand cmd = new CompositeCommand(
    new LoadUserAccountCommand { UserName = userName, Account = account },
    new MakeDepositWithTicketCommand { Amount = depositAmount, Account = account }
);
bool result = cmd.Execute( context );
// ... 

The LoadUserAccountCommand fills the account data into the object, and the MakeDepositWithTicketCommand uses the object to deposit money.  I like this a lot better than the other solution:

  • the needs of each command are expressed in its public members;
  • each command operates in isolation and there is no hidden coupling - e.g., there is no assumption made by the MakeDepositWithTicketCommand that will prevent it from working with other command objects;
  • it's simple;
  • it feels right.

After using this for a while, I've found a few drawbacks.  First, it adds some extra setup to every command batch.  Second, this mechanism obviously doesn't work for value types; you have to wrap the value in a reference type, which can feel a bit awkward (the sketch below shows what I mean).  I've also been told that the state-sharing mechanism isn't obvious, but I don't agree. 
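
For the value-type case, the wrapping I have in mind is nothing more than a tiny holder class; Shared<T> is just an illustrative name, not a framework type:

// both commands hold a reference to the same Shared<T> instance,
// so a value written by one command is visible to the other
public class Shared<T>
{
    public T Value { get; set; }
}

Two commands that each expose a Shared<decimal> property and are handed the same instance can then pass an amount back and forth exactly like the Account example above.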

I'd appreciate some feedback on my choices here - is there another approach I haven't considered?



PowerShell Brush for Syntax Highlighter

§ February 10, 2009 08:51 by beefarino

While updating this blog to use dp.SyntaxHighlighter, I realized I needed a brush for my powershell examples.  Using some of the other brush scripts as examples, I came up with the script attached to this post.

I used the one-liner posted on Oisin Grehan's blog to slurp out all of the keywords recognized by powershell.  I also added the cmdlets and aliases available as of CTP3, gathered using get-command and get-alias.

The brush is triggered by any of the following marker "aliases" on a code block:

  • ps
  • ps1
  • powershell
  • msh

Enjoy!

shBrushPosh.zip (2.63 kb)