Adopting User Stories pt 1: Cutting Teeth

§ April 29, 2008 15:15 by beefarino |

A few months ago I agreed to fill the role of acting Scrum master so I could try out some new techniques with the team.  One of the things I've been itching to try is using user stories as the basis for a Product backlog.  I got the idea from a few of Mike Cohn's articles on his site, as well as his book, which I highly recommend.  It seemed like a fantastic idea, for several reasons:

  1. User stories are feature-centric and not implementation-centric.  That is, they describe a feature from the perspective of a particular user role.  I firmly believe the Product backlog should be kept feature-centric as well, for reasons I may expound on some other time.
  2. User stories are expressed in "business" language.  This makes them very easy for sponsors to understand, discuss, and even create.  It also helps keep the conversation focused on the value the story brings to the project.
  3. User stories are simple and relatively terse.  Each one fits on a single index card.  That makes them very amenable to centralized backlog management tools, like Tackle.
  4. User stories easily adapt as the understanding of the business need changes.  This is exactly what the Product backlog should do.
  5. User stories are estimate-able and independent, just as a Product backlog entry should be.

Now, I should point out that I had never actually written or used user stories before; I'd just read about them and heard anecdotes from my peers.  They've always seemed like a better approach than what I've experienced, which is the massive requirements collection that reads like home theater assembly instructions and is out of date the moment it's written down, so even with 80 pages of spec I still need to walk over to the product owner to get questions answered.

The target project was an integration project with some new features on an established product.  There had been some ... adjustments ... in staffing and people were having to fill new roles.  There was a lot of confusion about how to drive the project and where to start.  In short, a perfect time to try something new...

Cutting Teeth in a StoryStorm

I had the customer representative, the product owner, and some members of the team commit to a four-hour storystorming session.  The basic idea is to get a bunch of people in a room for a timeboxed session where everyone cranks out as many stories as they can; the goal is breadth, not depth.

One person on the team had some exposure to user stories at another company, so I asked her to spend 15 minutes or so describing the concept to everyone.  I emphasized Mike Cohn's wisdom about phrasing each story from a specific user role, expressing a specific business feature and its value to the user (something like "As a <role>, I want <feature> so that <benefit>").  We discussed how the pile of note cards would become the product backlog for the project.  We walked through developing some example stories that were relevant to the project.  Everyone understood the process and thought it would produce a usable backlog.  So I passed out index cards and pens, set my egg timer for 5 minutes, and asked everyone to write as many stories as they could think of.

And they froze.  The ticking of the egg timer was deafening amidst the occasional tap of a pen on the table, the crackle of an index card being sacrificed to the Ultimate Outbox.  No one was writing a single thing.

After a few minutes I realized my mistake - it was completely unreasonable for me to expect the team to be comfortable being cut loose on an open-ended task they had never tried before.  They were afraid, and rightfully so.  They were filling new roles on a new project that was very different from the norm, at a company that had recently had layoffs.  I quickly thought back through the basics:

Software requirements are a communication problem.  User stories are one way of addressing that problem; each story is a placeholder for a conversation between the developers, testers, and stakeholders. 

So I killed the timer and suggested that we start talking.  We had already made a list of every user role in the project during the overview; to provide direction to the conversation I picked a single user role and we started discussions with the first thing that user would have to do to use the product.  It took some time to feel out how to hone the discussion into user stories, but following Mike's phrasing template was invaluable in getting us there.  After a while the index cards were getting pumped out faster than I could keep track of them, and the process started to feel natural and comfortable.

We finished that first session and did a mini-retrospective on the process.  The general consensus was positive, although some team members opted to withhold their joy until they could see the project features implemented.  We scheduled several more sessions over the next week to round out the backlog.  Some sessions were spent standing in front of the product, actively modeling the user activities while a scribe jotted down the stories.  Overall, it seemed to be a very effective and productive way to produce the project requirements.

Little did I know that I had made several mistakes that would nearly derail the project...

... which I'll discuss in pt 2: the Kickoff that Didn't.



Confessions of a Design Pattern Junkie

§ April 25, 2008 16:28 by beefarino |

Me: "Hello, my name is Jim, and I'm a pattern junkie."

Group: "Hello, Jim."

Yes, I humbly admit it.  I read this book, that book, and this other one too, and now I'm pattern-tastically pattern-smitten with a pattern-obsession.  I'm that guy on the team - the one who starts the design with strategies and factories and commands and decorators before a lick of code is compiled.  The one who creates decorator and composite base classes for every interface because "I'll prolly need 'em."  The one who, at the end of the project, has produced Faulknerian code for lolcat functionality.  

But I confess: I am not the least bit ashamed.  I acknowledge my approach has been overbearing and self-indulgent. I know I need to change to be a better engineer.  Spending time as Scrum Master has shown me what pattern hysteria looks like from the outside.  It's WTFBBQ smothered in bloat sauce.

But the experience of being a pattern junkie has been invaluable.  Patterns are worth knowing, for reasons I'll expound on in a bit.  Taking the time to (over-)apply them to real projects has been the best way for me to learn how they work and interact.  My biggest problem is that I want to apply them as much as possible at the design stage of a project. Coming to terms with the fact that that's a bad idea has given me the chance to learn something and improve myself.

So, in the words of the good witch: "What have you learned, Dorothy?"

First, let's talk about how misusing patterns has inhibited me.

Bad: Using a pattern leads me to using another. 

Using a strategy pattern precipitates the use of decorators and adapters on the strategy.  Using commands leads to the use of composites, iterators, and chain of responsibility.  The complexity of managing the patterns and dependency injection leads to the use of facades, factories, builders, and singletons.  Things become extraordinarily convoluted very quickly.  When I design against patterns a priori, when they don't serve an existing need, the code I have to write explodes, and once it's written, maintaining it becomes a real chore.
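
To make that cascade concrete, here's a minimal C++ sketch; the names are illustrative, not from any real project.  One lonely strategy interface attracts a decorator, and then a factory to hide the wiring:

class IPricingStrategy
{
public:
    virtual ~IPricingStrategy() { }
    virtual double Price( double basePrice ) const = 0;
};

// the strategy attracts a decorator...
class LoggingPricingStrategy : public IPricingStrategy
{
public:
    explicit LoggingPricingStrategy( IPricingStrategy* inner ) : m_inner( inner ) { }
    virtual double Price( double basePrice ) const
    {
        double result = m_inner->Price( basePrice );
        // ... log the call somewhere ...
        return result;
    }
private:
    IPricingStrategy* m_inner;
};

// ... and then a factory to hide the wiring of the two
IPricingStrategy* CreatePricingStrategy();

Multiply that by every interface in a design and it's easy to see where the explosion comes from.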

Bad: Thinking in patterns makes me lose sight of the problem.

Using patterns makes me itch to break down problems into very atomic units, which is generally good, but I take it to the point of comedy.  Consider this example, which is an actual approach I used because I thought it was a good idea at the time.  I was working on an object that operates on an XML document. To supply the XML document to my object, I chose to define the IXMLDocumentProvider interface as an abstraction for loading the XML.  Why?  Because I was thinking about patterns and not the problem I was trying to solve.  My logic was roughly this: if I use another strategy to manage the load behavior, the XML source could be a file at runtime and an in-memory document in my unit tests, and I could use a decorator on the strategy to validate an XMLDSIG on the document in production if I needed to.  In the end, all the program needed was the XML, which could have easily been supplied in a constructor or parameter.  There is exactly one implementation of IXMLDocumentProvider in the project, and all it does is hand out the XML document supplied to its constructor.  I filled a non-existent need because I was focusing on the pattern and not the problem.
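
The whole abstraction boiled down to something like this C++ sketch (the XmlDocument type and the class names are stand-ins, not the actual project code):

class XmlDocument;   // stand-in for the real DOM type

class IXMLDocumentProvider
{
public:
    virtual ~IXMLDocumentProvider() { }
    virtual XmlDocument* GetDocument() = 0;
};

// the one and only implementation: it hands back the document it was given
class StaticXmlDocumentProvider : public IXMLDocumentProvider
{
public:
    explicit StaticXmlDocumentProvider( XmlDocument* document ) : m_document( document ) { }
    virtual XmlDocument* GetDocument() { return m_document; }
private:
    XmlDocument* m_document;
};

A plain XmlDocument* constructor parameter would have done the same job with no interface at all.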

It isn't all bad; let's look at how using patterns has helped me.

Good: Using patterns yields testable code.

Using patterns extensively has helped me write highly testable code.  Patterns and dependency injection go together like peanut butter and chocolate.  With patterns peppered throughout the design, my code is highly decoupled.  Unit testing is a breeze in such a scenario, and unit tests are good.
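
Here's a sketch of why the two go together so well; every name in it is made up for illustration.  Because the collaborator arrives through the constructor as an abstraction, a unit test can hand the object a fake instead of a real database:

struct Order { /* ... */ };

class IOrderRepository
{
public:
    virtual ~IOrderRepository() { }
    virtual bool Save( const Order& order ) = 0;
};

class OrderProcessor
{
public:
    explicit OrderProcessor( IOrderRepository* repository ) : m_repository( repository ) { }
    bool Process( const Order& order )
    {
        // ... apply business rules ...
        return m_repository->Save( order );
    }
private:
    IOrderRepository* m_repository;
};

// in a unit test, inject a fake that records the call instead of touching a database
class FakeOrderRepository : public IOrderRepository
{
public:
    FakeOrderRepository() : m_saveCalled( false ) { }
    virtual bool Save( const Order& ) { m_saveCalled = true; return true; }
    bool m_saveCalled;
};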

Good: Using patterns makes complex code understandable.

Patterns isolate concerns.  This makes large codebases more digestible, and it tends to break complex relationships into lots of smaller objects.  I know many people would disagree with me here, but I find it easier to work with 50 small class definitions that a) follow well-understood patterns and b) adhere to the single responsibility principle than 5 classes that have been incrementally expanded to 20,000+ lines of code containing a succotash of concerns.  A coherent class diagram will tell me more about a system than a list of 200+ method names.

Good: Using patterns makes complex systems extensible.

Again, patterns isolate concerns, which makes extending a system very simple once you are familiar with the system design.  For example, adding a decorator is easier, in my opinion, than altering the behavior of an existing class.  Folding new features into a well-designed strategy, command, or visitor pattern is cake.  Patterns help you grow a system by extending it, not altering it, which is a good idea.
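
A small hypothetical example of what I mean: given a command interface the system already knows how to execute, a new feature arrives as one new class, and the existing classes never get touched:

class ReportContext;   // whatever state the commands operate on

class IReportCommand
{
public:
    virtual ~IReportCommand() { }
    virtual void Execute( ReportContext& context ) = 0;
};

// existing behavior lives in classes like these, which stay closed to modification...
class RenderHeaderCommand : public IReportCommand { /* ... */ };
class RenderBodyCommand : public IReportCommand { /* ... */ };

// ... so a new feature is just one more implementation added to the list the system executes
class RenderWatermarkCommand : public IReportCommand
{
public:
    virtual void Execute( ReportContext& /* context */ )
    {
        // new watermark behavior goes here
    }
};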

My two-step program to better pattern application

I've learned from my mistakes.  I've come to the conclusion that patterns are a tool best applied to existing working and testable code.  My personal commitment is to stop using patterns at the design phase, but continue employing patterns when they make sense.  How will I do this?

My two steps are simple - when I work on a software feature, I promise to do the following:

  1. Design and code the feature to a working state as quickly and simply as possible.  At this phase I promise not to employ patterns a priori, although I may employ Dependency Injection to make testing easier.
  2. Refactor my code to separate concerns, remove duplication, and improve readability.  At this phase, I will employ patterns NOT wherever possible, but only as necessary to meet my goal.  That means I'll pull them in when I need to separate concerns, when I need to untangle spaghetti code, when I need to make the code understandable.  

I'll let you know how the rehab goes.  Until then, there's no place like code ... there's no place like code .... there's no place like code .....



String Theories

§ April 22, 2008 15:55 by beefarino |

Yesterday morning, my colleagues and I were having a discussion about all the different string representations and abstractions we've had to work with over the course of our lives as programmers.  Here's the list I came up with, in rough chronological order of my exposure to them:

  1. TRS-80 BASIC - I have no idea how its strings were represented in memory because I've never gone back to that platform, but I assume it was just another character array.  If someone knows the specifics I'd LOVE to learn about it.
  2. Borland Turbo PASCAL character arrays
  3. C/C++ char* / char[]
  4. Win32 LPSTR, LPCSTR, and all the other constructs from the Win32 API that I had to learn about because Owen and Michael would insist on leaving STRICT defined (thanks guys!).
  5. MFC CString
  6. Javascript string objects 
  7. Perl $scalars - a string, a number, a reference, or all three, or perhaps none of those.  I once dug deep into Perl internals; I could tell you more than you care to know about scalars, memory management, type conversion inside of the Perl interpreter; of course, show me some of the perl I hacked up a few years ago and I won't be able to tell you what it does...
  8. C/C++ wchar* / wchar[]
  9. Win32 TCHAR* / TCHAR[] - yes, technically the same as either char* or wchar*, thanks for not commenting about it.
  10. BSTR -  WTFBBQ?!  OIC - it's a pointer to the MIDDLE OF THE FRACKING STRING STRUCTURE so I have to do pointer calculus to determine the length of the string and suck out the relevant bytes (there's a sketch of this after the list)....  well, thank goodness there's:
  11. _bstr_t - ok, a bit friendlier, but I'm still glad that there's: 
  12. CComBSTR - ah, a Length() method!  
  13. VARIANT - *sigh* .... the lengths to which I went to pacify OLE Automation lust.  
  14. _variant_t - ignored in favor of:
  15. CComVariant - use only when necessary, follow each use with a thorough handwashing.
  16. PHP strings - never learned the internals of PHP.  I assume it operates on the same type of abstraction as the Perl scalar - anyone know for sure?
  17. Java string objects - took some getting used to.  Why can I + two strings, but not two Matrices?  How does it make sense that a base Object returns an instance of a derived type, String, from Object.toString()?  See what happens when an active mind is no longer consumed with memory and pointer management?
  18. .NET string objects
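
Since I ragged on BSTR above, here's a small sketch of what I mean (Win32, error handling elided): the BSTR you hold points at the character data, and the byte count lives in the four bytes immediately in front of it, which is what SysStringLen reads for you:

#include <windows.h>
#include <oleauto.h>

void BstrDemo()
{
    BSTR bstr = ::SysAllocString( L"hello" );

    UINT cch = ::SysStringLen( bstr );      // 5  - character count, read from the prefix
    UINT cb  = *( (UINT*)bstr - 1 );        // 10 - the raw byte-count prefix sitting just before the pointer

    // ... hand bstr to whatever Automation interface wanted it ...

    ::SysFreeString( bstr );
}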

That's what I came up with in about 5 minutes of gazing longingly over my geek life.  I'm sure there are others - my list doesn't include all of the one-off custom implementations I've made, or the third-party tools we used to use for cross-platform application development, or stuff like XML tokens, entities, etc.

I created the list for fun, but it's got some pretty interesting aspects to it.  For one, the same basic string construct that I learned on that TRS-80 has never really changed.  Sure, strings are immutable objects now, but really their representation and purpose have persisted since my dad brought home that fat grey box with the tape drive and a keyboard that sounded like a hole punch. 

Second, it's made me realize that I take a lot of stuff for granted these days.  Here's some code from somewhere between #8 and #9 on my list:

TCHAR *psz = new TCHAR[ iStringSize ];
if( NULL == psz )
{
    return E_OUTOFMEMORY;
}
// ...
delete[] psz;
psz = NULL;

Even writing this as an example makes me very nervous.  A few years ago I wouldn't have batted an eye, but these days it feels like a lot of work to pull all of the allocation, pointer management, and deallocation together.  And this example really doesn't account for all the things that could go wrong...  
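
For comparison, if I had to write that same buffer in C++ today I'd probably just reach for a standard container and let it do the worrying.  A rough sketch:

#include <windows.h>
#include <vector>

void Example( size_t iStringSize )
{
    // the vector owns the memory: no NULL check, no delete[], and it cleans up even if an exception is thrown
    std::vector<TCHAR> buffer( iStringSize );

    // ... use &buffer[0] with APIs that want a writable TCHAR*, assuming iStringSize > 0 ...
}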

So I have to say that, all strings considered (*groan*), I'm pretty content with the state of the art.



Coping with the Fear of Changing Code

§ April 20, 2008 14:43 by beefarino |

Another fear I'm seeing on the team is a fear of changing existing code.  The fear may stem from several sources: the code implements complex behavior that is undocumented; the code is inherited from a resident expert who has been retasked or is otherwise unavailable; the code hasn't been maintained properly and reads like a plate of spaghetti.  Whatever the source, a team member's response to the fear follows the pattern:

When team members are afraid, they will act in either their own interest of self-preservation, or in the interest of team survival.

When confronted with such code, a team member can choose one of two paths: do as little with the code as possible, leaving it alone as much as they can while still satisfying everyone's expectations; or own up to the situation and remove the source of the fear, making the code easier for anyone to cope with and understand.

Those who pursue self-preservation take the former track.  They exert a lot of effort to develop an understanding of the code, but don't do anything to persist or share that knowledge.  Chances are they are trying to act in the team's interest by developing specialized knowledge of the code, but in the long-term they benefit only themselves.  The code remains an untenable briar patch for the next poor sod who receives it.

Those who pursue the best interests of the team take the latter approach.  They write unit tests around the existing code; they refactor the code and leverage common design patterns to make it comprehensible; they seek clarification from the business heads and write documentation targeted to other developers.  Their actions are aimed, whether implicitly or explicitly, at communicating with the other members of their team.
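
To make "write unit tests around the existing code" concrete: the usual first step is a characterization test, one that simply pins down what the scary code does today so it can be refactored safely.  A minimal sketch, with an entirely hypothetical legacy function and no particular test framework:

#include <cassert>
#include <string>

// the legacy routine nobody wants to touch; it already exists somewhere in the codebase
std::string FormatAccountCode( int region, int account );

void CharacterizeFormatAccountCode()
{
    // expected values were captured by running the current code, not by reading a spec;
    // the test documents today's behavior so refactoring can proceed without fear
    assert( FormatAccountCode( 1, 42 ) == "01-00042" );
    assert( FormatAccountCode( 12, 999 ) == "12-00999" );
}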