Adopting User Stories pt 2: the Kickoff that Didn't

§ May 9, 2008 12:24 by beefarino

As recounted here, I had the team circle the wagons and pump out a bunch of user stories for an integration project.  It was the first time we'd tried user stories as a means of expressing requirements, but things felt solid and complete, and the product backlog was full of stories organized by system component and user role, perfect for developing in vertical slices.

The project was considered top-priority and had the undivided attention of the company for the time being.  In the rush, I skipped estimating the backlog items and scheduled the sprint kickoff meetings.

During the product backlog meeting, we iterated through each user story, reading it aloud and discussing it in enough detail that everyone understood the feature and the value it brought to the user, while leaving out implementation discussions.  At least, that was my expectation.  The team members who were new to the project got flustered by the lack of specificity in the stories.  Some seemed to panic, as if we had missed a big part of the picture; others got frustrated by how intimidated everyone seemed.

The sprint kickoff was a bazillion times worse.  During the product backlog meeting, the product owner hadn't expressed a desire to see any specific set of user stories first, so individuals took the sprint kickoff in their own directions and were adamant about working on features that were obsolete or had been purposefully left out of the product backlog.  There was no common ground to be found.  At the end of that sprint kickoff meeting the team had no goal, no milestones, no tasks ... we couldn't even settle on a time for our daily scrum.

I've spent a lot of time decrypting what exactly happened.   There were things under my influence, and others outside of my control.  Here's where I think I failed, and what I've taken away from it....

 

Failure: The product backlog meeting was the first time the entire team had come together to discuss the project.  In particular, it was the first time any of the developers were exposed to the feature list.

Lesson: Expose representatives from each team to the relevant product backlog / user stories before the kickoff.

Lesson: Have the team estimate the complexity of each story, to elicit discussion if nothing else.  Ensure that representatives from development and testing are present.

This seems really obvious, but after a week of daily 3- to 4-hour story workshops I felt like the team was familiar with the stories.  Usually a lot more effort goes into the product backlog than we put in for this project.  The product backlog should be treated like a pet that needs daily attention and affection to stay healthy; instead, we chained it outside with an open bag of food.  Consider that the team would normally help the stakeholders evaluate priorities by providing rough estimates of the stories in the product backlog.  If one story has a high business value but would take 6 weeks of work, while 3 stories with a greater total value would take one week of work, it may make more sense to tackle the 3 stories in one week first.  Getting the team to converge on those estimates requires discussing the features.  We skipped the estimation because of time pressures - but in so doing (or, I guess, in so not doing), we deprived ourselves of that vital team conversation.

 

Failure: The user stories varied in specificity from hazy to anal; some overlapped each other; some were really just acceptance test criteria for another story.

Lesson: Focus each story on a user role, a feature, and a business value.  Don't repeat the combination in another story.

Lesson: Express detail and acceptance test criteria as story notes, not as separate stories.

Lesson: Spend the time to refactor the stories and organize the product backlog.

A lot of the frustration in the product backlog meeting stemmed from the fact that some features had a single story, while others had multiple stories, all of them nearly identical save one bit of detail or acceptance criteria.  The perception was that little to no thought had been given to those sparse stories.  I think most, but not all, of this frustration could have been avoided if I had worked with the product owner and customer representative to consolidate and organize the stories before bringing them to the team.

 

Failure: The sprint kickoff meeting started with a goal vacuum.

Lesson: Force the product owner to prioritize milestones for the sprint before the product backlog meeting.

After all, that's the point of the product backlog meeting, right?  To get the team to commit to a specific set of goals delivered at a fixed time.  Not having that goal means the story buffet is open, and each team member will want to do what they think is the most important thing on that backlog.

 

And last in my list, but certainly not the last mistake I made...

Failure: While the testers, customer representative, and product owner understood the nature of this new user story beastie, the development team did not.

Lesson: If you expect the team to participate in a new thing, make sure they understand its nature.

It's completely reasonable for someone to get frustrated with a process if their expectations of the process haven't been managed.  If I had simply reiterated to the development team that these stories were really placeholders for conversations about the feature that we'd have during the sprint kickoff, it probably would have stemmed a lot of the frustration at the product backlog meeting.  

 

All in all, it isn't just vinegar; a lot of sugar came out of us trying user stories, which I'll explain in pt 3: Two Steps Forward.



Adopting User Stories pt 1: Cutting Teeth

§ April 29, 2008 15:15 by beefarino

A few months ago I agreed to fill the role of acting Scrum master so I could try out some new techniques with the team.  One of the things I'd been itching to try was using user stories as the basis for a Product backlog.  I got the idea from a few of Mike Cohn's articles on his site, as well as his book, which I highly recommend.  It seemed like a fantastic idea, for several reasons:

  1. User stories are feature-centric and not implementation-centric.  That is, they describe a feature from the perspective of a particular user role.  I firmly believe the Product backlog should be kept feature-centric as well, for reasons I may expound on some other time.
  2. User stories are expressed in "business" language.  This makes them very easy for sponsors to understand, discuss, and even create.  It also helps keep the conversation focused on the value the story brings to the project.
  3. User stories are simple and relatively terse.  Each one fits on a single index card.  That makes them very amenable to centralized backlog management tools, like Tackle.
  4. User stories easily adapt as the understanding of the business need changes.  This is exactly what the Product backlog should do.
  5. User stories are estimate-able and independent, just as a Product backlog entry should be.

Now, I should point out that I had never actually written or used user stories before; I'd just read about them and heard anecdotes from my peers.  They've always seemed like a better approach than what I've experienced, which is the massive requirements collection that reads like home theater assembly instructions and is dated the moment it's written down, so that even with 80 pages of spec I still need to walk over to the product owner to get questions answered.

The target project was an integration project with some new features on an established product.  There had been some ... adjustments ... in staffing and people were having to fill new roles.  There was a lot of confusion about how to drive the project and where to start.  In short, a perfect time to try something new...

Cutting Teeth in a StoryStorm

I had the customer representative, the product owner, and some members of the team commit to a four-hour storystorming session.  The basic idea is to get a bunch of people in a room for a timeboxed session where everyone cranks out as many stories as they can; the goal is breadth, not depth.

One person on the team had some exposure to user stories at another company, so I asked her to spend 15 minutes or so describing the concept to everyone.  I emphasized Mike Cohn's wisdom about phrasing each story from a specific user role, expressing a specific business feature and its value to the user - roughly, "As a <role>, I want <feature> so that <value>."  We discussed how the pile of note cards would become the product backlog for the project.  We walked through developing some example stories that were relevant to the project.  Everyone understood the process and thought it would produce a usable backlog.  So I passed out index cards and pens, set my egg timer for 5 minutes, and asked everyone to write as many stories as they could think of.

And they froze.  The ticking of the egg timer was deafening amidst the occasional tap of a pen on the table, the crackle of an index card being sacrificed to the Ultimate Outbox.  No one was writing a single thing.

After a few minutes I realized my mistake - it was completely unreasonable for me to expect the team to be comfortable being cut loose on an open-ended task they had never tried before.  They were afraid, and rightfully so.  They were filling new roles on a new project that was very different from the norm, in a company that had recently had layoffs.  I quickly thought back through the basics:

Software requirements are a communication problem.  User stories are one way of addressing that problem; each story is a placeholder for a conversation between the developers, testers, and stakeholders. 

So I killed the timer and suggested that we start talking.  We had already made a list of every user role in the project during the overview; to give the conversation some direction, I picked a single user role and we started discussing the first thing that user would have to do to use the product.  It took some time to feel out how to hone the discussion into user stories, but following Mike's phrasing template was invaluable in accomplishing this.  After a while the index cards were getting pumped out faster than I could keep track of them, and the process started to feel natural and comfortable.

We finished that first session and did a mini-retrospective on the process.  The general consensus was positive, although some team members opted to withhold their joy until they saw the project features implemented.  We scheduled several more sessions over the next week to round out the backlog.  Some sessions were spent standing in front of the product, actively modeling the user activities while a scribe jotted down the stories.  Overall, it seemed to be a very effective and productive way to produce the project requirements.

Little did I know that I had made several mistakes that would nearly derail the project...

... which I'll discuss in pt 2: the Kickoff that Didn't.



Confessions of a Design Pattern Junkie

§ April 25, 2008 16:28 by beefarino

Me: "Hello, my name is Jim, and I'm a pattern junkie."

Group: "Hello, Jim."

Yes, I humbly admit it.  I read this book, that book, and this other one too, and now I'm pattern-tastically pattern-smitten with a pattern-obsession.  I'm that guy on the team - the one who starts the design with strategies and factories and commands and decorators before a lick of code is compiled.  The one who creates decorator and composite base classes for every interface because "I'll prolly need 'em."  The one who, at the end of the project, has produced Faulknerian code for lolcat functionality.

But I confess: I am not the least bit ashamed.  I acknowledge my approach has been overbearing and self-indulgent. I know I need to change to be a better engineer.  Spending time as Scrum Master has shown me what pattern hysteria looks like from the outside.  It's WTFBBQ smothered in bloat sauce.

But the experience of being a pattern junkie has been irreplaceable, for a number of reasons.  Patterns are valuable to know, for reasons I'll expound on in a bit.  Taking the time to (over-)apply them to real projects has been the best way for me to learn how they work and interact.  My biggest problem is that I want to apply them as much as possible at the design stage of a project. I've come to terms with the fact that it's a bad idea, which has given me the chance to learn something and improve myself.

So, in the words of the good witch: "What have you learned, Dorothy?"

First, let's talk about how misusing patterns has inhibited me.

Bad: Using a pattern leads me to using another. 

Using a strategy pattern precipitates the use of decorators and adapters on the strategy.  Using commands leads to the use of composites, iterators, and chain of responsibility.  The complexity of managing the patterns and dependency injection leads to the use of facades, factories, builders, and singletons.  Things become extraordinarily convoluted very quickly.  When I design against patterns a priori, when they don't serve an existing need, the code I have to write explodes, and once it's written, maintaining it becomes a real chore.
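To make that cascade concrete, here's a toy sketch - the pricing example and every name in it are invented for illustration, not lifted from a real project - of how one pattern invites the next:

#include <iostream>
#include <memory>

// The strategy I started with...
struct PricingStrategy {
    virtual ~PricingStrategy() = default;
    virtual double Price(double base) const = 0;
};

// ...one concrete strategy...
struct FlatDiscount : PricingStrategy {
    double Price(double base) const override { return base * 0.9; }
};

// ...which soon grows a decorator to add logging around it...
struct LoggingPricing : PricingStrategy {
    explicit LoggingPricing(std::unique_ptr<PricingStrategy> inner) : inner_(std::move(inner)) {}
    double Price(double base) const override {
        double p = inner_->Price(base);
        std::cout << "priced " << base << " as " << p << "\n";
        return p;
    }
    std::unique_ptr<PricingStrategy> inner_;
};

// ...which in turn demands a factory just to hide the wiring.
std::unique_ptr<PricingStrategy> MakePricing() {
    return std::make_unique<LoggingPricing>(std::make_unique<FlatDiscount>());
}

int main() {
    auto pricing = MakePricing();
    pricing->Price(100.0);   // three patterns deep, just to take 10% off a number
    return 0;
}

Each piece is defensible on its own; the sum is a lot of ceremony for a 10% discount.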

Bad: Thinking in patterns makes me lose sight of the problem.

Using patterns makes me itch to break down problems into very atomic units, which is generally good, but I take it to the point of comedy.  Consider this example, which is an actual approach I used because I thought it was a good idea at the time.  I was working on an object that operates on an XML document.  To supply the XML document to my object, I chose to define the IXMLDocumentProvider interface as an abstraction for loading the XML.  Why?  Because I was thinking about patterns and not the problem I was trying to solve.  My logic was roughly this: if I use another strategy to manage the load behavior, the XML source could be a file at runtime and an in-memory document in my unit tests, and I could use a decorator on the strategy to validate an XMLDSIG on the document in production if I needed to.  In the end, all the program needed was the XML, which could have easily been supplied in a constructor or parameter.  There is but one implementation of IXMLDocumentProvider in the project, and all it does is hand out an XML document supplied to its constructor.  I filled a non-existent need because I was focusing on the pattern and not the problem.
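The shape of it was roughly this sketch - with the document type simplified to a plain std::string and the consumer class invented for illustration, so don't read it as the actual code:

#include <string>
#include <utility>

// The abstraction I reached for: a strategy whose only job is to hand back XML.
// (Document type simplified to std::string; the method name is illustrative.)
struct IXMLDocumentProvider {
    virtual ~IXMLDocumentProvider() = default;
    virtual std::string GetDocument() const = 0;
};

// The only implementation the project ever had: it hands back whatever
// document it was constructed with.
class StaticXMLDocumentProvider : public IXMLDocumentProvider {
public:
    explicit StaticXMLDocumentProvider(std::string xml) : xml_(std::move(xml)) {}
    std::string GetDocument() const override { return xml_; }
private:
    std::string xml_;
};

// The consumer, forced to go through the provider...
class XMLOperation {
public:
    explicit XMLOperation(const IXMLDocumentProvider& provider)
        : xml_(provider.GetDocument()) {}
    // ...when all it ever needed was the document itself:
    // explicit XMLOperation(std::string xml) : xml_(std::move(xml)) {}
private:
    std::string xml_;
};

int main() {
    StaticXMLDocumentProvider provider("<doc/>");
    XMLOperation op(provider);   // a whole interface just to pass a string along
    return 0;
}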

It isn't all bad; let's look at how using patterns has helped me.

Good: Using patterns yields testable code.

Using patterns extensively has helped me write highly testable code.  Patterns and dependency injection go together like peanut butter and chocolate.  With patterns peppered throughout the design, my code is highly decoupled.  Unit testing is a breeze in such a scenario, and unit tests are good.
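A minimal sketch of what I mean, with invented names and plain asserts standing in for a real unit testing framework: because the strategy is injected, a test can pin down the one thing the class depends on.

#include <cassert>
#include <string>

// A strategy the class under test depends on.
struct Clock {
    virtual ~Clock() = default;
    virtual int Hour() const = 0;   // 0-23
};

// The class under test only knows about the abstraction, injected at construction.
class Greeter {
public:
    explicit Greeter(const Clock& clock) : clock_(clock) {}
    std::string Greet() const {
        return clock_.Hour() < 12 ? "good morning" : "good afternoon";
    }
private:
    const Clock& clock_;
};

// In the unit test, a trivial fake stands in for the real clock.
struct FixedClock : Clock {
    explicit FixedClock(int hour) : hour_(hour) {}
    int Hour() const override { return hour_; }
    int hour_;
};

int main() {
    assert(Greeter(FixedClock(9)).Greet() == "good morning");
    assert(Greeter(FixedClock(15)).Greet() == "good afternoon");
    return 0;
}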

Good: Using patterns makes complex code understandable.

Patterns isolate concerns.  This makes large codebases more digestible, and it tends to break complex relationships into lots of smaller objects.  I know many people would disagree with me here, but I find it easier to work with 50 small class definitions that a) follow well-understood patterns and b) adhere to the single responsibility principle than with 5 classes that have been incrementally expanded to 20,000+ lines of code containing a succotash of concerns.  A coherent class diagram will tell me more about a system than a list of 200+ method names.

Good: Using patterns makes complex systems extensible.

Again, patterns isolate concerns, which makes extending a system very simple once you are familiar with the system design.  For example, adding a decorator is easier, in my opinion, than altering the behavior of an existing class.  Folding new features into a well-designed strategy, command, or visitor pattern is cake.  Patterns help you grow a system by extending it, not altering it, which is a good idea.
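For example - toy names again, not real code - the tested class below is never edited; the new requirement lands entirely in a decorator wrapped around it:

#include <iostream>
#include <memory>
#include <string>

struct Notifier {
    virtual ~Notifier() = default;
    virtual void Send(const std::string& msg) = 0;
};

// The existing, working, tested class - left completely alone.
struct ConsoleNotifier : Notifier {
    void Send(const std::string& msg) override { std::cout << msg << "\n"; }
};

// The new requirement - tag every message with a severity - lands in a decorator.
struct TaggingNotifier : Notifier {
    TaggingNotifier(std::string tag, std::unique_ptr<Notifier> inner)
        : tag_(std::move(tag)), inner_(std::move(inner)) {}
    void Send(const std::string& msg) override { inner_->Send("[" + tag_ + "] " + msg); }
    std::string tag_;
    std::unique_ptr<Notifier> inner_;
};

int main() {
    TaggingNotifier notifier("WARN", std::make_unique<ConsoleNotifier>());
    notifier.Send("disk nearly full");   // prints: [WARN] disk nearly full
    return 0;
}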

My two-step program to better pattern application

I've learned from my mistakes.  I've come to the conclusion that patterns are a tool best applied to existing, working, testable code.  My personal commitment is to stop using patterns at the design phase, but to continue employing them when they make sense.  How will I do this?

My two steps are simple - when I work on a software feature, I promise to do the following:

  1. Design and code the feature to a working state as quickly and simply as possible.  At this phase I promise not to employ patterns a priori, although I may employ Dependency Injection to make testing easier.
  2. Refactor my code to separate concerns, remove duplication, and improve readability.  At this phase, I will employ patterns NOT wherever possible, but only as necessary to meet my goal.  That means I'll pull them in when I need to separate concerns, when I need to untangle spaghetti code, when I need to make the code understandable.  

I'll let you know how the rehab goes.  Until then, there's no place like code ... there's no place like code .... there's no place like code .....



String Theories

§ April 22, 2008 15:55 by beefarino

Yesterday morning, my colleagues and I were having a discussion about all the different string representations and abstractions we've had to work with over the course of our lives as programmers.  Here's the list I came up with, in rough chronological order of my exposure to them:

  1. TRS-80 BASIC - I have no idea how its strings were represented in memory because I've never gone back to that platform, but I assume it was just another character array.  If someone knows the specifics I'd LOVE to learn about it.
  2. Borland Turbo PASCAL character arrays
  3. C/C++ char* / char[]
  4. Win32 LPSTR, LPCSTR, and all the other constructs from the Win32 API that I had to learn about because Owen and Michael would insist on leaving STRICT defined (thanks guys!).
  5. MFC CString
  6. Javascript string objects 
  7. Perl $scalars - a string, a number, a reference, or all three, or perhaps none of those.  I once dug deep into Perl internals; I could tell you more than you care to know about scalars, memory management, type conversion inside of the Perl interpreter; of course, show me some of the perl I hacked up a few years ago and I won't be able to tell you what it does...
  8. C/C++ wchar_t* / wchar_t[]
  9. Win32 TCHAR* / TCHAR[] - yes, technically the same as either char* or wchar_t*, thanks for not commenting about it.
  10. BSTR -  WTFBBQ?!  OIC - it's a pointer to the MIDDLE OF THE FRACKING STRING STRUCTURE, so I have to do pointer calculus to determine the length of the string and suck out the relevant bytes (see the sketch after this list)....  well, thank goodness there's:
  11. _bstr_t - ok, a bit friendlier, but I'm still glad that there's: 
  12. CComBSTR - ah, a Length() method!  
  13. VARIANT - *sigh* .... the lengths to which I went to pacify OLE Automation lust.  
  14. _variant_t - ignored in favor of:
  15. CComVariant - use only when necessary, follow each use with a thorough handwashing.
  16. PHP strings - never learned the internals of PHP.  I assume it operates on the same type of abstraction as the Perl scalar - anyone know for sure?
  17. Java string objects - took some getting used to.  Why can I + two strings, but not two Matrices?  How does it make sense that a base Object returns an instance of the derived type String from Object.toString()?  See what happens when an active mind is no longer consumed with memory and pointer management?
  18. .NET string objects
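Since #10 deserves a little more abuse, here's a minimal Win32 sketch (not from any real project; link against oleaut32) of the layout I'm grumbling about: a BSTR points at the first character, and the length lives in the four bytes just before it.

#include <windows.h>
#include <oleauto.h>
#include <stdio.h>

int main() {
    BSTR s = SysAllocString(L"hello");

    // The sanctioned way to get the length, in characters, no terminator:
    UINT len = SysStringLen(s);                        // 5

    // Roughly what that call hides: read the byte count stored immediately
    // before the character data, then divide by the character size.
    UINT manual = ((UINT*)s)[-1] / sizeof(OLECHAR);    // also 5

    printf("%u %u\n", len, manual);
    SysFreeString(s);
    return 0;
}

Which is more or less all CComBSTR's Length() is doing for you, too.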

That's what I came up with in about 5 minutes of gazing longingly over my geek life.  I'm sure there are others - my list doesn't include all of the one-off custom implementations I've made, or the third-party tools we used to use for cross-platform application development, or stuff like XML tokens, entities, etc.

I created the list for fun, but it's got some pretty interesting aspects to it.  For one, the same basic string construct that I learned on that TRS-80 never really changed.  Sure, strings are immutable objects now, but really their representation and purpose have persisted since my dad brought home that fat grey box with the tape drive and keyboard that sounded like a hole punch. 

Second, it's made me realize that I take a lot of stuff for granted these days.  Here's some code from somewhere between #8 and #9 on my list:

TCHAR *psz = new TCHAR[ iStringSize ];
if( NULL == psz )
{
    return E_OUTOFMEMORY;
}

// ... use the buffer ...

delete[] psz;
psz = NULL;

Even writing this as an example makes me very nervous.  A few years ago I wouldn't have batted an eye, but these days it feels like a lot of work to pull all of the allocation, pointer management, and deallocation together.  And this example really doesn't account for all the things that could go wrong...  

So I have to say that, all strings considered (*groan*), I'm pretty content with the state of the art.