On Smarts, Curiosity, and Learning

§ July 6, 2008 14:22 by beefarino |

Justin Etheredge recently posted Being Smart Does Not a Good Developer Make, in which he offers several important perspectives in his usual good-hearted way; the one that sparked my neurons was that drive and curiosity are more valuable than raw knowledge in becoming a "good developer."

I wholeheartedly share this perspective, probably because I've painted myself into a professional corner in many respects.  I have no formal education in my chosen profession of software engineering, which has historically put me at a disadvantage in job interviews, when I push to lead a project, and in a lot of interactions with my peers.  Nevertheless, I've been able to land the jobs, get the project leads when I want them, and feel confident - if reservedly so - talking with my fellow engineers without sounding like a total poser.

I attribute my success to three things:

  1. I've been pretty lucky with my opportunities;
  2. I'm insatiably curious about most things;
  3. I have two degrees in psychology. 

Justin calls software engineering a meta-industry; likewise, psychology is probably the second-best example of a meta-science I can think of (with first place going to cryptography).  Digesting all that research on learning, perception, and memory has given me a significant leg-up when it comes to learning new things, understanding other perspectives, and communicating.

I honestly believe that having and honing these skills is what keeps me employed as a software engineer.

If it were possible for me to add any value to Justin's post, it would be this: don't limit your exploration to your career.  Learning is an active, forceful process - go find something that interests you and relate it back to software design and engineering.  Psychology may not be your bag, but learning opportunities abound.  Here are two ideas I've taken from my own personal history:

Design and build a birdhouse (or, if you have some modest woodworking skills, a tree house or bookshelf).  If you really want a challenge, have someone else design it.  What sort of problems did you hit during design vs. construction?  How well did you estimate your materials?  What sort of mistakes ended up in the final product, and how would you avoid them next time?

Cook a menu devised by a fickle four-year-old.  This is a great exercise in communicating across domains of expertise: you could be a Master Chef, but you'll have no idea how to make Glittery Rainbow Unicorn Fishsticks or Chocolate Marshmallow Beans and Rice without some discussion.  Moreover, try explaining to them why their culinary delights don't taste as delightful as they expected.  Work with them to hone their ideas and produce something palatable.  Think about the communication problems you experienced and how you worked around them - they have a lot in common with communication problems you experience at work.



Confessions of a Design Pattern Junkie

§ April 25, 2008 16:28 by beefarino |

Me: "Hello, my name is Jim, and I'm a pattern junkie."

Group: "Hello, Jim."

Yes, I humbly admit it.  I read this book, that book, and this other one too, and now I'm pattern-tastically pattern-smitten with a pattern-obsession.  I'm that guy on the team - the one who starts the design with strategies and factories and commands and decorators before a lick of code is compiled.  The one who creates decorator and composite base classes for every interface because "I'll prolly need 'em."  The one who, at the end of the project, has produced Faulknerian code for lolcat functionality.

But I confess: I am not the least bit ashamed.  I acknowledge my approach has been overbearing and self-indulgent. I know I need to change to be a better engineer.  Spending time as Scrum Master has shown me what pattern hysteria looks like from the outside.  It's WTFBBQ smothered in bloat sauce.

But the experience of being a pattern junkie has been irreplaceable, for a number of reasons.  Patterns are valuable to know, for reasons I'll expound on in a bit, and taking the time to (over-)apply them to real projects has been the best way for me to learn how they work and interact.  My biggest problem is that I want to apply them as much as possible at the design stage of a project; coming to terms with the fact that this is a bad idea has given me the chance to learn something and improve myself.

So, in the words of the good witch: "What have you learned, Dorothy?"

First, let's talk about how misusing patterns has inhibited me.

Bad: Using one pattern leads me to using another.

Using a strategy pattern precipitates the use of decorators and adapters on the strategy.  Using commands leads to the use of composites, iterators, and chains of responsibility.  The complexity of managing the patterns and dependency injection leads to the use of facades, factories, builders, and singletons.  Things become extraordinarily convoluted very quickly.  When I design against patterns a priori - when they don't serve an existing need - the amount of code I have to write explodes, and once it's written, maintaining it becomes a real chore.
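To make the cascade concrete, here's a minimal sketch - all the names are invented for illustration - of one innocent strategy interface breeding a menagerie:

#include <iostream>
#include <string>

// One innocent strategy interface...
class ICompressionStrategy
{
public:
    virtual ~ICompressionStrategy() { }
    virtual std::string Compress( const std::string& data ) = 0;
};

// ...soon sprouts a decorator...
class LoggingCompressionDecorator : public ICompressionStrategy
{
public:
    explicit LoggingCompressionDecorator( ICompressionStrategy* inner ) : m_inner( inner ) { }
    std::string Compress( const std::string& data )
    {
        std::cout << "compressing " << data.size() << " bytes" << std::endl;
        return m_inner->Compress( data );
    }
private:
    ICompressionStrategy* m_inner;
};

// ...then an adapter to drag a legacy library into the hierarchy...
class LegacyZipAdapter : public ICompressionStrategy
{
public:
    std::string Compress( const std::string& data )
    {
        // pretend a call into the old zip library happens here
        return data;
    }
};

// ...and now the wiring is hairy enough to demand a factory.
// (The caller also owns two heap objects now - the fun never ends.)
ICompressionStrategy* CreateCompressor()
{
    return new LoggingCompressionDecorator( new LegacyZipAdapter() );
}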

Bad: Thinking in patterns makes me lose sight of the problem.

Using patterns makes me itch to break problems down into very atomic units, which is generally good, but I take it to the point of comedy.  Consider this example - an actual approach I used because I thought it was a good idea at the time.  I was working on an object that operates on an XML document.  To supply the document to my object, I chose to define an IXMLDocumentProvider interface as an abstraction for loading the XML.  Why?  Because I was thinking about patterns and not about the problem I was trying to solve.  My logic was roughly this: if I use another strategy to manage the load behavior, the XML source could be a file at runtime and an in-memory document in my unit tests, and I could use a decorator on the strategy to validate an XMLDSIG on the document in production if I ever needed to.  In the end, all the program needed was the XML, which could easily have been supplied in a constructor or parameter.  There is but one implementation of IXMLDocumentProvider in the project, and all it does is hand out an XML document supplied to its constructor.  I filled a non-existent need because I was focusing on the pattern and not the problem.
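Reconstructed from memory - the class names are approximate, and I'm using a plain string to stand in for the document - the whole affair amounted to this:

#include <string>

// What I built: an abstraction in search of a problem.
class IXMLDocumentProvider
{
public:
    virtual ~IXMLDocumentProvider() { }
    virtual std::string GetDocument() = 0;
};

// Its one and only implementation, which just parrots back
// whatever was handed to its constructor.
class StaticXMLDocumentProvider : public IXMLDocumentProvider
{
public:
    explicit StaticXMLDocumentProvider( const std::string& xml ) : m_xml( xml ) { }
    std::string GetDocument() { return m_xml; }
private:
    std::string m_xml;
};

// What the problem actually called for: just hand the object its XML.
class XMLProcessor
{
public:
    explicit XMLProcessor( const std::string& xml ) : m_xml( xml ) { }
    // ... operates on m_xml ...
private:
    std::string m_xml;
};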

It isn't all bad; let's look at how using patterns has helped me.

Good: Using patterns yields testable code.

Using patterns extensively has helped me write highly testable code.  Patterns and dependency injection go together like peanut butter and chocolate: with patterns peppered throughout the design, my code is highly decoupled.  Unit testing is a breeze in such a scenario, and unit tests are good.
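A quick illustration of the peanut-butter-and-chocolate effect (invented names, sketch only): inject the strategy through the constructor, and the test swaps in a canned fake without touching the class under test.

#include <cassert>
#include <string>

// The strategy the production code depends on...
class IQuoteSource
{
public:
    virtual ~IQuoteSource() { }
    virtual double GetPrice( const std::string& symbol ) = 0;
};

// ...is handed in through the constructor...
class Portfolio
{
public:
    explicit Portfolio( IQuoteSource* source ) : m_source( source ) { }
    double Value( const std::string& symbol, int shares )
    {
        return m_source->GetPrice( symbol ) * shares;
    }
private:
    IQuoteSource* m_source;
};

// ...so the unit test can replace the web-service-backed
// implementation with a stub.
class FakeQuoteSource : public IQuoteSource
{
public:
    double GetPrice( const std::string& ) { return 10.0; }
};

void TestPortfolioValue()
{
    FakeQuoteSource fake;
    Portfolio portfolio( &fake );
    assert( portfolio.Value( "MSFT", 3 ) == 30.0 );
}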

Good: Using patterns makes complex code understandable.

Patterns isolate concerns.  This makes large codebases more digestible, and it tends to break complex relationships into lots of smaller objects.  I know many people would disagree with me here, but I find it easier to work with 50 small class definitions that a) follow well-understood patterns and b) adhere to the single responsibility principle than with 5 classes that have been incrementally expanded to 20,000+ lines of code containing a succotash of concerns.  A coherent class diagram will tell me more about a system than a list of 200+ method names.

Good: Using patterns makes complex systems extensible.

Again, patterns isolate concerns, which makes extending a system very simple once you are familiar with the design.  For example, adding a decorator is easier, in my opinion, than altering the behavior of an existing class.  Folding new features into a well-designed strategy, command, or visitor pattern is cake.  Patterns help you grow a system by extending it rather than altering it, which is a good idea.
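For instance, in a command-based design like this sketch (made-up names again), shipping a new feature means adding a class, not editing one:

#include <iostream>
#include <map>
#include <string>

class ICommand
{
public:
    virtual ~ICommand() { }
    virtual void Execute() = 0;
};

// The dispatcher is closed for modification...
class CommandDispatcher
{
public:
    void Register( const std::string& name, ICommand* command )
    {
        m_commands[ name ] = command;
    }
    void Run( const std::string& name )
    {
        std::map<std::string, ICommand*>::iterator it = m_commands.find( name );
        if( it != m_commands.end() ) { it->second->Execute(); }
    }
private:
    std::map<std::string, ICommand*> m_commands;
};

// ...so the new feature arrives as a brand-new command,
// registered alongside the existing ones.
class ExportReportCommand : public ICommand
{
public:
    void Execute() { std::cout << "exporting report..." << std::endl; }
};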

My two-step program to better pattern application

I've learned from my mistakes.  I've come to the conclusion that patterns are a tool best applied to existing, working, testable code.  My personal commitment is to stop using patterns at the design phase, but to continue employing them where they make sense.  How will I do this?

My two steps are simple - when I work on a software feature, I promise to do the following:

  1. Design and code the feature to a working state as quickly and simply as possible.  At this phase I promise not to employ patterns a priori, although I may employ dependency injection to make testing easier.
  2. Refactor my code to separate concerns, remove duplication, and improve readability.  At this phase, I will employ patterns NOT wherever possible, but only as necessary to meet my goal.  That means I'll pull them in when I need to separate concerns, when I need to untangle spaghetti code, when I need to make the code understandable.  There's a small sketch of what I mean just after this list.
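By way of example - hypothetical, with made-up names - the two steps look something like this:

#include <string>

// Step 1: the simplest thing that works.
double Discount( const std::string& plan, double total )
{
    if( plan == "gold" )   return total * 0.80;
    if( plan == "silver" ) return total * 0.90;
    return total;
}

// Step 2: only once those ifs start breeding across the codebase does
// the strategy earn its keep.
class IDiscountPolicy
{
public:
    virtual ~IDiscountPolicy() { }
    virtual double Apply( double total ) = 0;
};

class GoldDiscount : public IDiscountPolicy
{
public:
    double Apply( double total ) { return total * 0.80; }
};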

I'll let you know how the rehab goes.  Until then, there's no place like code ... there's no place like code .... there's no place like code .....



String Theories

§ April 22, 2008 15:55 by beefarino |

Yesterday morning, my colleagues and I were having a discussion about all the different string representations and abstractions we've had to work with over the course of our lives as programmers.  Here's the list I came up with, in rough chronological order of my exposure to them:

  1. TRS-80 BASIC - I have no idea how its strings were represented in memory because I've never gone back to that platform, but I assume it was just another character array.  If someone knows the specifics I'd LOVE to learn about it.
  2. Borland Turbo PASCAL character arrays
  3. C/C++ char* / char[]
  4. Win32 LPSTR, LPCSTR, and all the other constructs from the Win32 API that I had to learn about because Owen and Michael would insist on leaving STRICT defined (thanks guys!).
  5. MFC CString
  6. Javascript string objects 
  7. Perl $scalars - a string, a number, a reference, or all three, or perhaps none of those.  I once dug deep into Perl internals; I could tell you more than you care to know about scalars, memory management, and type conversion inside the Perl interpreter; of course, show me some of the Perl I hacked up a few years ago and I won't be able to tell you what it does...
  8. C/C++ wchar_t* / wchar_t[]
  9. Win32 TCHAR* / TCHAR[] - yes, technically the same as either char* or wchar_t*, thanks for not commenting about it.
  10. BSTR -  WTFBBQ?!  OIC - it's a pointer to the MIDDLE OF THE FRACKING STRING STRUCTURE, so I have to do pointer calculus to determine the length of the string and suck out the relevant bytes (see the sketch just after this list)....  well, thank goodness there's:
  11. _bstr_t - ok, a bit friendlier, but I'm still glad that there's:
  12. CComBSTR - ah, a Length() method!  
  13. VARIANT - *sigh* .... the lengths to which I went to pacify OLE Automation lust.  
  14. _variant_t - ignored in favor of:
  15. CComVariant - use only when necessary, follow each use with a thorough handwashing.
  16. PHP strings - never learned the internals of PHP.  I assume it operates on the same type of abstraction as the Perl scalar - anyone know for sure?
  17. Java string objects - took some getting used to.  Why can I + two strings, but not two Matrices?  How does it make sense that the base Object returns an instance of the derived type String from Object.toString()?  See what happens when an active mind is no longer consumed with memory and pointer management?
  18. .NET string objects
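About that BSTR business in item 10 - here's a minimal sketch of what I mean by pointer calculus.  (The documented SysStringLen() call is the sane alternative; the [-1] indexing is the hard way.)

#include <windows.h>
#include <oleauto.h>
#include <stdio.h>

int main()
{
    BSTR bstr = SysAllocString( L"hello" );

    // The BSTR pointer aims at the characters, not at the start of the
    // allocation; a 4-byte length prefix (in bytes, not characters)
    // lives immediately before the address the pointer holds.
    UINT cch = SysStringLen( bstr );      // the civilized way: 5
    DWORD cb = ((DWORD*)bstr)[ -1 ];      // the pointer calculus: 10

    wprintf( L"%u characters, %lu bytes\n", cch, cb );

    SysFreeString( bstr );
    return 0;
}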

That's what I came up with in about 5 minutes of gazing longingly over my geek life.  I'm sure there are others - my list doesn't include all of the one-off custom implementations I've made, or the third-party tools we used to use for cross-platform application development, or stuff like XML tokens, entities, etc.

I created the list for fun, but it's got some pretty interesting aspects to it.  For one, the basic string construct I learned on that TRS-80 never really changed.  Sure, strings are immutable objects now, but really their representation and purpose have persisted since my dad brought home that fat grey box with the tape drive and a keyboard that sounded like a hole punch.

Second, it's made me realize that I take a lot of stuff for granted these days.  Here's some code from somewhere between #8 and #9 on my list:

TCHAR *psz = new TCHAR[ iStringSize ];
if( NULL == psz )
{
    return E_OUTOFMEMORY;
}

// ... use the buffer ...

delete[] psz;
psz = NULL;

Even writing this as an example makes me very nervous.  A few years ago I wouldn't have batted an eye, but these days it feels like a lot of work to pull all of the allocation, pointer management, and deallocation together.  And this example really doesn't account for all the things that could go wrong - for starters, a standards-conforming new throws std::bad_alloc rather than returning NULL, so even that check is suspect...

So I have to say that, all strings considered (*groan*), I'm pretty content with the state of the art.