Some new books arrived
First off, Mike Cohn’s “Agile Estimating and Planning”.
I enjoyed Mike’s previous book “User Stories Applied”, which contained a lot of useful techniques, each presented in a short, practical chapter with plenty of examples. Agile Estimating and Planning has the same structure and light writing style as User Stories Applied.
This book is the perfect companion to the User Stories book: you’ve got a bunch of cards with stories; what now? Your customer wants to know how long it will take, how much it will cost, and how many people it will need.
The book starts off with a discussion of the disadvantages of estimating tasks and the advantages of estimating features. Then we see some techniques for estimating in “story points” or “ideal days”. I share Mike’s preference for story points: they’re simple and reflect the intrinsic difficulty of the story. Thus, I expect a story’s points to remain constant. All the other variables that influence how long it takes (the skill of the team, the size of the team…) are reflected in the changing parameter of velocity: how many points we can implement in one release. The principle (which Mike attributes to Tom Poppendieck): “Estimate size (points), derive duration (man days)”.
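To make that “estimate size, derive duration” arithmetic concrete, here is a minimal sketch with made-up numbers (the story point estimates, velocity and iteration length below are illustrative, not taken from the book):

```python
# Sketch of "estimate size (points), derive duration": sum the point estimates,
# divide by observed velocity, convert iterations to calendar time.
# All numbers are invented for illustration.

story_points = [3, 5, 8, 2, 5, 13, 3, 8]   # size estimates for the remaining stories
velocity = 12                               # points the team has been completing per iteration
iteration_length_weeks = 2

total_points = sum(story_points)                         # 47
iterations_needed = -(-total_points // velocity)         # ceiling division -> 4
duration_weeks = iterations_needed * iteration_length_weeks  # 8

print(f"{total_points} points at {velocity} points/iteration "
      f"-> {iterations_needed} iterations (~{duration_weeks} weeks)")
```

If the team changes or the context shifts, the point estimates stay put; only the velocity (and therefore the derived duration) moves.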
Then comes the other important point: planning by customer value. Mike describes how the customer can estimate value and prioritize stories, including some financial measures like Net Present Value. When we know the customer’s needs, we can schedule the stories into releases. Mike adds some extras to the basic agile (XP) planning process: buffering techniques from Critical Chain planning to reduce uncertainty, and planning for projects with multiple teams.
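Net Present Value simply discounts each period’s cash flow back to today and sums them. A small sketch for comparing the value of two feature themes, with made-up cash flows and discount rate (not figures from the book):

```python
# Hedged sketch: Net Present Value as one way to compare the value of feature themes.
# Cash flows and discount rate are invented for illustration.

def npv(rate, cash_flows):
    """NPV of per-period cash flows, with period 0 (the investment) first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

theme_a = [-20_000, 8_000, 8_000, 8_000, 8_000]  # build now, returns over four periods
theme_b = [-5_000, 2_000, 2_000, 2_000, 2_000]

rate = 0.02  # assumed discount rate per period
print(f"Theme A NPV: {npv(rate, theme_a):,.0f}")
print(f"Theme B NPV: {npv(rate, theme_b):,.0f}")
```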
Of course, you have to track and monitor the performance of the team and take the appropriate corrective action, or you wouldn’t be agile. Mike tells you how to do that, too. The book ends with an analysis of why agile planning works and a case study planning stories for a game.
If you’re new to agile estimating and planning, this book will give you the practical information you need to start applying the techniques. If you’ve been doing this for some years, as I have, much of the material will be familiar, but you will still discover some useful techniques and explanations of why they work. That comes in handy when you’re trying to introduce agile estimating and planning in your team or organisation.
The second book is “Discovering REAL Business Requirements for Software Project Success”. In this book, Robin Goldsmith claims that many projects get into trouble because they don’t have the real business requirements to work with. The problem is twofold:
- Real, meaning that we touch on the essence of what the customer needs.
- Business requirements, meaning that we understand the business’s needs and goals before we decide what part (if any) we will automate. All too often, what we write are product requirements: how the system(s) should behave. But have you ever asked yourself whether a system was really the best solution to the customer’s problem?
The book gives a lot of techniques for discovering the business requirements. You can use these techniques both with heavyweight upfront requirements efforts and with agile story writing.
I think stories are a great way to describe business requirements. You can use them to stage small “plays”, where people acting out the different roles go through a certain scenario, based on the available stories. As soon as the play reaches a dead end without a suitable story, you know you need to write another story. That’s just one of the many ways you can both generate and verify requirements.
What I like most about the book is that it contains a lot of such “tests”. You can use these tests to verify whether your business requirements are really business requirements. Hmmm, can you do test-driven requirements discovery? I think so. I’ve been using some of these techniques in a TDD manner: if the requirements test fails, you need to discover some more requirements, until the test succeeds. Red-Green-Refactor: it’s not just for code!
Marilyn Bush’s and Donna Dunaway’s “CMMI Assessments: Motivating Positive Change” deals with performing CMMI assessments to ascertain the state of a process improvement effort, as opposed to performing an audit to rank the company on the CMMI’s maturity scale.
Many of the obstacles to a successful assessment are the same as for a retrospective: the need to create trust, to create (and maintain) a constructive atmosphere, to ensure that we work to improve the next project rather than complain (or assign blame) about what happened in the previous one, and to avoid people “gaming” the system to look good. With an assessment, the dangers are even greater, because there is always that maturity ranking. All too often, the maturity level, not the process improvement, becomes the goal. This and many other potential problems are recognized and addressed by the book. Some claimed benefits sound quite familiar:
- Assessments effect change by involving and motivating organizations in efforts of self-analysis. Or, as the cool kids say: “Hansei”. The book stresses that everyone involved in the work should be involved in process improvement and assessments.
- Assessments effect change because they help the workers in an organization understand that Processes, not People, need to be fixed. It’s never a People Problem, it’s always a Process Problem; how very Toyota Way!
I think using CMMI assessments to motivate people to perform process improvement is a bit of an uphill battle:
- the idea of ranking practically invites gaming
- the staged representation “forces” a certain order in your process improvement efforts. I prefer to attack bottlenecks or to improve flow by removing muda when I see them.
- a “real” assessment is quite a heavy, resource-intensive process. We need something that can be done a lot more frequently, to get faster feedback and to keep people motivated by seeing regular improvement. Something like Retrospectives. Buy Norm Kerth’s book if you haven’t already and look out for Esther Derby’s new book.
- there is a lot of confusion about what CMMI actually is. Is it a model, a process improvement technique, a process?
CMMI and agility
“Ohmygod, Pascal is straying from the one true Agile path. It starts with doing waterfall projects; before you know it, he’s onto the hard stuff, like CMM!” 🙂
Is there a conflict between agility and CMMI? A bit, but not a lot, I think. But I need to learn a bit more about CMMI. For me, CMMI is not a process. CMMI is a set of questions about process and about how to ask them (assessments and appraisals). Processes or methods like XP, Scrum or any other provide answers to a lot of these questions. For example, Philips showed that you can easily be appraised at CMM Level 2 by applying XP and Scrum.
One can disagree with the questions or the fact that they are grouped in maturity levels (I prefer the continuous representation), but I think we can all agree that any method should be able to answer questions like “How do you discover and manage requirements? How do you plan? How do you manage risk?…”. As long as they are open questions…
My view is contradicted by what’s written in “CMMI SCAMPI Distilled” [Ahern et al.] on pages 32-33, where they compare hacking, agile and “planned development”, as represented by CMMI and SCAMPI (based on the work of Boehm and Turner). They write things like: “CMMI can be applied to a large class of software-intensive efforts. As projects become more complex and increase in size, Agile methods are less applicable and a planned delivery approach contained in CMMI is often the preferred approach.” So, CMMI is a method now?
/me confused… I’ll let you know when I’m less confused.