XP Days Benelux 2013 – Call for session proposals

XP Days Benelux is an international conference where we learn to bring software to life and grow mature systems that support business needs.

It provides an excellent environment for exchanging ideas, hands-on exercises and extreme experiences.

The conference will be held on 28 and 29 November, 2013 in Mechelen, Belgium.

The best way to learn is to facilitate a session at XP Days!

We’re looking for sessions where you explore ideas as well as questions. Sessions that dig deeper, going beyond the basic techniques and practices. We really want to find out why/how things work or don’t work. We invite you to propose:

  • hands on coding/design/architecture sessions;
  • discovery sessions – open ended workshops that explore new topics, common problems, promising techniques, or burning questions;
  • experiential learning sessions – get people learning by doing and reflecting, for example through games or simulations.

We’re not only interested in agile and software related topics but we also want to explore boundaries and cross borders. What can we learn from other disciplines or sciences?

Available timeslots are 75 and 150 minutes.

We also welcome short experience reports (30 minutes) that focus on what didn’t work and why.

Do you have an interesting story, idea or question?

Send us your session idea today.

What’s so special about XP Days?

For one thing, we constantly try to apply XP, agile, lean, systems thinking, theory of constraints and all the other stuff we talk about. It wouldn’t be an agile conference if it wasn’t organised by using agile values, principles and techniques.

For example: how is the way we build the program agile?

Collaborating to get the best possible sessions

We don’t think Big Design Up Front (BDUF) is appropriate or possible, not even for a session description. Therefore, we ask session authors to send in a simple proposal: title, subtitle, presenters, description. It’s the equivalent of a story card: the promise for a conversation about the session.

Once the proposal is sent in, the author(s) can incrementally fill in more information as the session becomes clearer. There are separate deadlines for submitting proposals (July 13th) and finalising them (August 28th), to avoid “student syndrome” and hurried last-minute submissions. Within those two timeboxes, the authors work at their own (sustainable) pace.

Session authors help other session authors by asking questions and giving feedback using the “Perfection Game”. The proposal authors (and the conference organisers) act as “peer coaches”. Through the magic of questions and feedback, each author iteratively improves their proposal(s). Aren’t session authors helping their “competitors” in the race for a place in the program? We typically get three proposals per slot, so the competition is fierce. Well, nobody said collaboration was easy. In the end we all benefit from a better conference program. And the organisers see who applies the agile values and who just talks about them.

Isn’t this coaching only useful for beginners? It certainly helps new presenters to marshal their ideas. But, as we’ve experienced while pair programming, experienced presenters get new insights and clarity when “juniors” ask naive questions.

[Photo: Kickoff 2011 – goals]

The most powerful coaching questions focus on goals and benefits for participants. Why should someone come to this session? What is the one idea you want to get across? What will participants be able to do after attending this session that they couldn’t do before? How will you know this session was a success for participants?

A common problem with session proposals is that the authors try to “pile on” too many ideas. The usual session feedback reads “not enough time”. The solution is not to take more time, but to present less material more thoroughly. Think deeply: what is your Minimum Viable Product? If there’s any time left in your session you can spend it on interaction with participants and exploration of your central idea, rather than adding more ideas.

Release early, release often

A session proposal is not a session. You may have created the greatest session proposal, but that doesn’t mean you’ve got the greatest session. There’s only one way to know if your session design works: implement it, release it and get feedback from participants. We therefore encourage presenters to do “tryouts” of their sessions. We offer places to do this at the Agile Belgium and Agile Holland meetups. You may also be able to schedule a tryout at a local user group or in your company. Or just invite a few friends and colleagues to try out your session.

Seasoned presenters know that it takes at least three iterations to get all the details and the timing right.

Preparing to select sessions

Now that we’ve iteratively improved session descriptions, the most difficult task is to select the sessions for the program.

Near the end of the session improvement process we ask all session authors to select the sessions they want to see in their “ideal track”. We use these votes as a start of our selection process.

We also use these votes to calculate the “value” of a draft program. We do this by calculating the Program Attendance Factor (PAF) for each voter. The PAF value indicates how many of their preferred sessions a voter can attend, given the layout of the program. To count, a preferred session must be in the program and it must not be at the same time as another preferred session. A program that has a higher total PAF has more value for voters, because they can attend more preferred sessions.
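Under one reading of that rule (clashing preferred sessions count for neither), a single voter’s PAF could be sketched like this; all session and slot names are invented:

```python
def paf(preferred, program):
    """Program Attendance Factor for one voter (a sketch).

    `program` maps each timeslot to the set of sessions running in
    parallel in that slot; `preferred` is the voter's set of preferred
    sessions. A preferred session counts only if it is in the program
    and does not clash with another preferred session in the same slot.
    """
    count = 0
    for sessions_in_slot in program.values():
        if len(preferred & sessions_in_slot) == 1:
            count += 1
    return count

program = {
    "Thu 9:00": {"TDD katas", "Kanban basics"},
    "Thu 11:00": {"Lean accounting", "Retrospectives"},
}
# "TDD katas" and "Kanban basics" clash, so neither counts; PAF is 1.
print(paf({"TDD katas", "Kanban basics", "Retrospectives"}, program))
```

Summing this value over all voters gives the total PAF used to compare draft programs.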

In preparation for the program committee meeting we create a card for each session. Making a conference program requires a lot of visual management:

  • The color of the card indicates the type of subject so that we can quickly see if the program has a good balance of sessions
  • The names of the presenters are clear so that we can ensure no presenter has more than one session in a day (sustainable presenter pace)
  • The card has coloured dots to indicate the type of session and the experience level so that we can see at a glance if there’s a good balance for participants with different learning styles and experience
  • The cards have different sizes representing session length so that we can’t put them in program slots of the wrong size

The program committee reviews all session proposals before the program committee meeting. There are so many proposals that no single committee member can review, or even read, them all. Therefore, the committee divides the sessions among its members and trusts that the other members have good judgment.

Before the program committee meeting, the proposals are put into four categories:

  • the sessions we really want to see in the program. Each program committee member can “champion” a session
  • the sessions that we won’t consider because they’re incomplete, withdrawn or inappropriate
  • the sessions that were selected in the voting process
  • other sessions

Sessions on the board

We have large sheets of paper that represent each of the days with drawn slots that are just the right size for the session cards.


Building a program happens in a few rounds:

  1. Create a first version by putting the championed and most-voted-for sessions in appropriate slots. We do this track per track. For example: first fill a track on both days with technical sessions, then fill a track with process sessions… It’s OK if some slots are still empty. This first step should go quite quickly, without too much discussion. The goal is not to create a perfect program; it’s not even a workable program. We just need a version to start iterating.
  2. We spot a constraint violation: for example, a presenter who does two sessions in one day, or too many selected sessions of one type or subject. We resolve the violation by swapping the problem session with another same-sized session, whether it is already on the board or not. We may also spot improvements in balance or PAF score; these are performed with swaps too. The only rule is that a swap can’t introduce new constraint violations. We keep iterating until the program stabilises. We also select a few “backup” sessions that we can add to the program in case one of the selected presenters can’t come.
  3. We internally publish a draft version of the program. All organisers can review the program at their leisure to spot any constraint violations or improvements. We often have to take a bit of distance to spot these. When nobody has any more improvement ideas, we contact the authors of selected and rejected proposals. Sometimes presenter availability introduces new constraints (for example, the presenter can only be present on the first day but their session is scheduled on the second day). In that case, we go back to swapping sessions to resolve the constraint.
  4. DONE. Until something happens like a presenter dropping out.
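The constraint checks in round 2 are easy to picture in code. Here is a sketch (invented names; the real process uses paper cards and eyeballs) of the “no presenter has more than one session per day” check:

```python
from collections import Counter

def overbooked(day_sessions):
    """Presenters scheduled more than once on the same day; this
    violates 'sustainable presenter pace'. `day_sessions` is the list
    of presenter lists for one day's sessions (names invented)."""
    counts = Counter(p for presenters in day_sessions for p in presenters)
    return sorted(p for p, n in counts.items() if n > 1)

day_one = [["Alice"], ["Bob", "Carol"], ["Alice", "Dave"]]
print(overbooked(day_one))  # ['Alice']
```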

Getting better

[Photo: Kickoff 2011 – retrospective]

We continuously improve this proposal-improvement-review-selection process. We’ve incrementally built a tool that supports our workflow. New features get added just-in-time as we notice irritations or get new ideas for improvement. Most of the organisers participate in other conferences, where we “steal” the best ideas. In return, we invite everybody to “steal” any idea they like.

In summary: creating a proposal, growing and refining it, coaching others to improve their proposals, doing tryouts and refining the session is a lot of work. And then you might not even get selected, because there are so many great proposals.

Is it worth it? What’s the worst thing that could happen? You spent some time understanding an interesting subject better. That’s not so bad, is it? If you get selected, or if you just attend, you get to spend two days with open, intelligent, helpful and experienced people, exploring and discovering interesting new ideas and problems. And one of those ideas or problems could be yours.


Ignore the cost accountants at your peril, because they won’t ignore you

Why should I learn about cost accounting?

It may not be immediately obvious, but the way your company manages its finances, costs and budgets has a profound effect on the way projects or products are started, run and terminated. You may have been in situations where you sat back in amazement at some incomprehensible management decision and thought “what were they thinking?”

They were probably thinking about budgets and costs. You will be surprised again next time unless you learn to understand, and talk with, your CFO and cost accountants.

Fun with cost accounting

Pierre Hervouet and I have created a session that explains the basics of cost accounting (based on making and pricing cocktails) and presents three alternative views you may find useful:

  • Throughput Accounting
  • Lean Accounting
  • Beyond Budgeting

The session doesn’t certify you as an accounting master, but it provides food for thought and pointers to plenty of material you can dig into if you want to know more. The materials (currently only in French) are published on the agilecoach site.
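As a taste of how these views differ, here is a toy calculation (numbers entirely made up, not taken from the session’s cocktail materials) contrasting traditional full-cost pricing with Throughput Accounting:

```python
# Toy cocktail numbers, entirely made up for illustration.
price = 8.00         # selling price of one cocktail
ingredients = 2.00   # totally variable cost per cocktail
labour_share = 3.00  # allocated share of bartender wages per cocktail
rent_share = 1.50    # allocated share of rent and other fixed costs

# Traditional cost accounting allocates everything to the unit:
full_cost = ingredients + labour_share + rent_share
unit_margin = price - full_cost

# Throughput Accounting attaches only truly variable costs to the unit;
# wages and rent are operating expense, paid whatever the volume:
throughput = price - ingredients

print(unit_margin, throughput)
```

The same cocktail looks barely profitable under full costing (1.50) yet contributes 6.00 of throughput toward covering operating expense, which can lead to very different decisions.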

Join the conversation

Why don’t you go see your CFO or cost accountants today? Ask them what’s keeping them up at night. You may discover that you face much the same issues and that the solutions are similar. Yes, yes, accounting and budgeting are getting more agile, more lean!

The Agile Alliance has a new “Agile Accounting Standards” program to “Engage with FASB’s Emerging Issues Task Force to promote and develop an Agile Accounting Standard that will better define and standardize internal IT development costs for organizations that use an iterative or agile software development methodology” because “The phased gate language [of the current standard] results in significant confusion and challenges of interpreting how to map the iterative work that happens throughout an Agile project lifecycle and is becoming an increasingly urgent issue.”


Call for Sessions Agile Tour Brussels 2013

The Agile Belgium Community is calling for speakers for the Agile Conference: Agile Tour Brussels 2013. This event will take place on the 27th of September in Brussels, Belgium.

Deadline for submission:  20th of June 2013

What’s Agile Tour Brussels?

It’s the 2nd edition of the Agile Tour Brussels conference. Last year we gathered attendees and speakers from Belgium, France, the Netherlands and Switzerland to share our passion for Agile. The purpose of this conference is to gather in one place Agile practitioners and people wanting to know more about Agile.

At Agile Tour Brussels you will speak to a mix of attendees, from complete newcomers to Agile to experienced practitioners and experts.

We are looking for all kinds of sessions (lectures, interactive talks, workshops, technical sessions…) on any topic: Scrum, XP, Kanban, Lean, DevOps, leadership, Lean Startup, coaching, systems thinking, product aspects, agile games…

We are looking for sessions at different levels – beginner, advanced and expert – and for both new and experienced speakers.

How to submit a session?

To submit a session, fill in the form located here: http://at2013.agiletour.org/fr/callForSpeaker.html

If you have never registered on the website agiletour.org, you will be asked to create an account.

Tips for a successful submission:

  • Don’t forget to specify that you are applying for Agile Tour Belgium – Brussels
  • Specify the size of the audience you’d like to have for your session (e.g. < 100 people)
  • Sessions must be in English
  • We prefer sessions of 1 hour maximum

All information about the event and previous edition can be found on www.atbru.be

If you have any questions, feel free to send an email to bruno@atbru.be

Thanks in advance for your support!

The Agile Tour Brussels Organization Team.


A simple property Dialog – An alternative approach

Performing a Root Cause Analysis for a simple bug takes too much time

In a previous post I described how we performed a root cause analysis for a simple bug: one incorrect value in a dropdown. Performing such a heavy analysis (which generates a lot of rework) may not be appropriate for every bug.

Here’s how another team handled a very similar bug: one value missing from a dropdown.

The fast way to deal with simple bugs

  • There’s a bug: in one screen, one of the dropdowns is missing one value: “X”. This bug is unexpected: in all other screens the dropdowns behave correctly; each of them contains “X” as the final and default value. Why only this screen? What’s so special about it? Isn’t the same code used for all dropdowns with the same behaviour?
  • The bug is reported during internal testing. The bug can be repeated very easily.
  • Developers grumble (I told you the “Thank you” step in the algorithm to write perfect code was difficult!)
  • A developer takes the bug and fixes it: an extra value “X” is added to the dropdown in the screen
  • Tester validates: go into the screen; open the dropdown; the value “X” is there. Bug closed.
  • Software gets shipped to customer
  • Done. Next bug!


A few days later…

  • Customer files a bug report: each time they enter this specific screen an “X” gets added to the dropdown. Enter the screen twice, you see “X” “X”. Enter again, you see “X” “X” “X”. And so on…
  • Developers grumble: “Not another stupid bug report!”
  • A developer takes the bug and analyses it. “Some idiot has added ‘X’ to this dropdown. Let’s remove that.”
  • Tester validates: go into the screen; open the dropdown; no “X” is added. Bug closed.
  • Software gets shipped to customer
  • Done. Next bug!

Easy. Only took a few minutes to fix and (a few days later) a few minutes to test. And another release to build, ship and install.

A few days later…

  • Customer reopens their bug report: when they enter the screen, there is no “X”. To be clear: they expect exactly one “X” in this dropdown (and all similar dropdowns in other screens). Not zero. Not an infinite number.
  • Developers grumble. “When will they make up their mind?”
  • Developer takes the bug and fixes it: an “X” is added to the dropdown unless there’s already an “X” in the dropdown
  • Tester validates: go into the screen, one “X”; go into the screen again, still one “X”; go into the screen again, still one “X”; go into other screens; go into this screen, still one “X”. Stop the application. Restart the application. Go into the screen, one “X”; go into the screen again, still one “X”. Bug closed.
  • Software gets shipped to customer
  • Done

Easy. Only took an hour or two to fix and (a few days later) an hour to test. And another release to build, ship and install.
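The three fixes can be sketched as variations on how the dropdown is populated when the screen is entered (a minimal model in Python; the function names and the value “X” are stand-ins, not the team’s actual code):

```python
def enter_screen_fix1(dropdown):
    dropdown.append("X")        # fix #1: always add; duplicates "X" on re-entry

def enter_screen_fix2(dropdown):
    pass                        # fix #2: never add; "X" is missing entirely

def enter_screen_fix3(dropdown):
    if "X" not in dropdown:     # fix #3: add only if absent; exactly one "X"
        dropdown.append("X")

d = ["A", "B"]
enter_screen_fix1(d); enter_screen_fix1(d)
print(d)  # ['A', 'B', 'X', 'X']: the customer's second bug report

d = ["A", "B"]
enter_screen_fix3(d); enter_screen_fix3(d)
print(d)  # ['A', 'B', 'X']: entering twice still yields one "X"
```

Only the third, idempotent version satisfies the real expectation: exactly one “X”, no matter how often the screen is entered.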

And they all lived happily ever after

Except for the customer who grumbles “how can I trust them with the important stuff if they can’t get the simple stuff right?”

Except for the developers who grumble “how can we get any work done if we have to keep fixing these stupid bugs?”

Except for the testers who grumble “why do we have to retest every bugfix a thousand times? Can’t ‘they’ get things right the first time? We need fewer releases, more detailed specs, more elaborate test scripts, more time to test and, above all, a lot more testers to get any quality into this application.”


A simple property Dialog – Adventures in Root Cause Analysis

I’ve been going on a bit lately about the value I get from performing “Root Cause Analysis” when I encounter a bug.

A simple algorithm to write perfect code

You don’t have to watch the presentation. The message is this: each time there’s a bug:

  1. Someone finds a problem and reports it to you
  2. Thank the reporter
  3. Reproduce the problem
  4. Add (at least) one failing test
  5. Correct the problem
  6. Rerun all tests and correct your correction until the tests pass
  7. Improve your tests
  8. Improve the way you write tests
  9. Look for similar problems. Goto 2
  10. Make this type of problem impossible
  11. Perform the actions that were identified during the Root Cause Analysis

But that’s just common sense!

We’re all agile lean continuously improving test driven extreme programmers, aren’t we? Doesn’t everybody do this?

Yes… but…

  • It takes too much time
  • We can’t do this for every bug because we’ve got too many bugs
  • We don’t want to waste time on bugs, we prefer to spend time adding valuable features
  • As Dijkstra said: “Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.”
  • <your objection here>

Just a little taste

The presentation tells the story of a team that performed a Root Cause Analysis. Here’s another team’s story:

Coach: Can we try a small Root Cause Analysis experiment?

Dev: Yes, but… we don’t have a lot of time.

Coach: Do you have one hour? We can timebox the experiment.

Dev: Sure, but… you can’t do anything useful in one hour.

Coach: Maybe you can’t; maybe we can.

Dev: ???

Coach: Have you seen any interesting bugs lately?

Dev: Well, I just fixed a bug. But it’s so trivial you won’t discover anything useful.

Coach: Thank you! Let’s see.

I love clear bug reports

First we look at the bug report.

Let’s apply the algorithm

Coach: have you thanked the reporter yet?

Dev: no…

Coach: Let’s do it now.

Dev: OK… <calls the Product Manager>

Dev: Hi, I just called to thank you for reporting the bug about the property dialog of the swizzled foobar.

Product Manager: <surprised> OK…

Dev: The report was so clear the bugfix almost wrote itself. Thanks!

Product Manager: You’re welcome!

Coach: OK. We’ve now done the hardest part. Let’s do the next steps.

Next step: reproduce the problem. That’s quite easy: start up the application, select a foobar, swizzle it and open its property dialog. Look at the dropdown control for the status: it presents options ‘A’, ‘B’ and ‘C’.

Coach: We have a manual test procedure. Can we automate this test?

Dev: No. This is user-interface code. We’ve performed some experiments earlier and decided that it wasn’t worth the time and effort to create and maintain automated tests for the user interface.

Coach: OK.

Fix and Test

The fix was really simple: a single IF added to the code.

Rerun the application, perform the manual test: the correct options are now offered.

Result! Bug fixed! High Five! Job well done.


Hmmmm… The fix is indeed simple and the bug is now gone. But that IF is worrying. I’ve seen (and had to maintain) code that was riddled with IFs: each time a bug was detected, the developers added an IF for the specific conditions and expected outcomes described in the bug report. Let’s add a post-RCA action: read and discuss the ANTI-IF campaign.

Dev: You see now: this is a trivial bugfix, there’s nothing to learn here.

Coach: Maybe. We’ve only spent 10 minutes so far. Let’s see the rest of this code.

Show me the code

If I take off my glasses, squint a bit and look at the code, this is what I see:

[Image: code outline]

Red lines are user interface code, calls to the UI toolkit. Green lines are simple java, C#, ruby… UI-independent code.

Guess where the bug is… In the Green code. But we can’t test it, because we can’t test user interface code. So, that doesn’t help us.
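A hypothetical reconstruction of such tangled code: `FakeToolkit` stands in for the real UI toolkit, and the status rule (no “C” for a swizzled foobar) is an assumption made for the sake of the example:

```python
class FakeToolkit:
    """Minimal stand-in for the real UI toolkit, just enough to run the sketch."""
    def __init__(self):
        self.dropdowns = {}

    def set_dropdown(self, name, values):
        self.dropdowns[name] = values

def show_foobar_properties(ui, foobar):
    ui.set_dropdown("name", [foobar["name"]])  # red: UI toolkit call
    options = ["A", "B"]                       # green: plain logic
    if not foobar["swizzled"]:                 # green: the freshly added IF
        options.append("C")
    ui.set_dropdown("status", options)         # red: UI toolkit call

ui = FakeToolkit()
show_foobar_properties(ui, {"name": "f1", "swizzled": True})
print(ui.dropdowns["status"])  # ['A', 'B']
```

The buggy logic lives in the green lines, but because they sit inside a method full of red toolkit calls, the whole method is treated as untestable.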

A small step for a programmer…

We don’t have any tests, so any refactoring is going to be risky. Let’s do some simple, safe refactoring: move all the green code together and all the red code together.

Now the code looks something like this:

[Image: refactored code outline]

Exactly the same code, it does exactly the same thing. What have we achieved? Nothing. Yet.

Another small step

Now that we’ve got all the green code together, what does it do? Essentially, it fills in a number of variables (like the list of values for the status dropdown) which are then put into the UI controls.

Let’s do another small, safe refactor:

  • Create a new class. I don’t have a good name yet, let’s call it “Stuff” for the moment
  • Move every local variable in the green code into the Stuff class
  • Create an instance of Stuff in the function
  • Fill in all fields of the instance, just like you fill in the local variables

Again, nothing really changes. We’ve just collected all local variables in one object.

Our code now looks something like this.

Finally, a real refactoring

Now the code has been reorganised, we can finally do a real refactoring, still taking small steps because we’re working without a safety net.

Let’s extract the green and red code in separate methods. Our code now looks like this:
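A sketch of where this hypothetical code could end up after these refactorings; the `Stuff` container and `prepare` method follow the placeholder names in the text, and the status rule is an assumed example:

```python
class Stuff:
    """Placeholder name; holds everything the dialog will display."""
    def __init__(self):
        self.name = ""
        self.status_options = []

def prepare(foobar):
    # Green code: UI-independent, and therefore testable.
    stuff = Stuff()
    stuff.name = foobar["name"]
    stuff.status_options = ["A", "B"]
    if not foobar["swizzled"]:                 # assumed status rule
        stuff.status_options.append("C")
    return stuff

def show_foobar_properties(ui, foobar):
    stuff = prepare(foobar)                            # green, extracted
    ui.set_dropdown("name", [stuff.name])              # red: UI toolkit calls
    ui.set_dropdown("status", stuff.status_options)    # red
```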

Note to self: “Stuff” and “prepare” aren’t very descriptive. That’s probably because we don’t understand the code well enough yet. Let’s revisit naming when we understand better.

We’re now 30 minutes into the Root Cause Analysis

A giant leap for testing

Aha! We now have a method “prepare” which contains the bug AND doesn’t depend on the UI. Can we test the code now? Yes we can!

This test fails before the fix. It succeeds after the fix. We now have an automated regression test for this bug.
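In this hypothetical sketch, the regression test calls `prepare` without touching any UI; the status rule (no “C” for a swizzled foobar) is assumed for illustration:

```python
def prepare(foobar):
    """UI-independent logic extracted from the dialog (invented names)."""
    options = ["A", "B"]
    if not foobar["swizzled"]:
        options.append("C")
    return {"status_options": options}

def test_status_options_of_swizzled_foobar():
    # Before the fix, "C" was offered for a swizzled foobar and this failed.
    assert prepare({"swizzled": True})["status_options"] == ["A", "B"]

test_status_options_of_swizzled_foobar()
print("regression test passed")
```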

Improve your tests

This test raises a lot of questions:

  • What if the foobar isn’t swizzled? Are the options ‘A’, ‘B’ and ‘C’ correct in this case? Should we add a test?
  • Are there any other special cases for status depending on the properties of a Foobar?
  • Are there any other fields that depend on the properties of the Foobar?
  • Are there any other properties of the domain that are important?
  • ….

This could be the start of a really great conversation with the Product Manager and testers!

Result: we’re 45 min into the RCA and we’ve written an automated regression test of that bug in supposedly untestable code.


Those stupid names “Stuff” and “prepare” irritate me. Now that we’ve got automated tests for this code, we can refactor more audaciously.

What is “Stuff”? It contains the data as it’s shown in the View of the Foobar property dialog. It’s a ViewModel. Let’s rename it to FoobarProperties.

What does “prepare” do? It creates a FoobarProperties and stuffs values into it. What does FoobarProperties do? Nothing, it just sits there and contains these values. We might as well move the code from prepare into FoobarProperties:

And now the unit test becomes a unit test of FoobarProperties, a pure processing class, no longer of the mixed processing/UI class FoobarPropertyDialog:
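The renamed ViewModel and its unit test might then look like this (still a hypothetical sketch; the status rule is assumed):

```python
class FoobarProperties:
    """ViewModel: the data shown in the Foobar property dialog, no UI code."""
    def __init__(self, foobar):
        self.name = foobar["name"]
        self.status_options = ["A", "B"]
        if not foobar["swizzled"]:        # assumed status rule
            self.status_options.append("C")

# The unit test targets the pure ViewModel, not FoobarPropertyDialog:
props = FoobarProperties({"name": "f1", "swizzled": True})
assert props.status_options == ["A", "B"]
print("ok")
```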

Improve the way you test

Now that we have a unit test for FoobarProperties we can add more tests for different cases. Looking at those tests, we’ll see that some properties of a Foobar are independent of its state: e.g. statuses ‘A’ and ‘B’ are always offered. We can include those invariants in our tests. We’ll talk with the Product Manager first and extend the test case together.

We can now see that we consider too much code as “UI code” and therefore not testable. If we separate ViewModel code from View code, we can cover more code with fast unit tests.

Look for similar bugs

Coach: are there other places in the UI where we display or change the status of a Foobar?

Dev: Yes… Maybe 3-4 screens and dialogs

Coach: Did we make the same mistake there?

Dev: Let’s quickly test the application. It would be quite embarrassing if we got the same bug report for another screen…

Dev: Oops! We made the same mistake in one other dialog. The other dialogs are OK.

Coach: Let’s note the dialog to be fixed and move on, because we’ve only got a few minutes left in our timebox and I’d like to do the next step first before fixing this bug.

Dev: What more can we do?

Make this type of problem impossible

Now, before we fix the buggy dialog, we’ll apply the same safe refactorings to make the code testable, add tests to demonstrate the bug (and serve as regression tests once the bug is fixed) and then fix the bug. If we consistently test our ViewModel code and validate our ViewModel behaviour with the Product Manager, we’re less likely to overlook certain cases.

The fact that this bug appears in some dialogs and not in others tells us that we’re looking at different code. Some code takes into account the “swizzledness” of a Foobar, some code doesn’t.

But we can do better: why is this code duplicated? Ideally, we’d want one instance of the code that decides which status options to show. Otherwise, if the rules change (or more likely, if we discover we’ve missed an existing rule) we’ll have to remember to update all the pieces of code that determine status options of a Foobar. So, once we’ve extracted and fixed the ViewModels from both our buggy Dialogs, we’ll extract the common code that determines the status options. Of course, this common code will have unit tests that verify this common behaviour.
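Extracting the duplicated rule could look like this: one function owns the decision, and every ViewModel calls it (a sketch with invented names; the rule itself is assumed):

```python
def allowed_status_options(swizzled):
    """Single home for the rule deciding which status options to offer.
    'A' and 'B' are invariants; whether 'C' is offered depends on
    swizzledness (assumed rule for this sketch)."""
    options = ["A", "B"]
    if not swizzled:
        options.append("C")
    return options

class FoobarProperties:
    """ViewModel of the first fixed dialog."""
    def __init__(self, foobar):
        self.status_options = allowed_status_options(foobar["swizzled"])

class OtherFoobarDialogProperties:
    """ViewModel of the second fixed dialog; same rule, one implementation."""
    def __init__(self, foobar):
        self.status_options = allowed_status_options(foobar["swizzled"])
```

If the rule ever changes, it changes in exactly one place, and one set of unit tests covers it.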

Afterwards, we’ll do the same to the Dialogs that were implemented correctly. We can do this refactoring gradually, once the two bugs have been corrected. Ideally, we’ll do this if we have to change the code of those Dialogs anyway, to fix a bug or add a feature. Even if we don’t touch these classes, we’ll make sure we refactor them within X time, so that we don’t have to remember these “dangling” refactorings too long.

Some developers have taken the “swizzledness” into account, some haven’t. Let’s share our findings with the team and the Product Manager. We may have to organise a session to clarify some of the subtleties of our domain. Once they’re clear we can encode and document them as automated regression tests, so that we get a failing test next time we forget to take into account one of those subtleties.

Dev: Hey, coach, your hour is up! Shall we get a coffee?

Coach: Great! Let’s step away from the keyboard 🙂

Looking back (over a cup of coffee)

What have we done in 60 minutes?

  • Isolated code that contains a bug from “untestable” code
  • Added an automated test that shows the bug and can be run as a regression test after the fix
  • Simplified the code by separating the ViewModel from the View
  • Identified a new class of testable code, the ViewModels
  • Found another bug before anyone noticed
  • We know how to make code testable and better factored for this type of bug
  • We know how to avoid this type of problem and be ready for changes in the domain

Looking forward

During the Root Cause Analysis we noted a number of actions to be taken. We make each of these tasks visible (for example, by adding them to the Kanban board):

  • Extend the FoobarTest to cover more cases, in collaboration with the Product Manager and testers
  • Test and fix the second Foobar status bug
  • Make the code that determines the allowed status values common between the two bugfixed dialogs. Validate tests with the Product Manager
  • Refactor the other dialogs that display Foobar statuses so that they use the common code
  • List the subtleties of the domain and organise a knowledge sharing session with the Product Manager
  • Ask the team to review and discuss the ANTI-IF campaign
  • Let’s read some more about ViewModel architectures. If it looks useful we’ll make this a standard way of designing our user interfaces. If we do, we’ll have to see how we transition gradually to such an architecture.

Dev: Well… I never expected so many issues and ideas to come out of a Root Cause Analysis of such a simple bugreport. Let’s hope we don’t do too many of those Root Cause Analyses, because they generate a lot of work.

Coach: ????

Dev: That was a joke, coach. 🙂  I’m sold. Now, let’s get back and fix that bug we found.