Last week I talked about „Spicing up your retrospective“ at ALE2012. I gave a similar talk in June at the ACE!Conference in Krakow. But after the talk in Krakow, I had a chat with Bob Marshall, and he pointed out that fun isn’t the (only) answer to making retrospectives better. He also pointed me to a blog post where he wrote about these issues. So the following article is mainly based on his ideas.
Retrospective Challenges
For me, there are two main retrospective challenges:
- Create a motivating environment and keep the people engaged
- Work on the identified items
At first sight, it seems that these two challenges may be caused by:
- Repetition –> Boring Retros
- Same problems –> No effect
- No responsibility
- Tasks are not visible
- Tasks are too big
But these are only the things you see on the surface. If you dig deeper, you’ll find the real root causes.
Inject Purpose
IMHO, the root cause is a missing purpose. Any retrospective without a purpose is a complete waste of time (the same applies to any other meeting). It doesn’t make sense to change your retrospectives regularly and introduce new ideas as long as there is no purpose behind them. But how can you inject purpose into your retrospectives? The answer is: by using hypotheses.

To do so, I adapted the original retrospective flow by Diana Larsen and Esther Derby in the following way: The first two steps don’t change. But instead of directly generating insight, you check the hypotheses from the last retrospective. This is really powerful, as it offers you the possibility to check whether the tasks from your last retrospective had the effect you expected (your hypothesis). In most cases, you’ll find out that your hypotheses were wrong. Instead of simply checking whether you worked on all of the tasks you identified last time, you additionally check whether they were helpful and had a positive effect. If your hypotheses were wrong, this gives you the opportunity to examine why they didn’t have the expected outcome. Now you can enter the step „Generate Insight“ and check what went wrong. This approach helps you iterate on your tasks until you can fulfill your hypotheses. It could also be that you find out a hypothesis was complete nonsense. That is fine too.

Another change to the standard flow is the adaptation of the step „Decide What To Do“: you have to add a hypothesis to every task you identify. Otherwise you won’t be able to check whether the task helped. Make sure that your hypothesis is testable, as described in the scientific method. If your hypothesis is not testable, it doesn’t make sense. The closing step is the same as in the normal retrospective flow.
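To make the adapted flow concrete, here is a minimal sketch in Python of the two changed steps. The `Task` structure, the example descriptions, and the `check_hypotheses` helper are my own illustrative assumptions, not part of any retrospective tool:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Task:
    description: str
    hypothesis: str  # the expected, testable effect of the task

def check_hypotheses(tasks: List[Task],
                     observed: Dict[str, bool]) -> Tuple[List[Task], List[Task]]:
    """„Check Hypotheses“ step: split last retro's tasks into confirmed
    and refuted ones; the refuted ones feed „Generate Insight“."""
    confirmed = [t for t in tasks if observed.get(t.description, False)]
    refuted = [t for t in tasks if not observed.get(t.description, False)]
    return confirmed, refuted

# „Decide What To Do“: every identified task carries a hypothesis
last_retro = [
    Task("Collocate the team with the PO",
         "Response time to questions to the PO drops"),
    Task("Introduce a Definition of Ready",
         "Sprint Planning stays within its time-box"),
]

# In the next retrospective, record whether each hypothesis held
confirmed, refuted = check_hypotheses(
    last_retro,
    {"Collocate the team with the PO": True,
     "Introduce a Definition of Ready": False},
)
```

The refuted list is exactly what you carry into „Generate Insight“ to ask why the expected outcome didn’t materialize.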
Examples
I was asked at ALE2012 to give some examples:
- Task: Collocate the team with the PO –> Hypothesis: The response time to questions to the PO will drop.
- Task: Introduce a Definition of Ready (DoR) –> Hypothesis: Better prepared user stories, and we can keep the time-box of the Sprint Planning.
- Task: Stand-up in front of the task board –> Hypothesis: More focused stand-up (keep the time-box).

Keep in mind that these are only examples to give you an idea of how this could possibly work. I know that it is difficult to measure „more focused stand-up“, but I’m sure you’ll find a way 😉 In the next blog post I’ll write about some ideas on how to shape such retrospectives by using metaphors.
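The first example shows what a testable hypothesis looks like in practice: you need something you can actually measure before and after the change. A minimal sketch, assuming you simply note down how long PO questions stay unanswered (the sample numbers are invented):

```python
# Hypothesis: "The response time to questions to the PO will drop."
# Testable, because we can compare measured response times (in hours,
# hypothetical sample data) before and after collocating the team.

before = [4.0, 6.5, 3.0, 8.0]   # hours until the PO answered, pre-change
after = [0.5, 1.0, 0.25, 2.0]   # hours until the PO answered, post-change

def mean(xs):
    return sum(xs) / len(xs)

hypothesis_holds = mean(after) < mean(before)
```

If `hypothesis_holds` is false at the next retrospective, that is exactly the trigger to enter „Generate Insight“ and ask why the task didn’t work.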
The approach you describe for retrospectives has some Lean Startup ideas in it: setting a hypothesis and testing it. Great addition!
Thank you. In the end everything is based on the good old Deming Cycle –> PDCA.
It’s really great how you described it!
I’ve been using a very similar approach for a long time, but didn’t manage to conceptualize that I was actually adding new phases to the approach suggested by the „Agile Retrospectives“ book. And I wouldn’t have been able to put it so nicely as well :o)
However, I think I’m also doing something more: besides checking whether the action we tried was effective and actually satisfied the hypothesis, we also check whether the action was only a „one shot“ action or whether we need to systematize something in our process, so that a sustained benefit is achieved.
Thanks for your comment. I like the idea of checking whether it was only a „one-shot“ or whether the team needs to do more. IMHO you always have to check whether you’re only working on the symptoms instead of the real root cause.