Five things I’ve learned about facilitating Given-When-Then in lockdown

2021-06-23
Gherkin
by Gojko Adzic

Last summer, we started the Given-When-Then-With-Style community challenge. In 52 posts so far, with 22 challenges/solutions and accompanying articles, hundreds of community members sent in their ideas on how to describe tricky problems effectively with Gherkin, and the SpecFlow team even introduced a new plug-in as a direct result of one of the proposals.

As the first large-scale experiment of this type, the series of challenges was particularly interesting to me as a way of getting lots of different people to collaborate on Given-When-Then ideas remotely. Distributed collaboration seems to be the new normal, and it’s likely to be an important constraint for many teams in the future.

Here are five interesting lessons I learned from facilitating the challenges that will probably help you create better Gherkin specifications when working with a distributed team.

1. Add more context to draft ideas

With remote work, especially if asynchronous, it’s difficult to know whether people really understood what we wrote. That’s the key reason why specifications with examples, the ones coming out of spec workshops, usually need a good introduction and a lot of context around individual scenarios. But I used to recommend that people not waste time putting context into draft specs sent out for review. I now think that’s wrong.

There were a few challenges where a lot of people misunderstood my proposals, although I originally thought they were perfectly clear. For in-person workshops, situations like that are easy to resolve with feedback exercises, but remote asynchronous communication makes this impractical. Because of that, I think it’s important to add a lot more context to scenarios when sending them out for review.

Note that I’m just talking about the context here, not the examples themselves. Getting a spec to look too perfect before it is sent out for review is a common way to lose engagement. Several teams I interviewed for the Specification by Example book suffered from this problem when working asynchronously across different time zones: team members who were supposed to review a spec would approve it too quickly, without much critical thought, when it seemed complete.

When sending out proposals for review, make the title and the context as close as possible to what you feel should be the final spec, but don’t make the examples perfect. Leave some room for discussion there.
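To make this concrete, here’s a minimal sketch of what such a draft could look like, using a made-up free-delivery feature (the feature, names and numbers are all hypothetical). The title and the narrative read like a finished spec, while the examples are deliberately left rough:

Feature: Free delivery for loyal customers

   To reward repeat business, customers with VIP status get free delivery
   on orders over a threshold amount. The threshold below comes from a
   draft marketing proposal and is open for discussion.

   Scenario Outline: delivery charge depends on status and order total

      Given a customer with <status> status
      When they place an order totalling <order total>
      Then the delivery charge should be <delivery charge>

      Examples: first draft, boundaries still to be agreed
         | status | order total | delivery charge |
         | VIP    | 100         | 0               |
         | basic  | 100         | 5               |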

Also, it might be worth including some context for each individual case. For example, see Alister Scott’s response to one of the early challenges: he adds an extra column to a table of examples, introducing each scenario. That makes the intent easy to understand when reviewing drafts.
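For illustration, here’s a rough sketch of that style, again with a made-up loyalty discount feature. The first column is not referenced by any step, so most Gherkin runners simply ignore it, but it tells a reviewer what each row is meant to probe:

Scenario Outline: loyalty discount

   Given a customer placed orders worth <orders this year> this year
   When they start a new order
   Then they should be offered a <discount> discount

   Examples:
      | purpose of the example        | orders this year | discount |
      | just below loyalty threshold  | 999              | 0%       |
      | exactly on loyalty threshold  | 1000             | 10%      |
      | far above loyalty threshold   | 5000             | 10%      |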

2. Plan for multiple good solutions

With early challenges, I often had a relatively well-formed idea about a good solution at the start, but I was frequently surprised by creative proposals from the community. The big lesson for me was not to get too attached to my own ideas. I think this is also an important note for anyone running remote spec by example sessions, and doubly so for teams doing asynchronous reviews.

With in-person specification workshops, early discussions are often divergent and help discover plenty of good ideas. The stuff we write on whiteboards is half-baked anyway, so nobody is too convinced about their own ideas early on. Remote asynchronous communication often requires people to write down fully formed ideas before sending to others for review, and this can lead to unjustified commitment.

The lesson for me, for remote spec by example sessions and reviews, is to assume that a review will come back with a bunch of divergent ideas, not just comments on what I sent out, and that it’s probably worth planning a few rounds of reviews instead of just one.

3. Consider that “bad” answers might be good in a different context

We had many well-known community members as guest experts, and our opinions differed at times. I asked a few of the people who wrote books on BDD to send me their thoughts about the challenges and the discussions. Here’s what they said:

I’ve always seen BDD as a practical approach. It is good to lay down some core principles that are generally useful, but you also need practice. Formulation – writing good scenarios that do not only test your requirements but also document the expectations – requires practice and discussions. For me the #givenwhenthenwithstyle challenge was a great way to initiate these discussions and give practical help for problems you might also encounter in your own project.

– Gaspar Nagy, BDD coach & trainer, creator of SpecFlow, co-author of Discovery and Formulation (BDD Books series)

The challenge I found most interesting was specifying relative periods. This issue comes up repeatedly when scenarios contain date-related data. The solution has multiple aspects, which I wrote about.

Your series of challenges has given me ideas for topics to cover when I revise my book “Lean-Agile Acceptance Test-Driven Development: Better Software Through Collaboration”.

– Ken Pugh, ATDD/BDD author, trainer, coach

Discussions about how to write good UI interaction tests are always interesting. Andy Knight’s post was well structured and informative, as were the responses documented in the solution post on the SpecFlow website. This is one of the perennial problems people face when adopting BDD, and it’s good to see it given such thorough treatment.

– Seb Rose, co-author of Discovery and Formulation (BDD Books series)

Whenever I disagreed with a proposed answer, I could not just discard it as an anonymous Stack Overflow comment from a newbie. I tried to understand why someone with a lot of experience would propose it, and that often helped me spot areas or constraints I had overlooked, and a potential wider picture to consider.

When working with remote team members, consider that differences in solutions might be due to a wider context, or to different starting assumptions. That kind of dissonance is easy to spot and resolve in live in-person workshops, but becomes quite a challenge with asynchronous remote communication. One way to improve this is to explicitly document assumptions in the scenario header. None of the Gherkin tools currently have specific support for this, but you can use the scenario header to expose your thinking. Here’s how that could look for one of the more controversial challenges:

Scenario Outline: cancelling uncharged authorisations

   To avoid locking up client funds indefinitely, we need to cancel an authorisation 
   if it has not been charged for a month.

   Assumptions:

   * The accuracy level of 1 day is enough. 
   * When a matching date a month later doesn't exist, it's better to make the period shorter than longer.
   * If someone didn't pay us within two weeks, they probably don't want to pay us at all. 

   Given a transaction is authorised on <auth date>
   And the transaction was not charged
   When the cancellation job runs for <processing date>
   Then the authorisation should be <status>

   Examples: matching date exists
   ...

4. Propose ideas for voting

The initial challenges were all open-ended, which resulted in some amazing proposals, but not a lot of engagement. When we started offering half-baked ideas up for voting, engagement surged. I guess this is only logical, as voting lowers the bar for participation. With each vote, we also asked people to explain why they were selecting that particular option, and this was very helpful in exposing hidden assumptions.

Of course, just offering multiple ideas for voting doesn’t mean that any of them are right, or that we’re not missing something. So perhaps the right approach for creating effective Given-When-Then specifications remotely is to combine two rounds: one with an open-ended question where people can propose any solution they like, followed by a vote on a smaller selection of proposed ideas. This will help engage a wider audience while still covering enough divergent material.

5. Give people more time to review

Another big lesson for me, one that seems obvious in retrospect, is that increasing the time between challenges also helped with engagement. Early on, we rolled out one challenge per week. Some got a huge number of responses, and some fell completely flat. Spacing the challenges out more gave people time to review and respond.

When working synchronously, in the same room, it’s easy for people to focus on one thing at a time. Working remotely, not so much. For in-person sessions, I usually proposed running spec workshops very close to the start of the actual work, usually at the start of the iteration or in the first few days. With remote work, it’s critical to give people enough time to review. This is especially important if you organise two rounds of reviews, as proposed above. I’m not sure what the right amount of time is, as that probably depends on the individual team, but perhaps one iteration ahead is a good starting point.

From challenges to a reference

For now, we’re pausing this series, and there will not be any new challenges for a while. The SpecFlow team has given all the articles in the series a permanent home on the new website, so it’s now much easier to browse the challenges and read each challenge and its solution on one page.

I also recorded a short video that explains the series to new website visitors.

With that, I want to thank everyone for participating in the challenges. Reach out on Twitter to follow up on the challenges, or to propose new ones.