6 things that will make your adoption of BDD easier

2021-07-07
BDD
by Gaspar Nagy


Initially I thought I would make a list of things that are required to adopt Behaviour Driven Development (BDD) successfully. But then I thought about the projects where we introduced BDD: if all of these criteria had been required, we would have failed with most of them. In fact, I learned about some of the items in this list while adopting BDD. And it was an exciting journey!

So here is my reframed list: things that make your BDD adoption easier and that you should watch out for if you are currently in this change process.

The list is subjective; it contains things that I have learned through experience. It is probably not complete, but hopefully a good place to start.

#1 — The team collaborates on defining the requirements

One of the most important things I have learned about software development while doing BDD projects is related to collaboration, particularly collaboration on requirements.

Many agile teams are cross-functional: they include different kinds of developers, testers and other experts. Usually the entire team is invited to the requirement workshops (Sprint Planning Preparation, Backlog Grooming, Story Preparation, or whatever you call them), so you would expect that people will “collaborate” there. But in my experience, just putting a bunch of people in the same room will not trigger collaboration. Collaboration cannot be enforced; it has to be enabled. This means that somehow you have to arrange the setup and facilitation of the workshop in a way that makes people eager to contribute.

Without enabling collaboration, the requirement meetings become a simple handover. This means that we lose the opportunity to discuss important questions. These questions will suddenly pop up during implementation or later, and answering them late causes extra cost and frustration.

The exact way you can enable collaboration in your team might differ depending on the situation, but many of the techniques use the concept of examples as a driving force for the discussions. This is also a great input for BDD, because the examples can be converted to scenarios later. You can google for Specification by Example, Example Mapping or Feature Mapping to find some useful hints. In our BDD Discovery book, we also show some of them in detail.

For inspiration, please check the table below. It contains a couple of examples that show the differences between handover and collaboration in a requirement workshop.

| Handover | Collaboration |
| --- | --- |
| The Product Owner explains the details of the requirements and asks if there are any questions. | The Product Owner gives a brief vision of the requirements and asks the team to come up with illustrative examples for each business rule. |
| The Product Owner provides the list of actions the user needs to perform to work with the feature and lists the exceptional cases in a table. | The Product Owner explains the “happy path” usage of the feature with a concrete example and asks the team to come up with as many “What if” questions as they can (e.g. “What if the user is not verified yet?”). |
| The Product Owner sends out a complete specification document for review; people don’t see each other’s review comments. | The Product Owner sends out a draft document with the vision and the key business rules. The team is asked to provide shared comments on the content and extend the document with useful examples explaining each rule. |
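
To make this concrete, here is a minimal sketch of how the “What if the user is not verified yet?” example from the table could later be converted into a scenario (the domain and the wording are made up for illustration):

```gherkin
Feature: Posting comments

Scenario: An unverified user cannot post a comment
  # Captures the "What if the user is not verified yet?" question from the workshop
  Given a user who has registered but not yet verified their email address
  When the user tries to post a comment
  Then the comment is rejected
  And the user is asked to complete the verification first
```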

#2 — User Stories are treated as product increments

Many agile teams treat User Stories (or Product Backlog Items) as features. This probably comes from the expectation that user stories should provide business value, and the most obviously visible business value is an implemented feature that can be used by the end users.

This approach raises many issues, because usually features cannot be precisely defined (you can always improve them), and in order to make them usable you need to deal with a lot of side-cases. Treating user stories as features makes them too big, and they can easily become a victim of scope creep.

Splitting such a story is always painful, like choosing which of your fingers to cut off. As some sort of user validation is needed to “finish” a feature-story, these stories are often suspended and kept open for a longer period until this feedback arrives. The teams keep tracking the catalogue of the implemented stories, as it describes the feature set of their product.

User Stories should rather be seen as product increments. We define user stories to describe small chunks of work that move the product in a (presumably) good direction. If we are lucky, the increment produces a ready-to-use feature, but this is often not the case. In many cases we use user stories to check whether the direction we assumed to be good is really right. This is business value. Not necessarily for the end users yet, but it is a value for our business.

Splitting a real user story should not be painful at all: it is the game of finding the minimum increment that is enough to validate our concept. Once this has been implemented, we can close the story! Are some side-cases not handled yet? No problem! Once we have seen the concept working, we can estimate and plan them much better.

Once we have got the right direction and implemented all the necessary increments to make it usable for the end users, we have got a feature. The feature is never done: you can always improve it a bit. But once we’ve got a feature, no one is really interested in whether we needed 3 or 4 user stories to implement it. Features can be organized and documented in a tree structure. As there are typically fewer features than stories, it takes less effort to track and organize them.

With BDD, you can define business-readable scenarios for the features. The scenarios are introduced with the stories, but in the end they become a part of a feature. This is also why the file where you store them is called a feature file. Organizing your feature files based on the user stories leads to chaos and unstructured documentation, but understanding the feature concept and separating it from the user story concept guides you towards a well-organized living documentation.
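
As an illustration (the feature, the scenarios and the story numbers are all made up), a feature file can collect the scenarios that several stories have contributed to the same feature, while tags record which increment introduced which scenario:

```gherkin
Feature: Password reset
  # This file grows as user stories add behavior to the same feature.
  # The @story tags only record which increment introduced each scenario.

  @story-112
  Scenario: Registered user requests a password reset link
    Given a registered user
    When the user requests a password reset
    Then a reset link is emailed to the user

  @story-131
  Scenario: An expired reset link cannot be used
    Given a password reset link that is older than 24 hours
    When the user opens the link
    Then an error message offers to send a new link
```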

#3 — The whole team is responsible for QA and testing

Fortunately, QA and testing are less and less often separated into different departments, offices or even vendors. Many development teams include testers. But what sort of inclusion is it? For many teams the testers are considered a part of the team, but they do their work separately. Yes, they meet each other at standups and planning meetings, but their work is not integrated. As a result, the teams struggle either with the unnecessary costs of overlapping testing activities done both by the testers and the developers, or with production issues caused by the white spots of functionality that have not been tested by either of them. Not to mention the bug vs feature debates rooted in different understandings of the requirements.

QA and testing should be a whole-team approach, and this means more than working next to each other. Testers and developers should be aware of each other’s activities and, based on that, they should establish and maintain a testing strategy.

This might be hard in the beginning, as we have worked separately for years, but you need to break down these walls. Pairing is an easy and efficient way to do that. Once you have tried it, you will realize that testers can help developers by guiding them towards the tests they need to satisfy (think of test data, for example), and developers can help testers by providing automation helper libraries that make their work easier.
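
To illustrate the test-data point with a minimal sketch (the domain and the values are hypothetical): a developer might start a scenario outline with the happy path, and the pairing tester can extend the same outline with the edge cases they would otherwise have tested separately:

```gherkin
Scenario Outline: Applying a discount code at checkout
  Given a cart with a total of <total> EUR
  When the customer applies the discount code "<code>"
  Then the checkout result is "<result>"

  Examples: happy path sketched by the developer
    | total | code     | result   |
    | 100   | SUMMER10 | accepted |

  Examples: edge cases added by the pairing tester
    | total | code     | result   |
    | 0     | SUMMER10 | rejected |
    | 100   | summer10 | rejected |
    | 100   | EXPIRED1 | rejected |
```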

If you do so, you will realize that the boundary between testers and developers gets blurred. Roles become less important, but the responsibility for quality is emphasized.

Having a shared understanding of the requirements is not only good for avoiding the bug vs feature debates; you can also save time by not defining test cases redundantly. It also helps to better identify white spots.

You can get the most value out of BDD if the scenarios are also used as (living) documentation and represent your shared understanding of the requirements. When testing and QA are truly involved in the development, they will be a great help in discovering and documenting these expectations.

#4 — Established CI/CD process and mentality

Build servers are commonplace nowadays. They take your code and perform compilation (build) and testing tasks on it regularly, giving you feedback about the general health of your solution.

The ALM tools teams use (e.g. Azure DevOps) provide such services either in the cloud or on-premises, so establishing a basic build server or build service should not be a matter of cost. Thanks to the improved configuration interfaces of these services, setting one up does not take a horrible amount of work either (especially in the cloud).

However, there is a big difference between a build server and a continuous integration (CI) or continuous delivery (CD) pipeline. And this difference is in you: how much you trust and listen to the feedback they provide. That’s the CI/CD mentality.

Continuous Integration (CI) enables you to continuously monitor whether the work you do conforms to the code quality expectations of the project and whether it integrates well with the work done by others. The tasks you might want to perform in a CI pipeline range from simple ones, like verifying that the project compiles, to running complex tests and static analysis of the code. The key is how closely you observe the results. Ideally, the CI build runs for every code change (commit & push) and it is fast enough that you don’t dig into a new problem before the result arrives. While the feedback provided by the CI build is primarily addressed to the person who made the change, the entire team is responsible for keeping the CI build in good shape. If 2-3 tests are always failing, nobody will notice when a new one starts to fail.

Continuous Delivery (CD) takes the concept of CI to the next level. Now we want to make sure not only that the code changes integrate properly, but also that the product is always in a potentially releasable state. Potentially releasable does not mean that you actually want to or can release; it might be the case that the feature you are currently developing does not handle all cases (remember the concept of user stories). It means that the code has passed all the technical quality gates required for a release. Obviously this means that the code compiles, but it also means that updating the installation package or the release process has not been forgotten.

Since all of this is automated, it gives you a high level of safety, not only for implementing new features and making sure that they are free from unwanted side-effects, but also for the safe delivery of hotfixes. But it works only if you take care of the process and the results.

In order to keep the project in a potentially releasable state, we also need to ensure somehow that it still works as it is expected to work at that time. This is the classic area of regression testing. To ensure this continuously, you need good coverage from automated tests that verify the expected behavior. There are many ways to create such automated functional regression tests, but if you introduce BDD in your project, you will realize that the BDD scenarios, if they are written properly, can cover a large portion of the regression tests required to establish the Continuous Delivery process.
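
One common way to connect the two worlds is to tag the scenarios and let the pipeline run the tagged subset. A minimal sketch (the tag names here are just an assumed convention, not anything prescribed by the tools):

```gherkin
@regression
Feature: Order submission

  Scenario: A valid order is accepted
    Given a customer with a valid payment method
    When the customer submits an order
    Then the order is confirmed

  @wip
  Scenario: Orders above the credit limit require approval
    # Still in development, so it is excluded from the regression gate for now
    Given a customer whose credit limit is exceeded by the order
    When the customer submits the order
    Then the order is routed for manual approval
```

The CI/CD pipeline can then execute only the scenarios tagged @regression and skip the @wip ones; most test runners used with SpecFlow support this kind of filtering, as tags are turned into test categories or traits.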

Even if you have CI, establishing Continuous Delivery might not happen from one day to the next. But it is worth the effort. You can also make the change incrementally: cover the most important or the most frequently changing areas of your project with tests that you trust, and slowly let the covered area grow.

#5 — Tests are a part of the code-base

Tests represent your expectations for the delivered solution. In this regard they are temporary: today’s tests might not have been expected to pass yesterday (when the feature had not been implemented yet), and they might not be valid anymore tomorrow (if the expectations have changed in the meantime).

If you think about feature branches or multiple supported version-lines of the product you are implementing, you can even see that there is no single truth: there is no single set of currently valid tests.

The more you move towards collaborative work, small user stories and CI/CD, the more you will realize that managing these test-sets becomes impossible unless you version them together with the source code of the solution they are supposed to verify. The easiest way to version them together is to put them into the same code-base, the same Git repository. If you check out a specific version of the code, you immediately get its tests with it.

Fortunately this is easier now, as many testing frameworks support storing the test definitions in source-control. Keeping them in the same repo might also help in achieving the shared responsibility for quality. If I make a small change in the code, I can quickly adjust the related test. Or if I find a bug by running my tests, I can easily look at the related production code to diagnose the impact. 

The majority of BDD tools, including SpecFlow, store the test definitions (the BDD scenarios in this case) in plain text files that are perfect for source-control. Just make sure you don’t store them in a separate repository.
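
For example, a hypothetical repository layout could keep the feature files right next to the production code they describe, so that checking out any revision gives you the matching scenarios:

```
my-product/                     # one Git repository for code and tests
├── src/
│   └── Orders/                 # production code
├── tests/
│   └── Orders.Specs/
│       ├── Features/
│       │   └── OrderSubmission.feature   # BDD scenarios (Gherkin)
│       └── Steps/              # step definition (binding) code
└── azure-pipelines.yml         # CI/CD definition versioned with the rest
```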

#6 — Practice Agile: Inspect & Adapt

Introducing a new technique requires courage and patience. Things will not necessarily work immediately, and sometimes you will not be able to tell upfront whether something is going to work or not. It is easy to fall into upfront skepticism: you are not sure if it is going to work, so you don’t even try. Teams also sometimes tend to over-plan the adoption: they discuss the potential outcome of the change endlessly. Try to go back to the core principles of agile: inspect & adapt, and embrace failure.

Instead of discussing endlessly or dropping the idea because you cannot prove the value upfront, define a small boundary, a playground where you can try. This can be a small pilot team, a new feature that you try implementing differently or a timebox. Try it and watch the results. If necessary, make adaptations. 

This is also very true for introducing BDD. You are very likely to change many things: the way you handle requirements, the collaboration between developers and testers, the automation solution and maybe some others too. Keep it small, try it at full throttle (suspend your skepticism for the tryout period) and see what has changed and how it benefits you. In the end, there is no better evidence than the results of a pilot project in your own company with your own domain.