If you liked reading about the solutions in the Given-When-Then with Style series, and you would also like to know how those would be implemented in real work, you can now download a wonderful demo project by Zoltan Toth (Developer, SpecFlow team at Tricentis). This is a great way to explore the challenges further, and see how the feature files mentioned in the solutions would be transformed into automated tests in real life.
Zoltan did an excellent job implementing the solution from the first two challenges as a C#/SpecFlow project. The project covers the scenarios from How to set up something that’s not supposed to exist and How to structure a long list of examples.
The code demonstrates several common patterns for organising Given-When-Then projects, such as where to store the implementation of steps relative to the system under test, and how to link the feature file with automation code. It also shows a few nice tricks for implementing Given-When-Then steps that include tables, which can significantly simplify specifying lists of objects with multiple properties.
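To illustrate the table trick, here is a minimal sketch (an invented example, not the actual code from the demo project — the step text, the `User` class and the `_userRepository` field are all assumptions). A Gherkin step can carry a table of objects:

```gherkin
Given the following users are registered:
  | Name   | Email              |
  | Mike   | mike@example.com   |
  | Arnold | arnold@example.com |
```

SpecFlow passes the table into the binding method, where the `TechTalk.SpecFlow.Assist` helpers can convert it into a list of typed objects, so the step never has to parse rows and columns by hand:

```csharp
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

[Binding]
public class UserSteps
{
    private readonly IUserRepository _userRepository; // assumed dependency

    public UserSteps(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    [Given("the following users are registered")]
    public void GivenTheFollowingUsersAreRegistered(Table table)
    {
        // CreateSet maps each table row to a User instance by matching
        // column headers to property names.
        foreach (var user in table.CreateSet<User>())
        {
            _userRepository.Add(user);
        }
    }
}
```

The pay-off is that adding a new user to a scenario means adding one table row, with no changes to the automation code.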
Note that this project focuses on the Given-When-Then patterns and the automation workflow; it is not trying to demonstrate how to build the business part of the code. The system under test is intentionally trivial, using an in-memory data store and simplified Unicode mapping. This is good for demonstration purposes, so you can focus your attention on the parts that are important for this article series. In real work, the code would evolve from this point to proper Unicode support, most likely using a third-party library for string normalisation, and a repository that connects to persistent storage, such as a database. In a TDD flow, this work would usually be driven by more technical unit and integration tests, which would help evolve the technical design without changing the business functionality.
To download the code, head over to GitHub.
The history of the changes on GitHub is also well worth checking out, as the project demonstrates the typical flow of working with executable specifications. Here are some interesting things to notice:
The final feature file specification is slightly different from the one in the article. This is perfectly normal, as small inconsistencies become clear only once we start automating step implementations.
Developers will notice slight differences in wording between similar steps in different scenarios. Small inconsistencies may not be noticeable to people who participated in a specification workshop, since they have the right context from the discussions which led to the examples. However, someone reading the same feature description a few months later might be confused by inconsistencies. These can, and should, be fixed as you automate the tests. In the Specification by Example book, I call this step “Refining the specification”.
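As a made-up illustration of this kind of refinement (not taken from the project), two scenarios written during a workshop might phrase the same action differently, and automation is a good moment to settle on one wording:

```gherkin
# Before refining: two wordings for the same action,
# which would need two separate step bindings
When the user signs up with the name "Mike"
When a new account is registered for "Mike"

# After refining: one consistent step, reused by both scenarios
# and backed by a single step binding
When the user registers an account with the name "Mike"
```

Consistent wording also pays off in the automation layer, since one binding can serve every scenario that uses the step.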
More importantly, you’ll see that the last scenario has different examples from the one in the solution article. During test automation, developers sometimes notice that alternative data would be easier or more consistent to use for the same purpose. By consolidating the example data, the last scenario could also reuse the common background set-up, making it shorter and more focused. Replacing examples with alternatives that better demonstrate the underlying rules is also OK, as long as they actually serve the same purpose. This is why it’s so important to provide good contextual information around each example, such as the response messages or case names. Without that context, people refining the specification might assume that an alternative example is equivalent when it is not. When in doubt, check with the business representatives or other workshop participants before replacing examples.
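A hypothetical sketch of what consolidating example data enables (the feature, step names and messages are invented for illustration): once all scenarios start from the same data, the shared set-up can move into a Background section that runs before each scenario, and each scenario shrinks to just the action and the outcome:

```gherkin
Feature: User registration

  Background:
    Given the following users are registered:
      | Name | Email            |
      | Mike | mike@example.com |

  Scenario: Duplicate names are rejected
    When a user tries to register with the name "Mike"
    Then the registration is rejected with "Username already taken"

  Scenario: New names are accepted
    When a user tries to register with the name "Arnold"
    Then the registration succeeds
```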
Automation costs are often higher at the start. In a usual development flow with Given-When-Then feature files, adding the initial examples requires writing new automation code, new bindings for step implementations, and experimenting with the design of the automation layer. After the first few examples are working, the structure is in place, and adding new scenarios becomes very easy. You can see that the first half of the version control history contains more changes to automation code than to the system under test, but the second half is almost all about working on the business part of the system, with very few changes to test code. People new to this kind of test automation are sometimes concerned about the overhead of working with executable specifications. When done right, automated tests don’t slow down the work; they actually help people move faster with confidence. There is some cost, of course, usually towards the start, but it pays off quickly as the complexity grows.
The development flow was different from how the features evolved during the discussions. Looking back at the solutions for the first two challenges in this series, you can see a typical flow of information discovery. We started with simple examples, provided counter-examples to clarify rules, then surfaced related questions by probing boundaries. This sometimes produced more questions and scenarios. The flow of that discussion was significantly different from the flow of the final feature file. Once a feature was clarified nicely, development could proceed quickly, without the need to rediscover all those questions and go back and forth while coding.
The development flow was not always Red-Green-Refactor. The typical TDD cycle calls for adding a failing test before any code changes. However, when working through a feature file from top to bottom, you may sometimes never reach the red stage even if you try to follow red-green-refactor: some scenarios might already be covered by the time you get to them.
A feature file should be designed for readability. Its purpose is to serve as good documentation. It’s normal for follow-up scenarios to clarify preceding ones, to ensure that people who implement the system don’t forget about important concerns. However, when implementing the initial scenarios, people who participated in the specification workshop already know the wider context so they might do the right thing straight away.
On a related note, you do not necessarily have to follow a feature file from top to bottom when implementing it. Developers should be able to pick and choose scenarios from the feature file to evolve the code gradually. Development flow usually follows technical complexity, while feature file flow usually follows business complexity. It’s OK for them to be different. As long as all the scenarios pass at the end, it doesn’t really matter in what sequence people implement them.
Feature files don’t cover everything a developer needs to do. The underlying code sometimes evolved without changes to the feature files, or without a failing test that needed to turn green. As people start implementing a feature, they might discover additional questions or have ideas for improving the system. In particular, a feature file shouldn’t imply technical aspects of system design, and developers will likely want to add more tests to cover those in a true TDD flow.
To keep things simple and focused on Gherkin and SpecFlow, this project doesn’t contain any technical unit tests. In real work, if you want to keep the red-green-refactor cycle, it’s normal to add and extend the feature file with business-oriented tests, and add unit tests to drive the technical aspects of design. Testers might also want to add more examples, or to explore the system further by trying out different boundaries. Beware of adding too many additional scenarios or examples to the main feature file, though, as this might make it confusing and overly complex. We’ll deal with this in one of the following challenges.
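As an illustration of a technical test sitting below the Gherkin layer (a hypothetical sketch — the `NameNormalizer` class and its behaviour are assumptions, not code from the project), an xUnit test could drive the string normalisation design directly, without touching the feature file:

```csharp
using Xunit;

public class NameNormalizerTests
{
    // Technical edge cases like accent mapping and whitespace handling
    // belong in unit tests, not in the business-facing feature file.
    [Theory]
    [InlineData("Zoltán", "Zoltan")] // accented characters map to ASCII equivalents
    [InlineData("  mike ", "mike")]  // surrounding whitespace is trimmed
    public void Normalize_MapsAccentsAndTrims(string input, string expected)
    {
        var normalizer = new NameNormalizer(); // hypothetical class under test

        Assert.Equal(expected, normalizer.Normalize(input));
    }
}
```

Tests like this can churn freely as the technical design evolves, while the feature file stays stable as business-facing documentation.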