As a homeschool dad, I often get the chance to try to explain complicated things to young children. Right now we are studying chemistry, which can be a tough subject to grasp. What is an atom? How do molecules work? These aren’t easy questions to answer, but I’ve found that in order to get my kids to understand these things, I have to relate them to something they already know. If I tell them an atom is a basic unit of matter, that isn’t going to help them much. However, if I make it concrete and talk about taking a piece of iron and cutting it into smaller and smaller pieces until making the piece any smaller would result in it no longer being iron, they have a better chance of understanding the concept.
As adults, we still need to learn in much the same way that our children do. If we try to just understand something in the abstract, we aren’t going to get too far. We need to make it real and concrete. We need examples to help us understand. On the flip side of that, if we can’t explain a concept with simple examples, we probably don’t fully understand that concept.
Creating software is a learning process, so we need to apply the principles of learning to it. This is the idea behind behavior-driven development. If we can define clear and simple examples of the behavior that we want, we probably have a good grasp of that idea. However, since creating software is a learning process, we need a feedback loop: some way to check that what we have learned is, and continues to be, valid. You can call this feedback loop testing, and in this article I want to explore how we can create test cases that drive a better feedback loop.
If you want to know more about BDD, check out our recent article on BDD.
In order to create an effective feedback loop, you need to have good test cases. A feedback loop is a way to pass information from one part of a system to another. If that information is irrelevant, the feedback loop will eventually get ignored. Think of the number of times you’ve ignored a failing test because “it’s that same flaky test acting up again.” There is an even greater threat to the value of a feedback loop, though: the information it provides may be faulty, which can actively drive the wrong behavior. Imagine changing your code because of a test failure, only to discover later that the test’s logic was inverted.
Effective test design helps you learn about your system (particularly ways in which it can break), but it can also help to explain how your system works. In order to get the maximum benefit out of your tests, you should design them to help you both learn and explain. Thinking in examples can help you learn new things, and it also helps to explain to others how something works. When we create software, we don’t just need to check that the code does what we think it does, we also need to be able to explain to others how it works so that they can use it effectively. Writing example-based test cases can help us solve both needs in one place.
When trying to do example-driven testing it can be helpful to approach it through the lens of behavior-driven development since the structures provided by that framework allow for rigorous solution specification. Let’s consider a few best practices that can help you create effective behavior-driven test cases.
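To make this concrete, here is what a behavior-driven specification might look like in Gherkin, the plain-language format used by tools like SpecFlow and Cucumber. The feature, scenario, and prices below are invented purely for illustration:

```gherkin
Feature: Shopping cart totals
  As a shopper, I want the cart to show a running total
  so that I know how much I will pay.

  Scenario: Adding items updates the total
    Given an empty shopping cart
    When I add a loaf of bread priced at $2.00
    And I add a carton of milk priced at $1.50
    Then the cart total should be $3.50
```

Notice that the scenario reads as an example anyone on the team can discuss, not as a script tied to any particular tool or UI.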
One of the questions that frequently comes up in BDD and other specification-by-example approaches is: who writes the specification or feature files? Maybe the product manager is defining the client’s needs, so should they be the one to write the specification? Or maybe it should be the testers? After all, they are going to execute the specification as a test at some point. But then again, the term behavior-driven development contains the word development right in it, so perhaps it should be the developers who write the specifications?
Trying to assign a certain person or role to write the specification misses the point entirely. The point of a behavior-driven approach to testing isn’t to have the specification file (although that sure is a nice side benefit). The point is to understand what the customer needs and wants, and how we are going to meet those needs in our system. The only way we can answer these questions is by gathering many perspectives. The business analyst or product manager needs to provide input. So does the tester, and so does the developer. It would probably be good to pull in the documentation team for some discussion as well. For thornier problems, you may even need to bring in data analysts or sales and support staff, along with designers and others.
If the goal is to understand what the client needs and how we can meet those needs, we are going to need many different viewpoints and inputs. No one person or role writes the specification file. The whole team works together to produce it.
You may not have heard of Harold Dodge before, but he has been called the father of statistical quality control. His work laid the foundations for much of the field of quality control and had an influence on people like W. Edwards Deming. He has a quote that was popularized through Deming where he says that you cannot inspect quality into a product.
“You cannot inspect quality into a product.” - Harold Dodge
It’s pretty hard to disagree with this. You can find problems, but if the process is broken, finding more bugs isn’t going to make the product better. Despite knowing this for close to 100 years now, we still can’t seem to get past the idea that testing is something you do after the product has been created. How many kanban boards have you seen with the testing column coming after the development column? First you create the thing, and then you test it, right?
Well, if you can’t inspect quality into a product, doesn’t it make more sense to work on quality long before the product is created?
The best testing happens at very different places in the SDLC than we usually think of. We think of testing as something we do after coding but we should really be testing before we start coding. Working together as a team to make sure you understand the problem you are solving and how to solve it, is testing. It’s the best kind of testing. It is certainly customer-focused and it has a direct influence on the quality of the product. Don’t try to inspect quality into the product. Test for it before you even have a product or feature.
Test cases that are built from BDD spec files enable using your tests for more than just testing. They can be living documentation of how the system works. A specification written in human-friendly language can help with understanding how a product works. However, for this to be helpful, the specification needs to be an accurate representation of how the product currently works, not just of how it worked when it was first made. This requires us to avoid a “set it and forget it” attitude. We need to be sure that our specifications evolve as the product evolves.
Try SpecFlow+ LivingDoc, a tool that generates specifications as living documentation.
However, a word of warning here: don’t let tests become reactive either. There is a very real temptation when tests fail, and I have seen it play out many times: the test specification gets changed so that the test passes. Sometimes this is the right thing to do, but be careful. The goal isn’t to have passing tests. The goal is to have a product that makes life better for the customers. If we need to change the specification, we should ask what has changed in the customer’s needs to make that change necessary. Specification files should reflect how we are meeting customer needs, and so should generally be updated in response to changes in those needs rather than in response to the system itself changing.
Software development is a learning process. This is why agile methodologies emphasize iterative approaches to development. As we write code, we learn new things and discover the limitations of what we have done before. Our clients’ needs change over time. We need to use different technologies. We need to re-architect parts of our product. All of these things cause changes to the code, and if your tests are tightly coupled to that code, they can be brittle, breaking often and creating a lot of maintenance work.
When writing test cases, you want to try and set them up to be as flexible as possible. One way to do this is through specification files that are agnostic to which automation tool you use to actually run the tests. This helps to decouple the tests from the underlying code and can make them more stable as your product changes and evolves over time.
However, writing test cases in a no-code format doesn’t mean you will never use tools. Creating a specification that puts client needs above the tool stack is very important, but being able to execute that specification as a test allows you to continually verify that you are meeting those needs. Tools like SpecFlow and others can make life much better in this regard, but be sure to set up your automation so that the particular tool stack can be updated and changed. Tool versions change. The underlying tech stack you use might change. You might want to change where you run the tests. Set yourself up to make it easy to change things in the future. There is only one constant in software development: change.
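One common way to keep the step logic decoupled from the tool stack is to have step definitions talk only to a driver interface, so a UI driver, an API driver, or an in-memory fake can be swapped in without touching the steps. SpecFlow itself is C#-based; the sketch below uses Python and entirely hypothetical names (`CartDriver`, `InMemoryCartDriver`, the step functions) just to illustrate the layering:

```python
from abc import ABC, abstractmethod

# Hypothetical driver interface: step logic depends only on this
# abstraction, never on a particular automation tool.
class CartDriver(ABC):
    @abstractmethod
    def add_item(self, name: str, price: float) -> None: ...

    @abstractmethod
    def total(self) -> float: ...

# One concrete driver. A Selenium-backed or HTTP-API-backed driver
# could replace this later without changing the steps below.
class InMemoryCartDriver(CartDriver):
    def __init__(self) -> None:
        self.items: list[tuple[str, float]] = []

    def add_item(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

# "Step definitions" written against the interface only.
def given_an_empty_cart() -> CartDriver:
    return InMemoryCartDriver()

def when_items_are_added(cart: CartDriver, items) -> None:
    for name, price in items:
        cart.add_item(name, price)

def then_the_total_is(cart: CartDriver, expected: float) -> bool:
    return cart.total() == expected
```

Because the steps only ever see `CartDriver`, upgrading or replacing the underlying tooling means writing a new driver, not rewriting the specification or the steps.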
Good test case creation focuses on client needs above the tools involved. By setting up processes that enable this, you will be able to create tests that can continue to add value for a long time. Set up your tests to support your clients and your framework and tooling to support your tests and you’ll be able to produce high-quality software. And who knows, with enough practice you might even be able to explain the concept of atoms to a 6-year-old!