SpecFlow Project Template with dotnet new

To make it easier to create new SpecFlow projects, we have created a project template that you can use with dotnet new. As with all dotnet new templates, you need to install the template before you can use it. More information on installing templates can be found here.

To install the template, execute the following command from the command line of your choice:

dotnet new -i SpecFlow.Templates.DotNet

Once the installation is complete, you can create a new project with the following command:

dotnet new specflowproject

This creates a .NET Core 3 project with SpecFlow+ Runner set as the default test runner. You can select a different test runner or target framework using the following optional parameters:

  • framework: the following values are supported:
    • netcoreapp3.0 (default): .NET Core 3
    • netcoreapp2.2 : .NET Core 2.2
    • net472: .NET 4.7.2
  • unittestprovider: can be one of the following:
    • specflowplusrunner (default): SpecFlow+ Runner
    • xunit: XUnit
    • nunit: NUnit
    • mstest: MSTest

Example:

dotnet new specflowproject --unittestprovider xunit --framework netcoreapp2.2

This creates a new project with XUnit as the unit test provider, targeting .NET Core 2.2. The project is created with a pre-defined structure that follows best practices. The project includes a single feature file (in the Features folder) and its associated steps (in the Steps folder).

Item templates

In addition to the project template, we have also added some item templates to the template pack, which includes the following:

  • specflow-feature: .feature file in English
  • specflow-json: specflow.json configuration file
  • specflow-plus-profile: Default.srProfile (SpecFlow+Runner configuration)

If you have additional ideas for the templates, please open a GitHub issue here.

Big thanks go out to our intern Manuel for creating these templates!

Gherkin Conventions for Readable Specifications

This article was written by Sophie Keiblinger. Sophie has been a software test engineer at TechTalk since 2014 and is based in Vienna, Austria. She is passionate about BDD and test automation because it “encourages and fosters communication between all stakeholders and leaves more time for exploring and finding the interesting stuff”.

Developers have coding guidelines and formatting tools that help them keep their code clean, maintainable and readable, as well as increase recognizability. You might think that Gherkin has no need for such conventions, given that it is essentially written in a natural language. But giving your brain easily recognizable and expected patterns to work with makes reading feature files and working with Gherkin much faster and easier.

We have been asked if there are general conventions for Gherkin that should be followed. While there are none that we are aware of, we can share the guidelines that we generally agree on in our projects at TechTalk.

Discernible Given-When-Then Blocks

In theory, your scenarios can be as simple as a single Given, When and Then step each. However, in real life they tend to grow and have multiple steps for each of these keywords. In order to quickly spot where one block ends and the next begins, you can indent the steps starting with “And”. Your scenario would then look something like this:
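A sketch of such a scenario (the ordering steps are made up for illustration):

```gherkin
Scenario: Ordering an item in stock
  Given we have '3' items in stock
    And the shop is open
  When we order '1' item
    And we confirm the order
  Then we should have '2' items in stock left
    And the customer should receive an order confirmation
```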

Now, the steps in this example don’t use tables. If your steps use a lot of tables, you might find the resulting visual pattern irregular and hard on the eye:

An alternative is to have every step start with the same indentation, and add an extra newline before the next keyword block. However, this makes your scenario longer, which is something you generally want to avoid – you don’t want to have to scroll in order to see all the information for a scenario. Plus, I personally prefer to use extra newlines differently (but more on that later).
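The alternative described here might look like this (same illustrative steps, uniform indentation with an extra newline before each new keyword block):

```gherkin
Scenario: Ordering an item in stock
  Given we have '3' items in stock
  And the shop is open

  When we order '1' item
  And we confirm the order

  Then we should have '2' items in stock left
  And the customer should receive an order confirmation
```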

In the end, it is probably best to decide with your team what is more important to you and which version you prefer.

Steps with Tables

We often use tables in our steps. In order to make it immediately recognizable that a step needs further input from a table, we use a colon at the end of the step. This helps when using Intellisense, which does not include table previews but will display the colon:
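For example (a hypothetical step; note the colon at the end of the step and the indented table):

```gherkin
Given these items are in stock:
  | Name     | Quantity |
  | Keyboard | 3        |
  | Mouse    | 5        |
```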

You can also formulate your step to make this clearer:
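For instance, by using “following” in the step text (again a made-up example):

```gherkin
Given the following items are in stock:
  | Name     | Quantity |
  | Keyboard | 3        |
  | Mouse    | 5        |
```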

Notice that I used “following” in the step’s wording; I can recommend this approach as well. Either way, in most of my projects I can be sure that a colon at the end of a step means it is followed by a table. Indenting the table also makes it clear that it is part of that particular step.

Reducing Noise

In order to reduce noise, we recommend using default values for fields that the system requires, but that are not relevant to your scenario. For example, if you want to test the validation of a date of birth, you don’t need to know the person’s name, academic title or social security number. These might be mandatory fields in your application, but have no bearing on the outcome of your scenario.

This also works with steps that have tables, in which case you only include the columns needed for your scenario.
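Sticking with the date-of-birth example, a scenario might then only mention the fields under test (all step wording and values are illustrative):

```gherkin
Scenario: Date of birth must be a valid date
  Given the following person exists:
    | Date of birth |
    | 1990-02-31    |
  When I validate the date of birth
  Then I should see the error 'Please enter a valid date'
```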

Parameters in Steps

As you can see in the example above, I used single quotation marks ‘like this’ around step parameters. This makes it easy to spot parameters in a step, even with just very simple syntax highlighting or none at all.

Newlines within Scenarios

Using newlines helps your brain group the right information together and makes it easier to tell where the next logical unit starts. While the text may still be readable without newlines between steps/blocks when they are short, it becomes very difficult to read once tables are involved.

If your scenario starts getting long, newlines before each block might help to make it more readable. In the case of steps containing tables, I would always add newlines between each step:

I would also recommend adding a newline before your examples block:
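For example (an illustrative outline; note the newline before the Examples block):

```gherkin
Scenario Outline: Ordering items in stock
  Given we have '<ItemsInStock>' items in stock
  When we order '<OrderedItems>' item
  Then we should have '<ItemsLeft>' items in stock left

Examples:
  | ItemsInStock | OrderedItems | ItemsLeft |
  | 3            | 1            | 2         |
  | 1            | 1            | 0         |
```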

Newlines between scenarios and separator comments

The more scenarios you have in the same file and the bigger they are, the harder it becomes to find the point where one scenario ends and another one starts. As a visual aid, we add 2 newlines between scenarios. Usually we also add a comment separator:
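For example (illustrative scenarios, with 2 newlines and a separator comment between them):

```gherkin
Scenario: Ordering an item in stock
  Given we have '3' items in stock
  When we order '1' item
  Then we should have '2' items in stock left


#################################################################
Scenario: Ordering the last item in stock
  Given we have '1' items in stock
  When we order '1' item
  Then we should have '0' items in stock left
```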

If you place the separator comment directly above your scenario (or its tag), they will be displayed when all scenarios are collapsed (including 2 newlines):

If you instead add a newline between the separator and the scenario (or its tag), then you will see the scenarios neatly stacked one after another when they are collapsed (with 1 newline separating them):

All of these formatting conventions are ones that we generally agree upon in our projects. But remember that they are all just suggestions and approaches that we have developed over time to deal with various issues we have faced. It’s important to figure out what works best for you based on your own set of challenges and needs. So rather than seeing these formatting conventions as set in stone, use them as inspiration to find ways to make your own feature files more readable and easier to work with. And feel free to share any tips you may have!

I am sure that no matter what conventions you agree upon with your team, they will make working with and maintaining your Gherkin specifications easier.

All of the examples above can also be found on GitHub here.

Targeting Multiple Browsers with a Single Test with SpecFlow+ 3

(Note: This is an updated version of this article for SpecFlow+ 3. If you are looking for the article for earlier versions of SpecFlow, it can be found here.)

If you are testing a web app (e.g. with Selenium), you will normally want to test it in a range of browsers, e.g. Chrome, IE/Edge and Firefox. However, writing tests for all the browsers can be a time-consuming process. Wouldn’t it be much easier to write just one test, and be able to run that test in all browsers?

That’s where using targets with the SpecFlow+ Runner comes in. Targets are defined in your SpecFlow+ Runner profile. They allow you to define different environment settings, filters and deployment transformation steps for each target. Another common use case is to define separate targets for x64 and x86.

Defining targets for each browser allows us to execute the same test in all browsers. You can see this in action in the Selenium sample project available on GitHub. If you download the solution and open Default.srprofile, you will see 3 different targets defined at the end of the file:

<Targets>
  <Target name="IE">
    <Filter>Browser_IE</Filter>
    <DeploymentTransformationSteps>
      <EnvironmentVariable variable="Test_Browser" value="IE" />
    </DeploymentTransformationSteps>
  </Target>
  <Target name="Chrome">
    <Filter>Browser_Chrome</Filter>
    <DeploymentTransformationSteps>
      <EnvironmentVariable variable="Test_Browser" value="Chrome" />
    </DeploymentTransformationSteps>
  </Target>
  <Target name="Firefox">
    <Filter>Browser_Firefox</Filter>
    <DeploymentTransformationSteps>
      <EnvironmentVariable variable="Test_Browser" value="Firefox" />
    </DeploymentTransformationSteps>
  </Target>
</Targets>

Each of the targets has a name and an associated filter (e.g. “Browser_IE”). The filter ensures that only tests with the corresponding tag are executed for that target.

For each target, we set the Test_Browser environment variable to the name of the target. This allows us to determine the current target and access the corresponding web driver for each browser. WebDriver.cs (located in the Drivers folder of the TestApplication.UiTests project) reads this environment variable in GetWebDriver() and instantiates a web driver of the appropriate type (e.g. InternetExplorerDriver) in the switch statement:

private IWebDriver GetWebDriver()
{
  switch (Environment.GetEnvironmentVariable("Test_Browser"))
  {
    case "IE": return new InternetExplorerDriver(new InternetExplorerOptions { IgnoreZoomLevel = true }) { Url = SeleniumBaseUrl };
    case "Chrome": return new ChromeDriver { Url = SeleniumBaseUrl };
    case "Firefox": return new FirefoxDriver { Url = SeleniumBaseUrl };
    case string browser: throw new NotSupportedException($"{browser} is not a supported browser");
    default: throw new NotSupportedException("not supported browser: <null>");
  }
}

Depending on the target, the driver is instantiated as either the InternetExplorerDriver, ChromeDriver or FirefoxDriver driver type. The bindings code simply uses the required web driver for the target to execute the test; there is no need to write separate tests for each browser. You can see this at work in the Browser.cs and CalculatorFeatureSteps.cs files:

[Binding]
public class CalculatorFeatureSteps
{
  private readonly WebDriver _webDriver;

  public CalculatorFeatureSteps(WebDriver webDriver)
  {
    _webDriver = webDriver;
  }
       
  [Given(@"I have entered (.*) into (.*) calculator")]
  public void GivenIHaveEnteredIntoTheCalculator(int p0, string id)
  {
    _webDriver.Wait.Until(d => d.FindElement(By.Id(id))).SendKeys(p0.ToString());
  }

  // ... further step definitions
}

To ensure that the tests are executed, you still need to ensure that the tests have the appropriate tags (@Browser_Chrome, @Browser_IE, @Browser_Firefox). 2 scenarios have been defined in CalculatorFeature.feature:

@Browser_Chrome
@Browser_IE
@Browser_Firefox
Scenario: Basepage is Calculator
	Given I navigated to /
	Then browser title is Calculator

@Browser_IE 
@Browser_Chrome
Scenario Outline: Add Two Numbers
	Given I navigated to /
	And I have entered <SummandOne> into summandOne calculator
	And I have entered <SummandTwo> into summandTwo calculator
	When I press add
	Then the result should be <Result> on the screen

Scenarios:
	| SummandOne | SummandTwo | Result |
	| 50         | 70         | 120    |
	| 1          | 10         | 11     |

Using targets in this way can significantly reduce the number of tests you need to write and maintain. You can reuse the same test and bindings for multiple browsers. Once you’ve set up your targets and web driver, all you need to do is tag your scenarios correctly. If you select “Traits” under Group By Project in the Test Explorer, the tests are split up by browser tag. You can easily run a test in a particular browser and identify which browser the tests failed in. The test report generated by SpecFlow+ Runner also splits up the test results by target/browser.

Remember that targets can be used for a lot more than executing the same test in multiple browsers with Selenium. Don’t forget to read the documentation on targets, as well as the sections on filters, target environments and deployment transformations.

Updating Plugins for SpecFlow 3

This article covers updating existing plugins to work with SpecFlow 3. If you are interested in developing your own plugins, the documentation for SpecFlow plugins is here. This documentation includes links to sample plugins that you can use as a basis for developing your own.

Overview

With SpecFlow 3, the plugins you want to use are no longer configured in your app.config file. The easiest way to use an existing plugin with SpecFlow 3 is to package it as a NuGet package and edit its .targets and .props files as described below. These files need to have the same name as your NuGet package.

As part of the configuration of your plugin, you also need to determine which plugin assembly to load based on whether you are using Full Framework or .NET Core.

Generator Plugins

To update your generator plugin:

  1. Package your plugin as a NuGet package if you haven’t already.
  2. The actual DLL to reference is determined in the .targets file. You need to load a different version of the plugin depending on which version of MSBuild you are using (Full Framework vs .NET Core). Using the xUnit .targets file as an example (the file is here):
    <PropertyGroup> 
       <_SpecFlow_xUnitGeneratorPlugin Condition=" '$(MSBuildRuntimeType)' == 'Core'" >netstandard2.0</_SpecFlow_xUnitGeneratorPlugin> 
       <_SpecFlow_xUnitGeneratorPlugin Condition=" '$(MSBuildRuntimeType)' != 'Core'" >net471</_SpecFlow_xUnitGeneratorPlugin> 
       <_SpecFlow_xUnitGeneratorPluginPath>$(MSBuildThisFileDirectory)\$(_SpecFlow_xUnitGeneratorPlugin)\TechTalk.SpecFlow.xUnit.Generator.SpecFlowPlugin.dll</_SpecFlow_xUnitGeneratorPluginPath> 
    </PropertyGroup> 
    

    Which plugin to use is determined based on the MSBuildRuntimeType (either “Core” or another value). “netstandard2.0” and “net471” are the directories containing the corresponding DLL for each runtime.

  3. Edit the .props file in your package to include your plugin’s path. You need to add an ItemGroup containing the SpecFlowGeneratorPlugins element to this file.
    Using the .props file for the generator plugin for xUnit (located at /Plugins/TechTalk.SpecFlow.xUnit.Generator.SpecFlowPlugin) as an example, the .props file is configured like this:

    <ItemGroup> 
      <SpecFlowGeneratorPlugins Include="$(_SpecFlow_xUnitGeneratorPluginPath)" />
    </ItemGroup> 
    

    This adds the plugin’s fully qualified path to the list of SpecFlowGeneratorPlugins.

Runtime Plugins

Like generator plugins, runtime plugins are also no longer configured in your app.config file. Instead, SpecFlow loads all files ending with .SpecFlowPlugin.dll found in the following locations:

  • The folder containing your TechTalk.SpecFlow.dll file.
  • The current working directory

To update your runtime plugin:

  1. The actual DLL to reference is determined in the .targets file. You need to load a different version of the plugin depending on the target framework of your project (Full Framework vs .NET Core). Using the xUnit .targets file as an example (the file is here):
    <PropertyGroup> 
       <_SpecFlow_xUnitRuntimePlugin Condition=" '$(TargetFrameworkIdentifier)' == '.NETCoreApp' ">netstandard2.0</_SpecFlow_xUnitRuntimePlugin> 
       <_SpecFlow_xUnitRuntimePlugin Condition=" '$(TargetFrameworkIdentifier)' == '.NETFramework' ">net45</_SpecFlow_xUnitRuntimePlugin> 
       <_SpecFlow_xUnitRuntimePluginPath>$(MSBuildThisFileDirectory)\..\lib\$(_SpecFlow_xUnitRuntimePlugin)\TechTalk.SpecFlow.xUnit.SpecFlowPlugin.dll</_SpecFlow_xUnitRuntimePluginPath> 
    </PropertyGroup> 
    

    Which plugin to use is determined based on the TargetFrameworkIdentifier (either “.NETCoreApp” or “.NETFramework”). “netstandard2.0” and “net45” are the directories containing the corresponding DLL for each target framework.

  2. Edit the .props file in your package to include your plugin’s path. Because .NET Core does not copy the referenced files to your target directory, you need to add your runtime plugin to the None ItemGroup and set CopyToOutputDirectory to “PreserveNewest” to ensure the plugin is copied.
    Using the .props file for the xUnit runtime plugin (located at /Plugins/TechTalk.SpecFlow.xUnit.SpecFlowPlugin) as an example:

    <ItemGroup> 
      <None Include="$(_SpecFlow_xUnitRuntimePluginPath)" > 
        <Link>%(Filename)%(Extension)</Link> 
        <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> 
        <Visible>False</Visible> 
      </None> 
    </ItemGroup> 
    

Plugin Developer Channel

We have set up a Gitter channel for plugin developers here. If you have questions regarding the development of plugins for SpecFlow, this is the place to ask them.

Generating Code Behind Files using MSBuild

Since SpecFlow 1.9, you can generate the code-behind files for feature files (*.feature.cs) at compile time. To do so, you need to use an MSBuild task. The documentation is here.

Pros

  • Feature files and code-behind files are always in sync
  • No need to check the feature.cs files into your source control system
  • Works without Visual Studio
  • Works for both .NET Full Framework and .NET Core

Cons

  • When adding a new feature file, the CustomTool entry in the file’s properties currently has to be removed manually each time
  • Realtime test discovery will only find new tests after the project has been (re)built

Best practices

Store code-behind files in the same folder as the feature file

In the past, we recommended moving the generated code-behind files to a different folder from your feature files.
We no longer recommend this approach, as you will otherwise experience problems with up-to-date checks in MSBuild.

Additionally, Microsoft has since fixed a bug in VS, meaning that navigating from the Test Explorer to the feature file works again (see here). For this to work, the code-behind files need to be located by VS, and having the generated files in a separate folder will break this feature again.

Known Bugs

  • Prior to SpecFlow 2.4.1, Visual Studio sometimes does not recognize that a feature file has changed. To generate the code-behind file, you therefore need to rebuild your project. We recommend using SpecFlow 2.4.1 or higher, where this is no longer an issue.

Enabling MSBuild Code Behind Generation

Classic Project System

  1. Add the NuGet package SpecFlow.Tools.MsBuild.Generation with the same version as SpecFlow to your project.
  2. Remove all SpecFlowSingleFileGenerator custom tool entries from your feature files.

SDK style project system

Please use at least SpecFlow 2.4.1, as this version fixes the above issue in 2.3.*.

  1. Add the NuGet package SpecFlow.Tools.MsBuild.Generation with the same version as SpecFlow to your project
  2. Remove all SpecFlowSingleFileGenerator custom tool entries from your feature files.
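After these two steps, the relevant part of an SDK-style project file might look something like this (the target framework and version numbers are placeholders; use the SpecFlow version of your project):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- both packages must have the same version -->
    <PackageReference Include="SpecFlow" Version="3.0.225" />
    <PackageReference Include="SpecFlow.Tools.MsBuild.Generation" Version="3.0.225" />
  </ItemGroup>
</Project>
```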

Common issues

After upgrading the NuGet packages, the code-behind files are not generated at compile time

If you are using the classic project system, the previous MSBuild target may no longer be located at the end of your project file. NuGet ignores entries added manually and places the MSBuild imports at the end. However, the AfterUpdateFeatureFilesInProject target needs to be defined after these imports, as it will otherwise be overwritten with an empty definition. If this happens, your code-behind files are not compiled as part of the assembly.
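In a classic project file, the fix is to move your target definition below the NuGet-managed import. A sketch (the package path and version are illustrative):

```xml
<!-- near the end of a classic .csproj -->
<Import Project="packages\SpecFlow.Tools.MsBuild.Generation.2.4.1\build\SpecFlow.Tools.MsBuild.Generation.targets" />

<!-- must come AFTER the import above; otherwise the empty default
     definition from the .targets file overwrites it -->
<Target Name="AfterUpdateFeatureFilesInProject">
  <!-- custom logic; with the NuGet package this target typically
       adds the generated .feature.cs files to the compilation -->
</Target>
```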

Linked files are not included

If you link feature files into a project, no code-behind file is generated for them (see GitHub Issue 1295).


Improving Gherkin Specs with SpecFlow Step Argument Transformations

In this post we will look at how parameter strings in Gherkin steps are converted to arguments in step binding methods, and how we can implement our own custom conversions using step argument transformations.

The Basics

From Gherkin to Code

When writing Gherkin style specs, you typically want to reuse your steps with slight variations. Gherkin supports parameterized steps to do so.

In its simplest form, the value for parameters can be specified inline in a scenario:

Scenario: Ordering an item in stock
  Given we have '3' items in stock.
  When we order '1' item
  Then we should have '2' items in stock left.

Without any modifications to the bindings you could use the same steps in other scenarios:

Scenario: Ordering the last item in stock
  Given we have '1' items in stock.
  When we order '1' item
  Then we should have '0' items in stock left.

Within a Scenario Outline, you can simply provide multiple values for each parameter in the Examples section:

Scenario Outline: Ordering items in stock
  Given we have '<ItemsInStock>' items in stock.
  When we order '<OrderedItems>' item
  Then we should have '<ItemsLeft>' items in stock left.
Examples:
| ItemsInStock | OrderedItems | ItemsLeft |
|            3 |            1 |         2 |
|            1 |            1 |         0 |

The binding for the Given step could look like this:

[Given(@"we have '(.*)' items in stock\.")]
public void GivenWeHaveASpecificNumberOfItemsInStock(int itemsInStock)
{
   // ... set up the stock
}

What is happening behind the scenes?

In order for this to work, SpecFlow must convert the string parameter coming from the step to an instance of the datatype used in the binding method. In our case, this conversion is from a string to an int:

"3" (string) becomes 3 (integer) - this is the standard conversion

SpecFlow provides out-of-the-box support for the conversion of most primitive datatypes (see Standard Conversion in the SpecFlow documentation). This even includes support for converting enum members to the proper type.

Wouldn’t it be cool …

The whole idea behind writing specifications in Gherkin is to improve collaboration between all members of an agile team. You will therefore want to put significant effort into formulating specifications that are easily understood by all stakeholders. After you have discussed a specification and drafted a version of it in Gherkin, it is a good idea to read it out loud again and ask if this is how you would express yourself when explaining it to someone else.

While the line

Given we have '0' items in stock.

might be technically correct, you probably wouldn’t talk like this to another person. In human-to-human communication you would probably instead say something like

Given we have no items in stock.

Of course, this will not work with the existing step binding, as there is no standard conversion from the string “no” to an integer (in this case zero). To overcome this, we could go back to our string argument and handle this special case ourselves:

[Given(@"we have '(.*)' items in stock\.")]
public void GivenWeHaveASpecificNumberOfItemsInStock(string itemsInStockExpression)
{
   var itemsInStock = (itemsInStockExpression == "no")
                    ? 0
                    : int.Parse(itemsInStockExpression);
   // ... set up the stock
}

While this approach works, it has a few issues:

1) Feeling pain for wanting to be nice

No matter how nice and clean the step binding was before (maybe just a single line calling the production API?), we have now mixed in some code just to make the specs nicer to read. As we want all members of our team to work together (whatever their role), it should be easy for a non-coder (e.g. a tester) to look at a step binding’s code and get a good idea of what is going on. The more technical plumbing code (similar to the code above) there is, the harder it becomes to identify the important parts of that method.

To somewhat ease this problem, we can extract the conversion code to a dedicated method:

[Given(@"we have '(.*)' items in stock\.")]
public void GivenWeHaveASpecificNumberOfItemsInStock(string itemsInStockExpression)
{
   var itemsInStock = GetNumberFromHumanString(itemsInStockExpression);
   // ... set up the stock
}

private int GetNumberFromHumanString(string itemsInStockExpression)
{
   if (itemsInStockExpression == "no")
      return 0;

   return int.Parse(itemsInStockExpression);
}

While this helps a little, we are still left with the call to this method and the method code itself.

2) The signature of the binding lost its expressiveness

With the original version of the step binding, we can clearly see that we expect an integer for the number of items in stock. The second version (with the string parameter) is extremely generic, as it just accepts a string. To see what string this is and what the allowed values are, you need to identify, inspect and understand the code that converts the string to the actual number. If additional cases are added, the binding can become contrived and very difficult to read over time.

Step Argument Transformations to the Rescue

In situations similar to the one described above, SpecFlow’s Step Argument Transformation feature comes in handy. Step argument transformations allow you to extend SpecFlow’s ability to convert strings in Gherkin steps to any type you wish. This means that we can go back to the very basic version of the step binding in our example, and inform SpecFlow of the desired conversion in a separate step transformation.

Here is our step binding:

[Given(@"we have '(.*)' items in stock\.")]
public void GivenWeHaveASpecificNumberOfItemsInStock(int itemsInStock)
{
   // ... set up the stock
}

Note that there is no conversion here whatsoever. The conversion is introduced by providing a step argument transformation like this:

[Binding]
public class MyStepArgumentTransformations
{
   [StepArgumentTransformation]
   public int TransformItemsInStockExpressionToInteger(string itemsInStockExpression)
   {
      if (itemsInStockExpression == "no")
         return 0;

      return int.Parse(itemsInStockExpression);
   }
}

Note that the method is marked with the StepArgumentTransformation attribute and has to reside in a class marked with the Binding attribute. This is all that is needed for SpecFlow to use this method to convert the parameter to an int.

As this method now stands on its own, it can be used in other contexts as well. So we should remove the residuals of the initial use case by refactoring the names to something more generic:

[StepArgumentTransformation]
public int TransformHumanReadableIntegerExpression(string expression)
{
   if (expression == "no")
      return 0;

   return int.Parse(expression);
}

Instead of the conversion code being buried in the step binding or an extracted helper method, we now have the conversion separated into its own class.

The whole situation now looks like this:

"no" (string) becomes 0 (integer) via the transformation method - this value is then passed to the itemsInStock parameter

This has some consequences:

  • We have returned to separated concerns. Converting the human-readable expressions to what we actually need in the code (the integer in this case) can be considered a UI concern. By moving the conversion code to the implicit layer of SpecFlow magic, the binding is back to focusing on the business concern of the Gherkin step, i.e. wiring to the production code.
  • As the conversion code now stands alone, it can be tested as well. While the current implementation seems quite obvious, we might want to increase our confidence in parts of the test automation layer by testing them separately.

Restricting the considered strings

While you can overwrite an already existing conversion (string to int in our example), be aware that this conversion is available globally in all projects referencing the assembly containing the class.

For this reason, you might want to restrict the cases where SpecFlow applies this conversion. This is possible by specifying a regular expression for the transformation:

[StepArgumentTransformation(@"(\d+|no)")]
public int TransformHumanReadableIntegerExpression(string expression)
{
   // ...
}

With this regex we tell SpecFlow to just consider strings consisting of at least one digit (the \d+ part) or the string “no”. This constraint has the advantage that if someone uses a very different string for such a parameter, SpecFlow’s default conversion kicks in and its error message is displayed.

Even more Powerful: Conversions to Custom Types

As mentioned earlier, decreased readability is one disadvantage of having to convert string input to the target format in the step binding.

Using step argument transformations, the expressiveness of step bindings can be improved even further by using custom types for parameters.

For example, we could introduce a tiny type, HumanReadableIntegerExpression:

[Given(@"we have '(.*)' items in stock\.")]
public void GivenWeHaveASpecificNumberOfItemsInStock(HumanReadableIntegerExpression itemsInStockExpression)
{
   var itemsInStock = itemsInStockExpression.Value;
   // ... set up the stock
}

Notice the following benefits:

  • The signature of the method now tells us more about the supported strings we can use in our bindings. Of course, we still have to know (or look up) the exact expressions that are supported. But as we are getting familiar with helpers like this, we immediately know what expressions we can write.
  • As the value is immediately available from the method’s parameter, we could even inline the local variable in the above example, and would require no additional lines of code for improved bindings.
  • The conversion is separated out in the step argument transformation, such that a non-coder could swap one way of expressing the number for another, without having to change cumbersome wiring. For example, she could replace HumanReadableIntegerExpression with HumanReadableIntegerExpressionAdvanced, with support for more advanced expressions.

HumanReadableIntegerExpression itself is a tiny class:

public class HumanReadableIntegerExpression
{
   public int Value { get; }
   public HumanReadableIntegerExpression(int value) { Value = value; }
}

We then just need to change the transformation method to return an instance of that class:

[StepArgumentTransformation]
public HumanReadableIntegerExpression TransformHumanReadableIntegerExpression(string expression)
{
   if (expression == "no")
      return new HumanReadableIntegerExpression(0);

   return new HumanReadableIntegerExpression(int.Parse(expression));
}

Restrictions

Step argument transformations will not work with tables provided by TechTalk.SpecFlow.Assist (e.g. with the CreateSet<T> or CreateInstance<T> methods). They can, however, be used with parameters in example tables.

Wrapping Up

Using step argument transformations can bring a number of benefits, allowing you to write steps in a human-readable way that reflects the way you would express yourself in normal conversation. However, it is important to ensure that team members have at least a basic understanding of the “magic” going on behind the scenes.

Benefits

  • Move the code used for conversions from your own bindings to a dedicated class wired by SpecFlow.
  • Reveal the intention of the step binding method and clarify the ability to express parameters using specific values.

Pitfalls

  • It can be challenging for team members not familiar with the step argument transformations feature to understand how SpecFlow “magically” knows how to interpret a string such as “no” as 0. But this is generally an easy problem to solve.

VS Integration Breaking Changes – Affects ALL users!

The stable SpecFlow V3 has been released! Check out our latest blog post!

The upcoming SpecFlow 3 release will require an update to the Visual Studio extension for SpecFlow. Because the extension is normally updated automatically whenever a new version is released, this change has the potential to affect all users, not just those that upgrade to version 3! Please read the following information in detail.

What will break?

The new extension will not be compatible with versions of SpecFlow earlier than 2.3.2. If you are using an earlier version of SpecFlow, you should make sure that you have disabled automatic updates for the SpecFlow extension in Visual Studio. To do so:

  1. Select Tools | Extensions and Updates from the menu in Visual Studio.
  2. Enter “SpecFlow” in the search field to restrict the entries in the list.
  3. Click on the “SpecFlow for Visual Studio” entry and disable the Automatically update this extension check box.

This will prevent newer versions of the extension from being installed automatically. Once you are ready to upgrade to SpecFlow 3, you can enable this option again.

What limitations are there?

Because the Visual Studio extension can only be installed once per Visual Studio installation, you will not be able to mix SpecFlow 3 projects with projects that use a version of SpecFlow prior to 2.3.2.

How will the update be handled?

Our intention is to release a preview version of the Visual Studio extension that will not trigger automatic updates in Visual Studio for the duration of the preview period. If you want to try out the preview version of SpecFlow 3, you will need to add the feed to Visual Studio manually to install the new version of the extension. We will provide additional details on how to do this once the preview is available.

Once SpecFlow 3 has been officially released, we will update the live Visual Studio extension with the new version. This will cause your extension to automatically update if you have not disabled automatic updates (see above). From this point on, users of older versions of SpecFlow will need to download and install the compatible version of the Visual Studio extension manually and ensure that automatic updates are disabled.

I see a potential pitfall. What should I do?

If you see any potential pitfalls in this approach, please let us know now! If you have suggestions for how to make this process easier, we would like to hear them! You can contact us at support@specflow.org.

Targeting Multiple Browsers with a Single Test

(Note: This article was written for SpecFlow+ Runner 1.8 and SpecFlow 2; an updated version of this article for SpecFlow+ 3 is here.)

If you are testing a web app (e.g. with Selenium), you will normally want to test it in a range of browsers, e.g. Chrome, IE/Edge and Firefox. However, writing tests for all the browsers can be a time-consuming process. Wouldn’t it be much easier to write just one test, and be able to run that test in all browsers?

That’s where using targets with the SpecFlow+ Runner comes in. Targets are defined in your SpecFlow+ Runner profile. They allow you to define different environment settings, filters and deployment transformation steps for each target. Another common use case is to define separate targets for x64 and x86.

Defining targets for each browser allows us to execute the same test in all browsers. You can see this in action in the Selenium sample project available on GitHub. If you download the solution and open Default.srprofile, you will see 3 different targets defined at the end of the file:

<Targets>
  <Target name="IE">
    <Filter>Browser_IE</Filter>
  </Target>
  <Target name="Chrome">
    <Filter>Browser_Chrome</Filter>      
  </Target>
  <Target name="Firefox">
    <Filter>Browser_Firefox</Filter>
  </Target>
</Targets>

Each of the targets has a name and an associated filter (e.g. “Browser_IE”). The filter ensures that only tests with the corresponding tag are executed for that target.

For each target, we are going to transform the browser key in the appSettings section of our app.config file to contain the name of the target. This will allow us to know the current target and access the corresponding web driver for each browser. The corresponding section in the project’s app.config file is as follows:

<appSettings>
  <add key="seleniumBaseUrl" value="http://localhost:58909" />
  <add key="browser" value="" />
</appSettings>

To make sure that the name of the target/browser is entered, we transform the app.config file to set the browser key’s value attribute to the name of the target using the {Target} placeholder. This placeholder is replaced by the name of the current target during the transformation:

<Transformation>
  <![CDATA[<?xml version="1.0" encoding="utf-8"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <add key="browser" value="{Target}"
        xdt:Locator="Match(key)"
        xdt:Transform="SetAttributes(value)" />
      </appSettings>
    </configuration>
  ]]>
</Transformation>

This section locates the key (browser) and sets its value attribute to the name of the target. This transformation occurs separately for all three targets, resulting in 3 app.config files, each with a different browser key (that is available to your application).
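
To illustrate the result: after the transformation runs for the Chrome target, the deployed app.config would contain the target name in the browser key (illustrative):

```xml
<appSettings>
  <add key="seleniumBaseUrl" value="http://localhost:58909" />
  <add key="browser" value="Chrome" />
</appSettings>
```

The IE and Firefox targets each get their own copy of the file with "IE" or "Firefox" as the value.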

WebDriver.cs (located in the Drivers folder of the TestApplication.UiTests project) uses this key to instantiate a web driver of the appropriate type (e.g. InternetExplorerDriver). The key from the configuration file is passed to BrowserConfig, used in the switch statement:

get
{
  if (_currentWebDriver != null)
    return _currentWebDriver;

  switch (BrowserConfig)
  {
    case "IE":
      _currentWebDriver = new InternetExplorerDriver(new InternetExplorerOptions() { IgnoreZoomLevel = true }) { Url = SeleniumBaseUrl };
      break;
    case "Chrome":
      _currentWebDriver = new ChromeDriver() { Url = SeleniumBaseUrl };
      break;
    case "Firefox":
      _currentWebDriver = new FirefoxDriver() { Url = SeleniumBaseUrl };
      break;
    default:
      throw new NotSupportedException($"{BrowserConfig} is not a supported browser");
  }

  return _currentWebDriver;
}
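
The BrowserConfig value referenced in the switch statement can be read from the appSettings section of the transformed configuration file. A minimal sketch, assuming a System.Configuration lookup; the actual property in the sample project may be implemented differently:

```csharp
using System.Configuration;

// Hypothetical helper: exposes the settings written by the
// SpecFlow+ Runner deployment transformation.
public static class TestSettings
{
    // "browser" holds the target name ("IE", "Chrome" or "Firefox")
    public static string BrowserConfig =>
        ConfigurationManager.AppSettings["browser"];

    public static string SeleniumBaseUrl =>
        ConfigurationManager.AppSettings["seleniumBaseUrl"];
}
```

Because each target gets its own transformed app.config, the same property transparently returns a different browser name per target.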

Depending on the target, the driver is instantiated as either an InternetExplorerDriver, ChromeDriver or FirefoxDriver. The binding code simply uses the web driver required for the target to execute the test; there is no need to write separate tests for each browser. You can see this at work in the Browser.cs and CalculatorFeatureSteps.cs files:

[Binding]
public class CalculatorFeatureSteps
{
  private readonly WebDriver _webDriver;

  public CalculatorFeatureSteps(WebDriver webDriver)
  {
    _webDriver = webDriver;
  }
       
  [Given(@"I have entered (.*) into (.*) calculator")]
  public void GivenIHaveEnteredIntoTheCalculator(int p0, string id)
  {
    _webDriver.Wait.Until(d => d.FindElement(By.Id(id))).SendKeys(p0.ToString());
  }
}

To ensure that the tests are executed, the scenarios still need the appropriate tags (@Browser_Chrome, @Browser_IE, @Browser_Firefox). Two scenarios have been defined in CalculatorFeature.feature:

@Browser_Chrome
@Browser_IE
@Browser_Firefox
Scenario: Basepage is Calculator
	Given I navigated to /
	Then browser title is Calculator

@Browser_IE 
@Browser_Chrome
Scenario Outline: Add Two Numbers
	Given I navigated to /
	And I have entered <SummandOne> into summandOne calculator
	And I have entered <SummandTwo> into summandTwo calculator
	When I press add
	Then the result should be <Result> on the screen

Scenarios: 
		| SummandOne | SummandTwo | Result |       
		| 50         | 70         | 120    | 
		| 1          | 10         | 11     |

Using targets in this way can significantly reduce the number of tests you need to write and maintain. You can reuse the same test and bindings for multiple browsers. Once you’ve set up your targets and web driver, all you need to do is tag your scenarios correctly. If you group the tests by “Traits” in the Test Explorer, they are split up by browser tag, so you can easily run a test in a particular browser and identify which browser a test failed in. The test report generated by SpecFlow+ Runner also splits up the test results by target/browser.

Remember that targets can be used for a lot more than executing the same test in multiple browsers with Selenium. Don’t forget to read the documentation on targets, as well as the sections on filters, target environments and deployment transformations.

Fit-for-purpose Gherkin

A common mistake I see is the tendency to write Gherkin specifications as if they were a series of instructions describing the algorithm for how the test is executed: “enter X” or “open page Y”. This is understandable; after all, it’s how you might write a standard test script. It’s very easy to continue using the same approach with Gherkin.

But the power of Gherkin comes from its abstraction – the ability to formulate steps in plain English (or any of the other supported languages), rather than as an algorithm. Instead of describing how something is done, Gherkin should describe what is being done without worrying about the implementation details.

This is a powerful tool, separating the algorithmic steps (bindings) from the actual description of the test intended to be read by humans. The primary goal of Gherkin is to formulate your scenarios so that they are quick to read, clearly formulated and understandable to all stakeholders – including non-technical team members.

OK, but what does that mean in real practical terms? Let’s take a look at a very basic example where we have a web application with various user roles, including administrators. Any test interfacing with the UI that involves the administrator dashboard will require the following initial steps in order to execute the test:

  1. Open the login page by navigating to the appropriate URL.
  2. Enter the user name.
  3. Enter the password.
  4. Click on the Log In button.
  5. Select Administrator Dashboard from the menu.

These steps are important: you will definitely need to log in to the system somehow and navigate to the appropriate area to execute the test. But let’s take a step back from the nitty-gritty details and ask ourselves what our precondition is: we need to be logged in as an administrator, and we need to be viewing the administrator dashboard.

So, as tempting as it is to translate each of the steps listed above into individual steps in Gherkin, it is much more advisable to write the following:

Given I am logged in with administrator privileges

And I am viewing the administrator dashboard

These statements clearly capture the current state of the system required to perform the test. It goes without saying that being logged in as an administrator involves logging in, but how this is done is not where the focus of your discussion should be. These steps are implicit implementation details that add nothing to the discussion about what can be done on the administrator dashboard itself. Furthermore, you may interface directly with your API and not necessarily perform these steps using the UI at all.

The only place you need to worry about the implementation itself – how you log in to the system – is in the binding code for the Given step. There is no reason to explicitly state these steps in Gherkin; in fact, doing so runs the risk of making the feature file bloated, and thus harder to understand and navigate. As a side effect, you will also end up with a lot of unnecessary bindings for the unnecessary steps that could easily be grouped together.

Now if you have been critically evaluating the previous Gherkin step, you might even argue that “Given I am viewing the administrator dashboard” already implies you are logged in as administrator. And to a certain extent I would agree with you. In this case, my preferred approach would be to move the first step (Given I am logged in with administrator privileges) to the scenario background – assuming the feature file I am working on covers a range of administrator functions, all of which require the user to be logged in as an admin. Using background statements is another great way to reduce the clutter in your Gherkin files.
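
Assuming the feature file covers several administrator functions, the result might look like this; the feature and scenario names are purely illustrative:

```gherkin
Feature: Administrator dashboard

Background:
	Given I am logged in with administrator privileges

Scenario: View registered users
	Given I am viewing the administrator dashboard
	Then I should see the list of registered users
```

The Background steps run before every scenario in the file, so the login precondition is stated once instead of being repeated in each scenario.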

Another thing to remember is that there are always implicit steps; the question is where to draw the boundary. It is probably unnecessary to specify “Given I am logged in” as the prerequisite for every test, just like it is unnecessary to add the step “Given I have started the application”. Of course, you will want to specify the login state for certain specific tests: testing the login process presumably requires the user to not already be logged in, so this is a step that should be specified. Similarly, access to the administrator dashboard is limited to a certain user type; being logged in as an administrator is therefore an important criterion. It also conveys the information that other user types cannot perform these actions.

So let’s come back to the actual implementation of the steps. We have reduced a number of explicit steps to a single Gherkin statement, but we still need to execute all of those steps. That’s easily handled by writing code for each of the necessary steps that is called from the binding:

public void GivenIAmLoggedInWithAdminPrivileges()
{
    NavigateTo(URL);
    EnterUserName("Admin");
    EnterPassword("P@ssw0rd");
    ClickLogInButton();
}

As mentioned above, this might be abstracted to simply interface with your API, rather than actually performing these steps via the UI. Of course, just like you can reduce multiple explicit steps to a single Gherkin step, you can also group different sub-functions within a parent function, which I would probably do here:

public void GivenIAmLoggedInWithAdminPrivileges()
{
    Login("Admin", "P@ssw0rd");
}

So I have one single function to handle the login process, which can obviously be reused elsewhere. Furthermore, any changes to the login process only require a single change to the test automation code and do not require any changes to the Gherkin file.

This makes it easier to maintain your tests and to reuse steps that represent logical units, rather than having multiple tests share a large number of brittle mini-steps. Changes to the login process or your API do not require the test to be rewritten; they only affect the bindings of the tests. Furthermore, the step Given I am logged in with administrator privileges can be used by any test involving an administrator account, backed by a single binding method.

Of course, these concepts are not new. You will at some point have taken a function that has grown over time, and split it up into smaller sub-functions. You should adopt a similar approach to Gherkin: it may help to try and think more in terms of general functions, rather than individual statements. This may require a paradigm shift on your part: instead of being as explicit as possible in your feature files, reduce the steps to the essentials without obscuring any key information. The less bloated and more concise your tests/specifications are, the better suited they are to encouraging meaningful discussion between stakeholders, and the more understandable they are to non-technical team members. At the end of the day, that is probably the biggest benefit to this approach – it enables everyone to understand what is (supposed to be) going on and why.

As with any code (yes, feature files are code files too), a certain amount of refactoring and maintenance will always be required as your project grows. However, ensuring that your Gherkin files are fit-for-purpose from the start will mean that this refactoring will be a lot easier and less frequent. It will also have far less of an impact on your binding code if you reduce the frequency with which you need to update your Gherkin steps.

Sending Execution Reports via Email

We’ve had a number of users ask how to send execution reports via email after the test run has been executed. SpecFlow+ Runner 1.6 (a pre-release version is available on NuGet) introduces the option to give reports static names, i.e. without a time stamp, meaning sending the report as an attachment is now relatively easy to automate.

To help you out, we’ve outlined the necessary steps in the documentation here. The example uses mailsend, but you can obviously use other command line tools (or use telnet if you are really hardcore).
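
As a rough sketch, a post-run script could look something like the following. The server name, addresses, report path and the exact mailsend flags are all assumptions here; consult the mailsend documentation for your version before relying on them:

```
#!/bin/sh
# Send the statically named SpecFlow+ Runner report as an attachment.
# All values below are placeholders.
REPORT="TestResult.html"
mailsend -smtp smtp.example.com -port 25 \
  -f ci@example.com -t team@example.com \
  -sub "SpecFlow+ Runner execution report" \
  -attach "$REPORT"
```

Hooking such a script into your build pipeline after the test step means the report goes out automatically with every run.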

The same principles of course apply to any file you might want to send, and sending files via email is obviously not restricted to environments using SpecFlow+ Runner.