Setting Up The SpecFlow+ Runner Server
The SpecFlow+ Runner server collects execution statistics for your tests at a central location. You can use this data to improve the efficiency of your test execution by using the adaptive test scheduling option. When using this option, tests are executed in an order based on the previous execution results, i.e. failing tests are executed first, and stable tests are executed last.
Before you can set up the server, you need to install the SpecFlow+ Runner NuGet packages that contain the server components. See the SpecFlow+ Runner installation documentation for details on installing the NuGet packages.
Installing the Server
To install the SpecFlow+ Runner server:
- Create a new SQL Server database to store the execution statistics.
- Locate the "server" directory in your solution's \packages\SpecRun.Runner.x.y.z\tools directory (created when you install the NuGet package). Copy the contents of the "server" directory to your server.
- Enter your database connection string in the <connectionStrings> element of the SpecRun.Server.exe.config file.
- Initialise the database by running SpecRun.Server.exe initdatabase from the command line.
- Start the service by running SpecRun.Server.exe start from the command line. Note: You may need to start the service as a user with elevated privileges, e.g. start the command line as administrator.
- The connection to the server takes place via port 6365; ensure that the firewall on the server allows incoming connections on this port.
- Enter the server's URL in the .srprofile file in your Visual Studio project (<Server serverUrl="http://MyServer:6365" publishResults="true" />).
- Rebuild your solution and run your tests. You can verify that you have set up everything correctly by checking if records have been added to the database on the server.
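The connection string step above can be sketched as a config fragment. This is a minimal example, assuming SQL Server with integrated security; the server name, database name, and connection string name are placeholders, not values prescribed by SpecFlow+ Runner:

```xml
<!-- SpecRun.Server.exe.config (fragment) -->
<configuration>
  <connectionStrings>
    <!-- "MySqlServer" and "SpecRunStatistics" are example names;
         use the connection string name expected by the shipped config file -->
    <add name="SpecRunServer"
         connectionString="Data Source=MySqlServer;Initial Catalog=SpecRunStatistics;Integrated Security=True" />
  </connectionStrings>
</configuration>
```

After saving the file, run SpecRun.Server.exe initdatabase so the schema is created using this connection.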
Adaptive Test Execution Order
You can use these statistics to execute failing and new tests first. To do so, set the testSchedulingMode attribute in the <Execution> element of your .srprofile file to Adaptive. SpecRun then uses the statistics in the pass/fail history to determine which tests to run first. Previously failing tests and new tests are executed before successful and stable tests.
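Putting the server URL and scheduling settings together, a profile might look like the following sketch. The root element and schema namespace are assumed from typical SpecFlow+ Runner profiles, and "MyServer" is a placeholder:

```xml
<!-- Example .srprofile fragment enabling result publishing and adaptive scheduling -->
<TestProfile xmlns="http://www.specrun.com/schemas/2011/09/TestProfile">
  <!-- Publish execution statistics to the SpecFlow+ Runner server -->
  <Server serverUrl="http://MyServer:6365" publishResults="true" />
  <!-- Run previously failing and new tests first, based on collected statistics -->
  <Execution testSchedulingMode="Adaptive" />
</TestProfile>
```

With publishResults set to true, each run adds to the pass/fail history that the adaptive scheduler reads on subsequent runs.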