Using CARDS+ cases to calculate e2e test coverage
In an iterative, continuous development process, manual tasks must be reduced to guarantee a high level of quality over time. Testing the software artifact is one of those tasks.
WHAT IS TEST COVERAGE?
For unit and integration tests, multiple options exist to determine code coverage. The most common ones are:
- Line Coverage
- Branch Coverage
There are a bunch of tools to calculate those metrics for almost any language and testing framework you can imagine.
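To illustrate the difference between those two metrics, consider this minimal Ruby sketch: a single test input executes every line of the method, yet exercises only one of the two branches of the guard.

```ruby
# Minimal illustration of line vs. branch coverage.
def discount(total)
  price = total
  price -= 10 if total > 100 # one line, but two branches
  price
end

# A single input like this executes every line (100% line coverage),
# but only the `true` branch of the guard (50% branch coverage):
puts discount(120) # => 110
# Full branch coverage additionally demands an input that skips the guard:
puts discount(80)  # => 80
```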
However, for e2e (end-to-end) tests, there is no simple tool and no obvious metric to calculate. The reasons are:
- No shared code base: client and server are in separate repositories.
- Paths/branches in code do not mirror actual use cases.
- Breaking use cases down to code paths is too cumbersome.
HOW WE DO IT
Let’s say a project is set up as follows:
How does CARDS+ factor into this?
CARDS+ is an agile documentation method. It is simple to learn and quite lean. One of the components of any CARDS+ product documentation is the system description, which covers topics, epics and cases:
- System Description
  - Topic Administrator Area
    - Epic User Management
      - Case Filter User
      - Case Create User
      - Case Edit User
      - Case Delete User
    - …
We can use these cases from the documentation to get an idea of how well tested our application is: based on use case coverage, not on lines of code or similar metrics.
In simple terms, we want at least one test for every case found in the documentation.
Give me more!
Let us write a simple RSpec/Watir-based test:
By running this test with rspec and --format json, we can get a file that looks similar to this:
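The JSON produced by rspec's json formatter has roughly this shape, trimmed here to the fields that matter for the coverage calculation; the descriptions and paths are illustrative:

```json
{
  "examples": [
    {
      "id": "./spec/e2e/user_management_spec.rb[1:1]",
      "description": "Case Create User",
      "full_description": "Epic User Management Case Create User",
      "status": "passed",
      "file_path": "./spec/e2e/user_management_spec.rb",
      "line_number": 10,
      "run_time": 2.34
    }
  ],
  "summary": {
    "duration": 2.4,
    "example_count": 1,
    "failure_count": 0,
    "pending_count": 0
  },
  "summary_line": "1 example, 0 failures"
}
```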
Thanks to Watir and RSpec we already have Ruby available, so we can use the JSON above and our CARDS+ documentation to calculate the case coverage:
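A simplified sketch of such a script could look like this. The case names, the inline report data, and the convention that a test's full description contains the case name are assumptions; in the real pipeline the case names would be extracted from the CARDS+ documentation and the report read from the file written by rspec --format json --out.

```ruby
# Which documented CARDS+ cases are covered by the rspec examples?
# Convention (an assumption): a case counts as covered when its name
# appears in a test's full description.
def case_coverage(documented_cases, examples)
  descriptions = examples.map { |ex| ex['full_description'] }
  covered = documented_cases.select do |kase|
    descriptions.any? { |desc| desc.include?(kase) }
  end
  { covered: covered, missing: documented_cases - covered }
end

# Hypothetical inputs; in the pipeline the case names come from the
# CARDS+ documentation and the report from the rspec JSON output.
cases  = ['Case Filter User', 'Case Create User',
          'Case Edit User', 'Case Delete User']
report = { 'examples' => [
  { 'full_description' => 'Epic User Management Case Create User',
    'status' => 'passed' },
  { 'full_description' => 'Epic User Management Case Delete User',
    'status' => 'passed' }
] }

result   = case_coverage(cases, report['examples'])
coverage = result[:covered].size * 100.0 / cases.size
puts format('Case coverage: %.1f%% (%d of %d cases)',
            coverage, result[:covered].size, cases.size)
result[:missing].each { |kase| puts "missing: #{kase}" }

# In CI we would additionally exit non-zero when any test failed or
# coverage is below 100%, failing the build.
```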
We have tweaked this example script a little with 'colorize' and also take care of WIP cases, which are not counted as missing when they are not covered. Any failed test, or a case coverage of less than 100%, is treated as a build failure.
We use this script as part of our build pipelines to be confident that the current state of the software always satisfies the requirements as defined in the Case documentation. The output of a run looks like the following:
For e2e tests, 100% coverage should be enforced.
We are eager to hear about other approaches to measuring e2e test coverage!