
Using CARDS+ Cases to calculate e2e Test Coverage

In an iterative, continuous development process, manual tasks must be reduced to guarantee a high level of quality over time. Testing the artifact is one of those tasks.

What is Test Coverage?

For unit and integration tests, multiple options exist to determine code coverage. The most common ones are:

  • Line Coverage
  • Branch Coverage

There are a bunch of tools to calculate those metrics for almost any language and testing framework you can imagine.
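To recall the difference between the two metrics, consider this illustrative one-liner: a single test with a non-negative input executes every line of the method (100% line coverage) but exercises only one of its two branches (50% branch coverage).

```ruby
# Illustrative only: one test with x = 1 runs every line of this
# method, yet covers only one of the two branches of the ternary.
def sign_label(x)
  x >= 0 ? 'non-negative' : 'negative'
end
```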


However, for e2e (end-to-end) tests, there is no comparable tooling and no obvious metric to calculate. Reasons for that are:

  • No shared code base - client and server are in separate repositories.
  • Paths/branches in code do not mirror actual use cases.
  • Breaking use cases down to code paths is too cumbersome.

How we do it

Let's say a project is set up as follows:

  • Client and server are separate. The server offers GraphQL to query data. The client offers a Web UI (Angular, React, etc.).
  • The project uses CARDS+ for agile product documentation and Confluence as its Wiki.
  • The project uses rspec and watir for e2e testing. Therefore, ruby is already in place.

By using rspec and watir for the tests we have a good and straightforward way of doing e2e tests against our system. With rspec, tests are easy to write and with watir we have a great API for the browser interface.

How does CARDS+ factor into this?

CARDS+ is an agile documentation method. It is simple to learn and quite lean. One of the components of any CARDS+ product documentation is the system description, which covers topics, epics and cases:

  • System Description
    • Topic Administrator Area
      • Epic User Management
        • Case Filter User
        • Case Create User
        • Case Edit User
        • Case Delete User
      • ...

We can use these cases from the documentation to get an idea of how well-tested our application is - based on use case coverage, not lines of code or similar metrics.

In simple terms, we want at least one test for every case found in the documentation.
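This relies on a naming convention: a test's description starts with the title of the case it covers, so coverage can be checked with a simple prefix match. A minimal sketch (the strings are illustrative):

```ruby
# Sketch of the convention: a test covers a case when its rspec
# description starts with the case title from the documentation.
case_title  = 'Case Filter User'
description = 'Case Filter User: should filter admin user'
covered = description.start_with?(case_title)
```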

Give me more!

Let us write a simple rspec/watir-based test:

describe "User Management" do
  before :each do
    # Log in and navigate to the user administration view
    @base.login
    @browser.element(routerlink: "/user").fire_event :click
  end

  it 'Case Filter User: should filter admin user' do
    filter_input 'admin'
    filter_input 'email@example.com', 3
  end

  it 'Case Create User: should create User' do
    create_user(@kennung1, "Max", "Mustermann", "max@mustermann.de", 1, true)
  end
end

By running this test with rspec and --format json, we can get a file that looks similar to this:

{  
    "examples":[  
       {  
          "id":"./spec/test_user_overview_spec.rb[1:1]",
          "description":"Case Filter User: should filter admin user"
       },
       {  
          "id":"./spec/test_user_overview_spec.rb[1:2]",
          "description":"Case Create User: should create User"
       }
    ],
    "summary":{  
       "duration":106.006282,
       "example_count":13,
       "failure_count":2,
       "pending_count":0,
       "errors_outside_of_examples_count":0
    },
    "summary_line":"13 examples, 2 failures"
 }
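For illustration, pulling the relevant fields out of such a report is straightforward (the JSON is inlined here; in the real setup it comes from rspec's report file):

```ruby
require 'json'

# Inline sample standing in for rspec's JSON report (illustrative)
report_json = '{"examples":[{"description":"Case Filter User: should filter admin user"}],' \
              '"summary":{"failure_count":0}}'
report = JSON.parse(report_json)

# The descriptions carry the case names; the summary tells us pass/fail
descriptions = report['examples'].map { |e| e['description'] }
passed = report['summary']['failure_count'].zero?
```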

Thanks to watir and rspec we already have ruby available, so we can use the JSON above and our CARDS+ documentation to calculate the case coverage:

require 'rubygems'
require 'json'
require 'faraday'

# Connection details for the Confluence REST API (adjust to your setup)
url   = ENV.fetch('CONFLUENCE_URL')
space = ENV.fetch('CONFLUENCE_SPACE')

conn = Faraday.new(url: url) # Create a connection with faraday

# Read the cases from the documentation
result = conn.get('rest/api/content/search', {cql: "space=#{space} AND title~\"Case*\""})
search_results = JSON.parse(result.body)['results']
# Read the test results from stdin or a file argument
test_results = JSON.parse(ARGF.read)

result_descriptions = test_results['examples'].map { |entry| entry['description'] }

puts "Found #{search_results.length} documented cases:"

case_coverage = {}

search_results.each { |page|
  case_title = page['title']
  # A test covers a case when its description starts with the case title
  match_count = result_descriptions.count { |description| description.start_with?(case_title) }

  case_coverage[case_title] = match_count

  puts "\t#{case_title} - #{match_count}"
}

uncovered_cases = case_coverage.select { |_title, count| count.zero? }
uncovered_count = uncovered_cases.length

puts "\nThere exist uncovered cases!" unless uncovered_cases.empty?

all_case_count = case_coverage.length
case_coverage_percent = ((all_case_count - uncovered_count).fdiv(all_case_count) * 100).round(2)

puts "\n#{result_descriptions.length} test cases in total"
puts "Case coverage: #{case_coverage_percent}%"

We have tweaked this example script a little with 'colorize' and also take care of WIP (work-in-progress) cases, which are not counted as missing when uncovered. Any failed test, or a case coverage below 100%, is treated as a build failure.
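A hedged sketch of what the WIP handling could look like (the hash contents and the WIP list below are illustrative placeholders, not our actual data):

```ruby
# Sketch: compute case coverage while ignoring WIP cases.
# case_coverage maps case titles to the number of matching tests.
def case_coverage_percent(case_coverage, wip_titles)
  relevant  = case_coverage.reject { |title, _| wip_titles.include?(title) }
  uncovered = relevant.count { |_, hits| hits.zero? }
  ((relevant.size - uncovered).fdiv(relevant.size) * 100).round(2)
end

coverage = case_coverage_percent(
  { 'Case Filter User' => 1, 'Case Create User' => 1, 'Case Edit User' => 0 },
  ['Case Edit User'] # WIP: not counted as missing
)
# A pipeline would then fail the build when coverage < 100.0
```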

We use this script as part of our build pipelines to be confident that the current state of the software always satisfies the requirements defined in the case documentation. The output of a run looks like the following:

example output

For e2e tests, 100% case coverage should be enforced.

We are eager to hear about other approaches to measuring e2e test coverage!
