
Cucumber-Json flags failed tests as passed if an exception occurs #430

Open
theObserver1 opened this issue Jun 27, 2021 · 2 comments

@theObserver1

Version: pytest-bdd 4.0.2, pytest 6.2.2

Summary
When a test fails with, e.g., a KeyError exception due to an invalid test step, the cucumber-json report will mark the test case as passed. When imported back into, e.g., XRAY, the test is marked as passed when in fact it has not passed.

This happens because the invalid step is not included in the test report's scenario.steps and therefore does not appear in the cucumber-json output, causing test case results to be inaccurate.

Steps to Repro:

  1. Execute a feature file containing a step not defined in the framework. This will trigger a KeyError exception and a FixtureLookupError. Ensure the --cucumber-json flag was passed to pytest-bdd (a minimal step-definition sketch is shown after this list):
Scenario:  repro
    Given this step should exist and be valid
    And this step should not exist

  2. Examine the resulting cucumber-json file. Notice the test case is marked as passed because the erroring test step was not included.
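
As a rough illustration, the setup below (file and function names are illustrative, not from the original report) implements only the first step of the scenario. Running it with --cucumber-json makes the test fail in pytest, while the generated JSON still shows the scenario as passed:

    # test_repro.py (illustrative)
    from pytest_bdd import given, scenario

    @scenario("repro.feature", "repro")
    def test_repro():
        pass

    @given("this step should exist and be valid")
    def valid_step():
        pass

    # "And this step should not exist" has no matching step definition, so
    # pytest-bdd raises a step lookup error at runtime and the test fails,
    # but the missing step never makes it into scenario.steps, so the
    # cucumber-json report still marks the scenario as passed.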
@vishujci

This is an issue that needs to be fixed ASAP, as it reports false positives.

@cchan-lm

I just ran into this as well. The Cucumber JSON file does not reflect steps that have not been implemented. I have a workaround with a session-scoped fixture that tracks which steps have not been implemented using pytest_bdd.scenario._find_step_function and pytest_bdd.exceptions.StepDefinitionNotFoundError, and then, after the test is finished, the fixture corrects the JSON (this also required forcing the JSON to be created before pytest_sessionfinish). It certainly is not an ideal workaround... it's only because we need this information now. Any step in a scenario that's not implemented should show as "failed" or "not implemented". Devs need to be aware of steps that haven't been implemented. Software should not be released if tests are not fully written.

I also tried skipping the step via pytest_bdd_before_step, but it turns out StepDefinitionNotFoundError is raised before the hook can even be used. I guess pytest_bdd_before_scenario could be used, but some steps can still be useful and should still be tested before the step that hasn't been implemented.
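
For anyone needing something similar in the meantime, here is a minimal sketch of the same general idea using the documented pytest_bdd_step_func_lookup_error hook instead of the private _find_step_function. Attribute names such as feature.filename, scenario.name and step.name are assumptions based on pytest-bdd 4.x, and the JSON-patching part is left as a placeholder:

    # conftest.py (sketch only, not the exact workaround described above)
    _missing_steps = []

    def pytest_bdd_step_func_lookup_error(request, feature, scenario, step, exception):
        # pytest-bdd calls this hook when no step definition matches `step`.
        _missing_steps.append((feature.filename, scenario.name, step.name))

    def pytest_sessionfinish(session, exitstatus):
        # Post-process here, e.g. patch the generated cucumber-json so scenarios
        # with missing steps are reported as failed instead of passed.
        for feature_file, scenario_name, step_name in _missing_steps:
            print(f"Unimplemented step: {step_name} ({feature_file} :: {scenario_name})")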
