options for context and fail when unused fixtures are present #4
@mikicz - Thanks for creating this plugin, I've found it pretty useful for keeping things clean when refactoring or when working in large codebases.
In this PR I have attempted to add a couple of extra options to extend it a bit.
--unused-fixtures-fail-when-present
A boolean option that, when set to true, causes pytest to exit with a failure code when unused fixtures are present in the test session. I think this could be useful in CI/CD scenarios where this plugin is used.
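A minimal sketch of how such a flag could be wired up, assuming the plugin stashes its findings on the config under a hypothetical `_unused_fixtures` attribute (the option name mirrors the proposal; the internals are illustrative, not the actual diff):

```python
import pytest


def pytest_addoption(parser):
    # Register the proposed flag (illustrative registration).
    parser.addoption(
        "--unused-fixtures-fail-when-present",
        action="store_true",
        default=False,
        help="Exit with a failure code when unused fixtures are found.",
    )


def pytest_sessionfinish(session, exitstatus):
    # `_unused_fixtures` is an assumed attribute where the plugin would
    # stash the fixtures it found to be unused during the run.
    unused = getattr(session.config, "_unused_fixtures", [])
    if unused and session.config.getoption("--unused-fixtures-fail-when-present"):
        session.exitstatus = pytest.ExitCode.TESTS_FAILED
```

Overriding `session.exitstatus` in `pytest_sessionfinish` is one common way for a plugin to influence the final exit code; the real implementation might prefer a different hook.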
--unused-fixtures-context
Pass one or more directories that act as the context for unused fixtures, i.e. only consider a fixture unused if it is defined in one of the context directories. This makes it easy to limit the scope of the check to just the `tests` directory in the current repo.
`baseid` check
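The directory filtering described above might look roughly like this, assuming a hypothetical helper that decides whether a fixture's defining file falls under one of the given context directories (the function name and path handling are illustrative):

```python
from pathlib import Path


def fixture_in_context(fixture_path: str, context_dirs: list[str]) -> bool:
    """Return True if the fixture's defining file lives under any context dir.

    Illustrative helper: the actual plugin may resolve paths differently,
    e.g. relative to pytest's rootdir rather than the current directory.
    """
    path = Path(fixture_path).resolve()
    for ctx in context_dirs:
        ctx_path = Path(ctx).resolve()
        # A fixture counts as "in context" if it is the context dir itself
        # or any file nested beneath it.
        if path == ctx_path or ctx_path in path.parents:
            return True
    return False
```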
I added the `baseid` check to `pytest_collection_finish` because the tests were failing when I ran them locally, before making any changes. The logs reported unused fixtures in the `_pytest` module (even with my virtual environment named `venv`). Example log output:
I'm not 100% sure this is the right thing to do, as I'm not an expert on pytest internals. There's some info in the docs here.
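For reference, the kind of `baseid` filtering described above might look roughly like this. This is a sketch, not the actual diff: the `fixture_defs`/`collected_nodeids` wiring is assumed, and it leans on the fact that a `FixtureDef.baseid` is normally a prefix of the node IDs it applies to:

```python
def filter_by_baseid(fixture_defs, collected_nodeids):
    """Keep fixture definitions whose baseid prefixes a collected nodeid.

    Illustrative: intended to drop fixtures that belong to nodes outside
    the collected test session (e.g. pytest-internal ones), while keeping
    fixtures whose baseid matches something that was actually collected.
    """
    kept = []
    for fixturedef in fixture_defs:
        baseid = getattr(fixturedef, "baseid", "")
        if any(nodeid.startswith(baseid) for nodeid in collected_nodeids):
            kept.append(fixturedef)
    return kept
```

Note that an empty `baseid` matches everything here, which mirrors how pytest treats session-wide fixtures as visible from any node.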
Tests
I haven't added any tests for the new functionality yet; I wanted to get your feedback on the proposed changes first. Happy to add tests if you are happy with the proposed features.
Let me know your thoughts!