Refactor tests #1011
Conversation
Can you integrate Codecov so we can actually measure code coverage on every PR?
Never used it before but can have a look :) Did you mean
Welcome to Codecov 🎉 Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. ℹ️ You can also turn on project coverage checks and project coverage reporting on the pull request comment. Thanks for integrating Codecov - we've got you covered ☂️
Thanks! Looks like it's all set 👍
Copilot reviewed 27 out of 42 changed files in this pull request and generated no comments.
Files not reviewed (15)
- Makefile: Language not supported
- requirements/minimum.old: Language not supported
- requirements/test.txt: Language not supported
- segmentation_models_pytorch/decoders/unet/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/pspnet/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/linknet/model.py: Evaluated as low risk
- segmentation_models_pytorch/encoders/inceptionv4.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/manet/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/fpn/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/pan/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/deeplabv3/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/upernet/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/segformer/model.py: Evaluated as low risk
- segmentation_models_pytorch/decoders/unetplusplus/model.py: Evaluated as low risk
- segmentation_models_pytorch/__init__.py: Evaluated as low risk
Comments suppressed due to low confidence (1)
segmentation_models_pytorch/base/model.py:12
- The new attribute requires_divisible_input_shape should be covered by tests to ensure its behavior is correctly validated.
requires_divisible_input_shape = True
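A minimal sketch of such a test might look like the following; the choice of Unet, the input sizes, and the exact exception type are assumptions for illustration rather than what the PR actually adds:

```python
import pytest
import torch

import segmentation_models_pytorch as smp


@pytest.mark.parametrize("height,width", [(65, 64), (64, 65)])
def test_non_divisible_input_shape_raises(height, width):
    # Unet downsamples by a factor of 32, so spatial sizes that are not
    # divisible by 32 are expected to be rejected when
    # requires_divisible_input_shape is True (assumed behavior for this sketch).
    model = smp.Unet(encoder_name="resnet18", encoder_weights=None)
    model.eval()
    sample = torch.rand(1, 3, height, width)
    with pytest.raises(RuntimeError):
        model(sample)
```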
Merging it for now. The most important thing, I think, is to have full pretrained-model tests to ensure we do not break anything with updates; that is added as a separate job, "logits_match". Coverage can be improved for losses and metrics in a follow-up PR or in case of new updates in these modules.
Looks like it's reporting code coverage now!
@@ -8,3 +8,4 @@ timm==0.9.0
torch==1.9.0
torchvision==0.10.0
tqdm==4.42.1
Jinja2==3.0.0
Why is Jinja2 required? I don't see any code that uses it
Saving/loading a pretrained model raises an error on huggingface_hub 0.24 and asks to install Jinja2.
Alternatively, we can bump the minimum version of huggingface_hub.
@@ -0,0 +1,208 @@
import unittest
Why use unittest instead of pytest? unittest is more or less obsolete
Yeah, but I like the idea of test inheritance; I don't know if we can use it with pytest. We still use pytest to collect and run the tests, though.
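For context, the inheritance pattern described here could look roughly like the sketch below (the class names and checks are illustrative, not the actual structure of the new test suite); pytest collects unittest.TestCase subclasses natively, so the shared checks run once per architecture:

```python
import unittest

import torch

import segmentation_models_pytorch as smp


class BaseModelTester(unittest.TestCase):
    # Concrete subclasses set this to an architecture; pytest collects each
    # subclass and runs the shared test methods against it.
    model_class = None

    def setUp(self):
        if self.model_class is None:
            self.skipTest("base tester, no model class configured")
        self.model = self.model_class(encoder_name="resnet18", encoder_weights=None)

    def test_forward_output_shape(self):
        sample = torch.rand(1, 3, 64, 64)
        self.model.eval()
        with torch.inference_mode():
            output = self.model(sample)
        # Default heads output a single channel at the input resolution.
        self.assertEqual(tuple(output.shape), (1, 1, 64, 64))


class TestUnet(BaseModelTester):
    model_class = smp.Unet


class TestFPN(BaseModelTester):
    model_class = smp.FPN
```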
import torch
import unittest

from packaging.version import Version
Seems like we have a new test dependency on packaging; this should be added to pyproject.toml and requirements/tests.txt.
Hm... I thought it was a default Python package, but it was probably installed with another dependency.
It is not default; it is installed using pip install packaging. It may or may not be present, so we should add an explicit dependency on it.
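As an illustration of why the explicit dependency matters, packaging is typically used in tests to gate on installed library versions, roughly like this (the torch 2.0 threshold and marker name are made-up examples, not what the PR checks):

```python
import pytest
import torch
from packaging.version import Version

# Strip any local build suffix (e.g. "2.1.0+cu118") before comparing.
TORCH_VERSION = Version(torch.__version__.split("+")[0])

# Hypothetical marker: skip tests that require a newer torch release.
requires_torch_2 = pytest.mark.skipif(
    TORCH_VERSION < Version("2.0.0"),
    reason="requires torch>=2.0",
)


@requires_torch_2
def test_something_torch2_specific():
    assert hasattr(torch, "compile")
```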
I would personally use pytest fixtures for this in conftest.py so they are available to all test code without imports.
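A minimal sketch of that suggestion, assuming a conftest.py at the tests root (the fixture names here are hypothetical):

```python
# conftest.py
import pytest
import torch


@pytest.fixture(scope="session")
def device():
    # One device object shared across the whole test session.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")


@pytest.fixture
def sample_batch():
    # Small RGB batch with spatial size divisible by the usual 32x stride.
    return torch.rand(2, 3, 64, 64)
```

Test functions can then accept device or sample_batch as arguments without importing anything from conftest.py.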
Thanks for the review!
Feel free to refactor if you know a better/cleaner way to organize tests and have the bandwidth! I'm going to be mostly off for the holidays until the 3rd of Jan, but this week I can review/merge. Then I think it would be great to make a new release, what do you think?
Yes, I would be very excited about a new release so we can start using newer timm!
The idea is to: