Overhaul 2.1 Remove Dependencies / Add Full Timm Support #3
Conversation
Looking forward to testing this update!
Yes, this PR allows the use of convnext as the encoder!
@notprime Can you give this a review and maybe share some thoughts?
Looking forward to this update...
Everything seems fine to me. Maybe we can specify `act_layer` and `norm_layer` directly in `params` instead of adding them through `params.update` via kwargs, just to make it clearer.
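For example (a minimal sketch with illustrative values, not the PR's actual code):

```python
import torch.nn as nn

act_layer, norm_layer = nn.ReLU, nn.BatchNorm2d

# Current approach: the extra layers are folded in afterwards via kwargs.
params = dict(in_chans=3, features_only=True)
params.update(act_layer=act_layer, norm_layer=norm_layer)

# Suggested alternative: name them directly when params is built, so
# everything handed to timm.create_model is visible in one place.
params = dict(
    in_chans=3,
    features_only=True,
    act_layer=act_layer,
    norm_layer=norm_layer,
)
```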
@JulienMaille @ogencoglu @notprime These features are now merged and ready to install.
This PR seeks to do a few things:

- Remove the `pretrainedmodels` and `efficientnet-pytorch` dependencies, as they are no longer maintained.
- Remove the `mock` and `torchvision` dependencies (torchvision isn't used anywhere anyway).
- Add full timm encoder support (`torchseg.encoders.TimmEncoder`).
- Provide a list of supported encoders (`torchseg.encoders.supported`). This includes ConvNext and Swin pretrained backbones; see the sketch after this list.
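As a rough illustration of the new timm-backed encoders, here is a minimal sketch assuming the model constructors keep a segmentation_models.pytorch-style signature; the exact argument names may differ in the released version:

```python
import torchseg

# torchseg.encoders.supported enumerates the usable timm encoders,
# including the newly available convnext and swin families.
print(torchseg.encoders.supported)

# Build a U-Net with a ConvNext backbone pulled from timm.
# "convnext_tiny" is a real timm model name; the keyword names here
# are assumptions based on the PR description.
model = torchseg.Unet(
    "convnext_tiny",
    in_channels=3,
    classes=2,
)
```

There's some other misc cleanup that is done: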
- Remove the `Activation` class. We instead let the user choose the head activation when creating a model. Still defaults to `nn.Identity()`.
- Pass `encoder_params` through to `timm.create_model(**kwargs)` in case users want to further customize the backbone (see the sketch below).
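Taken together, a hedged sketch of what those two changes might look like in practice; the `activation` and `encoder_params` keyword names are assumptions drawn from the description above:

```python
import torch.nn as nn
import torchseg

# The head activation is now picked by the user at model-creation time
# (nn.Identity() when omitted), and encoder_params is forwarded to
# timm.create_model(**kwargs) so the backbone can be customized.
model = torchseg.Unet(
    "convnext_tiny",
    in_channels=3,
    classes=1,
    activation=nn.Sigmoid(),                 # assumed keyword; replaces the old Activation class
    encoder_params={"drop_path_rate": 0.2},  # forwarded to timm.create_model
)
```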