I want to evaluate a model, and I first configured it to use the API. The model's configuration file is as follows:
There are many datasets to evaluate, and during the inference phase results have been generated for some of them, but not all have completed.
When running run.py with the --reuse parameter, is it possible to resume from the last checkpoint and continue evaluating the datasets whose inference hasn't finished yet?
It will continue if you keep the `abbr` attribute in the model config the same as the original one.
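
For illustration, here is a minimal sketch of what such an API model config might look like in OpenCompass; the abbreviation, model name, and other field values are placeholder assumptions, not taken from the original question:

```python
from opencompass.models import OpenAI

models = [
    dict(
        # --reuse matches previously generated predictions by this
        # abbreviation, so it must stay identical across runs.
        abbr='gpt-4-demo',   # hypothetical abbreviation
        type=OpenAI,
        path='gpt-4',        # hypothetical model identifier
        key='ENV',           # read the API key from the environment
        max_out_len=2048,
        batch_size=8,
    ),
]
```

With the `abbr` unchanged, rerunning with reuse enabled, e.g. `python run.py <your_config.py> -r` to reuse the latest timestamped output directory (or `-r <timestamp>` for a specific earlier run), should skip the datasets whose predictions already exist and only run inference for the unfinished ones.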