Hi, I'm interested in your project.
By the way, I have a question about `master_params` and `model_params`.
I thought `master_params` are 32-bit and `model_params` are 16-bit, since you left a comment in train.py like this:

```python
def master_params_to_model_params(self, model_params, master_params):
    """
    Move FP32 master params to FP16 model params.
    """
    for model, master in zip(model_params, master_params):
        model.data.copy_(master.data)
```

However, in this code:

```python
if not hasattr(self, 'optimizer'):
    if self.fp16_mode:
        self.optimizer = optim.SGD(
            self.master_params, lr, momentum=0.9, weight_decay=5e-4)
    else:
        self.optimizer = optim.SGD(
            self.model.parameters(), lr, momentum=0.9, weight_decay=5e-4)
```

you use `master_params` in fp16 mode.
Which params are for fp16: master or model?
Thanks for your kind reply.
I think the project is right. Only in fp16 mode do we have `master_params`; otherwise we only have `model_params`, which are in FP32 precision.
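To make the two roles concrete, here is a minimal sketch of the usual FP32-master-params pattern. It is not the repo's exact code (the model, data, and variable names are assumptions, and it assumes a CUDA device): the optimizer steps on the FP32 `master_params`, and a copy like `master_params_to_model_params` then writes the updated FP32 values back into the FP16 model weights.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# FP16 model params (hypothetical toy model, assumes a CUDA device)
model = nn.Linear(10, 10).cuda().half()

# FP32 "master" copies of the FP16 model params; the optimizer owns these
master_params = [p.detach().clone().float() for p in model.parameters()]
for p in master_params:
    p.requires_grad = True

optimizer = optim.SGD(master_params, lr=0.1, momentum=0.9, weight_decay=5e-4)

# forward/backward in FP16: gradients land on the FP16 model params
x = torch.randn(4, 10, device='cuda', dtype=torch.float16)
loss = model(x).sum()
loss.backward()

# copy the FP16 grads into the FP32 master params before stepping
for master, p in zip(master_params, model.parameters()):
    master.grad = p.grad.detach().float()

optimizer.step()  # the weight update happens in FP32

# copy the updated FP32 master params back into the FP16 model params
with torch.no_grad():
    for p, master in zip(model.parameters(), master_params):
        p.data.copy_(master.data)
```

So in fp16 mode both sets exist: `model_params` stay FP16 for the forward/backward pass, while `master_params` hold the FP32 values the optimizer actually updates.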