Inference time remains unchanged #391

Closed
yash88600 opened this issue May 15, 2020 · 1 comment
Labels: technique:pruning (Regarding tfmot.sparsity.keras APIs and docs)
Assignees: alanchiao

Comments

@yash88600 commented May 15, 2020

The compression technique used for pruning converts the model into a sparse one and saves it efficiently (perhaps using the CSR format or similar), but this doesn't help reduce inference time, since the number of parameters remains the same. Please correct me if I am wrong.
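
A minimal sketch of the point above, assuming TensorFlow 2.x with the tensorflow-model-optimization package installed (the model and layer sizes here are arbitrary placeholders, not from the thread): `strip_pruning()` removes the training-time wrappers, but the kernels remain ordinary dense tensors, so the parameter count, and hence the per-inference work on a stock runtime, is unchanged.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy model; the architecture and sizes are arbitrary placeholders.
base = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Wrap the layers so that, during fine-tuning (with the UpdatePruningStep
# callback), 50% of each kernel's entries are masked to zero by magnitude.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    base,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0),
)

# strip_pruning removes the training-time wrappers, but the zeros are still
# stored explicitly in dense tensors: same parameter count, same
# multiply-adds on a stock runtime, hence unchanged latency. Only generic
# file compression (e.g. gzip) benefits from the zeroed weights.
stripped = tfmot.sparsity.keras.strip_pruning(pruned)
print(base.count_params(), stripped.count_params())  # identical counts
```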

@alanchiao commented May 15, 2020

See #173 for efforts to improve latency.

Currently, latency is not improved, as you said.
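
For context, a hedged sketch of what exploiting sparsity at runtime can look like (this reflects later tooling and is an assumption, not something stated in the thread): TensorFlow Lite exposes an experimental converter flag that preserves sparsity in the converted model so a sparse-aware backend (e.g. the XNNPACK delegate) can skip zeroed weights. Reusing `stripped` from the sketch above:

```python
import tensorflow as tf

# Experimental: keep the pruned zeros representable in the .tflite file so a
# sparse-aware backend can exploit them; a stock interpreter still runs dense.
converter = tf.lite.TFLiteConverter.from_keras_model(stripped)
converter.optimizations = [tf.lite.Optimize.EXPERIMENTAL_SPARSITY]
tflite_model = converter.convert()

# Hypothetical output path, for illustration only.
with open("pruned_model.tflite", "wb") as f:
    f.write(tflite_model)
```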

alanchiao added the technique:pruning (Regarding tfmot.sparsity.keras APIs and docs) label May 15, 2020
alanchiao self-assigned this May 15, 2020