How can the ai-edge-torch converter produce operators supported on Edge TPU devices? #450
Labels
status:awaiting ai-edge-developer
type:feature
For feature requests
type:performance
An issue with performance, primarily inference latency
Description of the bug:
Input model: ZenNet-SA-M/rafdb_quant-m-sa_int8_pte.tflite
Input size: 1.57MiB
Output model: rafdb_quant-m-sa_int8_pte_edgetpu.tflite
Output size: 1.60MiB
On-chip memory used for caching model parameters: 0.00B
On-chip memory remaining for caching model parameters: 8.09MiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 243
Operation log: rafdb_quant-m-sa_int8_pte_edgetpu.log
Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 1
Number of operations that will run on CPU: 242
See the operation log file for individual operation details.
Compilation child process completed within timeout period.
Compilation succeeded!
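To quantify the fallback, the two placement lines in the summary above can be parsed directly. This is a minimal stdlib-only sketch (the `SUMMARY` string is copied from the compiler output above; `op_placement` is a hypothetical helper, not part of any Coral or ai-edge-torch API):

```python
import re

# Placement lines as printed by the Edge TPU compiler for this model
# (copied verbatim from the summary above).
SUMMARY = """\
Number of operations that will run on Edge TPU: 1
Number of operations that will run on CPU: 242
"""

def op_placement(summary: str) -> dict:
    """Extract per-device operation counts from a compiler summary."""
    counts = {}
    for device, n in re.findall(
        r"Number of operations that will run on (Edge TPU|CPU): (\d+)", summary
    ):
        counts[device] = int(n)
    return counts

counts = op_placement(SUMMARY)
total = sum(counts.values())
print(counts)                                        # {'Edge TPU': 1, 'CPU': 242}
print(f"CPU fallback: {counts['CPU'] / total:.1%}")  # CPU fallback: 99.6%
```

So 242 of 243 operations (99.6%) run on the CPU, which matches the warning that only a fraction of the model will use the Edge TPU; the per-operation reasons are in the log file named above.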
Actual vs expected behavior:
Expected: the converted model's operators should be supported on the Edge TPU, so inference runs on the TPU rather than the CPU. Actual: the compiler maps only 1 of 243 operations to the Edge TPU; the remaining 242 fall back to the CPU.
Any other information you'd like to share?
No response