Hello,
I have followed the steps in this tutorial (https://github.com/Xilinx/Vitis-AI/blob/master/alveo/notebooks/image_classification_tensorflow.ipynb) and it works. I then repeated the tutorial with my own model; I am able to quantize and compile the model, but during inference I get the following error:
[XDNN] loading xclbin settings from /opt/xilinx/overlaybins/xdnnv3/xdnn_v3_96x16_2pe_8b_9mb_bank03_2.xclbin.json
[XDNN] using custom DDR banks 0,3
Path ./freeze_modelV2_partition_01-data.h5 is a file.
Loading weights/bias/quant_params to FPGA...
python: xdnn.cpp:1357: int XDNNV3FillWeightsBiasQuantBlob(short int*, int, std::__cxx11::string, std::__cxx11::string, const DType*, unsigned int, const DType*, unsigned int, short unsigned int, short unsigned int, unsigned int, unsigned int, int, int, int, int, bool, std::__cxx11::string, int, std::vector&) [with DType = float; std::__cxx11::string = std::__cxx11::basic_string]: Assertion `origBiasSize == outChans' failed.
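In case it is relevant, here is a minimal sketch (not my actual script) for printing the bias length and output-channel count of every BiasAdd in the frozen graph, to see where `origBiasSize == outChans` could fail. It assumes a TensorFlow 1.x frozen graph like the tutorial uses; the graph path is a placeholder:

```python
# Minimal sketch (assumed TF 1.x frozen graph, placeholder path): print the
# bias length and output-channel count for every BiasAdd so a mismatch that
# would trip "origBiasSize == outChans" can be spotted before compiling.
import tensorflow as tf
from tensorflow.python.framework import tensor_util

GRAPH_PB = "freeze_modelV2.pb"  # placeholder: path to the frozen graph

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(GRAPH_PB, "rb") as f:
    graph_def.ParseFromString(f.read())

# Constant nodes, so each BiasAdd's bias input can be resolved to its tensor.
consts = {n.name: n for n in graph_def.node if n.op == "Const"}

with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")
    for node in graph_def.node:
        if node.op != "BiasAdd":
            continue
        producer, bias_input = node.input[0], node.input[1]
        # Output channels = last dim of the tensor the bias is added to.
        tensor_name = producer if ":" in producer else producer + ":0"
        out_chans = graph.get_tensor_by_name(tensor_name).shape[-1]
        # Bias consts are usually read through an Identity node named ".../read".
        bias_node = consts.get(bias_input.split("/read")[0])
        bias_len = (tensor_util.MakeNdarray(bias_node.attr["value"].tensor).size
                    if bias_node is not None else "unresolved")
        print("%s: bias=%s, out_channels=%s" % (node.name, bias_len, out_chans))
```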
Any suggestions? Here is my inference code (https://pastebin.com/BP4CbUaz). Thank you.