This repository has been archived by the owner on Jan 7, 2021. It is now read-only.

Failure 'origBiasSize == outChans' loading weights to FPGA #129

rubende opened this issue Nov 3, 2020 · 0 comments

rubende commented Nov 3, 2020

Hello,

I followed the steps in this tutorial (https://github.com/Xilinx/Vitis-AI/blob/master/alveo/notebooks/image_classification_tensorflow.ipynb) and it works. Afterwards, I repeated the tutorial with my own model: I am able to quantize and compile it, but during inference I get the following error:

[XDNN] loading xclbin settings from /opt/xilinx/overlaybins/xdnnv3/xdnn_v3_96x16_2pe_8b_9mb_bank03_2.xclbin.json
[XDNN] using custom DDR banks 0,3
Path ./freeze_modelV2_partition_01-data.h5 is a file.
Loading weights/bias/quant_params to FPGA...
python: xdnn.cpp:1357: int XDNNV3FillWeightsBiasQuantBlob(short int*, int, std::__cxx11::string, std::__cxx11::string, const DType*, unsigned int, const DType*, unsigned int, short unsigned int, short unsigned int, unsigned int, unsigned int, int, int, int, int, bool, std::__cxx11::string, int, std::vector&) [with DType = float; std::__cxx11::string = std::__cxx11::basic_string]: Assertion `origBiasSize == outChans' failed.

Any suggestions? Here is my inference code (https://pastebin.com/BP4CbUaz). Thank you.
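For what it's worth, the assertion says that for some layer the bias vector length (`origBiasSize`) does not equal the layer's output-channel count (`outChans`). One way to narrow it down before involving the FPGA runtime is to walk the model's conv layers and compare each bias length with the kernel's output-channel dimension. Below is a minimal, self-contained sketch; the helper name, the layer dictionary, and the HWIO-layout assumption are all hypothetical for illustration, not part of the Vitis-AI API:

```python
def find_bias_mismatches(layers):
    """Return names of layers whose bias length differs from the kernel's
    output-channel count. Assumes TF-style HWIO conv kernels, where the
    last kernel dimension is the number of output channels."""
    mismatches = []
    for name, (kernel_shape, bias_shape) in layers.items():
        out_chans = kernel_shape[-1]  # HWIO: last dim = output channels
        if bias_shape != (out_chans,):
            mismatches.append(name)
    return mismatches

# Toy example: conv2's bias has the wrong length.
layers = {
    "conv1": ((3, 3, 3, 64), (64,)),
    "conv2": ((3, 3, 64, 128), (64,)),  # bias should be (128,)
}
print(find_bias_mismatches(layers))  # prints ['conv2']
```

If a check like this flags a layer in the quantized/compiled artifacts (e.g. the `-data.h5` file), the mismatch was likely introduced at quantization or compile time rather than in the original frozen graph.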
