
PyTorch YoloV5 Model is not getting compiled with Neo. #32

Open
amitmukh opened this issue Dec 20, 2021 · 2 comments

@amitmukh

I have a customer (IHAC) with a use case where they want to build an object detection model using PyTorch YoloV5. They were able to build the model but were not able to compile it through Neo. Here is the error we are getting:

"ClientError: InputConfiguration: TVM cannot convert the PyTorch model. Invalid model or input-shape mismatch. Make sure that inputs are lexically ordered and of the correct dimensionality. Traceback (most recent call last):\n 2: TVMFuncCall\n 1: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Tuple (tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)>::AssignTypedLambda<tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}>(tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n 0: tvm::runtime::TVMMovableArgValueWithContext::operator tvm::runtime::Array<tvm::RelayExpr, void><tvm::runtime::Array<tvm::RelayExpr, void> >() const\n 3: TVMFuncCall\n 2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Tuple (tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)>::AssignTypedLambda<tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}>(tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::M_invoke(std::Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n 1: tvm::runtime::TVMMovableArgValueWithContext::operator tvm::runtime::Array<tvm::RelayExpr, void><tvm::runtime::Array<tvm::RelayExpr, void> >() const\n 0: tvm::runtime::Array<tvm::RelayExpr, void> tvm::runtime::TVMPODValue::AsObjectRef<tvm::runtime::Array<tvm::RelayExpr, void> >() const\n File "/tvm/include/tvm/runtime/packed_func.h", line 714\nTVMError: In function relay.ir.Tuple: error while converting argument 0: [09:36:18] /tvm/include/tvm/runtime/packed_func.h:1591: \n---------------------------------------------------------------\nAn error occurred during the execution of TVM.\nFor more information, please see: https://tvm.apache.org/docs/errors.html\n---------------------------------------------------------------\n Check failed: (!checked_type.defined()) is false: Expected Array[RelayExpr], but got Array[index 1: Array]\n”

@jk1333

jk1333 commented Mar 31, 2022

I'm not sure whether this guide also applies to YoloV5, but I was able to run ByteTrack/YOLOX by following this notebook: https://github.com/mrtj/yolox-panorama-tutorial/blob/main/yolox-torchscript.ipynb

@jk1333

jk1333 commented Apr 28, 2022

Here is my approach for the Neo compile, tested through the simulator:

  1. Use compatible versions of torch and torchvision:
    pip install torch==1.8.1 torchvision==0.9.1

  2. Use the latest yolov5 release to export, but bring the pretrained model from yolov5 v4.0 (higher versions are not supported).

  3. Modify export.py of the latest yolov5 version. Add the following imports:

    import models
    import torch.nn as nn
    from utils.activations import Hardswish, SiLU

    In 'def export_torchscript', add the following code before 'ts = torch.jit.trace(model, im, strict=False)' to update the model:

     for k, m in model.named_modules():
         m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
         if isinstance(m, models.common.Conv):  # assign export-friendly activations
             if isinstance(m.act, nn.Hardswish):
                 m.act = Hardswish()
             elif isinstance(m.act, nn.SiLU):
                 m.act = SiLU()
         # elif isinstance(m, models.yolo.Detect):
         #     m.forward = m.forward_export  # assign forward (optional)
     model.model[-1].export = True  # set Detect() layer export=True
    
  4. Run 'python3 yolov5/export.py --weights ./yolov5m.pt' to create the TorchScript file, then change its extension to .pth.
    (yolov5m.pt is the yolov5 v4.0 pretrained model asset.)

  5. Create a tar.gz and run the Neo compile (see the packaging sketch after this list).

  6. Call the yolov5 model and post-process like this (non_max_suppression is in utils/general.py); a fuller sketch follows after this list:
    inference_results = self.call({"input0":image_data}, self.MODEL_NODE)[0]
    inference_results = non_max_suppression(torch.tensor(inference_results))[0]
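
For step 5, here is a minimal sketch of one way to package the exported file and start a Neo compilation job with boto3. The bucket name, role ARN, job name, and target device are placeholders, and the input name and shape in DataInputConfig are my assumptions that must match how the model was traced; depending on your workflow the model may instead be compiled as part of Panorama application packaging, so treat this as illustrative only.

    # Hypothetical sketch for step 5: package yolov5m.pth into model.tar.gz,
    # upload it to S3, and start a Neo compilation job.
    # BUCKET, ROLE_ARN, and the target device are placeholders for your setup.
    import tarfile

    import boto3

    BUCKET = "my-model-bucket"                                      # placeholder
    ROLE_ARN = "arn:aws:iam::123456789012:role/NeoCompilationRole"  # placeholder

    # Neo expects the model artifact as a tar.gz archive.
    with tarfile.open("model.tar.gz", "w:gz") as tar:
        tar.add("yolov5m.pth")

    boto3.client("s3").upload_file("model.tar.gz", BUCKET, "yolov5/model.tar.gz")

    # DataInputConfig must match the traced input name and shape
    # ("input0", 1x3x640x640 here, matching the call in step 6).
    boto3.client("sagemaker").create_compilation_job(
        CompilationJobName="yolov5m-neo",
        RoleArn=ROLE_ARN,
        InputConfig={
            "S3Uri": f"s3://{BUCKET}/yolov5/model.tar.gz",
            "DataInputConfig": '{"input0": [1, 3, 640, 640]}',
            "Framework": "PYTORCH",
        },
        OutputConfig={
            "S3OutputLocation": f"s3://{BUCKET}/yolov5/compiled/",
            "TargetDevice": "jetson_xavier",  # placeholder; pick your appliance's target
        },
        StoppingCondition={"MaxRuntimeInSeconds": 900},
    )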
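
For step 6, here is a rough sketch of how the call and post-processing could sit inside a Panorama application. The preprocess() helper, the plain resize (yolov5 normally letterboxes), the 640x640 input size, and the model node name are all my assumptions rather than exact code from the app.

    # Rough sketch for step 6. `node` is the panoramasdk application object
    # (the class that exposes self.call in the snippet above).
    import cv2
    import numpy as np
    import torch

    from utils.general import non_max_suppression  # from the yolov5 repo

    MODEL_NODE = "yolov5_model"  # placeholder for the model node name


    def preprocess(frame, size=640):
        """Resize a BGR frame to the model input and convert to 1x3xHxW float32."""
        img = cv2.resize(frame, (size, size))      # simple resize, no letterbox
        img = img[:, :, ::-1].transpose(2, 0, 1)   # BGR HWC -> RGB CHW
        img = np.ascontiguousarray(img, dtype=np.float32) / 255.0
        return np.expand_dims(img, axis=0)         # add batch dimension


    def detect(node, frame):
        image_data = preprocess(frame)
        # "input0" must match the input name the model was compiled with.
        inference_results = node.call({"input0": image_data}, MODEL_NODE)[0]
        # Convert the raw output to a tensor and run yolov5's NMS.
        detections = non_max_suppression(torch.tensor(inference_results))[0]
        # Each row is [x1, y1, x2, y2, confidence, class] in model-input coordinates.
        return detections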
