IHAC who wants to build an object detection model using PyTorch YOLOv5. They were able to build the model but could not compile it through Neo. Here is the error we are getting:
"ClientError: InputConfiguration: TVM cannot convert the PyTorch model. Invalid model or input-shape mismatch. Make sure that inputs are lexically ordered and of the correct dimensionality. Traceback (most recent call last):\n 2: TVMFuncCall\n 1: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Tuple (tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)>::AssignTypedLambda<tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}>(tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n 0: tvm::runtime::TVMMovableArgValueWithContext::operator tvm::runtime::Array<tvm::RelayExpr, void><tvm::runtime::Array<tvm::RelayExpr, void> >() const\n 3: TVMFuncCall\n 2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Tuple (tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)>::AssignTypedLambda<tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}>(tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::M_invoke(std::Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n 1: tvm::runtime::TVMMovableArgValueWithContext::operator tvm::runtime::Array<tvm::RelayExpr, void><tvm::runtime::Array<tvm::RelayExpr, void> >() const\n 0: tvm::runtime::Array<tvm::RelayExpr, void> tvm::runtime::TVMPODValue::AsObjectRef<tvm::runtime::Array<tvm::RelayExpr, void> >() const\n File "/tvm/include/tvm/runtime/packed_func.h", line 714\nTVMError: In function relay.ir.Tuple: error while 
converting argument 0: [09:36:18] /tvm/include/tvm/runtime/packed_func.h:1591: \n---------------------------------------------------------------\nAn error occurred during the execution of TVM.\nFor more information, please see: https://tvm.apache.org/docs/errors.html\n---------------------------------------------------------------\n Check failed: (!checked_type.defined()) is false: Expected Array[RelayExpr], but got Array[index 1: Array]\n”
Here is my approach for Neo compilation, tested through the simulator.

Use compatible versions of torch and torchvision:

```shell
pip install torch==1.8.1 torchvision==0.9.1
```

Use the latest yolov5 release to do the export, but bring the pretrained model from yolov5 v4.0 (newer model versions are not supported).
Modify `export.py` of the latest yolov5 version. Add these imports:

```python
import models
import torch.nn as nn
from utils.activations import Hardswish, SiLU
```

In `def export_torchscript`, add the following code before `ts = torch.jit.trace(model, im, strict=False)`:

```python
# Update model
for k, m in model.named_modules():
    m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
    if isinstance(m, models.common.Conv):  # assign export-friendly activations
        if isinstance(m.act, nn.Hardswish):
            m.act = Hardswish()
        elif isinstance(m.act, nn.SiLU):
            m.act = SiLU()
    # elif isinstance(m, models.yolo.Detect):
    #     m.forward = m.forward_export  # assign forward (optional)

model.model[-1].export = True  # set Detect() layer export=True
```
Run `python3 yolov5/export.py --weights ./yolov5m.pt` to create the TorchScript file, then change its extension to `.pth` (`yolov5m.pt` is the yolov5 v4.0 pretrained model asset).

Create a `tar.gz` archive and run the Neo compilation.
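A minimal sketch of the packaging and compilation-job setup. The archive helper is runnable; the compilation parameters are shown as a plain dict so you can see the shape of the request (boto3's SageMaker client accepts them via `create_compilation_job`). The job name, role ARN, S3 URIs, and target device below are placeholders, not values from this thread:

```python
import os
import tarfile

def package_model(model_path: str, archive_path: str = "model.tar.gz") -> str:
    """Package the exported TorchScript model (renamed to .pth) for Neo."""
    with tarfile.open(archive_path, "w:gz") as tar:
        # Neo expects the model file at the root of the archive
        tar.add(model_path, arcname=os.path.basename(model_path))
    return archive_path

# Sketch of the compilation-job request; every ARN/URI here is a placeholder.
compilation_params = {
    "CompilationJobName": "yolov5m-neo",                    # example name
    "RoleArn": "arn:aws:iam::123456789012:role/NeoRole",    # placeholder
    "InputConfig": {
        "S3Uri": "s3://my-bucket/model.tar.gz",             # placeholder
        "DataInputConfig": '{"input0": [1, 3, 640, 640]}',  # lexically ordered input name
        "Framework": "PYTORCH",
    },
    "OutputConfig": {
        "S3OutputLocation": "s3://my-bucket/compiled/",     # placeholder
        "TargetDevice": "ml_c5",                            # example target
    },
}
```

The `input0` name in `DataInputConfig` matches the input key used when invoking the model below.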
Call the yolov5 model and post-process like this (`non_max_suppression` is in `utils/general.py`):

```python
inference_results = self.call({"input0": image_data}, self.MODEL_NODE)[0]
inference_results = non_max_suppression(torch.tensor(inference_results))[0]
```
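For completeness, a minimal sketch of preparing `image_data` in the layout the compiled model expects (`input0` with shape `[1, 3, 640, 640]`, NCHW, float32 in `[0, 1]`). The 640 size is the YOLOv5 default; the `preprocess` helper is an assumption for illustration, and a real pipeline would also letterbox/resize the frame:

```python
import numpy as np

def preprocess(image_hwc: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 RGB frame to a normalized NCHW float32 batch."""
    x = image_hwc.astype(np.float32) / 255.0  # scale pixels to [0, 1]
    x = x.transpose(2, 0, 1)                  # HWC -> CHW
    return x[np.newaxis, ...]                 # add batch dim -> NCHW

# Example with a dummy 640x640 RGB frame
frame = np.zeros((640, 640, 3), dtype=np.uint8)
image_data = preprocess(frame)
print(image_data.shape)  # (1, 3, 640, 640)
```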