Converting keras to .pb #2
Comments
If you are running a Jetson Nano I can provide you the .pb and engine files.
@shubham-shahh If possible, please share a Google Drive link to the .pb file. Thanks.
Follow https://github.com/nwesem/mtcnn_facenet_cpp_tensorRT/tree/develop, it is updated. Do let me know if you face any issues.
@shubham-shahh I'm getting this error after entering the command: AssertionError: Bottleneck_BatchNorm/batchnorm_1/add_1 is not in graph. Any solution?
Kindly open an issue here. This error is most probably caused by incompatible versions of the dependencies; kindly post the output of pip3 list.
I had this problem too, but I had to downgrade to TensorFlow 1.15 and Keras 2.3.1, and that solved the issue.
Thanks, yes it was a version error, but I still wasn't able to produce embeddings and the default app was crashing with a core dump. It took me the whole day and the solution was quite simple: vidconvsinkpad = sgie.get_static_pad("src")
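For context, a minimal sketch of that fix in a DeepStream Python pipeline (the `sgie` element and the `embedding_probe` callback name are assumptions based on the usual deepstream_python_apps pattern, not code from this repo):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def embedding_probe(pad, info, user_data):
    # Read the batch metadata attached by the sgie here (e.g. via
    # pyds.gst_buffer_get_nvds_batch_meta) to pull out the face embeddings.
    return Gst.PadProbeReturn.OK

# sgie is assumed to be the secondary nvinfer element created earlier in the app.
# Attach the probe to the sgie's *src* pad rather than the downstream vidconv
# sink pad, so the sgie output metadata is already present when the probe fires.
vidconvsinkpad = sgie.get_static_pad("src")
if vidconvsinkpad:
    vidconvsinkpad.add_probe(Gst.PadProbeType.BUFFER, embedding_probe, 0)
```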
It should work in the stock form if you're using the fork I pointed to, but I'm glad you made it work.
Well, I'm still struggling to make it work with the tracker. I saw that the repo you mentioned also commented out the tracker part and ran without it. Do you have any suggestions for running it with the tracker? I did this and it is working, but I'm not sure if it is the right approach.
There's a reason to remove the tracker: if you include the tracker between pgie and sgie, it won't pass all the frames from pgie to sgie, hence you won't get embeddings for all the frames.
There's no point linking the sgie to the tracker, IMO.
I get your point, but what if I don't want to run it on every frame? I want the pgie output to be tracked and the sgie to give me embeddings.
If you link the pgie to the tracker, you'll be able to pass frames to the sgie when the tracker thinks it's a new object; if you want to do that, just uncomment the tracker code and include the tracker config file. Another approach would be to increase the frame interval for object detection: you can set it to detect every 2nd frame, 10th frame, etc. One more problem you might have is with the ROI of the tracker; set it through experimentation, or else you won't detect anything.
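For illustration, a rough sketch of those two options in the Python pipeline setup (the `pgie`, `tracker`, and `sgie` elements are assumed to already exist as the nvinfer/nvtracker elements created earlier in the app, and the tracker config path is hypothetical):

```python
# Option 1: skip inference on some frames by setting the nvinfer
# "interval" property (0 = infer on every frame, 1 = every 2nd frame, ...).
pgie.set_property("interval", 1)

# Option 2: place the tracker between pgie and sgie, so the secondary
# inference runs on tracked objects instead of every raw detection.
tracker.set_property("ll-config-file", "tracker_config.yml")  # hypothetical config file
pgie.link(tracker)
tracker.link(sgie)
```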
Yeah, but whenever I use pgie->tracker it gives me this:
Did you use the appropriate tracker config? Is your GStreamer pipeline based on deepstream test app 2?
My pipeline is based on deepstream test app 3 and RTSP out. I am using multistreaming with multi-output support.
Okay, this error is probably because of the GStreamer pipeline.
Well, it is difficult to debug. My pipeline is:
Read the examples to understand the compatibility of each element in the pipeline.
Hello @shubham-shahh, I am also trying to work with multi-RTSP, but I am getting a segmentation fault.
Please post your current pipeline along with the implementation you are using.
Thanks, I'll check it.
Hello @shubham-shahh, how is it going?
I don't have a DeepStream device right now to test with. In the meantime, you can look at the different example docs to find out the compatibility of the different elements in your pipeline.
With classifier:
Without classifier:
Without the face classifier everything is OK, but if I add the face classifier I get the error below.
I really don't know why.
Please check this fork for the sgie implementation.
Plus, I went through your code; there's no point putting a tracker before a pgie, what is it supposed to track?
I linked pgie to the tracker. It works.
Haha, I'm glad. Placing the tracker before the pgie has no use.
Thank you @shubham-shahh
I am sorry, but I am not a contributor to this repo, so I have a shallow understanding of it. In my implementation I have removed the tracker, as I want to interpret all the frames containing faces.
Thank you very much @shubham-shahh
No worries.
Hi, I was wondering if there is a way we can dynamically add a new RTSP stream to the DeepStream pipeline, or remove an old one if the input stream stops? Is there a Python implementation? I found something similar in the C implementation: https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/runtime_source_add_delete/deepstream_test_rt_src_add_del.c
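For reference, a rough Python sketch of the idea in that C example: create a new source bin at runtime and request-link it into the streammux (the element names, the sink_%u pad naming, and the helper function are assumptions, not code from this repo):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def on_pad_added(decodebin, pad, sinkpad):
    # Link the dynamically created decoder pad to the muxer sink pad.
    if not pad.is_linked():
        pad.link(sinkpad)

def add_rtsp_source(pipeline, streammux, uri, source_id):
    # New decode bin for the stream plus a matching request pad on
    # nvstreammux (request pads are named sink_0, sink_1, ...).
    source_bin = Gst.ElementFactory.make("uridecodebin", f"source-{source_id}")
    source_bin.set_property("uri", uri)
    sinkpad = streammux.get_request_pad(f"sink_{source_id}")
    source_bin.connect("pad-added", on_pad_added, sinkpad)
    pipeline.add(source_bin)
    # Bring the new element up to the pipeline's current state.
    source_bin.sync_state_with_parent()
    return source_bin
```

Removing a stream would be the reverse: set the source bin to NULL, release the request pad on the streammux, and remove the bin from the pipeline.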
@ThiagoMateo did you manage to implement face recognition with deepstream test app 3?
How do I get the .pb file from the .h5 model? 1. How do I run keras_to_pb.py? 2. What are the dependencies to run this?
Hi, you can follow this repo for detailed instructions.
@shubham-shahh
Can it support TensorRT, DeepStream SDK 5.0 and CUDA 10.2?
Yeah, it has DS 5.1, TensorRT 7.2.3, CUDA 11.2.
Ideally it should work, but I cannot say for sure, as I haven't tested it on DS 5.1.
Hi, the person might have renamed the files; please change the commands accordingly, and please open an issue on the repo you're referring to.
Changed the command.
I've mentioned all the dependencies on this branch.
Hi, please mention the versions of all the dependencies.
Try downgrading h5py and numpy.
@shubham-shahh
Awesome, keep us posted.
Hi, I haven't tried using a file sink, so I am not the right person to comment, but you'll definitely find something if you search through the DeepStream forum.
Try `sink.set_property("sync", 1)`
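A minimal sketch of where that property would be set, assuming a filesink-based output branch (the filesink element and output path are assumptions, since the original pipeline uses RTSP out):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

# Make the sink honour buffer timestamps instead of consuming
# data as fast as upstream produces it.
sink = Gst.ElementFactory.make("filesink", "file-sink")
sink.set_property("location", "output.mkv")  # hypothetical output path
sink.set_property("sync", 1)
```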
@shubham-shahh any update on this?
Please add a PR.
You can use the approach in exactly the same way, with just some minor changes in the DS pipeline for training and inferencing. The problem with that approach is that it uses the face_recognition library for face detection, and it is really slow.
I am facing problems when I am trying to convert the TensorFlow/Keras model to a .pb file. I am using a Jupyter notebook to perform the task, but the file is not generated at the specified path. Am I doing it wrong?
```python
%reload_ext autoreload
%autoreload 2
from keras_to_pb_tf2 import keras_to_pb
from keras.models import load_model

# User defined values
# Input file path
MODEL_PATH = '/home/blaise/Documents/deepstreamfacerecognition/models/facenet_keras_128.h5'
# Output file paths
PB_FILE_PATH = '/home/blaise/Documents/deepstreamfacerecognition/tf2trt_with_onnx/facenet_freezed.pb'
ONNX_FILE_PATH = '/home/blaise/Documents/deepstreamfacerecognition/tf2trt_wtih_onnx/facenet_onnx.onnx'
TRT_ENGINE_PATH = '/home/blaise/Documents/deepstreamfacerecognition/tf2trt_wtih_onnx/facenet_engine.plan'
# End user defined values

model = load_model(MODEL_PATH)
input_name, output_node_names = keras_to_pb(model, PB_FILE_PATH, None)
```