
Problem getting published inference time on single GPU: Compilation of "script/view_tranformation_cuda" or TensorRT necessary? #91

Open
Seyd2 opened this issue Sep 18, 2024 · 0 comments

Seyd2 commented Sep 18, 2024

Using model M0, I'm able to achieve 2 fps with the provided checkpoints. I just noticed that the repo contains a folder with compilable code for the view transformer, and I would like to ask whether that compilation is necessary to achieve the published 50 fps on a Tesla T4. If not, is a TensorRT implementation necessary to reach the published fps?

Otherwise, I may be running the test incorrectly. I'm using run_fastbev.sh without Slurm as:

function test {
    GPUS=$1
    EXPNAME=$2
    RESUME=${3:-work_dirs/fastbev/exp/$EXPNAME/epoch_20.pth}

    echo test; sleep 0.5s
    python ./tools/test.py \
        configs/fastbev/exp/$EXPNAME.py \
        $RESUME \
        --eval bbox \
        --out work_dirs/fastbev/exp/$EXPNAME/results/results.pkl \
        --launcher pytorch
}

test 1 paper/original_configs/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4

with the command line:

./tools/fast_bevrun.sh
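As a sanity check independent of the test harness, wall-clock FPS can be estimated around the forward pass alone, excluding data loading and bbox evaluation. Below is a minimal sketch; the `infer` callable is a hypothetical placeholder standing in for one inference step on a prepared batch:

```python
import time

def measure_fps(infer, n_warmup=10, n_iters=100):
    """Estimate frames per second of a zero-argument inference callable."""
    for _ in range(n_warmup):          # warm-up: caches, lazy init, CUDA kernels
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Example with a dummy 5 ms "inference" step:
fps = measure_fps(lambda: time.sleep(0.005))
print(f"{fps:.1f} FPS")
```

Note that for a real model on GPU, CUDA launches are asynchronous, so the device should be synchronized (e.g. `torch.cuda.synchronize()`) before reading the clock, otherwise the measurement only covers kernel launch overhead.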
