
[js/webgpu] Optimize ConvTranspose (Continue) #23429

Merged

3 commits merged into microsoft:main on Jan 22, 2025

Conversation

qjia7
Contributor

@qjia7 qjia7 commented Jan 20, 2025

BUG #23273

This PR makes the following optimizations:

  1. When the number of output channels is one: 1) calculate the offset before the input-channel loop to avoid repeated indices-to-offset calculations, and 2) split inputChannelsPerGroup into inputChannelsPerGroupInt and inputChannelsRemainder parts so that the inputChannelsPerGroupInt part can always load 4 values at a time.
  2. Use a precise initial value for the loop bounds to skip useless iterations. Thanks to @jiangzhaoming for the suggestion.
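The channel-splitting idea in item 1 can be sketched as follows. This is a hypothetical scalar TypeScript analogue, not the PR's actual code (the real change generates WGSL shader source); `sumChannels` and its shape are illustrative only:

```typescript
// Split a channel loop into a 4-wide part and a scalar remainder, the same
// idea the PR applies to inputChannelsPerGroup in the generated shader.
// (Illustrative sketch; names do not come from the actual implementation.)
function sumChannels(data: Float32Array, inputChannelsPerGroup: number): number {
  // Largest multiple of 4 not exceeding the channel count.
  const inputChannelsPerGroupInt = Math.floor(inputChannelsPerGroup / 4) * 4;
  const inputChannelsRemainder = inputChannelsPerGroup - inputChannelsPerGroupInt;
  let sum = 0;
  // Main part: always safe to read 4 values per iteration (a vec4 load in WGSL).
  for (let c = 0; c < inputChannelsPerGroupInt; c += 4) {
    sum += data[c] + data[c + 1] + data[c + 2] + data[c + 3];
  }
  // Scalar tail: handles the 0-3 leftover channels without out-of-bounds reads.
  for (let c = 0; c < inputChannelsRemainder; c++) {
    sum += data[inputChannelsPerGroupInt + c];
  }
  return sum;
}
```

The payoff is that the hot loop needs no per-element bounds check; only the short tail does.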

With this PR, the ConvTranspose benchmark time drops from 8.4 s to 3.7 s on Intel Meteor Lake, and from 2.7 s to 1.6 s on an NV RTX 2000 Ada.
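The "precise initial value" idea in item 2 can be illustrated with a hypothetical 1-D ConvTranspose sketch: instead of scanning every kernel position and rejecting those that do not align with the stride, the first contributing position is computed directly and the loop steps by the stride. All names and the 1-D shape here are illustrative assumptions, not the PR's actual shader code:

```typescript
// 1-D transposed convolution for one output row (illustrative sketch).
// Each output element o gathers input[i] * weight[w] where
// o + pad = i * stride + w. Only kernel offsets w with
// (o + pad - w) % stride === 0 can contribute, so we start the loop at the
// smallest such w and step by stride, skipping useless iterations entirely.
function convTransposeRow(
  input: number[],
  weight: number[],
  stride: number,
  pad: number,
  outSize: number
): number[] {
  const out = new Array<number>(outSize).fill(0);
  for (let o = 0; o < outSize; o++) {
    const base = o + pad;
    // Precise initial value: first kernel offset aligned with the stride.
    for (let w = base % stride; w < weight.length; w += stride) {
      const i = (base - w) / stride; // exact by construction of w
      if (i >= 0 && i < input.length) {
        out[o] += input[i] * weight[w];
      }
    }
  }
  return out;
}
```

A naive version would iterate `w` from 0 to `weight.length - 1` and test the modulo condition each time; starting at `base % stride` removes those wasted iterations.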

qjia7 added 3 commits January 20, 2025 12:48

- This needs to add padding or recalculate the correct offset to get the right result.
- fix errors
@qjia7
Contributor Author

qjia7 commented Jan 20, 2025

@guschmue @fs-eire Please take a look, thanks.

@guschmue guschmue added the ep:WebGPU ort-web webgpu provider label Jan 22, 2025
@guschmue
Contributor

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

@guschmue
Contributor

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline


Azure Pipelines successfully started running 1 pipeline(s).

@guschmue
Contributor

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models


Azure Pipelines could not run because the pipeline triggers exclude this branch/path.

@guschmue
Contributor

/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI


Azure Pipelines successfully started running 1 pipeline(s).


Azure Pipelines could not run because the pipeline triggers exclude this branch/path.

Contributor

@guschmue guschmue left a comment


lgtm.

@guschmue guschmue merged commit 25f4274 into microsoft:main Jan 22, 2025
51 checks passed