Fix `einsum` operator for empty inputs #23379
base: main
Conversation
Signed-off-by: neNasko1 <[email protected]>
/azp run Big Models, Linux Android Emulator QNN CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline
/azp run Linux OpenVINO CI Pipeline, Linux QNN CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, Windows ARM64 QNN CI Pipeline, Windows CPU CI Pipeline, Windows GPU CUDA CI Pipeline
/azp run Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows x64 QNN CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline
Azure Pipelines successfully started running 6 pipeline(s).
Azure Pipelines successfully started running 7 pipeline(s).
Thanks!
Azure Pipelines successfully started running 5 pipeline(s).
Tensor& output = *context_->Output(0, output_dims);

if constexpr (std::is_integral<T>::value) {
  std::fill_n(reinterpret_cast<T*>(output.MutableDataRaw()), output.Shape().Size(), T(0));
This logic needs a device abstraction. The output data could be on CUDA, for example, and we need to populate it appropriately.
Thanks for the quick review!
Can you look over the changes again and tell me whether they make sense to you, and if so run the test suite again?
Added a relevant comment in the PR.
Signed-off-by: Atanas Dimitrov <[email protected]>
@@ -367,14 +367,19 @@ Status EinsumTypedComputeProcessor<T>::Run() {
  if (has_empty_input) {
    const auto output_dims = einsum_compute_preprocessor_.GetOutputDims();
    Tensor& output = *context_->Output(0, output_dims);
    Tensor candidate_output(raw_inputs[0]->DataType(), output_dims, allocator_);
I think if the Einsum node were partitioned to the CUDA EP (or some other non-CPU EP), the allocator would still be associated with that EP's non-CPU device, so the same issue would persist as before. I think we may need to bubble up the CPU allocator here to allocate the temporary CPU tensor:

auto status = context->GetTempSpaceAllocator(&allocator);
Status OpKernelContext::GetTempSpaceCPUAllocator(AllocatorPtr* output) const {
Thank you for the provided suggestion. Can you please check whether the last commit does things properly?
/azp run Linux GPU CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
Signed-off-by: Atanas Dimitrov <[email protected]>
/azp run Linux GPU CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Linux CPU CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
Description
Implement a short-circuit to handle the case of an empty input in `einsum` for the `cpu` provider.

Motivation and Context
Previously, the `einsum` operator would segfault whenever it was passed an empty input. The related test cases demonstrate this.