
Fix einsum operator for empty inputs #23379

Open. Wants to merge 5 commits into main (base branch).
@@ -367,14 +367,19 @@ Status EinsumTypedComputeProcessor<T>::Run() {
   if (has_empty_input) {
     const auto output_dims = einsum_compute_preprocessor_.GetOutputDims();
     Tensor& output = *context_->Output(0, output_dims);
+    Tensor candidate_output(raw_inputs[0]->DataType(), output_dims, allocator_);
hariharans29 (Member) commented on Jan 21, 2025:

I think if the Einsum node were partitioned to the CUDA EP (or some other non-CPU EP), the allocator would still be associated with that EP's non-CPU device, so the same issue would persist as before. I think we may need to bubble up the CPU allocator from here to allocate the temporary CPU tensor (currently `auto status = context->GetTempSpaceAllocator(&allocator);`). I think using `Status OpKernelContext::GetTempSpaceCPUAllocator(AllocatorPtr* output) const` will work?
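For orientation, the suggestion above might look roughly like the following sketch. It is not compilable on its own (it leans on ORT internals such as `AllocatorPtr`, `ORT_RETURN_IF_ERROR`, and the `Tensor` constructor seen in the diff), and the exact call site is an assumption:

```
// Sketch only: request a CPU allocator instead of the EP's device allocator,
// so the std::fill_n below writes to host memory even when the node runs on
// the CUDA EP. GetTempSpaceCPUAllocator is the API named in the comment above.
AllocatorPtr cpu_allocator;
ORT_RETURN_IF_ERROR(context_->GetTempSpaceCPUAllocator(&cpu_allocator));
Tensor candidate_output(raw_inputs[0]->DataType(), output_dims, cpu_allocator);
```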

neNasko1 (Contributor, Author) replied on Jan 21, 2025:

Thank you for the suggestion. Can you please check whether the last commit handles this properly?

Member replied:

Still seeing some issues while running the GPU tests. Are you able to build with CUDA and debug? FWIW, I kicked off the CPU pipeline to see if it passes the tests with CPU only, to help narrow down whether it is a generic bug or something CUDA-specific.

[screenshot attached]


     if constexpr (std::is_integral<T>::value) {
-      std::fill_n(reinterpret_cast<T*>(output.MutableDataRaw()), output.Shape().Size(), T(0));
+      std::fill_n(reinterpret_cast<T*>(candidate_output.MutableDataRaw()), candidate_output.Shape().Size(), T(0));
     } else {
-      std::fill_n(reinterpret_cast<T*>(output.MutableDataRaw()), output.Shape().Size(), T(0.f));
+      std::fill_n(reinterpret_cast<T*>(candidate_output.MutableDataRaw()), candidate_output.Shape().Size(), T(0.f));
     }

-    return Status::OK();
+    auto status = device_data_copy_func_(candidate_output, output, einsum_ep_assets_);
+    ORT_ENFORCE(status.IsOK(), "Einsum op: Could not copy the intermediate output's buffer into the op's output buffer. Error: ",
+                status.ErrorMessage());
+
+    return status;
   }
 }
