
[MLT Windows] multihead_attention float16 accuracy issue #1237

Open
daisyden opened this issue Dec 31, 2024 · 0 comments
🐛 Describe the bug

FAILED test_native_mha_xpu.py::TestMHADeviceTypeXPU::test_native_multihead_attention_xpu_float16
================================== FAILURES ===================================
______ TestMHADeviceTypeXPU.test_native_multihead_attention_xpu_float16 _______
Traceback (most recent call last):
File "C:\Users\gta\penghuic\pytorch\third_party\torch-xpu-ops\test\xpu../../../../test\test_native_mha.py", line 323, in test_native_multihead_attention
self._test_multihead_attention_impl(
File "C:\Users\gta\penghuic\pytorch\third_party\torch-xpu-ops\test\xpu../../../../test\test_native_mha.py", line 256, in _test_multihead_attention_impl
torch.testing.assert_close(ypt, ynpt.to(torch.float32), atol=1e-3, rtol=1e-3)
File "C:\Users\gta\Miniforge3\envs\ut-py310\lib\site-packages\torch\testing\_comparison.py", line 1530, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!

Mismatched elements: 416 / 8192 (5.1%)
Greatest absolute difference: 0.0026940107345581055 at index (2, 7, 62) (up to 0.001 allowed)
Greatest relative difference: 10.974992752075195 at index (10, 4, 51) (up to 0.001 allowed)

To execute this test, run the following from the base repo dir:
python test\test_native_mha.py TestMHADeviceTypeXPU.test_native_multihead_attention_xpu_float16

FAILED test_native_mha_xpu.py::TestMHADeviceTypeXPU::test_native_multihead_encoder_decoder_attention_xpu_float16

================================== FAILURES ===================================
_ TestMHADeviceTypeXPU.test_native_multihead_encoder_decoder_attention_xpu_float16 _
Traceback (most recent call last):
File "C:\Users\gta\penghuic\pytorch\third_party\torch-xpu-ops\test\xpu../../../../test\test_native_mha.py", line 309, in test_native_multihead_encoder_decoder_attention
self._test_multihead_attention_impl(
File "C:\Users\gta\penghuic\pytorch\third_party\torch-xpu-ops\test\xpu../../../../test\test_native_mha.py", line 256, in _test_multihead_attention_impl
torch.testing.assert_close(ypt, ynpt.to(torch.float32), atol=1e-3, rtol=1e-3)
File "C:\Users\gta\Miniforge3\envs\ut-py310\lib\site-packages\torch\testing\_comparison.py", line 1530, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!

Mismatched elements: 454 / 8192 (5.5%)
Greatest absolute difference: 0.0034734010696411133 at index (1, 3, 48) (up to 0.001 allowed)
Greatest relative difference: 1.2274675369262695 at index (1, 6, 26) (up to 0.001 allowed)

To execute this test, run the following from the base repo dir:
python test\test_native_mha.py TestMHADeviceTypeXPU.test_native_multihead_encoder_decoder_attention_xpu_float16
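For context, both failures come from the per-element tolerance check that `torch.testing.assert_close` performs with `atol=1e-3, rtol=1e-3`. A minimal sketch of that check in plain Python (the formula matches PyTorch's documented behavior; the sample numbers are illustrative, not the actual tensor values from the failing run):

```python
# Sketch of the per-element check torch.testing.assert_close applies:
# an element passes when |actual - expected| <= atol + rtol * |expected|.
# Values below are illustrative only, not taken from the failing test.

def is_close(actual: float, expected: float,
             atol: float = 1e-3, rtol: float = 1e-3) -> bool:
    """Return True if actual is within the combined atol/rtol band of expected."""
    return abs(actual - expected) <= atol + rtol * abs(expected)

# For an expected value near 1.0, the combined tolerance is
# 1e-3 + 1e-3 * 1.0 = 2e-3, so a float16 rounding error of ~2.7e-3 fails:
print(is_close(1.0027, 1.0))  # absolute difference 2.7e-3 > 2e-3
print(is_close(1.0015, 1.0))  # absolute difference 1.5e-3 <= 2e-3
```

This also explains why the "greatest relative difference" can look alarmingly large (e.g. 10.97): when the expected value is very close to zero, the relative term `rtol * |expected|` contributes almost nothing and even a tiny float16 rounding error produces a huge ratio.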

Versions

nightly 20241230

@daisyden daisyden self-assigned this Jan 21, 2025
@daisyden daisyden added this to the PT2.7 milestone Jan 21, 2025