[bug]: RTX 5090 support missing #7683

GewoonJaap opened this issue Feb 25, 2025 · 1 comment
Labels: bug (Something isn't working)

@GewoonJaap

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 5090 FE

GPU VRAM

32GB

Version number

5.6.2

Browser

Firefox

Python dependencies

```json
{
"accelerate": "1.0.1",
"compel": "2.0.2",
"cuda": "12.4",
"diffusers": "0.31.0",
"numpy": "1.26.4",
"opencv": "4.9.0.80",
"onnx": "1.16.1",
"pillow": "11.1.0",
"python": "3.10.11",
"torch": "2.4.1+cu124",
"torchvision": "0.19.1+cu124",
"transformers": "4.46.3",
"xformers": "0.0.28.post1"
}
```

What happened

I get the following error while trying to generate an image:

```
Error while invoking session d47d1ba3-bc52-4d04-b425-ce08a91d84df, invocation 28b7c880-8463-4e2c-8ed2-4c674f25a610 (flux_text_encoder): CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
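As the message notes, asynchronous error reporting can make the stack trace point at the wrong call. A minimal sketch of the debugging step the message suggests (the only subtlety is that the variable must be set before torch initializes CUDA):

```python
# Force synchronous CUDA kernel launches so the Python stack trace points
# at the call that actually failed. Must be set before CUDA is initialized.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # imported after setting the variable so it takes effect
```

The full traceback: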

```
[2025-02-25 19:18:26,153]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\app\invocations\baseinvocation.py", line 300, in invoke_internal
    output = self.invoke(context)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\app\invocations\flux_text_encoder.py", line 60, in invoke
    t5_embeddings = self._t5_encode(context)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\app\invocations\flux_text_encoder.py", line 78, in _t5_encode
    with (
  File "contextlib.py", line 135, in __enter__
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\backend\model_manager\load\load_base.py", line 75, in model_on_device
    self._cache.lock(self._cache_record, working_mem_bytes)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\backend\model_manager\load\model_cache\model_cache.py", line 52, in wrapper
    return method(self, *args, **kwargs)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\backend\model_manager\load\model_cache\model_cache.py", line 256, in lock
    self._load_locked_model(cache_entry, working_mem_bytes)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\backend\model_manager\load\model_cache\model_cache.py", line 328, in _load_locked_model
    model_bytes_loaded = self._move_model_to_vram(cache_entry, vram_available + MB)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\backend\model_manager\load\model_cache\model_cache.py", line 350, in _move_model_to_vram
    return cache_entry.cached_model.full_load_to_vram()
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\backend\model_manager\load\model_cache\cached_model\cached_model_only_full_load.py", line 78, in full_load_to_vram
    self._model.load_state_dict(new_state_dict, assign=True)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\torch\nn\modules\module.py", line 2201, in load_state_dict
    load(self, state_dict)
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load
    load(child, child_state_dict, child_prefix)  # noqa: F821
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load
    load(child, child_state_dict, child_prefix)  # noqa: F821
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load
    load(child, child_state_dict, child_prefix)  # noqa: F821
  [Previous line repeated 4 more times]
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\torch\nn\modules\module.py", line 2183, in load
    module._load_from_state_dict(
  File "C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\invokeai\backend\quantization\bnb_llm_int8.py", line 58, in _load_from_state_dict
    assert weight_format == 0
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
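This error typically means the installed PyTorch wheel ships no kernels for the GPU's compute capability. A minimal diagnostic sketch using the public torch.cuda APIs (the commented values are assumptions based on this report):

```python
# Compare the GPU's compute capability with the CUDA architectures the
# installed PyTorch wheel was compiled for.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    device_arch = f"sm_{major}{minor}"       # an RTX 5090 reports sm_120
    built_for = torch.cuda.get_arch_list()   # e.g. ['sm_50', ..., 'sm_90']
    print(f"GPU arch: {device_arch}; wheel built for: {built_for}")
    if device_arch not in built_for:
        print("Mismatch: this produces 'no kernel image is available' errors.")
else:
    print("CUDA is not available in this environment.")
```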

What you expected to happen

I expected an image to be generated.

How to reproduce the problem

1. Boot up InvokeAI on an RTX 50xx card.
2. Generate an image.
3. Observe the error.

Additional context

It seems the RTX 50-series cards require a newer CUDA build than current PyTorch releases support:
lllyasviel/Fooocus#3862 (comment)

Discord username

No response

@GewoonJaap (Author)

```
C:\Users\Jaap\AppData\Roaming\StabilityMatrix\Packages\InvokeAI\venv\lib\site-packages\torch\cuda\__init__.py:230: UserWarning: 
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
```
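Per that warning, the installed wheel tops out at sm_90 while the 5090 needs sm_120. A possible remedy sketch, assuming PyTorch publishes Blackwell-enabled wheels under a cu128 nightly index (verify the exact command at https://pytorch.org/get-started/locally/ before running):

```python
# Reinstall PyTorch from a wheel index that ships sm_120 kernels.
# The cu128 nightly index below is an assumption; check pytorch.org for the
# currently recommended command for RTX 50-series GPUs.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--pre", "--upgrade",
    "torch", "torchvision",
    "--index-url", "https://download.pytorch.org/whl/nightly/cu128",
])
```

Note that InvokeAI pins specific torch versions (2.4.1+cu124 here), so an in-place upgrade may conflict with its own dependency constraints.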
