Error message (raised when loading the model after selecting Flux from the dropdown menu):
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 54.00 MiB. GPU 0 has a total capacity of 23.64 GiB of which 24.81 MiB is free. Process 202215 has 23.61 GiB memory in use. Of the allocated memory 23.23 GiB is allocated by PyTorch, and 9.13 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
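As the error message itself suggests, enabling the expandable-segments allocator can help when the problem is fragmentation. Here, though, 23.23 GiB of 23.64 GiB is genuinely allocated, so this alone probably won't be enough and splitting the models across GPUs is still needed. A minimal way to set it (the script name is a placeholder):

```shell
# Enable PyTorch's expandable-segments allocator to reduce fragmentation
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
python your_inference_script.py  # placeholder for the actual launch command
```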
GPU usage:
Card 1: 24186MiB / 24564MiB; card 2: 4MiB / 24564MiB.
So all of the inference code runs on the first GPU and exceeds its 24 GB. Is there a way to load some of the models onto the second card?
The models are loaded with the load_models method of diffsynth's ModelManager. Is there a way, within this method, to have different GPUs load different models?
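I'm not certain whether load_models accepts a per-model device argument in your diffsynth version (check its signature locally), but the components it loads are ordinary torch.nn.Modules, so a general fallback is manual model parallelism: move different components to different GPUs with .to() and move activations between devices at inference time. A minimal sketch with stand-in modules; the names `text_encoder`, `dit`, and `pick_device` are illustrative placeholders, not diffsynth API:

```python
import torch
import torch.nn as nn

def pick_device(index: int) -> str:
    # Use the requested GPU if it exists; fall back to CPU so the
    # sketch also runs on machines without two GPUs
    if torch.cuda.is_available() and index < torch.cuda.device_count():
        return f"cuda:{index}"
    return "cpu"

# Hypothetical stand-ins for Flux components (e.g. text encoder on GPU 0,
# the large DiT on GPU 1); in practice these would be the modules that
# ModelManager.load_models produced
text_encoder = nn.Linear(8, 8).to(pick_device(0))
dit = nn.Linear(8, 8).to(pick_device(1))

# At inference time, explicitly move activations between the devices
x = torch.randn(1, 8, device=pick_device(0))
h = text_encoder(x)
y = dit(h.to(pick_device(1)))
```

The cost of this split is one device-to-device copy per forward pass at the boundary between components, which is usually negligible compared with the model computation itself.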