libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats #2

Open
SeekPoint opened this issue Apr 18, 2023 · 0 comments

(gh_alpaca-7b-chinese) ub2004@ub2004-B85M-A0:~/llm_dev/alpaca-7b-chinese/finetune$ python3 finetune.py --base_model decapoda-research/llama-7b-hf --data_dir ../data/alpaca-en-zh.json --output_dir ../finetuned/llama-7b-hf_alpaca-en-zh --lora_target_modules '["q_proj", "v_proj"]'

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

bin /home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/home/ub2004/anaconda3/envs/gh_alpaca-7b-chinese/lib')}
warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: /home/ub2004/anaconda3/envs/gh_alpaca-7b-chinese did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('@/tmp/.ICE-unix/1648,unix/ub2004-B85M-A0'), PosixPath('local/ub2004-B85M-A0')}
warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/etc/xdg/xdg-ubuntu')}
warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('1'), PosixPath('0')}
warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/org/gnome/Terminal/screen/50429ffc_73e7_436a_ae26_12ca37ff5ff1')}
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')}
warn(msg)
ERROR: python3: undefined symbol: cudaRuntimeGetVersion
CUDA SETUP: libcudart.so path is None
CUDA SETUP: Is seems that your cuda installation is not in your path. See https://github.com/TimDettmers/bitsandbytes/issues/85 for more information.
CUDA SETUP: CUDA version lower than 11 are currently not supported for LLM.int8(). You will be only to use 8-bit optimizers and quantization routines!!
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
warn(msg)
CUDA SETUP: Highest compute capability among GPUs detected: 6.1
CUDA SETUP: Detected CUDA version 00
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
warn(msg)
CUDA SETUP: Loading binary /home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Finetune parameters:
base_model: decapoda-research/llama-7b-hf
model_type: llama
data_dir: ../data/alpaca-en-zh.json
output_dir: ../finetuned/llama-7b-hf_alpaca-en-zh
batch_size: 128
micro_batch_size: 4
num_epochs: 20
learning_rate: 0.0003
cutoff_len: 512
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
group_by_length: True

The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Overriding torch_dtype=None with torch_dtype=torch.float16 due to requirements of bitsandbytes to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
Loading checkpoint shards: 0%| | 0/33 [00:00<?, ?it/s]
Traceback (most recent call last):
File "finetune.py", line 245, in
main()
File "/usr/lib/python3/dist-packages/click/core.py", line 764, in call
return self.main(*args, **kwargs)
File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "finetune.py", line 173, in main
tokenizer, model = decide_model(args=local_args, device_map=device_map)
File "finetune.py", line 63, in decide_model
model = _MODEL_CLASSES[model_type].model.from_pretrained(
File "/home/ub2004/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2795, in from_pretrained
) = cls._load_pretrained_model(
File "/home/ub2004/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3123, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/home/ub2004/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 706, in _load_state_dict_into_meta_model
set_module_8bit_tensor_to_device(
File "/home/ub2004/.local/lib/python3.8/site-packages/transformers/utils/bitsandbytes.py", line 78, in set_module_8bit_tensor_to_device
new_value = bnb.nn.Int8Params(new_value, requires_grad=False, has_fp16_weights=has_fp16_weights).to(device)
File "/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/nn/modules.py", line 227, in to
return self.cuda(device)
File "/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/nn/modules.py", line 191, in cuda
CB, CBt, SCB, SCBt, coo_tensorB = bnb.functional.double_quant(B)
File "/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/functional.py", line 1642, in double_quant
row_stats, col_stats, nnz_row_ptr = get_colrow_absmax(
File "/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/functional.py", line 1531, in get_colrow_absmax
lib.cget_col_row_stats(ptrA, ptrRowStats, ptrColStats, ptrNnzrows, ct.c_float(threshold), rows, cols)
File "/usr/lib/python3.8/ctypes/init.py", line 386, in getattr
func = self.getitem(name)
File "/usr/lib/python3.8/ctypes/init.py", line 391, in getitem
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: /home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats
(gh_alpaca-7b-chinese) ub2004@ub2004-B85M-A0:~/llm_dev/alpaca-7b-chinese/finetune$
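
The traceback shows that bitsandbytes fell back to libbitsandbytes_cpu.so, and that CPU-only library does not export cget_col_row_stats (the CUDA builds, libbitsandbytes_cuda*.so, are the ones expected to carry it), so loading the model in 8-bit fails. The sketch below is not part of the original report; it is a hypothetical diagnostic, assuming only standard torch, bitsandbytes, and ctypes behaviour, that lists the libbitsandbytes*.so files shipped with the installed package and reports which of them can be loaded and which export the missing symbol.

```python
# Hypothetical diagnostic sketch (not part of the original report).
# It checks whether PyTorch sees CUDA at all, then tries to load every
# libbitsandbytes*.so shipped with the installed package and looks for the
# cget_col_row_stats symbol that the traceback says is missing.
import ctypes
import glob
import os

import torch
import bitsandbytes as bnb

print("torch.cuda.is_available():", torch.cuda.is_available())
print("torch built against CUDA:", torch.version.cuda)

pkg_dir = os.path.dirname(bnb.__file__)
for so_path in sorted(glob.glob(os.path.join(pkg_dir, "libbitsandbytes*.so"))):
    name = os.path.basename(so_path)
    try:
        lib = ctypes.CDLL(so_path)
    except OSError as exc:
        # A CUDA build that cannot find libcudart.so fails to load here, which
        # matches the "libcudart.so not found" warnings in the log above.
        print(f"{name}: failed to load ({exc})")
        continue
    # ctypes.CDLL raises AttributeError for unknown symbols, so hasattr works.
    present = hasattr(lib, "cget_col_row_stats")
    print(f"{name}: cget_col_row_stats {'present' if present else 'missing'}")
```

If only libbitsandbytes_cpu.so loads cleanly and the symbol is missing from it, that is consistent with the "libcudart.so path is None" output above: bitsandbytes cannot find the CUDA runtime on this machine and silently falls back to the CPU binary.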
