
about environment #2

Open
Maybeetw opened this issue Nov 5, 2024 · 3 comments

Comments

@Maybeetw

Maybeetw commented Nov 5, 2024

Could you provide more detailed environment information (such as the versions of the important libraries)? I keep running into various problems when following the conda.md file.

@NieShenRuc
Collaborator

Thank you for your interest in our work! I have committed our environment.yaml file for your reference. Please note that directly using this .yaml file to create a new Anaconda environment might lead to some issues.

If you encounter any difficulties setting up the environment following the instructions in conda.md, you can also refer to TinyLlama’s setup guide. We use the same training environment as TinyLlama, but please be aware that SMDM requires a few additional libraries for evaluation.
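If it helps to compare against a working setup, a small script like the one below can report what your environment actually has installed. The package list here is my guess at the important libraries; the authoritative pins are in the committed environment.yaml:

```python
# Sketch: report installed versions of packages relevant to SMDM.
# The package list is an assumption; consult environment.yaml for the real pins.
import importlib.metadata as md

def check_versions(packages):
    """Return {package: version string, or None if not installed}."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            versions[pkg] = None
    return versions

if __name__ == "__main__":
    for pkg, ver in check_versions(["torch", "flash-attn", "xformers", "lm-eval"]).items():
        print(f"{pkg}: {ver or 'NOT INSTALLED'}")
```

Comparing this output against the versions in environment.yaml should narrow down which library mismatch is causing a given error.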

@pengzhangzhi

Hi, I'm having the issue below:

 main* 130 ± bash eval_mdm.sh 
Traceback (most recent call last):
  File "/home/mila/a/alexander.tong/SMDM/evaluate_diff.py", line 18, in <module>
    from lit_gpt.diffmodel import TransEncoder, Config
  File "/home/mila/a/alexander.tong/SMDM/lit_gpt/__init__.py", line 1, in <module>
    from lit_gpt.model import GPT
  File "/home/mila/a/alexander.tong/SMDM/lit_gpt/model.py", line 16, in <module>
    from .fused_rotary_embedding import apply_rotary_emb_func
  File "/home/mila/a/alexander.tong/SMDM/lit_gpt/fused_rotary_embedding.py", line 6, in <module>
    import rotary_emb
ModuleNotFoundError: No module named 'rotary_emb'

I used the following commands to install:


# Install PyTorch and dependencies
pip install torch

# Install flash-attention using --no-build-isolation
pip install flash-attn --no-build-isolation

# Install xformers
pip install -U xformers

# Install TinyLlama requirements
git clone https://github.com/jzhang38/TinyLlama.git
cd TinyLlama
pip install -r requirements.txt tokenizers sentencepiece
cd .. && rm -rf TinyLlama

# Install the dependencies needed for evaluation
pip install lm-eval==0.4.4 numpy==1.25.0 bitsandbytes==0.43.1
pip install openai==0.28 fschat==0.2.34 anthropic

@akhauriyash

From those commands, it looks like you forgot to run the following from the flash-attn codebase:

cd csrc/rotary && pip install .
cd ../layer_norm && pip install .
cd ../xentropy && pip install .

flash-attn dropped official support for these extensions and moved to Triton kernels, but this codebase uses the (I believe) deprecated CUDA extensions, so you have to build and install them manually.
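Once those three builds succeed, a quick way to confirm is to check that the compiled extension modules import. The module names below are assumptions on my part: `rotary_emb` comes from the traceback above, and `dropout_layer_norm` and `xentropy_cuda_lib` are what I believe csrc/layer_norm and csrc/xentropy build, respectively:

```python
# Sketch: verify the manually built flash-attn CUDA extensions are findable.
# Module names are assumptions: rotary_emb is from the traceback;
# dropout_layer_norm and xentropy_cuda_lib are my guesses for what
# csrc/layer_norm and csrc/xentropy install.
import importlib.util

def extension_status(modules=("rotary_emb", "dropout_layer_norm", "xentropy_cuda_lib")):
    """Return {module: True/False} for whether each extension can be found."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

if __name__ == "__main__":
    for mod, found in extension_status().items():
        print(f"{mod}: {'OK' if found else 'MISSING - rebuild from flash-attn csrc/'}")
```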

Hope this helps!
