Regarding using Llama 7b and 13b #637
Unanswered
TheAthleticCoder asked this question in Q&A
Replies: 3 comments 3 replies
-
Could you tell us your hardware spec?
2 replies
-
Have you encountered a situation where using Llama results in the output of \n or other characters being repeated many times? How should I go about resolving this issue?
1 reply
-
Yes, I have. Add an EOS/stop token or reduce your max token limit. I think that is
what solved the issue, but I am not 100% sure because it was a long
time ago. Let me know what fixes it, though.
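In case it helps, here is a minimal sketch of what I mean with vLLM's offline API. The model name, stop string, and limits are placeholders, so adjust them to your setup:
```python
from vllm import LLM, SamplingParams

# Placeholder checkpoint; substitute whichever Llama-7b weights you are using.
llm = LLM(model="meta-llama/Llama-2-7b-hf")

# Cap the generation length and stop on Llama's EOS string so the model
# does not keep emitting "\n" until it hits the default token limit.
params = SamplingParams(
    temperature=0.7,
    max_tokens=256,   # reduce the max token limit
    stop=["</s>"],    # stop sequence / EOS string
)

outputs = llm.generate(["Write a haiku about GPUs."], params)
print(outputs[0].outputs[0].text)
```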
0 replies
-
Hey!
Does anyone have any idea how to use Llama-7b in vllm? My computing resources do not allow for using the 13b version :(
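For reference, a minimal sketch of loading Llama-7b with vLLM's offline LLM API, assuming a Hugging Face checkpoint name and enough GPU memory for the 7b weights in fp16; the model name and memory settings below are placeholders to adapt to your hardware:
```python
from vllm import LLM, SamplingParams

# Placeholder checkpoint name; use whichever Llama-7b weights you have access to.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",
    dtype="half",                 # fp16 weights instead of fp32
    gpu_memory_utilization=0.90,  # fraction of GPU memory vLLM may reserve
    max_model_len=2048,           # shorter context -> smaller KV cache
)

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)
outputs = llm.generate(["Explain what vLLM does in one sentence."], params)
print(outputs[0].outputs[0].text)
```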