Commit

updated
shaheennabi committed Nov 24, 2024
1 parent 8d22b55 commit c59d19b
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/8. Quantized-Low Rank Adaptation(Qlora).md
@@ -50,7 +50,7 @@ When we **quantize** the model, we convert the floating-point weights from a hig
Example:
- After scaling, **-3.75** might be quantized to an integer, say **8**, which fits into the 4-bit integer range [0, 15].
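
One common way to write this mapping (an assumption here, since the document does not pin down the exact scheme) is the affine min-max formula:

$$ q = \operatorname{round}\!\left(\frac{w - w_{\min}}{w_{\max} - w_{\min}} \times (2^{4} - 1)\right) $$

where the resulting integer $q$ lies in $[0, 15]$; the exact integer a given weight maps to depends on the chosen $w_{\min}$, $w_{\max}$, and zero-point.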

- ### Steps in the Quantization Process:
+ ### Steps in the Quantization Process (e.g., using the bitsandbytes library):

1. **Scale the Values**:
The range of the weights (e.g., from -5.0 to 5.0) is scaled to fit within the range of the target precision (e.g., 0 to 15 for 4-bit precision).
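
   For illustration, here is a minimal, hypothetical sketch of this min-max scaling step in plain NumPy. It is not the block-wise NF4 scheme that bitsandbytes actually implements; the function names, the example weight range [-5.0, 5.0], and the 4-bit target range [0, 15] are assumptions taken from the example above.

   ```python
   import numpy as np

   def quantize_4bit(weights: np.ndarray):
       """Affine min-max quantization of float weights into the 4-bit range [0, 15]."""
       w_min, w_max = weights.min(), weights.max()      # e.g. -5.0 and 5.0
       scale = (w_max - w_min) / 15.0                   # float step size per integer level
       q = np.round((weights - w_min) / scale)          # map floats onto 0..15
       q = np.clip(q, 0, 15).astype(np.uint8)           # keep codes inside the 4-bit range
       return q, scale, w_min

   def dequantize_4bit(q: np.ndarray, scale: float, w_min: float) -> np.ndarray:
       """Recover approximate float weights from the 4-bit integer codes."""
       return q.astype(np.float32) * scale + w_min

   # Example: a few weights spanning the assumed range [-5.0, 5.0]
   w = np.array([-5.0, -1.2, 0.0, 2.5, 5.0], dtype=np.float32)
   q, scale, w_min = quantize_4bit(w)
   print(q)                                    # 4-bit integer codes in [0, 15]
   print(dequantize_4bit(q, scale, w_min))     # approximate reconstruction of w
   ```

   The exact integer a given weight maps to depends on the chosen scale and zero-point, so this sketch is only meant to show the shape of the scale-round-clip step.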
