Update README.md
shaheennabi authored Nov 28, 2024
1 parent 9ce1c9a commit 28d3d44
Showing 1 changed file with 5 additions and 13 deletions.
README.md (18 changes: 5 additions & 13 deletions)
@@ -88,23 +88,15 @@ This approach ensures a resource-efficient, scalable, and production-ready model


 ---
-## Dataset Information πŸŽ‰
-
-The dataset used in this project consists of two files:
-
-- **train.jsonl**: Contains 42,500 rows of training data.
-- **test.jsonl**: Contains 2,310 rows of test data.
-
----
-## Challenges Encountered πŸŽ‹
+## **Challenges Encountered** πŸŽ‹

-The project encountered several challenges, including:
+The project faced several challenges, including:

-- **Limited GPU Resources**: Fine-tuning a large model was difficult due to the scarcity of available GPU resources.
+- **Limited GPU Resources**: Fine-tuning a large model was challenging due to the scarcity of available GPU resources.
 - **Human Preferences and Safe Responses**: Ensuring the model generated **accurate responses** without harmful or biased content was a key concern, requiring proper mitigation strategies.
-- **Timeline Constraints**: The project timeline posed significant challenges, due to the large user base of the model, requiring quick action and immediate attention.
-- **Model Inference on AWS**: Running inference on AWS was costly. This raised concerns regarding both **storage** and **compute costs**.
+- **Timeline Constraints**: A tight project timeline, driven by the large user base, required rapid action and attention.
+- **Model Inference on AWS**: Running inference on AWS incurred high costs, raising concerns around both **storage** and **compute expenses**.


 ## How I Fixed Challenges 🌟
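For reference, the removed **Dataset Information** block described two JSON Lines splits: `train.jsonl` (42,500 rows) and `test.jsonl` (2,310 rows). The snippet below is only a minimal sketch of how such JSONL splits are typically read, assuming one JSON object per line; the file paths and the `load_jsonl` helper are illustrative and not taken from the repository.

```python
import json
from pathlib import Path


def load_jsonl(path: str) -> list[dict]:
    """Read a JSON Lines file: one JSON object per non-empty line (assumed format)."""
    records = []
    with Path(path).open("r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records


# Hypothetical paths; the README only states the split sizes (42,500 / 2,310 rows).
train_records = load_jsonl("train.jsonl")
test_records = load_jsonl("test.jsonl")
print(f"train: {len(train_records)} rows, test: {len(test_records)} rows")
```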
