
feat: add JGLUE tasks #469

Open · wants to merge 4 commits into main
Conversation

ryan-minato
Contributor

JGLUE is a widely used benchmark in the Japanese LLM research community. It originally consists of five sub-tasks; MARC-ja has been removed at Amazon's request, so this PR adds the remaining four:

JSTS
JNLI
JSQuAD
JCommonsenseQA

Closes issue #455.

@ryan-minato changed the title from "feat: add JGLUE tests" to "feat: add JGLUE tasks" on Dec 19, 2024
Comment on lines +70 to +97
# (These helpers assume `from scipy.stats import spearmanr` and `import numpy as np`
# earlier in the file; the imports are outside the quoted range.)
def correlation_metric(golds: list[int], predictions: list[str], **kwargs):
    # Parse the first prediction and the first gold reference into floats.
    def convert_to_float(score):
        try:
            return float(score)
        except ValueError:
            # Unparsable output is recorded as None and filtered out at the corpus level.
            return None

    predicted_score = convert_to_float(predictions[0])
    gold_score = convert_to_float(golds[0])

    return {
        "predicted_score": predicted_score,
        "gold_score": gold_score,
    }


def spearman_corpus_metric(items):
    # Keep only the samples where both the prediction and the gold score parsed successfully.
    predicted_scores, gold_scores = zip(
        *[
            (item["predicted_score"], item["gold_score"])
            for item in items
            if (item["gold_score"] is not None and item["predicted_score"] is not None)
        ]
    )
    # Spearman rank correlation over the corpus; constant inputs yield NaN, which maps to 0.0.
    r, _ = spearmanr(predicted_scores, gold_scores)
    if np.isnan(r):
        return 0.0
    frac = len(predicted_scores) / len(items)
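
For illustration, here is a minimal, self-contained sketch (not part of the PR; the sample values are invented) of what these two steps compute for a JSTS-style task: the per-sample step parses each prediction/gold pair into floats, and the corpus step takes the Spearman rank correlation over the pairs that parsed successfully.

from scipy.stats import spearmanr

# Hypothetical gold similarity scores and raw model outputs (invented for illustration).
golds = ["4.2", "1.0", "3.5", "2.8"]
predictions = ["4.0", "0.5", "not a number", "3.0"]

def parse(x):
    try:
        return float(x)
    except ValueError:
        return None

# Per-sample step: parse each pair; keep only the pairs where both sides are usable.
pairs = [(parse(p), parse(g)) for p, g in zip(predictions, golds)]
valid = [(p, g) for p, g in pairs if p is not None and g is not None]

# Corpus step: Spearman correlation over the valid pairs.
preds, refs = zip(*valid)
r, _ = spearmanr(preds, refs)
coverage = len(valid) / len(pairs)  # fraction of samples with a usable score
print(f"spearman={r:.3f}, coverage={coverage:.2f}")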
Member
@NathanHB I believe we could add these 2 to core metrics, wdyt?

Member
definitely !

@clefourrier (Member) left a comment

Hi! This looks good to me from a glance, thanks for the very detailed work!
Did you try to reproduce with this implementation the results obtained with llm-jp-eval, to make sure it is correct?

@HuggingFaceDocBuilderDev
Collaborator

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@ryan-minato
Contributor Author

> Hi! This looks good to me from a glance, thanks for the very detailed work! Did you try to reproduce with this implementation the results obtained with llm-jp-eval, to make sure it is correct?

Sorry for the delay—I was on vacation last week and didn't check anything on GitHub. Wishing you a Happy New Year!

It seems my earlier explanation may have caused some confusion.
The tasks created here do not entirely follow the llm-jp-eval approach; they are instead based on the Stability-AI/lm-evaluation-harness implementation, which has been unmaintained for over a year and is currently non-functional.

I am in the process of creating the llm-jp-eval task set, but I plan to submit it in a new PR.

@clefourrier
Member

Thanks for the explanation! Do you have any other implementation against which you could check your results?

You'll need to run the code quality checks too :)

@clefourrier
Member

You can also add the 2 metrics I highlighted to the core metrics file if you want, as they are very valuable

@ryan-minato
Contributor Author

> Thanks for the explanation! Do you have any other implementation against which you could check your results?
>
> You'll need to run the code quality checks too :)

I’ll start by fixing the CI tonight and then transfer the metrics to the core.

I might also fix Stability-AI/lm-evaluation-harness to validate the results. That library relies on an outdated Transformers API (it was forked from lm-evaluation-harness before quantization was supported), and some of the datasets it uses have been deprecated, so there could be other unforeseen errors. This could take some time.

@clefourrier
Member

Hm, would you have a simpler way to make sure your results are within range? Maybe a paper reported results with their implementation and you could try to reproduce it?
