
Bib entry conversion incorrect #27

Open
danieldeutsch opened this issue Jul 2, 2021 · 0 comments

Here is a bib entry from ACL:

@inproceedings{bhandari-etal-2020-evaluating,
    title = "Re-evaluating Evaluation in Text Summarization",
    author = "Bhandari, Manik  and
      Gour, Pranav Narayan  and
      Ashfaq, Atabak  and
      Liu, Pengfei  and
      Neubig, Graham",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.751",
    doi = "10.18653/v1/2020.emnlp-main.751",
    pages = "9347--9359",
    abstract = "Automated evaluation metrics as a stand-in for manual evaluation are an essential part of the development of text-generation tasks such as text summarization. However, while the field has progressed, our standard metrics have not {--} for nearly 20 years ROUGE has been the standard evaluation in most summarization papers. In this paper, we make an attempt to re-evaluate the evaluation method for text summarization: assessing the reliability of automatic metrics using top-scoring system outputs, both abstractive and extractive, on recently popular datasets for both system-level and summary-level evaluation settings. We find that conclusions about evaluation metrics on older datasets do not necessarily hold on modern datasets and systems. We release a dataset of human judgments that are collected from 25 top-scoring neural summarization systems (14 abstractive and 11 extractive).",
}

The bib entry from the conversion tool is this:

@InProceedings{BaGALN20,
 Address = {Online},
 Author = { Bh and Manik ari and Pranav Narayan Gour and Atabak Ashfaq and Pengfei Liu and Graham Neubig},
 Booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
 Doi = {10.18653/v1/2020.emnlp-main.751},
 Month = {Nov},
 Pages = {9347--9359},
 Publisher = {Association for Computational Linguistics},
 Title = {{Re-evaluating Evaluation in Text Summarization}},
 Url = {https://aclanthology.org/2020.emnlp-main.751},
 Year = {2020}
}

Somehow the first author's name got messed up. Changing the last name from "Bhandari" to "Bhand" causes an exception in the code. "Bhan" gives the intended result.
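I don't have the converter's source in front of me, but the output is consistent with splitting the author field on the bare substring "and": "Bhandari" contains it ("Bh" + "and" + "ari"), which matches the garbled "Bh and Manik ari". A minimal sketch of the suspected failure mode and a word-boundary fix (the variable names and splitting code here are illustrative, not the tool's actual implementation):

```python
import re

# Author field as it appears in the ACL bib entry (line breaks included).
author_field = ("Bhandari, Manik  and\n"
                "      Gour, Pranav Narayan  and\n"
                "      Ashfaq, Atabak  and\n"
                "      Liu, Pengfei  and\n"
                "      Neubig, Graham")

flat = author_field.replace("\n", " ")

# Suspected bug: splitting on the bare substring "and" also splits inside
# names, so "Bhandari" becomes "Bh" + "ari".
buggy = [a.strip() for a in flat.split("and")]
# → ['Bh', 'ari, Manik', 'Gour, Pranav Narayan', ...]

# Fix: only treat "and" as a separator when it is a standalone word
# surrounded by whitespace.
fixed = [a.strip() for a in re.split(r"\s+and\s+", flat)]
# → ['Bhandari, Manik', 'Gour, Pranav Narayan', 'Ashfaq, Atabak',
#    'Liu, Pengfei', 'Neubig, Graham']
```

This would also explain the "Bhand"/"Bhan" behavior: "Bhand" splits into "Bh" plus an empty trailing piece (which could raise an exception downstream), while "Bhan" contains no "and" and parses cleanly.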
