Why do we need to write a function to "Compute Metrics" with Huggingface Question Answering Trainer when evaluating SQuAD?


Question

Currently, I'm trying to build an extractive QA pipeline, following the Huggingface Course on the matter. There, they show how to create a compute_metrics() function to evaluate the model after training. However, I was wondering if there's a way to obtain those metrics during training, by passing the compute_metrics() function directly to the trainer. In the course they monitor only the training loss, and I would like to have the evaluation F1 score during training.

But, as I see it, this might be a little tricky, because the SQuAD metric needs the original answer spans, and those spans are not carried over into your tokenized training dataset.

    from evaluate import load

    metric = load("squad")

    predicted_answers = [{'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}]
    theoretical_answers = [{'id': '56be4db0acb8001400a502ec', 'answers': {'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}}]

    metric.compute(predictions=predicted_answers, references=theoretical_answers)

That's why they build the whole compute_metrics() function, which takes a few extra parameters beyond the predictions output by the evaluation loop: they need those extra inputs to rebuild the answer spans.
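For reference, here is a condensed sketch of what that course-style function looks like; the n_best / max_answer_length values and the "example_id" / "offset_mapping" columns assume the preprocessing from the course chapter, so adjust if yours differs:

    import collections
    import evaluate
    import numpy as np

    metric = evaluate.load("squad")
    n_best, max_answer_length = 20, 30  # values used in the course chapter

    def compute_metrics(start_logits, end_logits, features, examples):
        # Map each original example to the tokenized features that came from it.
        example_to_features = collections.defaultdict(list)
        for idx, feature in enumerate(features):
            example_to_features[feature["example_id"]].append(idx)

        predicted_answers = []
        for example in examples:
            context, answers = example["context"], []
            for feature_index in example_to_features[example["id"]]:
                start_logit = start_logits[feature_index]
                end_logit = end_logits[feature_index]
                offsets = features[feature_index]["offset_mapping"]
                # Consider the n_best start/end positions and keep only valid spans.
                for start_index in np.argsort(start_logit)[-1 : -n_best - 1 : -1]:
                    for end_index in np.argsort(end_logit)[-1 : -n_best - 1 : -1]:
                        if offsets[start_index] is None or offsets[end_index] is None:
                            continue
                        if end_index < start_index or end_index - start_index + 1 > max_answer_length:
                            continue
                        answers.append({
                            "text": context[offsets[start_index][0] : offsets[end_index][1]],
                            "logit_score": start_logit[start_index] + end_logit[end_index],
                        })
            best = max(answers, key=lambda x: x["logit_score"])["text"] if answers else ""
            predicted_answers.append({"id": example["id"], "prediction_text": best})

        theoretical_answers = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
        return metric.compute(predictions=predicted_answers, references=theoretical_answers)

The point is exactly the one above: besides the logits, it needs the tokenized features and the original examples to map token offsets back to character spans.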

Q: How do I make the squad metric from evaluate output F1 and accuracy scores? How do I use the squad metric with the Trainer object?


Answer 1

Score: 5

The compute_metrics function can be passed into the Trainer so that it validates on the metrics you need, e.g.

    from transformers import Trainer

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=validation_dataset,
        tokenizer=tokenizer,
        compute_metrics=compute_metrics,
    )
    trainer.train()

I'm not sure if it works out of the box with the code that processes the train_dataset and validation_dataset in the course code: https://huggingface.co/course/chapter7

But this one shows how the Trainer + compute_metrics combination works: https://huggingface.co/course/chapter3/3
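For reference, a minimal compute_metrics in the style of that chapter looks like this (it's a text-classification example, so it reports accuracy and F1 via the glue/mrpc metric rather than the SQuAD metric):

    import numpy as np
    import evaluate

    metric = evaluate.load("glue", "mrpc")  # reports accuracy and F1 for this task

    def compute_metrics(eval_preds):
        # eval_preds unpacks into (predictions, label_ids)
        logits, labels = eval_preds
        predictions = np.argmax(logits, axis=-1)
        return metric.compute(predictions=predictions, references=labels)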



Before proceeding to read the rest of the answer, here's some disclaimers:


And now, here goes...

Firstly, let's take a look at what the evaluate library is/does.

From https://huggingface.co/spaces/evaluate-metric/squad

    from evaluate import load

    squad_metric = load("squad")
    predictions = [{'prediction_text': '1976', 'id': '56e10a3be3433e1400422b22'}]
    references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': '56e10a3be3433e1400422b22'}]
    results = squad_metric.compute(predictions=predictions, references=references)
    print(results)

[out]:

    {'exact_match': 100.0, 'f1': 100.0}

Next, we take a look at what the compute_metrics argument in the Trainer expects

From Line 600 https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py

    metric = evaluate.load("squad_v2" if data_args.version_2_with_negative else "squad")

    def compute_metrics(p: EvalPrediction):
        return metric.compute(predictions=p.predictions, references=p.label_ids)

    # Initialize our Trainer
    trainer = QuestionAnsweringTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        eval_examples=eval_examples if training_args.do_eval else None,
        tokenizer=tokenizer,
        data_collator=data_collator,
        post_process_function=post_processing_function,
        compute_metrics=compute_metrics,
    )
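For context, the post_processing_function in that same script is what turns the raw start/end logits back into SQuAD-format answer strings and wraps them in an EvalPrediction. Below is a heavily condensed sketch of it (the postprocess_qa_predictions helper comes from utils_qa.py next to run_qa.py, and its optional arguments are left at their defaults):

    from transformers import EvalPrediction
    from utils_qa import postprocess_qa_predictions  # helper shipped with run_qa.py

    def post_processing_function(examples, features, predictions, stage="eval"):
        # Decode start/end logits into one answer string per example id.
        predictions = postprocess_qa_predictions(
            examples=examples,
            features=features,
            predictions=predictions,
            prefix=stage,
        )
        formatted_predictions = [
            {"id": k, "prediction_text": v} for k, v in predictions.items()
        ]
        references = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
        # This EvalPrediction is what arrives as `p` inside compute_metrics above.
        return EvalPrediction(predictions=formatted_predictions, label_ids=references)

That is why compute_metrics in run_qa.py can call metric.compute directly on p.predictions and p.label_ids: the spans have already been rebuilt before it runs.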

The compute_metrics argument in the QuestionAnsweringTrainer is expecting a function that:

  • [in]: Takes in an EvalPrediction object as input
  • [out]: Returns a dict of key-value pairs where the key is the name of the output metric (a string) and the value is expected to be a floating-point number

Un momento! (Wait a minute!) What are these QuestionAnsweringTrainer and EvalPrediction objects?

Q: Why are you not using the normal Trainer object?

A: The QuestionAnsweringTrainer is a specific sub-class of the Trainer object that is used for the QA task. If you're going to train a model to evaluate on the SQuAD dataset, then the QuestionAnsweringTrainer is the most appropriate Trainer object to use.
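It lives alongside run_qa.py in the examples folder (trainer_qa.py) rather than in the library itself. Roughly, it extends Trainer so that evaluate() also receives the untokenized eval_examples and runs a post_process_function before computing metrics. A simplified sketch (details such as logging prefixes and callbacks are trimmed, so treat it as an approximation rather than the exact source):

    from transformers import Trainer

    class QuestionAnsweringTrainer(Trainer):
        def __init__(self, *args, eval_examples=None, post_process_function=None, **kwargs):
            super().__init__(*args, **kwargs)
            self.eval_examples = eval_examples
            self.post_process_function = post_process_function

        def evaluate(self, eval_dataset=None, eval_examples=None, ignore_keys=None):
            eval_dataset = self.eval_dataset if eval_dataset is None else eval_dataset
            eval_examples = self.eval_examples if eval_examples is None else eval_examples
            eval_dataloader = self.get_eval_dataloader(eval_dataset)

            # Temporarily disable compute_metrics: the raw loop should only gather logits.
            compute_metrics = self.compute_metrics
            self.compute_metrics = None
            try:
                output = self.evaluation_loop(
                    eval_dataloader, description="Evaluation", ignore_keys=ignore_keys
                )
            finally:
                self.compute_metrics = compute_metrics

            # Rebuild text answers from the logits, then score them.
            eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions)
            metrics = self.compute_metrics(eval_preds)
            self.log(metrics)
            return metrics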

[Suggestion]: The HuggingFace devs and dev advocates should probably add some notes on the QuestionAnsweringTrainer object to https://huggingface.co/course/chapter7/7?fw=pt

Q: What is this EvalPrediction object then?

A: Officially, I guess it's this: https://discuss.huggingface.co/t/what-does-evalprediction-predictions-contain-exactly/1691/5

If we look at the doc, https://huggingface.co/docs/transformers/internal/trainer_utils, and the code, it looks like the object is a custom container class that holds the (i) predictions, (ii) label_ids and (iii) inputs np.ndarray. These are what the model's inference step needs to return in order for compute_metrics to work as expected.

    # (typing / numpy imports added so the excerpt stands alone)
    from typing import Optional, Tuple, Union

    import numpy as np

    class EvalPrediction:
        """
        Evaluation output (always contains labels), to be used to compute metrics.

        Parameters:
            predictions (`np.ndarray`): Predictions of the model.
            label_ids (`np.ndarray`): Targets to be matched.
            inputs (`np.ndarray`, *optional*)
        """

        def __init__(
            self,
            predictions: Union[np.ndarray, Tuple[np.ndarray]],
            label_ids: Union[np.ndarray, Tuple[np.ndarray]],
            inputs: Optional[Union[np.ndarray, Tuple[np.ndarray]]] = None,
        ):
            self.predictions = predictions
            self.label_ids = label_ids
            self.inputs = inputs

        def __iter__(self):
            if self.inputs is not None:
                return iter((self.predictions, self.label_ids, self.inputs))
            else:
                return iter((self.predictions, self.label_ids))

        def __getitem__(self, idx):
            if idx == 0:
                return self.predictions
            elif idx == 1:
                return self.label_ids
            elif idx == 2:
                return self.inputs

Hey, you still haven't answered the question of how I can use the evaluate.load("squad") metric directly as the compute_metrics arg!

Yes, for now you can't use it directly, but the wrapper is simple.

Step 1: Make sure the model you want to use outputs the required EvalPrediction object that contains predictions and label_ids.

If you're using most of the models supported for QA in Huggingface's transformers library, they should already output the expected EvalPrediction.

Otherwise, take a look at models supported by https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering

Step 2: Since the model inference outputs an EvalPrediction but compute_metrics is expected to return a dictionary of outputs, you have to wrap the evaluate metric's compute function.

E.g.

    metric = evaluate.load("squad_v2" if data_args.version_2_with_negative else "squad")

    def compute_metrics(p: EvalPrediction):
        return metric.compute(predictions=p.predictions, references=p.label_ids)

Q: Do we really always need to write that wrapper function?

A: For now, yes. It is by design not directly integrated with the outputs of the evaluate metrics, to give the different metrics' developers freedom to define what they want their inputs/outputs to look like.

But there might be hope to make compute_metrics more integrated with evaluate.metric if someone picks this feature request up! https://discuss.huggingface.co/t/feature-request-adding-default-compute-metrics-to-popular-evaluate-metrics/33909/3
