Pass correct num_items_in_batch value into the training_step function #35438

Open
wants to merge 1 commit into main
Conversation

@hiyouga (Contributor) commented Dec 27, 2024

What does this PR do?

This PR follows #34511 and #34915 and partially reverts #35121. It aims to handle the loss calculation correctly for models that don't accept loss_kwargs when gradient accumulation is enabled.

In #35121, the author always passes num_items_in_batch into the training_step function, which makes the check in the following lines (num_items_in_batch is None) always false; as a result, the loss of models that ignore loss_kwargs is never divided by the gradient accumulation steps, and the loss calculation is incorrect.

# Finally we need to normalize the loss for reporting
if num_items_in_batch is None:
    loss = loss / self.args.gradient_accumulation_steps

self.accelerator.backward(loss, **kwargs)
return loss.detach()
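
This division is what keeps gradient accumulation equivalent to a single large-batch step for models whose forward returns a mean loss. A standalone illustration of that equivalence (plain PyTorch, not Trainer code; the toy data and names are ours):

import torch

torch.manual_seed(0)
data = torch.randn(8, 3)
ga_steps = 4

# One full-batch step.
w_full = torch.zeros(3, requires_grad=True)
(data @ w_full).mean().backward()

# Gradient accumulation: divide each micro-batch mean loss by ga_steps
# before backward so the accumulated gradient matches the full-batch one.
w_acc = torch.zeros(3, requires_grad=True)
for chunk in data.chunk(ga_steps):
    ((chunk @ w_acc).mean() / ga_steps).backward()

print(torch.allclose(w_full.grad, w_acc.grad))  # True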

Currently, many models (such as Qwen2-VL) still don't accept loss_kwargs. For these models, we need to set the num_items_in_batch parameter of the training_step function to None so that the loss is scaled correctly during training, as sketched below.
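
A minimal sketch of the idea (not necessarily the exact diff in this PR; it assumes the model_accepts_loss_kwargs flag that recent Trainer versions derive from the model's forward signature):

# Inside the training loop: only forward num_items_in_batch to training_step
# when the model can actually consume it, so the normalization branch shown
# above stays reachable for models without loss_kwargs.
if not getattr(self, "model_accepts_loss_kwargs", False):
    num_items_in_batch = None
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)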

However, we have found that since transformers 4.46.0, the loss of models that don't accept loss_kwargs has not been calculated correctly!

Specifically, in transformers 4.46.0-4.46.1, the loss was wrongly multiplied by the gradient accumulation steps, resulting in an inflated loss value (see https://twitter.com/TheZachMueller/status/1851677628656423347 ).

https://github.com/huggingface/transformers/blob/v4.46.0/src/transformers/trainer.py#L3605

loss *= self.args.gradient_accumulation_steps
self.accelerator.backward(loss, **kwargs)
return loss.detach() / self.args.gradient_accumulation_steps

In transformers 4.46.2-4.47.1, the loss was only scaled after the backward pass, which also led to overly large gradients (also mentioned in #35207).

https://github.com/huggingface/transformers/blob/v4.47.1/src/transformers/trainer.py#L3689

self.accelerator.backward(loss, **kwargs)
# Finally we need to normalize the loss for reporting
if num_items_in_batch is None:
    return loss.detach() / self.args.gradient_accumulation_steps
return loss.detach()
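
Scaling only the detached return value does not change what backward() has already accumulated. A standalone illustration (plain PyTorch, not Trainer code; the toy loss is ours):

import torch

w = torch.ones(1, requires_grad=True)
ga_steps = 8

for _ in range(ga_steps):
    loss = (2 * w).sum()                 # stand-in for a per-micro-batch mean loss
    loss.backward()                      # gradient accumulates unscaled
    reported = loss.detach() / ga_steps  # only the logged value is scaled down

print(w.grad)  # tensor([16.]) -- 8x the intended gradient of 2.0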

On the latest main branch, due to the issue described in this PR, the loss is not scaled correctly either, and the reported loss remains too large. We hope the maintainers will give this problem due attention; we look forward to a timely fix.

We also ran a set of experiments to verify the above conclusion. We trained the Qwen2-VL model (which does not accept loss_kwargs) with transformers 4.45.2, the main branch (5c75087), and the main branch with this PR applied. Both version 4.45.2 and the patched version give reasonable loss values and grad norms, while the main branch does not.

| transformers version | micro batch size | grad accumulation steps | avg loss | avg grad norm |
|---|---|---|---|---|
| 4.45.2 | 4 | 2 | 1.17 | 0.32 |
| 4.45.2 | 1 | 8 | 0.99 | 0.43 |
| main (4.48.0.dev0) | 4 | 2 | 2.34 | 0.65 |
| main (4.48.0.dev0) | 1 | 8 | 7.93 | 3.52 |
| main + this PR | 4 | 2 | 1.17 | 0.32 |
| main + this PR | 1 | 8 | 0.99 | 0.43 |
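
Note that the main-branch loss scales with the number of accumulation steps (2.34 ≈ 2 × 1.17 and 7.93 ≈ 8 × 0.99), which is consistent with the loss not being divided by gradient_accumulation_steps.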


Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@ArthurZucker @muellerzr

@hiyouga hiyouga changed the title Pass correct num_items_in_batch value into the training_step method Pass correct num_items_in_batch value into the training_step function Dec 27, 2024
@hiyouga (Contributor, Author) commented Dec 27, 2024

Almost all test cases have passed, except for a few that encountered a timeout error.

pipelines_tf - Failed
tests_exotic_models - Success
collection_job - Success
tests_onnx - Success
tests_torch_and_tf - Success
tests_torch - Success
tests_hub - Success
tests_processors - Success
examples_torch - Success
tests_custom_tokenizers - Success
tests_generate - Success
tests_tf - Success
tests_non_model - Success
examples_tensorflow - Success
tests_torch_and_flax - Success
tests_tokenization - Success
pipelines_torch - Success
