
dpo_trainer gather metrics across ranks before logging #2474

Open

wants to merge 2 commits into main
Conversation

@zhc7 zhc7 commented Dec 13, 2024

According to #2468.

What does this PR do?

Fixes #2468

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@@ -1424,7 +1424,11 @@ def log(self, logs: dict[str, float], start_time: Optional[float] = None) -> None:
     train_eval = "train" if "loss" in logs else "eval"
     # Add averaged stored metrics to logs
     for key, metrics in self._stored_metrics[train_eval].items():
-        logs[key] = torch.tensor(metrics).mean().item()
+        if isinstance(metrics[0], torch.Tensor):
+            gathered = self._nested_gather([m.cuda() for m in metrics])
Member:

do you need .cuda()?

Member:

maybe self.accelerator.gather(metrics).mean().item() would be simpler?

Author (@zhc7):

> do you need .cuda()?

metrics are moved to cpu before this point, and some backends (e.g. nccl) do not support gathering tensors on cpu. But I admit .cuda() here loses some generality. Maybe .to(self.accelerator.device) is better?
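The device-placement point can be sketched in isolation. A minimal illustration, assuming torch is installed and using a locally chosen `device` as a stand-in for `self.accelerator.device`:

```python
import torch

# Stored metrics live on CPU, but NCCL-based collectives require GPU
# tensors (Gloo works on CPU). Moving to a device attribute rather than
# calling .cuda() keeps the code backend-agnostic.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

cpu_metric = torch.tensor(0.5)   # a stored metric, lives on CPU
ready = cpu_metric.to(device)    # works whether or not a GPU is present

print(ready.device.type == device.type)
```

On a CPU-only machine `.cuda()` would raise, while `.to(device)` is a no-op; on a GPU machine both move the tensor.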

@zhc7 (Author) commented Dec 14, 2024:

> maybe self.accelerator.gather(metrics).mean().item() would be simpler?

I agree self.accelerator.gather is better, but metrics in the loop is a list[torch.Tensor] or a list[float], so gather actually returns a list[torch.Tensor]. So I think I should change it to:

            if isinstance(metrics[0], torch.Tensor):
                gathered = self.accelerator.gather([m.to(self.accelerator.device) for m in metrics])
                metrics = [g.mean() for g in gathered]
            meaned = torch.tensor(metrics).mean()
            logs[key] = meaned.item()

I know creating a new tensor from metrics seems a little weird, but that is how it was originally written. I don't know why, but I don't want to break anything, so I left it there.
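The semantics of the fix can be checked without a distributed launch. A minimal pure-Python simulation of the gather-then-average logic, with made-up per-rank values for illustration:

```python
# Two simulated ranks, each having stored one scalar metric per logging step.
# Old behaviour: each rank averages only its local list, so logged values
# differ across ranks. New behaviour: each step's scalar is gathered across
# ranks, averaged per step, then averaged over steps, so every rank logs
# the same global value.

rank_metrics = [
    [1.0, 2.0],   # rank 0: metric value at step 1, step 2
    [3.0, 5.0],   # rank 1
]

def mean(xs):
    return sum(xs) / len(xs)

# Old: per-rank local means (rank-dependent)
local = [mean(m) for m in rank_metrics]

# New: "gather" each step across ranks, average per step, then over steps
gathered = list(zip(*rank_metrics))          # [(1.0, 3.0), (2.0, 5.0)]
global_mean = mean([mean(step) for step in gathered])

print(local)        # [1.5, 4.0] -- ranks disagree
print(global_mean)  # 2.75 -- identical on every rank
```

With equal counts per rank, averaging per-step gathers and then over steps equals the plain mean of all values, matching what a single-process run would log.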

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Successfully merging this pull request may close these issues.

DPOTrainer log metrics are not gathered and meaned across ranks