[BUG]: Gemini saved an additional portion of the weights while using tie_word_embeddings=True #6160

Open · 1 task done
ericxsun opened this issue Dec 13, 2024 · 0 comments
Labels: bug Something isn't working

ericxsun commented Dec 13, 2024

Is there an existing issue for this bug?

  • I have searched the existing issues

🐛 Describe the bug

When fine-tuning a base model (Qwen2.5 3B Base) that uses tie_word_embeddings=True with GeminiPlugin and saving a sharded checkpoint, I noticed an extra entry for the tied output head being saved:

    "lm_head.weight": "pytorch_model-00004.bin"

This leads to the following error when reloading the saved checkpoint:

    [rank2]: RuntimeError: Error(s) in loading state_dict for GeminiDDP:
    [rank2]:     Unexpected key(s) in state_dict: "lm_head.weight".

Could you provide some advice on how to resolve this issue and avoid saving the unneeded weight?
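
Until the tied weight is skipped at save time, a possible post-hoc workaround is to strip the stray entry from the checkpoint before reloading. This is only a sketch: it assumes the sharded layout implied above (a pytorch_model.bin.index.json index plus pytorch_model-*.bin shards), and the "ckpt" directory name is a hypothetical placeholder.

    import json
    import os
    import torch

    ckpt_dir = "ckpt"  # hypothetical checkpoint directory
    index_path = os.path.join(ckpt_dir, "pytorch_model.bin.index.json")

    with open(index_path) as f:
        index = json.load(f)

    # Remove the tied duplicate from the index so loaders skip it.
    shard_name = index["weight_map"].pop("lm_head.weight", None)
    if shard_name is not None:
        shard_path = os.path.join(ckpt_dir, shard_name)
        shard = torch.load(shard_path, map_location="cpu")
        shard.pop("lm_head.weight", None)  # drop the tensor from the shard too
        torch.save(shard, shard_path)

    with open(index_path, "w") as f:
        json.dump(index, f, indent=2)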

Environment

Python: 3.11
colossalai: 0.4.6
