apply class transformers.SequenceBiasLogitsProcessor on Qwen model #35432

Open
buptspig opened this issue Dec 27, 2024 · 0 comments
Labels
Feature request Request for a new feature

Comments

@buptspig

Feature request

I noticed that you wrote:

> In order to get the token ids of the sequences that you want to bias, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids. The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers' prefixing behaviours come from pre tokenizers. Read more here.

So if I am using a Qwen model, whose tokenizer is a fast tokenizer, does that mean I can't use this bias logits feature?

Is there any way I can use the Qwen model and still use this feature? A sketch of the workaround I have in mind is below.
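To make this concrete, here is a minimal sketch of what I am hoping will work. Since fast tokenizers don't accept `add_prefix_space=True` at call time, I prepend the space to each word manually before encoding (my assumption is that this reproduces the prefix-space behaviour for byte-level BPE tokenizers like Qwen's), then pass the resulting ids via the `sequence_bias` argument of `generate()`. The checkpoint name and the `-10.0` bias value are just placeholders for illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any Qwen causal-LM checkpoint should behave the same.
model_name = "Qwen/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # loads a fast tokenizer
model = AutoModelForCausalLM.from_pretrained(model_name)

def get_tokens_as_tuple(word):
    # Fast tokenizers reject add_prefix_space=True at call time, so prepend
    # the space manually to get the ids of the word with a leading space.
    # (Assumption: for byte-level BPE this matches the slow-tokenizer recipe.)
    return tuple(tokenizer([" " + word], add_special_tokens=False).input_ids[0])

# A negative bias discourages these token sequences; float("-inf") would ban them.
sequence_bias = {get_tokens_as_tuple("beach"): -10.0}

inputs = tokenizer("I am going to the", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8, sequence_bias=sequence_bias)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If manually prepending the space is not actually equivalent for Qwen's tokenizer, it would be great to have a documented, officially supported way to build the bias dictionary for fast tokenizers.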

Motivation

The documented approach does not work with newer models like Qwen; it would be good to update it.

Your contribution

None.

buptspig added the Feature request label on Dec 27, 2024