Feature request
I noticed that you wrote:
In order to get the token ids of the sequences that you want to bias, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids. The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers’ prefixing behaviours come from pre tokenizers. Read more here.
So if I am using a Qwen model, whose tokenizer is a fast tokenizer, does that mean I can't use this logit-bias feature?
Is there any way I can use a Qwen model and still use this feature?
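One possible workaround (my own sketch, not official transformers guidance): since `add_prefix_space` can't be set when initializing a fast tokenizer, prepend a literal space to each word before tokenizing, so the tokenizer sees the word in mid-sentence position. The helper below assumes a Hugging Face-style tokenizer callable; the model name in the usage comment is just an example.

```python
def get_bias_token_ids(tokenizer, words):
    """Return token-id lists for `words` as they appear after a space.

    `tokenizer` is any callable with the Hugging Face signature
    tokenizer(texts, add_special_tokens=...) returning an object with
    an .input_ids attribute, e.g. one loaded via AutoTokenizer.
    """
    # Prepending " " mimics add_prefix_space=True for fast tokenizers,
    # which handle prefix spaces in their pre-tokenizer step.
    spaced = [" " + w for w in words]
    return tokenizer(spaced, add_special_tokens=False).input_ids


# Usage with a real model would look like (requires downloading the tokenizer):
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B")  # example model
#   ids = get_bias_token_ids(tok, ["badword"])
#   sequence_bias = {tuple(seq): -10.0 for seq in ids}
#   model.generate(**inputs, sequence_bias=sequence_bias)
```

Whether the space-prefixed ids exactly match what `add_prefix_space=True` would produce depends on the specific tokenizer's pre-tokenizer, so it is worth spot-checking the ids by decoding them back.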
Motivation
The documented approach does not work with newer models like Qwen; please update it.
Your contribution
None.