How to dequantize a model with 4 groups and centroids greater than 4096? #128
Comments
Sorry about that. This looks like a bug in the multi-group CUDA kernel. If you want to evaluate accuracy for now, you can use the Torch version and perform layer-by-layer evaluation, although it might be a bit slow. We'll work on fixing this part soon. I'm a bit busy these days, but I'll look into it as soon as possible. Thank you for your patience!
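For reference, the layer-by-layer Torch path the maintainer mentions boils down to reconstructing each layer's weights by looking up centroid vectors by index, per group. Below is a minimal NumPy sketch of that lookup; the function and array names are illustrative assumptions, not the actual VPTQ API.

```python
import numpy as np

def dequantize_layer(indices, centroids):
    """Reconstruct one layer's quantized vectors from VQ indices.

    indices:   (group_num, n_vectors) int array, entries in [0, num_centroids)
    centroids: (group_num, num_centroids, vector_dim) float codebooks
    Returns:   (group_num, n_vectors, vector_dim) reconstructed vectors.
    """
    group_num, num_centroids, vector_dim = centroids.shape
    out = np.empty((group_num, indices.shape[1], vector_dim),
                   dtype=centroids.dtype)
    for g in range(group_num):
        # Each group has its own codebook; gather rows by index.
        out[g] = centroids[g][indices[g]]
    return out

# Tiny example matching the thread's setting: 4 groups, 8192 centroids.
rng = np.random.default_rng(0)
cents = rng.standard_normal((4, 8192, 8)).astype(np.float32)
idx = rng.integers(0, 8192, size=(4, 16))
w = dequantize_layer(idx, cents)
print(w.shape)  # (4, 16, 8)
```

Running this per layer on CPU avoids the CUDA kernel entirely, which is why it is slow but usable for accuracy checks.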
Okay, thanks for the notice. Looking forward to the update.
Hi @ShawnzzWu, would you mind sharing your quantized model so I can debug it?
Sorry, for information security reasons I'm not allowed to share the file with you directly, but I basically only changed the --group_num and --num_centroids options during quantization.
Could you share the configuration or args used for quantization? Thanks!
`python -u run_vptq.py`
These are the quantization args I can provide. The quantization process looked fine to me, but I ran into an error when running the model with `python -m vptq --model=VPTQ-community/Meta-Llama-3.1-8B --chat`.
I've been trying to quantize and run the Meta-Llama-3.1-8B-Instruct model at 2.3 bits with the group number set to 4, and the model runs successfully when k1 (the number of centroids) is 4096, as in the paper. However, any k1 setting above that (8192, 16384, 65536) leads to a successful quantization but a failure when running the model. The error logs suggest the cause is an illegal memory access during the dequant function.
So here's what I want to ask: does the code support running a model with the group number option enabled and centroids set to 8K or greater? Or do I need to make some adjustments to get it to work?
Looking forward to your reply.
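As background on the symptom: an "illegal memory access" in a dequant kernel is often what an out-of-range codebook index looks like on the GPU, e.g. if indices were sized for a 4096-entry codebook (12 bits) but read against an 8192-entry one (13 bits). The sketch below shows a CPU-side sanity check for that; this is a hypothetical diagnostic, not part of the VPTQ codebase, and the specific cause here is an assumption until the maintainers debug it.

```python
import numpy as np

def count_bad_indices(indices, num_centroids):
    """Count index values that would read past the centroid table.

    On the GPU an out-of-range codebook lookup is undefined behavior
    and typically surfaces as 'illegal memory access'; on the CPU we
    can detect it explicitly before anything crashes.
    """
    bad = (indices < 0) | (indices >= num_centroids)
    return int(bad.sum())

# 4096 centroids need 12-bit indices; 8192 need 13 bits.
print(int(np.log2(4096)), int(np.log2(8192)))  # 12 13

# One corrupted entry exceeding an 8192-entry codebook.
idx = np.array([[0, 4095, 9000, 12]])
print(count_bad_indices(idx, 8192))  # 1
```

Running a check like this over the saved index tensors of a failing model would confirm or rule out bad indices as the cause.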