Enable FP8 Per-Tensor Scales and Integrate Marlin/MoE Kernels Repo for ROCm #2825
base: main
Conversation
RUN git clone https://github.com/danieldk/marlin-kernels.git && \
    cd marlin-kernels && \
    git checkout ${MARLIN_KERNELS_BRANCH} && \
    python setup.py install
Not sure how hard it'll be, but it would be nice to have precompiled wheels in the future to cut down the build times.
def per_tensor_dequantize(
    tensor: torch.Tensor, inv_scale: Union[float, torch.Tensor]
) -> torch.Tensor:
    fake_qweight = tensor.to(torch.float16)
I think this should be the model dtype? (The quantized numbers could represent values that are only in `bfloat16`'s range.)
Yes, made the changes.
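For reference, one way the suggested change could look (a sketch, not necessarily the exact diff in this PR): thread the model dtype through instead of hard-coding `float16`.

```python
from typing import Union

import torch


def per_tensor_dequantize(
    tensor: torch.Tensor,
    inv_scale: Union[float, torch.Tensor],
    dtype: torch.dtype = torch.float16,
) -> torch.Tensor:
    # Upcast the FP8 tensor to the model dtype (rather than a hard-coded
    # float16) before applying the inverse scale.
    fake_qweight = tensor.to(dtype)
    dq_weight = fake_qweight * inv_scale
    return dq_weight
```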
input_scale = (
    weights.get_tensor(f"{prefix}.input_scale", to_dtype=False)
    .reshape(-1)
    .max()
)
Is this safe? If `input_scale` is a vector, doesn't it need to be dequantized with the original scales first and then requantized with the new max?
I think this should be fine since it is the input_scale. Dequantization doesn’t make sense in this context because the input will be unquantized and will simply be quantized using this representative scale.
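For illustration, a minimal sketch (hypothetical values, not code from this PR) of why taking the max works here: the incoming activations are still unquantized, so the collapsed per-tensor scale is only used to quantize them, and a larger (max) scale is simply more conservative.

```python
import torch

# Hypothetical per-channel input scales from a checkpoint.
input_scale = torch.tensor([0.010, 0.020, 0.015])
per_tensor_scale = input_scale.reshape(-1).max()  # representative per-tensor scale

# Unquantized activations entering the layer.
x = torch.randn(4, 3, dtype=torch.float16)

# Quantize directly with the representative scale; there is nothing to dequantize first.
# (On ROCm the FP8 dtype would typically be float8_e4m3fnuz instead.)
finfo = torch.finfo(torch.float8_e4m3fn)
x_fp8 = (x.float() / per_tensor_scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
```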
What does this PR do?

Fixes the following issues:

- Enables FP8 per-tensor scales so that `scaled_mm` can be used efficiently.
- Integrates the `marlin_kernels` and `moe_kernels` repositories for ROCm (see the sketch after this list).
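As a rough sketch of how such kernel packages are usually consumed (an assumed pattern, not the exact code in this PR), they are imported optionally so the server still runs in environments where the wheels are not installed:

```python
# Assumed optional-import pattern; names mirror the repositories referenced above.
try:
    import marlin_kernels
except ImportError:
    marlin_kernels = None

try:
    import moe_kernels
except ImportError:
    moe_kernels = None


def has_marlin() -> bool:
    """Hypothetical helper: report whether the Marlin kernels are usable."""
    return marlin_kernels is not None
```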
Before submitting

- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.