Describe the bug
It seems that Hyper-FLUX.1-dev-8steps-lora does not work with an fp8-quantized Flux-dev: the generated image is the same whether or not Hyper-FLUX.1-dev-8steps-lora is loaded.
Here is my code. Has anyone managed to use Hyper-FLUX.1-dev-8steps-lora on Flux-dev-fp8?
import os

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from optimum.quanto import freeze, qfloat8, quantize
from safetensors.torch import load_file
from transformers import T5EncoderModel

# Load the transformer in bf16, quantize its weights to fp8, and freeze them.
self.transformer = FluxTransformer2DModel.from_single_file(os.path.join(self.model_root, self.config["transformer_path"]), torch_dtype=torch.bfloat16).to(self.device)
quantize(self.transformer, weights=qfloat8)
freeze(self.transformer)
# Quantize and freeze the T5 text encoder the same way.
self.text_encoder_2 = T5EncoderModel.from_pretrained(os.path.join(self.model_root, self.config["text_encoder_2_repo"]), torch_dtype=torch.bfloat16).to(self.device)
quantize(self.text_encoder_2, weights=qfloat8)
freeze(self.text_encoder_2)
# Build the pipeline without these two components, then attach the quantized versions.
self.pipe = FluxPipeline.from_pretrained(os.path.join(self.model_root, self.config["flux_repo"]), transformer=None, text_encoder_2=None, torch_dtype=torch.bfloat16).to(self.device)
self.pipe.transformer = self.transformer
self.pipe.text_encoder_2 = self.text_encoder_2
# Load the Hyper-FLUX 8-step LoRA and fuse it into the weights.
self.pipe.load_lora_weights(load_file(os.path.join(self.model_root, self.config["8steps_lora"]), device=self.device), adapter_name="8steps")
self.pipe.fuse_lora(lora_scale=1.0)
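For reference, a quick way to confirm whether the LoRA is actually being applied is to generate with a fixed seed before and after fusing it. This is only a sketch: the prompt is made up, and `pipe` stands for the pipeline built above, before load_lora_weights is called.

import numpy as np
import torch

# Hypothetical test prompt; any prompt works as long as the seed is fixed.
prompt = "a cat holding a sign that says hello world"

gen = torch.Generator(device="cuda").manual_seed(0)
baseline = pipe(prompt, num_inference_steps=8, generator=gen).images[0]

# Now load and fuse the 8-step LoRA as in the report:
# pipe.load_lora_weights(..., adapter_name="8steps")
# pipe.fuse_lora(lora_scale=1.0)

gen = torch.Generator(device="cuda").manual_seed(0)
fused = pipe(prompt, num_inference_steps=8, generator=gen).images[0]

# If the LoRA is applied, the two images should differ; identical pixels
# reproduce the bug described above.
print("identical:", np.array_equal(np.array(baseline), np.array(fused)))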
Reproduction
Run the code under "Describe the bug" above: the generated image is the same whether or not the 8-step LoRA is loaded and fused.
Logs
No response
I found the cause. If you load two LoRAs, A and B, and set their weights in separate calls:
self.pipe.set_adapters(["A"], adapter_weights=[0.125])
self.pipe.set_adapters(["B"], adapter_weights=[0.85])
then LoRA A has no effect, presumably because each set_adapters call replaces the list of active adapters rather than adding to it. The weights must be set together, in a single call:
self.pipe.set_adapters(["A", "B"], adapter_weights=[0.125, 0.85])
So there must be a bug in the set_adapters function.
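For anyone hitting the same thing, here is a minimal sketch of the failing and working patterns. The checkpoint and LoRA paths are placeholders, and it rests on the assumption above: that each set_adapters call replaces the active-adapter list rather than appending to it.

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")

# Placeholder LoRA files; adapter names are arbitrary labels.
pipe.load_lora_weights("path/to/lora_a.safetensors", adapter_name="A")
pipe.load_lora_weights("path/to/lora_b.safetensors", adapter_name="B")

# Broken: the second call replaces the active-adapter list, so "A" is dropped.
# pipe.set_adapters(["A"], adapter_weights=[0.125])
# pipe.set_adapters(["B"], adapter_weights=[0.85])

# Working: activate both adapters, each with its own weight, in one call.
pipe.set_adapters(["A", "B"], adapter_weights=[0.125, 0.85])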