How to convert Stable Diffusion model from FP16 to FP32

I found that my video card, a GTX 1660 Super, doesn’t fully support FP16 models, so sometimes I need to convert FP16 models to FP32. Here are the Python scripts I usually use for that…

How to use the scripts

Find your Python executable; in my case it is located at

  • D:\AI\automatic1111\system\python\python.exe

Then you can launch python.exe and paste the script code in, or you can save the script to a file, e.g. script.py, and run it from the folder where the script is located:

D:\AI\automatic1111\system\python\python.exe script.py

Script to check which floating-point precision the model uses

from safetensors.torch import load_file

model_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting.safetensors"
model = load_file(model_path)

for key, value in model.items():
    print(f"{key}: {value.dtype}")
    break  # only the first tensor; remove the break to list all of them

(!) Don’t forget to change the model path to yours.

The script will output something like

alphas_cumprod: torch.float16

and from that you can tell the model uses FP16.
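The script above stops after the first tensor, but a model can mix precisions (for example, a mostly-FP16 model with a few FP32 tensors). A small sketch that tallies every dtype in the state dict — here `model` is a hypothetical in-memory stand-in; in practice you would get it from `load_file(model_path)` as above:

```python
from collections import Counter

import torch

# Hypothetical stand-in for the loaded state dict; in practice:
# from safetensors.torch import load_file; model = load_file(model_path)
model = {
    "alphas_cumprod": torch.zeros(10, dtype=torch.float16),
    "model.diffusion_model.out.weight": torch.zeros(4, dtype=torch.float16),
    "cond_stage_model.logit_scale": torch.zeros(1, dtype=torch.float32),
}

# Tally how many tensors use each dtype
counts = Counter(str(v.dtype) for v in model.values())
for dtype, n in sorted(counts.items()):
    print(f"{dtype}: {n} tensor(s)")
```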

Script to convert a safetensors FP16 model to a safetensors FP32 model

from safetensors.torch import load_file, save_file
import torch

# FP 16 model file path
input_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting.safetensors"
# FP 32 model file path
output_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.safetensors"

model = load_file(input_path)

for key in model.keys():
    model[key] = model[key].float()

save_file(model, output_path)

print("Completed! New FP32 model is:", output_path)

(!) As before, don’t forget to change the paths to your model.

The script will output something like

Completed! New FP32 model is: D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.safetensors
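FP16 → FP32 is a widening conversion, so no precision is lost: every FP16 value is exactly representable in FP32 (the file just doubles in size). A quick in-memory sketch of the same `.float()` loop the script uses, on a hypothetical one-tensor state dict, verifying that values survive the round trip:

```python
import torch

# Hypothetical one-tensor state dict standing in for the loaded model
model = {"alphas_cumprod": torch.randn(8).half()}

# Same conversion as the script above: widen every tensor to FP32
converted = {k: v.float() for k, v in model.items()}

print(converted["alphas_cumprod"].dtype)  # torch.float32
# Widening is exact: casting back to FP16 recovers the original values
print(torch.equal(converted["alphas_cumprod"].half(), model["alphas_cumprod"]))  # True
```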

Script to convert a safetensors FP16 model to a ckpt FP32 model

from safetensors.torch import load_file
import torch

# FP 16 model file path
input_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting.safetensors"
# FP 32 model file path
output_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.ckpt"

model = load_file(input_path)

for key in model.keys():
    model[key] = model[key].float()

torch.save(model, output_path)

print("Completed! New FP32 model is:", output_path)

(!) As before, don’t forget to change the paths to your model.

The script will output something like

Completed! New FP32 model is: D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.ckpt
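Note that torch.save here writes the bare tensor dict. Many .ckpt files in the wild wrap the weights under a "state_dict" key; AUTOMATIC1111’s loader generally handles both layouts, but some other tools expect the wrapped form. A hedged sketch of the wrapped variant, round-tripped through an in-memory buffer (a stand-in for output_path) just for illustration:

```python
import io

import torch

# Hypothetical tiny state dict standing in for the converted FP32 model
model = {"alphas_cumprod": torch.zeros(4, dtype=torch.float32)}

# Wrap the weights under a "state_dict" key, as many .ckpt files do
buffer = io.BytesIO()  # in the real script this would be output_path
torch.save({"state_dict": model}, buffer)

# Round-trip to show the layout a loader would see
buffer.seek(0)
checkpoint = torch.load(buffer)
print(list(checkpoint.keys()))  # ['state_dict']
```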

Script to check your video card’s compatibility with FP16

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if device.type == "cuda":
    capability = torch.cuda.get_device_capability(device)
    print(f"🚀 Videocard: {torch.cuda.get_device_name(device)}")
    print(f"🛠 Compute Capability: {capability}")
    
    if capability[0] >= 7:  # Turing (GTX 1660 Ti, RTX 20xx) and newer
        print("✅ FP16 is supported but may be software limited")
    else:
        print("❌ FP16 is not supported")
else:
    print("❌ CUDA not found")

Output

 Videocard: NVIDIA GeForce GTX 1660 SUPER
 Compute Capability: (7, 5)
 FP16 is supported but may be software limited
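The capability check above boils down to a single comparison on the major version. As a small, hypothetical helper (not part of any library) that mirrors the script’s logic so it can be reused or tested without a GPU:

```python
def fp16_status(capability):
    """Mirror of the check above: compute capability 7.0+ (Turing and
    newer, e.g. GTX 16xx / RTX 20xx) is treated as FP16-capable; older
    architectures are not. `capability` is a (major, minor) tuple as
    returned by torch.cuda.get_device_capability()."""
    major, _minor = capability
    return "supported" if major >= 7 else "not supported"

print(fp16_status((7, 5)))  # GTX 1660 SUPER -> supported
print(fp16_status((6, 1)))  # Pascal-era GTX 10xx -> not supported
```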

About the Author: vo