
I found that my video card, a GTX 1660 Super, doesn't fully support FP16 models, so sometimes I need to convert FP16 models to FP32. Here are the Python scripts that I usually use for that.
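To see why the precision difference matters, here is a small stdlib-only sketch (no PyTorch required) that round-trips a value through IEEE 754 half precision using Python's struct module:

```python
import struct

def roundtrip_fp16(x: float) -> float:
    """Pack a float into IEEE 754 half precision (FP16) and unpack it back."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# FP16 has only an 11-bit significand, so values lose precision quickly:
print(roundtrip_fp16(0.1))   # slightly off from 0.1
print(roundtrip_fp16(4097))  # integers above 4096 snap to multiples of 4
```

FP32 keeps a 24-bit significand, which is why converting the weights up to FP32 avoids these rounding artifacts (at the cost of doubling the file size).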
How to use the scripts
First, find your Python interpreter; in my case it is located at
- D:\AI\automatic1111\system\python\python.exe
Then you can either run python.exe and paste the script code into the interactive prompt, or save the script to a file, e.g. script.py, and run it from the folder where your script is:
```
D:\AI\automatic1111\system\python\python.exe script.py
```
Script to check which FP is used in the model
```python
from safetensors.torch import load_file

model_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting.safetensors"
model = load_file(model_path)

for key, value in model.items():
    print(f"{key}: {value.dtype}")
    break
```
(!) Don't forget to change the model path to yours.
The script will output something like:
```
alphas_cumprod: torch.float16
```
and from this you can tell that FP16 is used. (Note that the loop breaks after the first tensor, so only the first entry is checked.)
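Since a model can in principle mix dtypes across tensors, here is a complementary sketch that counts the dtype of every tensor by reading only the safetensors JSON header (an 8-byte little-endian length prefix followed by a JSON object), so the weights themselves are never loaded. It reports the format's own dtype codes such as F16/F32 rather than torch names:

```python
import json
import struct
from collections import Counter

def safetensors_dtypes(path: str) -> Counter:
    """Count tensor dtypes in a .safetensors file without loading the weights."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # 8-byte LE header size
        header = json.loads(f.read(header_len))         # name -> dtype/shape/offsets
    # "__metadata__" is an optional non-tensor entry in the header
    return Counter(info["dtype"] for name, info in header.items() if name != "__metadata__")

# Change the path to your model:
# print(safetensors_dtypes("D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting.safetensors"))
```

A pure FP16 model will show a single F16 entry with the total tensor count.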
Script to convert a safetensors FP16 model to an FP32 model
```python
from safetensors.torch import load_file, save_file
import torch

# FP16 model file path
input_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting.safetensors"

# FP32 model file path
output_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.safetensors"

model = load_file(input_path)

for key in model.keys():
    model[key] = model[key].float()

save_file(model, output_path)

print("Completed! New FP32 model is:", output_path)
```
(!) As before, don't forget to change the paths to your model.
The script will output something like:
```
Completed! New FP32 model is: D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.safetensors
```
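The conversion above is just a per-tensor cast. A minimal in-memory sketch (a made-up one-tensor state dict, no model file needed) shows what .float() does to each entry:

```python
import torch

# Hypothetical one-tensor state dict in FP16
state = {"w": torch.ones(4, dtype=torch.float16)}

for key in state.keys():
    state[key] = state[key].float()  # cast FP16 -> FP32 (doubles memory per value)

print(state["w"].dtype)  # torch.float32
```

This is also why the converted file is roughly twice the size of the original: every value goes from 2 bytes to 4 bytes.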
Script to convert a safetensors FP16 model to a ckpt FP32 model
```python
from safetensors.torch import load_file
import torch

# FP16 model file path
input_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting.safetensors"

# FP32 model file path
output_path = "D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.ckpt"

model = load_file(input_path)

for key in model.keys():
    model[key] = model[key].float()

torch.save(model, output_path)

print("Completed! New FP32 model is:", output_path)
```
(!) As before, don't forget to change the paths to your model.
The script will output something like:
```
Completed! New FP32 model is: D:/AI/automatic1111/webui/models/Stable-diffusion/cyberrealistic_v50-inpainting-fp32.ckpt
```
Script to check your video card's compatibility with FP16
```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if device.type == "cuda":
    capability = torch.cuda.get_device_capability(device)
    print(f"🚀 Videocard: {torch.cuda.get_device_name(device)}")
    print(f"🛠 Compute Capability: {capability}")

    if capability[0] >= 7:  # Turing (GTX 1660 Ti, RTX 20xx) and newer
        print("✅ FP16 is supported but may be software limited")
    else:
        print("❌ FP16 is not supported")
else:
    print("❌ CUDA not found")
```
Output
```
🚀 Videocard: NVIDIA GeForce GTX 1660 SUPER
🛠 Compute Capability: (7, 5)
✅ FP16 is supported but may be software limited
```
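The check above keys off the major version of the compute capability. As a rough reference (the architecture names here are general NVIDIA knowledge, not something torch reports), the mapping could be sketched as:

```python
# Rough map from CUDA compute capability major version to NVIDIA architecture
ARCHITECTURES = {
    5: "Maxwell",
    6: "Pascal",
    7: "Volta/Turing",
    8: "Ampere/Ada",
    9: "Hopper",
}

def describe_capability(major: int, minor: int) -> str:
    arch = ARCHITECTURES.get(major, "unknown")
    fp16 = "FP16 supported" if major >= 7 else "FP16 slow or unsupported"
    return f"sm_{major}{minor}: {arch}, {fp16}"

print(describe_capability(7, 5))  # the GTX 1660 SUPER from the output above
```

The GTX 16xx cards are Turing (sm_75), so they pass the `>= 7` check, which matches the "supported but may be software limited" message in the script.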