Yesterday I was experimenting with building torch 2.0 so that I could ALSO use CUDA-12.0. Today I explored deeper: I did a clean install of A1111 and it was very fast even without torch 2. Then I found an older A1111 install I still had, and it was slow. After a lot of work to figure out what the difference was, I found the old version had torch 1.12.1+cu113; if you remove the venv and reinstall, you get 1.13.1+cu117. I have now learned that "git pull" to update A1111 doesn't upgrade python packages to newer versions.

Then I installed torch 2.0.0.dev20230106+cu117 and I no longer see a perf improvement over 1.13.1. I can still get another 16% perf improvement with the two other changes I mentioned in another post, but torch 2.0 has nothing to do with it, and I have a feeling that CUDA-12.0 won't make any further difference. I was going to post those changes today, but trying to figure out why the baseline was so much faster has consumed most of the day. See if torch 1.13.1 is just as good as torch 2.

When you run webui.sh it will take a long time at "Installing torch and torchvision". This is normal, because it is building PyTorch from source instead of installing an already built package. Here is the webui-user.sh I used:

```bash
#!/bin/bash
# Uncomment and change the variables below to your need:

# Install directory without trailing slash
#install_dir="/home/$(whoami)"

# Name of the subdirectory
#clone_dir="stable-diffusion-webui"

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="$COMMANDLINE_ARGS --opt-sub-quad-attention"

# python3 executable
#python_cmd="python3"

# git executable
#export GIT="git"

# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
venv_dir="venv-torch-2.0-alpha"

# script to launch to start the app
#export LAUNCH_SCRIPT="launch.py"

# install command for torch
export USE_DISTRIBUTED=1
export TORCH_COMMAND="pip install --pre torchvision==0.15.0.dev20230106 -f "

# Requirements file to use for stable-diffusion-webui
#export REQS_FILE="requirements_versions.txt"

# Fixed git repos
#export K_DIFFUSION_PACKAGE=""
#export GFPGAN_PACKAGE=""

# Fixed git commits
#export STABLE_DIFFUSION_COMMIT_HASH=""
#export TAMING_TRANSFORMERS_COMMIT_HASH=""
#export CODEFORMER_COMMIT_HASH=""
#export BLIP_COMMIT_HASH=""

# Uncomment to enable accelerated launch
#export ACCELERATE="True"
```
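The version strings above are worth reading carefully: the part before the `+` is the torch release and the part after is the CUDA build it was compiled against, and both matter for performance. As a minimal sketch (plain Python, no torch needed), here is how such strings could be split into comparable pieces — `parse_torch_version` is a hypothetical helper for illustration, not part of A1111 or torch:

```python
# Hypothetical helper: split a torch version string such as "1.13.1+cu117"
# (the format reported by torch.__version__) into a comparable release tuple
# plus the CUDA build tag.
def parse_torch_version(version):
    base, _, cuda_tag = version.partition("+")
    release = []
    for part in base.split("."):
        if part.isdigit():
            release.append(int(part))
        else:
            break  # stop at pre-release suffixes like "dev20230106"
    return tuple(release), cuda_tag

old = parse_torch_version("1.12.1+cu113")  # what "git pull" left in place
new = parse_torch_version("1.13.1+cu117")  # what a clean reinstall produced
print(old, new, old[0] < new[0])
# → ((1, 12, 1), 'cu113') ((1, 13, 1), 'cu117') True
```

To see what your own venv actually has, `python -c "import torch; print(torch.__version__)"` inside the activated venv prints the same style of string.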
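Since webui.sh creates and reuses whatever virtualenv `venv_dir` points at, an experiment like this torch-2.0 build can live in its own venv, and switching back is a one-line change. A small sketch of the selection logic (the directory name is just the one from my config; the actual creation/activation is done by webui.sh itself):

```shell
# Point at the experimental venv; comment this line out to fall back to the
# default "venv" directory.
venv_dir="venv-torch-2.0-alpha"

# This only shows which activate script a given setting would select;
# webui.sh is what actually creates and sources it.
activate_path="${venv_dir:-venv}/bin/activate"
echo "would activate: ${activate_path}"
```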