Commit d91bfc4b authored by a2569875

update to sd webui v1.2.1

Parent 95eee755
......
@@ -22,6 +22,26 @@ https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
```
Install and restart to complete the process.
## Demo
Here we demonstrate two LoRAs (one LoHA and one LoCon), where
* [`<lora:roukin8_loha:0.8>`](https://civitai.com/models/17336/roukin8-character-lohaloconfullckpt-8) corresponds to the trigger word `yamanomitsuha`
* `<lora:dia_viekone_locon:0.7>` corresponds to the trigger word `dia_viekone_\(ansatsu_kizoku\)`
We combine them with the [Latent Couple extension](https://github.com/opparco/stable-diffusion-webui-two-shot).
The results are shown below:
![](readme/fig11.png)
It can be seen that:
- Pairing `<lora:roukin8_loha:0.8>` with `yamanomitsuha`, and `<lora:dia_viekone_locon:0.7>` with `dia_viekone_\(ansatsu_kizoku\)`, draws the corresponding characters.
- When the trigger words are swapped so that they no longer match, neither character can be drawn. This shows that `<lora:roukin8_loha:0.8>` only acts on the left block of the image and `<lora:dia_viekone_locon:0.7>` only on the right block, so the algorithm is effective.
The prompt syntax in the image is highlighted with the [sd-webui-prompt-highlight](https://github.com/a2569875/sd-webui-prompt-highlight) plugin.
This test was run on May 14, 2023, with Stable Diffusion WebUI version [v1.2 (89f9faa)](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/89f9faa63388756314e8a1d96cf86bf5e0663045).
(Note: you should enable \[`Lora: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension`\] on the Settings page.)
## Features
### Compatible with Composable-Diffusion
Associates the position where a LoRA is inserted in the prompt with the `AND` syntax, limiting the LoRA's influence to a specific subprompt (a specific `AND...AND` block).
......
......
@@ -22,6 +22,26 @@ https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
```
Install and restart to complete the process.
## Demo
Here we demonstrate two LoRAs (one LoHA and one LoCon), where
* [`<lora:roukin8_loha:0.8>`](https://civitai.com/models/17336/roukin8-character-lohaloconfullckpt-8) corresponds to the trigger word `yamanomitsuha`
* `<lora:dia_viekone_locon:0.7>` corresponds to the trigger word `dia_viekone_\(ansatsu_kizoku\)`
We use the [Latent Couple extension](https://github.com/opparco/stable-diffusion-webui-two-shot) to generate the images.
The results are shown below:
![](readme/fig11.png)
It can be observed that:
- Pairing `<lora:roukin8_loha:0.8>` with `yamanomitsuha`, and `<lora:dia_viekone_locon:0.7>` with `dia_viekone_\(ansatsu_kizoku\)`, successfully generates the corresponding characters.
- When the trigger words are swapped, causing a mismatch, neither character can be generated. This demonstrates that `<lora:roukin8_loha:0.8>` is restricted to the left half of the image, while `<lora:dia_viekone_locon:0.7>` is restricted to the right half. The algorithm is therefore effective.
Prompt highlighting in the image is done with the [sd-webui-prompt-highlight](https://github.com/a2569875/sd-webui-prompt-highlight) plugin.
This test was conducted on May 14, 2023, using Stable Diffusion WebUI version [v1.2 (89f9faa)](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/89f9faa63388756314e8a1d96cf86bf5e0663045).
(Note: you should enable \[`Lora: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension`\] on the Settings page.)
## Features
### Compatible with Composable-Diffusion
By associating a LoRA's insertion position in the prompt with the `AND` syntax, the LoRA's influence is limited to a specific subprompt (a specific `AND...AND` block).
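
For illustration, a Latent Couple style prompt of the following shape keeps each LoRA inside its own `AND` block (the non-LoRA tokens and the exact region layout are hypothetical, not the prompt behind `fig11.png`):

```
a detailed background
AND yamanomitsuha, 1girl <lora:roukin8_loha:0.8>
AND dia_viekone_\(ansatsu_kizoku\), 1girl <lora:dia_viekone_locon:0.7>
```

With Latent Couple mapping the second subprompt to the left half of the canvas and the third to the right half, each LoRA only influences its own region.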
......
......
@@ -22,6 +22,25 @@ https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
```
Install and restart to complete the process.
## Demo
Here we demonstrate two LoRAs (one LoHA and one LoCon), where
* [`<lora:roukin8_loha:0.8>`](https://civitai.com/models/17336/roukin8-character-lohaloconfullckpt-8) corresponds to the trigger word `yamanomitsuha`
* `<lora:dia_viekone_locon:0.7>` corresponds to the trigger word `dia_viekone_\(ansatsu_kizoku\)`
We combine them with the [Latent Couple extension](https://github.com/opparco/stable-diffusion-webui-two-shot).
The results are shown below:
![](readme/fig11.png)
It can be seen that:
- Pairing `<lora:roukin8_loha:0.8>` with `yamanomitsuha`, and `<lora:dia_viekone_locon:0.7>` with `dia_viekone_\(ansatsu_kizoku\)`, draws the corresponding characters.
- When the trigger words are swapped so that they no longer match, neither character can be drawn. This shows that `<lora:roukin8_loha:0.8>` only acts on the left block of the image and `<lora:dia_viekone_locon:0.7>` only on the right block, so the algorithm is effective.
The prompt syntax in the image is highlighted with the [sd-webui-prompt-highlight](https://github.com/a2569875/sd-webui-prompt-highlight) plugin.
This test was run on May 14, 2023, with Stable Diffusion WebUI version [v1.2 (89f9faa)](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/89f9faa63388756314e8a1d96cf86bf5e0663045).
(Note: you should enable \[`Lora: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension`\] on the Settings page.)
## Features
### Compatible with Composable-Diffusion
Associates the position where a LoRA is inserted in the prompt with the `AND` syntax, limiting the LoRA's influence to a specific subprompt (a specific `AND...AND` block).
......
......
@@ -22,6 +22,24 @@ https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
```
Install and restart to complete the process.
## Demo
Here we demonstrate two LoRAs (one LoHA and one LoCon), where
* [`<lora:roukin8_loha:0.8>`](https://civitai.com/models/17336/roukin8-character-lohaloconfullckpt-8) corresponds to the trigger word `yamanomitsuha`
* `<lora:dia_viekone_locon:0.7>` corresponds to the trigger word `dia_viekone_\(ansatsu_kizoku\)`
We combine them with the [Latent Couple extension](https://github.com/opparco/stable-diffusion-webui-two-shot).
The results are shown below:
![](readme/fig11.png)
It can be seen that:
- Pairing `<lora:roukin8_loha:0.8>` with `yamanomitsuha`, and `<lora:dia_viekone_locon:0.7>` with `dia_viekone_\(ansatsu_kizoku\)`, draws the corresponding characters.
- When the trigger words are swapped so that they no longer match, neither character can be drawn. This shows that `<lora:roukin8_loha:0.8>` only acts on the left block of the image and `<lora:dia_viekone_locon:0.7>` only on the right block, so the algorithm is effective.
The prompt syntax in the image is highlighted with the [sd-webui-prompt-highlight](https://github.com/a2569875/sd-webui-prompt-highlight) plugin.
This test was run on May 14, 2023, with Stable Diffusion WebUI version [v1.2 (89f9faa)](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/89f9faa63388756314e8a1d96cf86bf5e0663045).
## Features
### Compatible with Composable-Diffusion
Associates the position where a LoRA is inserted in the prompt with the `AND` syntax, limiting the LoRA's influence to a specific subprompt (a specific `AND...AND` block).
......
......
@@ -30,6 +30,9 @@ def lora_forward(compvis_module: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention], input, res):
     if len(lora.loaded_loras) == 0:
         return res
+    if hasattr(devices, "cond_cast_unet"):
+        input = devices.cond_cast_unet(input)
     lora_layer_name_loading : Optional[str] = getattr(compvis_module, 'lora_layer_name', None)
     if lora_layer_name_loading is None:
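
The two added lines feature-detect `devices.cond_cast_unet`, which newer WebUI builds provide to cast conditioning tensors to the UNet's working dtype (for example `float16`); the `hasattr` guard keeps the extension working on older builds that lack the helper. A minimal sketch of the same pattern, assuming it runs inside the WebUI where `modules.devices` is importable:

```python
import torch
from modules import devices  # WebUI helper module; cond_cast_unet is absent on older builds

def cast_for_unet(t: torch.Tensor) -> torch.Tensor:
    # Newer WebUI versions expose cond_cast_unet to match the UNet's dtype;
    # on older versions, pass the tensor through unchanged.
    if hasattr(devices, "cond_cast_unet"):
        return devices.cond_cast_unet(t)
    return t
```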
......
@@ -391,7 +394,7 @@ def apply_composable_lora(lora_layer_name, m_lora, module, m_type: str, patch, a
 def lora_Linear_forward(self, input):
     clear_cache_lora(self)
     if (not self.weight.is_cuda) and input.is_cuda: #if variables not on the same device (between cpu and gpu)
-        self_weight_cuda = self.weight.cuda() #pass to GPU
+        self_weight_cuda = self.weight.to(device=devices.device) #pass to GPU
         to_del = self.weight
         self.weight = None #delete CPU variable
         del to_del
......
@@ -406,7 +409,7 @@ def lora_Linear_forward(self, input):
 def lora_Conv2d_forward(self, input):
     clear_cache_lora(self)
     if (not self.weight.is_cuda) and input.is_cuda:
-        self_weight_cuda = self.weight.cuda()
+        self_weight_cuda = self.weight.to(device=devices.device)
         to_del = self.weight
         self.weight = None
         del to_del
......
@@ -421,7 +424,7 @@ def lora_Conv2d_forward(self, input):
 def lora_MultiheadAttention_forward(self, input):
     clear_cache_lora(self)
     if (not self.weight.is_cuda) and input.is_cuda:
-        self_weight_cuda = self.weight.cuda()
+        self_weight_cuda = self.weight.to(device=devices.device)
         to_del = self.weight
         self.weight = None
         del to_del
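
All three hooks receive the same one-line change: the hard-coded `.cuda()` becomes `.to(device=devices.device)`, so the weight follows whatever device the WebUI is configured to use (a specific CUDA index, MPS, and so on) instead of assuming the default CUDA device. A self-contained sketch of the pattern; the local `device` and the helper name stand in for the WebUI's `devices.device`:

```python
import torch

# Stand-in for the WebUI's devices.device (derived from --device-id, MPS support, etc.).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def ensure_weight_on_device(module: torch.nn.Linear, input: torch.Tensor) -> None:
    # If the weight still lives on the CPU while the input is already on the GPU,
    # move it to the configured device rather than calling .cuda() blindly.
    if (not module.weight.is_cuda) and input.is_cuda:
        moved = module.weight.to(device=device)
        module.weight = torch.nn.Parameter(moved, requires_grad=False)
```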
......
 from typing import Optional, Union
 import re
 import torch
-from modules import shared
+from modules import shared, devices
 #support for <lyco:MODEL>
 def lycoris_forward(compvis_module: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention], input, res):
......
@@ -11,6 +11,9 @@ def lycoris_forward(compvis_module: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention], input, res):
     if len(lycoris.loaded_lycos) == 0:
         return res
+    if hasattr(devices, "cond_cast_unet"):
+        input = devices.cond_cast_unet(input)
     lycoris_layer_name_loading : Optional[str] = getattr(compvis_module, 'lyco_layer_name', None)
     if lycoris_layer_name_loading is None:
         return res
......
@@ -73,6 +76,10 @@ def get_lora_inference(module, input):
     if hasattr(module, 'inference'): #support for lyCORIS
         return module.inference(input)
     elif hasattr(module, 'up'): #LoRA
+        if hasattr(module.up, "to"):
+            module.up.to(device=devices.device)
+        if hasattr(module.down, "to"):
+            module.down.to(device=devices.device)
         return module.up(module.down(input))
     else:
         return None
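
A LoRA delta is computed as `up(down(input))`: `down` projects into the low-rank space and `up` projects back. The added lines move both submodules to the configured device first; `nn.Module.to` moves parameters in place, so the calls are no-ops once the weights are already there. A minimal sketch with a hypothetical container class:

```python
import torch

class LoraPair:
    """Hypothetical stand-in for a loaded LoRA module with up/down projections."""
    def __init__(self, dim: int, rank: int):
        self.down = torch.nn.Linear(dim, rank, bias=False)
        self.up = torch.nn.Linear(rank, dim, bias=False)

def lora_delta(module: LoraPair, x: torch.Tensor) -> torch.Tensor:
    module.up.to(device=x.device)    # nn.Module.to moves parameters in place
    module.down.to(device=x.device)  # no-op if already on x's device
    return module.up(module.down(x))
```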
......
@@ -458,7 +465,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'bias'):
         if isinstance(m_loha.bias, torch.Tensor):
             if not m_loha.bias.is_cuda:
-                to_cuda = m_loha.bias.cuda()
+                to_cuda = m_loha.bias.to(device=devices.device)
                 to_del = m_loha.bias
                 m_loha.bias = None
                 del to_del
......
@@ -467,7 +474,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 't1'):
         if isinstance(m_loha.t1, torch.Tensor):
             if not m_loha.t1.is_cuda:
-                to_cuda = m_loha.t1.cuda()
+                to_cuda = m_loha.t1.to(device=devices.device)
                 to_del = m_loha.t1
                 m_loha.t1 = None
                 del to_del
......
@@ -476,7 +483,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 't2'):
         if isinstance(m_loha.t2, torch.Tensor):
             if not m_loha.t2.is_cuda:
-                to_cuda = m_loha.t2.cuda()
+                to_cuda = m_loha.t2.to(device=devices.device)
                 to_del = m_loha.t2
                 m_loha.t2 = None
                 del to_del
......
@@ -485,7 +492,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'w'):
         if isinstance(m_loha.w, torch.Tensor):
             if not m_loha.w.is_cuda:
-                to_cuda = m_loha.w.cuda()
+                to_cuda = m_loha.w.to(device=devices.device)
                 to_del = m_loha.w
                 m_loha.w = None
                 del to_del
......
@@ -494,7 +501,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'w1'):
         if isinstance(m_loha.w1, torch.Tensor):
             if not m_loha.w1.is_cuda:
-                to_cuda = m_loha.w1.cuda()
+                to_cuda = m_loha.w1.to(device=devices.device)
                 to_del = m_loha.w1
                 m_loha.w1 = None
                 del to_del
......
@@ -503,7 +510,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'w1a'):
         if isinstance(m_loha.w1a, torch.Tensor):
             if not m_loha.w1a.is_cuda:
-                to_cuda = m_loha.w1a.cuda()
+                to_cuda = m_loha.w1a.to(device=devices.device)
                 to_del = m_loha.w1a
                 m_loha.w1a = None
                 del to_del
......
@@ -512,7 +519,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'w1b'):
         if isinstance(m_loha.w1b, torch.Tensor):
             if not m_loha.w1b.is_cuda:
-                to_cuda = m_loha.w1b.cuda()
+                to_cuda = m_loha.w1b.to(device=devices.device)
                 to_del = m_loha.w1b
                 m_loha.w1b = None
                 del to_del
......
@@ -521,7 +528,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'w2'):
         if isinstance(m_loha.w2, torch.Tensor):
             if not m_loha.w2.is_cuda:
-                to_cuda = m_loha.w2.cuda()
+                to_cuda = m_loha.w2.to(device=devices.device)
                 to_del = m_loha.w2
                 m_loha.w2 = None
                 del to_del
......
@@ -530,7 +537,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'w2a'):
         if isinstance(m_loha.w2a, torch.Tensor):
             if not m_loha.w2a.is_cuda:
-                to_cuda = m_loha.w2a.cuda()
+                to_cuda = m_loha.w2a.to(device=devices.device)
                 to_del = m_loha.w2a
                 m_loha.w2a = None
                 del to_del
......
@@ -539,7 +546,7 @@ def pass_loha_to_gpu(m_loha):
     if hasattr(m_loha, 'w2b'):
         if isinstance(m_loha.w2b, torch.Tensor):
             if not m_loha.w2b.is_cuda:
-                to_cuda = m_loha.w2b.cuda()
+                to_cuda = m_loha.w2b.to(device=devices.device)
                 to_del = m_loha.w2b
                 m_loha.w2b = None
                 del to_del
......
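
The ten hunks above apply the identical change to one LoHA tensor attribute each. As a condensed illustration only (not the author's code), the same check-and-move logic can be written once and looped over the attribute names:

```python
import torch

# The LoHA tensor attributes touched by the hunks above.
LOHA_TENSOR_ATTRS = ("bias", "t1", "t2", "w", "w1", "w1a", "w1b", "w2", "w2a", "w2b")

def pass_loha_tensors_to_device(m_loha, device: torch.device) -> None:
    # Same pattern as each hunk: move only plain CPU tensors and skip
    # attributes that are missing or not tensors.
    for name in LOHA_TENSOR_ATTRS:
        value = getattr(m_loha, name, None)
        if isinstance(value, torch.Tensor) and not value.is_cuda:
            setattr(m_loha, name, value.to(device=device))
```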