Like CycleGAN, [U-GAT-IT](https://arxiv.org/abs/1907.10830) performs image-to-image translation from unpaired images: given two sets of images in different styles, it learns to transfer the style automatically. Unlike CycleGAN, however, U-GAT-IT is a novel method for unsupervised image-to-image translation that incorporates a new attention module and a new learnable normalization function in an end-to-end manner.
## 1.2 How to use
### 1.2.1 Prepare Datasets
The selfie2anime dataset used by U-GAT-IT can be downloaded from [here](https://www.kaggle.com/arnaud58/selfie2anime). You can also use your own dataset.
The dataset directory structure should be as follows:
```
├── dataset
└── YOUR_DATASET_NAME
├── trainA
├── trainB
├── testA
└── testB
```
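The layout above can be created with a short script. This is a minimal sketch, not part of the U-GAT-IT codebase; `YOUR_DATASET_NAME` is the placeholder from the tree above. `trainA`/`trainB` hold the two unpaired style domains for training, and `testA`/`testB` hold the corresponding test images.

```python
import os

# Placeholder name from the directory tree above; replace with your
# own dataset name (e.g. "selfie2anime").
DATASET_NAME = "YOUR_DATASET_NAME"

# Create the four split directories that the unpaired-translation
# setup expects: two training domains and two test domains.
for split in ("trainA", "trainB", "testA", "testB"):
    os.makedirs(os.path.join("dataset", DATASET_NAME, split), exist_ok=True)

print(sorted(os.listdir(os.path.join("dataset", DATASET_NAME))))
```

After running it, drop the images of one style into the `A` directories and the other style into the `B` directories.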
### 1.2.2 Train/Test
The dataset used in this example is selfie2anime; you can switch to your own dataset by editing the config file.
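To point the config at your own data, the dataset paths typically need to reference the four split directories described above. The fragment below is illustrative only; the exact key names depend on this repository's config format, so check the provided U-GAT-IT config file for the real field names.

```yaml
# Illustrative sketch -- key names are assumptions, not the repo's actual
# config schema. Match them against the shipped U-GAT-IT config.
dataset:
  train:
    dataroot_a: dataset/YOUR_DATASET_NAME/trainA
    dataroot_b: dataset/YOUR_DATASET_NAME/trainB
  test:
    dataroot_a: dataset/YOUR_DATASET_NAME/testA
    dataroot_b: dataset/YOUR_DATASET_NAME/testB
```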