---
title: AdaIN Style Transfer
sdk: gradio
emoji: 🐨
colorFrom: blue
colorTo: indigo
---
# 2022-AdaIN-pytorch
This is an unofficial PyTorch implementation of the paper "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" (ICCV 2017) [arxiv](https://arxiv.org/abs/1703.06868). I referred to the [official implementation](https://github.com/xunhuang1995/AdaIN-style) in Torch and used the pretrained VGG19 and decoder weights from [naoto0804](https://github.com/naoto0804/pytorch-AdaIN).
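For reference, the AdaIN operation at the heart of the paper re-normalizes the content feature map with the channel-wise statistics of the style feature map. Below is a minimal sketch of that operation; the function and variable names are illustrative and not necessarily the ones used in this repository's code.
```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization: shift/scale the content features so their
    per-channel mean and std match those of the style features.
    Both inputs are feature maps of shape (N, C, H, W)."""
    n, c = content_feat.shape[:2]
    # per-sample, per-channel statistics over the spatial dimensions
    c_mean = content_feat.reshape(n, c, -1).mean(dim=2).reshape(n, c, 1, 1)
    c_std = content_feat.reshape(n, c, -1).std(dim=2).reshape(n, c, 1, 1) + eps
    s_mean = style_feat.reshape(n, c, -1).mean(dim=2).reshape(n, c, 1, 1)
    s_std = style_feat.reshape(n, c, -1).std(dim=2).reshape(n, c, 1, 1) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean
```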
## Requirements
Install requirements by `$ pip install -r requirements.txt`
- Python 3.7+
- PyTorch 1.10
- Pillow
- TorchVision
- Numpy
- imageio
- tqdm
## Usage
### Demo Website
You can access a demo and perform style transfer at [2022-AdaIN-pytorch-Demo](https://huggingface.co/spaces/subatomicseer/2022-AdaIN-pytorch-Demo) Huggingface Space.
### Local Web App
If you would like to run the Streamlit app on your local system, follow these steps:
Install requirements by:
`$ pip install -r streamlit_app/requirements.txt`
The following additional packages are required for the web app:
- streamlit
- gdown
- packaging
Run the webapp by:
`$ streamlit run streamlit_app/app.py`
The above command will open the app in your default browser (if available) and will also print the local URL, which you can navigate to in order to use the app.
### Training
The encoder uses a pretrained VGG19 network. Download the [vgg19 weight](https://drive.google.com/file/d/1UcSl-Zn3byEmn15NIPXMf9zaGCKc2gfx/view?usp=sharing). The decoder is trained on the MSCOCO and WikiArt datasets.
Run the script train.py:
```
$ python train.py --content_dir $CONTENT_DIR --style_dir $STYLE_DIR --cuda
usage: train.py [-h] [--content_dir CONTENT_DIR] [--style_dir STYLE_DIR]
[--epochs EPOCHS] [--batch_size BATCH_SIZE] [--resume RESUME] [--cuda]
optional arguments:
-h, --help show this help message and exit
--content_dir CONTENT_DIR
content images folder path
--style_dir STYLE_DIR
style images folder path
--epochs EPOCHS Number of epochs
--batch_size BATCH_SIZE
Batch size
--resume RESUME Continue training from epoch
--cuda Use CUDA
```
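For context, the training objective described in the paper combines a content loss on the AdaIN target with a style loss that matches per-layer feature statistics. The sketch below assumes an `encode` helper returning the VGG19 relu1_1 to relu4_1 feature maps and reuses the `adain` function sketched above; it illustrates the loss, not this repository's exact code.
```python
import torch.nn.functional as F

def mean_std(feat, eps=1e-5):
    # per-sample, per-channel mean and std over the spatial dimensions
    n, c = feat.shape[:2]
    return (feat.reshape(n, c, -1).mean(dim=2),
            feat.reshape(n, c, -1).std(dim=2) + eps)

def adain_loss(encode, decoder, content, style, style_weight=10.0):
    # `encode(x)` is assumed (for illustration) to return the list of VGG19
    # feature maps [relu1_1, relu2_1, relu3_1, relu4_1] for an image batch x.
    style_feats = encode(style)
    content_feat = encode(content)[-1]
    t = adain(content_feat, style_feats[-1])    # AdaIN target
    generated = decoder(t)                      # stylized image
    gen_feats = encode(generated)

    # content loss: output features should match the AdaIN target
    loss_c = F.mse_loss(gen_feats[-1], t)
    # style loss: match mean/std of every layer to the style image
    loss_s = content.new_zeros(())
    for gf, sf in zip(gen_feats, style_feats):
        gm, gs = mean_std(gf)
        sm, ss = mean_std(sf)
        loss_s = loss_s + F.mse_loss(gm, sm) + F.mse_loss(gs, ss)
    return loss_c + style_weight * loss_s
```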
### Test Image Style Transfer
Download the [vgg19 weight](https://drive.google.com/file/d/1UcSl-Zn3byEmn15NIPXMf9zaGCKc2gfx/view?usp=sharing) and [decoder weight](https://drive.google.com/file/d/18JpLtMOapA-vwBz-LRomyTl24A9GwhTF/view?usp=sharing) and place them under the main directory.
To test basic style transfer, run the script test.py. Specify `--content_image` and `--style_image` with single image paths, or specify `--content_dir` and `--style_dir` to iterate over all images under those directories. All outputs are saved in `./results/`. Specify `--grid_pth` to collect all outputs in a single grid image. Specify `--color_control` to preserve the content image color.
```
$ python test.py --content_image $IMG --style_image $STYLE --cuda
optional arguments:
-h, --help show this help message and exit
--content_image CONTENT_IMAGE
single content image file
--content_dir CONTENT_DIR
content image directory, iterate all images under this directory
--style_image STYLE_IMAGE
single style image
--style_dir STYLE_DIR
style image directory, iterate all images under this directory
--decoder_weight DECODER_WEIGHT
decoder weight file (default='decoder.pth')
--alpha {Alpha Range}
Alpha [0.0, 1.0] controls style transfer level
--cuda Use CUDA
--grid_pth GRID_PTH
Specify a grid image path (default=None) to generate a grid image
that contains all style transferred images
--color_control Preserve content image color
```
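The `--alpha` flag corresponds to the content-style trade-off described in the paper: the AdaIN target is blended with the original content features before decoding. A minimal sketch, reusing the `adain` function sketched above (the `encoder`/`decoder` names are illustrative):
```python
import torch

@torch.no_grad()
def stylize(encoder, decoder, content_img, style_img, alpha=1.0):
    # encoder: VGG19 features up to relu4_1; decoder: the trained decoder
    content_feat = encoder(content_img)
    style_feat = encoder(style_img)
    t = adain(content_feat, style_feat)
    # alpha = 1.0 gives full stylization, alpha = 0.0 reconstructs the content image
    t = alpha * t + (1.0 - alpha) * content_feat
    return decoder(t)
```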
### Test Image Interpolation Style Transfer
To test style transfer interpolation, run the script test_interpolate.py. Specify `--style_image` with multiple paths separated by commas. Specify `--interpolation_weights` to perform a single interpolation with the given weights. All outputs are saved in `./results_interpolate/`. Alternatively, specify `--grid_pth` and provide 4 style images to interpolate several times with different built-in weights and collect the results in a grid image. Specify `--color_control` to preserve the content image color.
```
$ python test_interpolate.py --content_image $IMG --style_image $STYLES --interpolation_weights $WEIGHTS --cuda
optional arguments:
-h, --help show this help message and exit
--content_image CONTENT_IMAGE
single content image file
--style_image STYLE_IMAGE
multiple style images file separated by comma
--decoder_weight DECODER_WEIGHT
decoder weight file (default='decoder.pth')
--alpha {Alpha Range}
Alpha [0.0, 1.0] (default=1.0) controls style transfer level
--interpolation_weights INTERPOLATION_WEIGHTS
Interpolation weight of each style image, separated by commas.
Do not specify together with --grid_pth.
--cuda Use CUDA
--grid_pth GRID_PTH
Specify a grid image path (default=None) to perform interpolation style
transfer multiple times with different built-in weights and generate a
grid image that contains all style transferred images. Provide 4 style
images. Do not specify together with --interpolation_weights.
--color_control Preserve content image color
```
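Interpolation between several styles follows the recipe from the paper: compute the AdaIN target for each style image and decode a convex combination of them. A hedged sketch (the weights are assumed to sum to 1; names are illustrative):
```python
import torch

@torch.no_grad()
def stylize_interpolated(encoder, decoder, content_img, style_imgs, weights):
    # style_imgs: list of style image tensors, weights: matching list of floats
    content_feat = encoder(content_img)
    t = torch.zeros_like(content_feat)
    for style_img, w in zip(style_imgs, weights):
        t = t + w * adain(content_feat, encoder(style_img))
    return decoder(t)
```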
### Test Video Style Transfer
To test video style transfer, run the script test_video.py. All outputs are saved in `./results_video/`.
```
$ python test_video.py --content_video $VID --style_image $STYLE --cuda
optional arguments:
-h, --help show this help message and exit
--content_video CONTENT_VIDEO
single content video file
--style_image STYLE_IMAGE
single style image
--decoder_weight DECODER_WEIGHT
decoder weight file (default='decoder.pth')
--alpha {Alpha Range}
Alpha [0.0, 1.0] controls style transfer level
--cuda Use CUDA
--color_control Preserve content image color
```
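Video style transfer is presumably a per-frame loop: read the content video, stylize each frame independently, and write the frames back out. A rough sketch using imageio (which is in the requirements); `to_tensor` and `to_image` stand in for hypothetical pre/post-processing helpers, not functions of this repository:
```python
import imageio

def stylize_video(encoder, decoder, content_video_path, style_img, out_path, alpha=1.0):
    reader = imageio.get_reader(content_video_path)
    fps = reader.get_meta_data().get("fps", 24)
    writer = imageio.get_writer(out_path, fps=fps)
    for frame in reader:
        frame_tensor = to_tensor(frame)                       # hypothetical preprocessing
        stylized = stylize(encoder, decoder, frame_tensor, style_img, alpha)
        writer.append_data(to_image(stylized))                # hypothetical postprocessing
    writer.close()
```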
## Examples
### Basic Style Transfer

### Different levels of style transfer

### Interpolation Style Transfer

### Style Transfer with color control
|w/o color control|w/ color control|
|---|---|
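One common way to preserve the content image's color (see the Gatys et al. reference below) is to keep only the stylized luminance and take the chrominance channels from the content image. The sketch below illustrates that idea with Pillow's YCbCr conversion; it is not necessarily what `--color_control` does in this repository:
```python
from PIL import Image

def preserve_color(content_img: Image.Image, stylized_img: Image.Image) -> Image.Image:
    # keep stylized luminance (Y), take chrominance (Cb, Cr) from the content image
    y, _, _ = stylized_img.convert("YCbCr").split()
    _, cb, cr = content_img.resize(stylized_img.size).convert("YCbCr").split()
    return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")
```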
### Video Style Transfer
Original Video
https://user-images.githubusercontent.com/42717345/163805137-d7ba350b-a42e-4b91-ac2b-4916b1715153.mp4
Style Image
<img src="https://github.com/media-comp/2022-AdaIN-pytorch/blob/main/images/art/picasso_self_portrait.jpg" alt="drawing" width="200"/>
Style Transfer Video
https://user-images.githubusercontent.com/42717345/163805886-a1199a40-6032-4baf-b2d4-30e6e05b3385.mp4
## References
- X. Huang and S. Belongie. "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization.", in ICCV, 2017. [arxiv](https://arxiv.org/abs/1703.06868)
- [Original implementation in Torch](https://github.com/xunhuang1995/AdaIN-style)
- [Pretrained weights](https://github.com/naoto0804/pytorch-AdaIN)
- List of all source URLs of images collected from the internet. [Image_sources.txt](https://github.com/media-comp/2022-AdaIN-pytorch/blob/main/Image_sources.txt)
- L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. Controlling perceptual factors in neural style transfer. In CVPR, 2017. [arxiv](https://arxiv.org/abs/1611.07865)
- A. Hertzmann. Algorithms for Rendering in Artistic Styles. PhD thesis, New York University, 2001.