diff --git a/README.md b/README.md
index 634714961aba78c57b08ef0214f2006b010143bb..2d777e8aaa5e421e0fd8e0d1a2ad342468efd37d 100755
--- a/README.md
+++ b/README.md
@@ -4,6 +4,17 @@ You can use it to automatically remove the mosaics in images and videos, or add
This project is based on 'semantic segmentation' and 'Image-to-Image Translation'.
* [中文版](./README_CN.md)
+### More examples
+original | auto-added mosaic | auto-removed mosaic
+:-:|:-:|:-:
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/lena.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/lena_add.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/lena_clean.jpg)
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/youknow.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/youknow_add.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/youknow_clean.png)
+* Compared with [DeepCreamPy](https://github.com/deeppomf/DeepCreamPy)
+mosaic image | DeepCreamPy | ours
+:-:|:-:|:-:
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_a_mosaic.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/a_dcp.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_a_clean.jpg)
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_b_mosaic.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/b_dcp.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_b_clean.jpg)
+
## Notice
The code does not yet include the training part; I will finish it in my free time.
@@ -21,6 +32,7 @@ Attentions:
- Different pre-trained models are suitable for different effects.
- Run time depends on computer performance.
- If output video cannot be played, you can try with [potplayer](https://daumpotplayer.com/download/).
+ - The GUI version is updated less frequently than the source code.
### Run from source
#### Prerequisites
diff --git a/README_CN.md b/README_CN.md
index b8ac9eda18b9ee0e447ea3d4dc454733fb700de3..30e638300739f731a24bef8c6f08e40a78e6fb19 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -2,6 +2,17 @@
# DeepMosaics
这是一个通过深度学习自动的为图片/视频添加马赛克,或消除马赛克的项目.
它基于“语义分割”以及“图像翻译”.
+### 更多例子
+原始 | 自动打码 | 自动去码
+:-:|:-:|:-:
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/lena.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/lena_add.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/lena_clean.jpg)
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/youknow.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/youknow_add.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/youknow_clean.png)
+* 与 [DeepCreamPy](https://github.com/deeppomf/DeepCreamPy) 相比较
+马赛克图片 | DeepCreamPy | ours
+:-:|:-:|:-:
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_a_mosaic.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/a_dcp.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_a_clean.jpg)
+![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_b_mosaic.jpg) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/b_dcp.png) | ![image](https://github.com/HypoX64/DeepMosaics_example/blob/master/face_b_clean.jpg)
+
## 一些说明
代码暂不包含训练部分,训练方法我将在空闲时间给出.
现在,代码已经支持基于[pix2pixHD](https://github.com/NVIDIA/pix2pixHD)训练出的模型,但网络仍在训练中,这将使得输出结果看起来更加清晰,"真实".
@@ -19,7 +30,8 @@
- 程序的运行要求在64位Windows操作系统,我仅在Windows10运行过,其他版本暂未经过测试
- 请根据需求选择合适的预训练模型进行测试
- 运行时间取决于电脑性能,对于视频文件,我们建议可以先使用截图进行测试.
- - 如果输出的视频无法播放,这边建议您尝试[potplayer](https://daumpotplayer.com/download/).
+ - 如果输出的视频无法播放,建议尝试[potplayer](https://daumpotplayer.com/download/).
+ - 相比于源码,GUI版本的更新将会延后.
### 通过源代码运行
#### 前提要求
diff --git a/util/ffmpeg.py b/util/ffmpeg.py
index 94b57bde937261a4932cc79bae43c42a93ad8f30..6ed6ecc7c03eea931200ddf92d559ee1cc72ead6 100755
--- a/util/ffmpeg.py
+++ b/util/ffmpeg.py
@@ -4,12 +4,12 @@ def video2image(videopath,imagepath):
os.system('ffmpeg -i "'+videopath+'" -f image2 '+imagepath)
def video2voice(videopath,voicepath):
- os.system('ffmpeg -i '+videopath+' -f mp3 '+voicepath)
+ os.system('ffmpeg -i "'+videopath+'" -f mp3 '+voicepath)
def image2video(fps,imagepath,voicepath,videopath):
os.system('ffmpeg -y -r '+str(fps)+' -i '+imagepath+' -vcodec libx264 '+'./tmp/video_tmp.mp4')
#os.system('ffmpeg -f image2 -i '+imagepath+' -vcodec libx264 -r '+str(fps)+' ./tmp/video_tmp.mp4')
- os.system('ffmpeg -i ./tmp/video_tmp.mp4 -i '+voicepath+' -vcodec copy -acodec copy '+videopath)
+    os.system('ffmpeg -i ./tmp/video_tmp.mp4 -i "'+voicepath+'" -vcodec copy -acodec copy "'+videopath+'"')
def get_video_infos(videopath):
cmd_str = 'ffprobe -v quiet -print_format json -show_format -show_streams -i "' + videopath + '"'
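The quoting fixes above wrap paths in double quotes before handing the string to the shell, which fails as soon as a file name itself contains a double quote. A minimal sketch of a more robust approach (the function names `build_ffmpeg_cmd` and `video2voice_safe` are hypothetical, not part of this repository):

```python
import shlex
import subprocess

def build_ffmpeg_cmd(videopath, voicepath):
    # shlex.quote escapes each path for a POSIX shell, so spaces and
    # quote characters inside file names cannot split the command apart.
    return ('ffmpeg -i ' + shlex.quote(videopath)
            + ' -f mp3 ' + shlex.quote(voicepath))

def video2voice_safe(videopath, voicepath):
    # Alternatively, pass the arguments as a list: no shell is involved,
    # so no quoting is needed at all. check=True raises on ffmpeg errors
    # instead of silently ignoring a non-zero exit status.
    subprocess.run(['ffmpeg', '-i', videopath, '-f', 'mp3', voicepath],
                   check=True)

# A path with a space survives the quoting intact:
# build_ffmpeg_cmd('my video.mp4', 'out.mp3')
# → ffmpeg -i 'my video.mp4' -f mp3 out.mp3
```

The list-based `subprocess.run` form is the simplest choice on any platform; `shlex.quote` is only needed if the command must stay a single shell string (e.g. for logging or `os.system` compatibility on POSIX systems).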