diff --git a/new/adv-dl-tf2-keras/02.md b/new/adv-dl-tf2-keras/02.md
index 2cd630162cba175e2e05c3e082a0fbaa43736e43..026ac21c08233d2663f31cd946a8f6f1cec4aff0 100644
--- a/new/adv-dl-tf2-keras/02.md
+++ b/new/adv-dl-tf2-keras/02.md
@@ -890,7 +890,7 @@ DenseNet 完成了我们对深度神经网络的讨论。 与 ResNet 一起,
 1. `Kaiming He et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Proceedings of the IEEE international conference on computer vision, 2015 (https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdfspm=5176.100239.blogcont55892.28.pm8zm1&file=He_Delving_Deep_into_ICCV_2015_paper.pdf).`
 1. `Kaiming He et al. Deep Residual Learning for Image Recognition. Proceedings of the IEEE conference on computer vision and pattern recognition, 2016a (http://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf).`
 1. `Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR, 2015 (https://arxiv.org/pdf/1409.1556/).`
-1. K`aiming He et al. Identity Mappings in Deep Residual Networks. European Conference on Computer Vision. Springer International Publishing, 2016b (https://arxiv.org/pdf/1603.05027.pdf).`
+1. `Kaiming He et al. Identity Mappings in Deep Residual Networks. European Conference on Computer Vision. Springer International Publishing, 2016b (https://arxiv.org/pdf/1603.05027.pdf).`
 1. `Gao Huang et al. Densely Connected Convolutional Networks. Proceedings of the IEEE conference on computer vision and pattern recognition, 2017 (http://openaccess.thecvf.com/content_cvpr_2017/papers/Huang_Densely_Connected_Convolutional_CVPR_2017_paper.pdf).`
 1. `Saining Xie et al. Aggregated Residual Transformations for Deep Neural Networks. Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017 (http://openaccess.thecvf.com/content_cvpr_2017/papers/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.pdf).`
 1. `Zagoruyko, Sergey, and Nikos Komodakis. "Wide residual networks." arXiv preprint arXiv:1605.07146 (2016).`
\ No newline at end of file
diff --git a/new/adv-dl-tf2-keras/06.md b/new/adv-dl-tf2-keras/06.md
index 3320076365ae634cfa85096214e83bc5c286ec43..d49542fd3047a1c85ee0c27f08b53a09d1c61c5d 100644
--- a/new/adv-dl-tf2-keras/06.md
+++ b/new/adv-dl-tf2-keras/06.md
@@ -618,7 +618,7 @@ def train(models, data, params):
 
 与以前提供给我们的所有 GAN 相似,我们已经对 InfoGAN 进行了 40,000 步的训练。 训练完成后,我们可以运行 InfoGAN 生成器,以使用`infogan_mnist.h5`文件中保存的模型生成新输出。 进行以下验证:
 
-1. Generate digits 0 to 9 by varying the discrete labels from 0 to 9\. Both continuous codes are set to zero. The results are shown in *Figure 6.1.5*. We can see that the InfoGAN discrete code can control the digits produced by the generator:
+1. 通过将离散标签从 0 更改为 9,可生成数字 0 至 9。 两个连续代码都设置为零。 结果显示在“图 6.1.5”中。 我们可以看到,InfoGAN 离散代码可以控制生成器产生的数字:
 
 ```py
 python3 infogan-mnist-6.1.1.py --generator=infogan_mnist.h5
@@ -638,7 +638,7 @@
 
 图 6.1.5:当离散代码从 0 变为 9 时,InfoGAN 生成的图像都被设置为零。
 
-2. Examine the effect of the first continuous code to understand which attribute has been affected. We vary the first continuous code from -2.0 to 2.0 for digits 0 to 9\. The second continuous code is set to 0.0\. *Figure 6.1.6* shows that the first continuous code controls the thickness of the digit:
+2. 检查第一个连续代码的效果,以了解哪个属性受到了影响。 对于数字 0 到 9,我们将第一个连续代码从 -2.0 更改为 2.0。 第二个连续代码设置为 0.0。 “图 6.1.6”显示第一个连续代码控制数字的粗细:
 
 ```py
 python3 infogan-mnist-6.1.1.py --generator=infogan_mnist.h5
@@ -1305,7 +1305,7 @@ def train(models, data, params):
 
 StackedGAN 生成器可以通过以下方式进行定性验证:
 
-1. Varying the discrete labels from 0 to 9 with both noise codes,`z[0]`and`z[1]`sampled from a normal distribution with a mean of 0.5 and a standard deviation of 1.0\. The results are shown in *Figure 6.2.9*. We're able to see that the StackedGAN discrete code can control the digits produced by the generator:
+1. 将离散标签从 0 变到 9,同时两个噪声代码`z[0]`和`z[1]`均从均值为 0.5、标准差为 1.0 的正态分布中采样。 结果显示在“图 6.2.9”中。 我们可以看到,StackedGAN 离散代码可以控制生成器生成的数字:
 
 ```py
 python3 stackedgan-mnist-6.2.1.py --generator0=stackedgan_mnist-gen0.h5 --generator1=stackedgan_mnist-gen1.h5 --digit=0
@@ -1321,7 +1321,7 @@ StackedGAN 生成器可以通过以下方式进行定性验证:
 
 图 6.2.9:当离散代码从 0 变为 9 时,StackedGAN 生成的图像。`z0`和`z1`均从正态分布中采样,平均值为 0,标准偏差为 0.5。
 
-2. Varying the first noise code,`z[0]`, as a constant vector from -4.0 to 4.0 for digits 0 to 9 is shown as follows. The second noise code,`z[1]`, is set to a zero vector. *Figure 6.2.10* shows that the first noise code controls the thickness of the digit. For example, for digit 8:
+2. 对于数字 0 到 9,将第一个噪声码`z[0]`作为常数向量从 -4.0 变到 4.0,如下所示。 第二个噪声码`z[1]`设置为零向量。 “图 6.2.10”显示第一个噪声码控制数字的粗细。 例如,对于数字 8:
 
 ```py
 python3 stackedgan-mnist-6.2.1.py --generator0=stackedgan_mnist-gen0.h5 --generator1=stackedgan_mnist-gen1.h5 --z0=0 --z1=0 --p0 --digit=8
diff --git a/new/adv-dl-tf2-keras/12.md b/new/adv-dl-tf2-keras/12.md
index 65b57c0c632399d6b0709b275874630ca3811285..ac6b4281cc5f708f19a7e4b9f5f046f3c1d11a53 100644
--- a/new/adv-dl-tf2-keras/12.md
+++ b/new/adv-dl-tf2-keras/12.md
@@ -416,7 +416,7 @@ python3 fcn-12.3.1.py --evaluate
 # 7.参考
 
 1. `Kirillov, Alexander, et al.: Panoptic Segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2019.`
-1. L`ong, Jonathan, Evan Shelhamer, and Trevor Darrell: Fully Convolutional Networks for Semantic Segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.`
+1. `Long, Jonathan, Evan Shelhamer, and Trevor Darrell: Fully Convolutional Networks for Semantic Segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.`
 1. `Zhao, Hengshuang, et al.: Pyramid Scene Parsing Network. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.`
 1. `Dutta, et al.: VGG Image Annotator http://www.robots.ox.ac.uk/~vgg/software/via/`
 1. `He Kaiming, et al.: Mask R-CNN. Proceedings of the IEEE international conference on computer vision. 2017.`
\ No newline at end of file
diff --git a/new/handson-dl-arch-py/7.md b/new/handson-dl-arch-py/7.md
index bd6b611d8e6d41518cae000b9e1f90a3ec7f8cb2..6b8fbdcbca266efac5c08f7b153f5c18704876b5 100644
--- a/new/handson-dl-arch-py/7.md
+++ b/new/handson-dl-arch-py/7.md
@@ -17,9 +17,9 @@ GAN 在合成图像,声波,视频和任何其他信号方面几乎具有相
 * 产生影像
 * 深度卷积 GAN 架构
 * 实施深度卷积 GAN
-* Conditional GAN architecture
+* 条件 GAN 架构
 * 实施条件 GAN
-* Information-maximizing GAN architecture
+* 信息最大化 GAN 架构
 * 实施信息最大化的 GAN
 
 # 什么是 GAN?
diff --git a/new/handson-py-dl-web/02.md b/new/handson-py-dl-web/02.md
index f9c4ba96d3b6f6d08f63b117459610046b3f6378..aa84cba31a83f07cb4b98d58284f357fc31bc661 100644
--- a/new/handson-py-dl-web/02.md
+++ b/new/handson-py-dl-web/02.md
@@ -110,8 +110,8 @@
 
 除 Sigmoid 函数外,以下是一些众所周知的常用函数,这些函数使神经元具有非线性特征:
 
-* h
-* 恢复
+* Tanh
+* ReLU
 * 泄漏的 ReLU
 
 在文献中,这些功能以及我们刚刚研究的两个功能都称为激活功能。 目前,ReLU 及其变体是迄今为止最成功的激活功能。
diff --git a/new/master-cv-tf-2x/4.md b/new/master-cv-tf-2x/4.md
index 8e22bd5be158aff6d3014f257db76e09db3e1f63..1f424218489e8505d8d3a9b4f0dc723d2716df8e 100644
--- a/new/master-cv-tf-2x/4.md
+++ b/new/master-cv-tf-2x/4.md
@@ -137,9 +137,9 @@ CNN 的图像过滤和处理方法包括执行多种操作,所有这些操作
 
 使用了多种类型的激活函数,但最常见的激活函数如下:
 
-* 乙状结肠
-* h
-* 恢复
+* Sigmoid
+* Tanh
+* ReLU
 
 下图显示了上述激活功能: