Unverified commit b1dcb209, authored by zhengya01, committed by GitHub

Merge pull request #17 from PaddlePaddle/develop

update
@@ -16,55 +16,56 @@ PaddlePaddle provides a rich set of computational units, letting users build solutions in a modular
## PaddleCV
Model|Description|Strengths|Reference paper
--|:--:|:--:|:--:
[AlexNet](./fluid/PaddleCV/image_classification/models)|Classic image classification model|First successful use of ReLU, Dropout, and LRN in a CNN, with GPU-accelerated computation|[ImageNet Classification with Deep Convolutional Neural Networks](https://www.researchgate.net/publication/267960550_ImageNet_Classification_with_Deep_Convolutional_Neural_Networks)
[VGG](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/image_classification/models)|Classic image classification model|Builds on AlexNet with small 3*3 convolution kernels and greater network depth, giving strong generalization|[Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
[GoogleNet](./fluid/PaddleCV/image_classification/models)|Classic image classification model|Increases network depth and width without increasing the computational load, yielding superior performance|[Going deeper with convolutions](https://ieeexplore.ieee.org/document/7298594)
[ResNet](./fluid/PaddleCV/image_classification/models)|Residual network|Introduces a new residual structure, solving the accuracy degradation that comes with deeper networks|[Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
[Inception-v4](./fluid/PaddleCV/image_classification/models)|Classic image classification model|A deeper and wider Inception architecture|[Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning](http://arxiv.org/abs/1602.07261)
[MobileNet](./fluid/PaddleCV/image_classification/models)|Lightweight network|Efficient model designed for mobile and embedded devices|[MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861)
[DPN](./fluid/PaddleCV/image_classification/models)|Image classification model|Combines the DenseNet and ResNeXt architectures, improving classification accuracy|[Dual Path Networks](https://arxiv.org/abs/1707.01629)
[SE-ResNeXt](./fluid/PaddleCV/image_classification/models)|Image classification model|Adds SE blocks to ResNeXt, improving model accuracy|[Squeeze-and-excitation networks](https://arxiv.org/abs/1709.01507)
[SSD](./fluid/PaddleCV/object_detection/README_cn.md)|Single-stage object detector|Detects objects of different scales on feature maps of matching scales, and can be conveniently plugged into any standard convolutional network|[SSD: Single Shot MultiBox Detector](https://arxiv.org/abs/1512.02325)
[Face Detector: PyramidBox](./fluid/PaddleCV/face_detection/README_cn.md)|Single-stage face detector based on SSD|Uses contextual information to detect hard faces; highly expressive and robust|[PyramidBox: A Context-assisted Single Shot Face Detector](https://arxiv.org/pdf/1803.07737.pdf)
[Faster RCNN](./fluid/PaddleCV/rcnn/README_cn.md)|Classic two-stage object detector|Creatively generates region proposals with a convolutional network that shares features with the detection network, reducing the number of proposals while raising their quality|[Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks](https://arxiv.org/abs/1506.01497)
[Mask RCNN](./fluid/PaddleCV/rcnn/README_cn.md)|Classic instance segmentation model based on Faster RCNN|Adds a segmentation branch to Faster RCNN that produces masks, decoupling mask and class prediction|[Mask R-CNN](https://arxiv.org/abs/1703.06870)
[ICNet](./fluid/PaddleCV/icnet)|Real-time semantic segmentation model|Accounts for both speed and accuracy, balancing accuracy on high-resolution images against the efficiency of low-complexity networks|[ICNet for Real-Time Semantic Segmentation on High-Resolution Images](https://arxiv.org/abs/1704.08545)
[DCGAN](./fluid/PaddleCV/gan/c_gan)|Image generation model|Deep convolutional GAN; combines GANs with convolutional networks to address unstable GAN training|[Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks](https://arxiv.org/pdf/1511.06434.pdf)
[ConditionalGAN](./fluid/PaddleCV/gan/c_gan)|Image generation model|Conditional GAN; conditions the model on extra information to guide the data generation process|[Conditional Generative Adversarial Nets](https://arxiv.org/abs/1411.1784)
[CycleGAN](./fluid/PaddleCV/gan/cycle_gan)|Image-to-image translation model|Automatically converts images of one class into another; usable for style transfer|[Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/abs/1703.10593)
[CRNN-CTC model](./fluid/PaddleCV/ocr_recognition)|Scene text recognition model|Uses a CTC model to recognize single-line English text in images|[Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks](https://www.researchgate.net/publication/221346365_Connectionist_temporal_classification_Labelling_unsegmented_sequence_data_with_recurrent_neural_networks)
[Attention model](./fluid/PaddleCV/ocr_recognition)|Scene text recognition model|Uses attention to recognize single-line English text in images|[Recurrent Models of Visual Attention](https://arxiv.org/abs/1406.6247)
[Metric Learning](https://github.com/PaddlePaddle/models/tree/develop/fluid/PaddleCV/metric_learning)|Metric learning model|Analyzes association and comparison relations between objects; useful as an aid to classification and clustering, and widely applied to image retrieval, face recognition, and similar tasks|-
[TSN](./fluid/PaddleCV/video_classification)|Video classification model|Models long-range temporal structure, combining a sparse temporal sampling strategy with video-level supervision for effective and efficient learning over whole videos|[Temporal Segment Networks: Towards Good Practices for Deep Action Recognition](https://arxiv.org/abs/1608.00859)
[Video model library](./fluid/PaddleCV/video)|Video model library|Provides developers with convenient, efficient PaddlePaddle-based deep learning models for video understanding, video editing, video generation, and more|-
[caffe2fluid](./fluid/PaddleCV/caffe2fluid)|Tool for converting Caffe models into Paddle Fluid configuration and model files|-|-
## PaddleNLP
Model|Description|Strengths|Reference paper
--|:--:|:--:|:--:
[Transformer](./fluid/PaddleNLP/neural_machine_translation/transformer/README_cn.md)|Machine translation model|Based on self-attention: low computational complexity, high parallelism, easily learns long-range dependencies, and translates better|[Attention Is All You Need](https://arxiv.org/abs/1706.03762)
[LAC](https://github.com/baidu/lac/blob/master/README.md)|Joint lexical analysis model|Performs Chinese word segmentation, part-of-speech tagging, and named-entity recognition as a single task|[Chinese Lexical Analysis with Deep Bi-GRU-CRF Network](https://arxiv.org/abs/1807.01882)
[Senta](https://github.com/baidu/Senta/blob/master/README.md)|Sentiment analysis model collection|The sentiment analysis models of Baidu's AI open platform|-
[DAM](./fluid/PaddleNLP/deep_attention_matching_net)|Semantic matching model|Work by Baidu's NLP department published at ACL 2018; selects responses in multi-turn dialogue for retrieval-based chatbots|[Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network](http://aclweb.org/anthology/P18-1103)
[SimNet](https://github.com/baidu/AnyQ/blob/master/tools/simnet/train/paddle/README.md)|Semantic matching framework|Models built with SimNet can be conveniently added to the AnyQ system to strengthen its semantic matching ability|-
[DuReader](./fluid/PaddleNLP/machine_reading_comprehension/README.md)|Reading comprehension model|Machine reading comprehension model for Baidu's MRC dataset|-
[Bi-GRU-CRF](./fluid/PaddleNLP/sequence_tagging_for_ner/README.md)|Named entity recognition|NER model combining a CRF with a bidirectional GRU|-
## PaddleRec
Model|Description|Strengths|Reference paper
--|:--:|:--:|:--:
[TagSpace](./fluid/PaddleRec/tagspace)|Embedding model for text and tags|Industrial-scale tag recommendation, e.g., tag suggestion for news feeds|[#TagSpace: Semantic embeddings from hashtags](https://www.bibsonomy.org/bibtex/0ed4314916f8e7c90d066db45c293462)
[GRU4Rec](./fluid/PaddleRec/gru4rec)|Personalized recommendation model|First to apply RNNs (GRU) to session-based recommendation, clearly outperforming traditional KNN and matrix factorization|[Session-based Recommendations with Recurrent Neural Networks](https://arxiv.org/abs/1511.06939)
[SSR](./fluid/PaddleRec/ssr)|Sequential semantic retrieval recommendation model|Follows the reference paper's idea of predicting user behavior at multiple temporal granularities|[Multi-Rate Deep Learning for Temporal Recommendation](https://dl.acm.org/citation.cfm?id=2914726)
[DeepCTR](./fluid/PaddleRec/ctr/README.cn.md)|Click-through rate prediction model|Implements only the DNN part of the model described in the DeepFM paper; DeepFM itself is given in another example|[DeepFM: A Factorization-Machine based Neural Network for CTR Prediction](https://arxiv.org/abs/1703.04247)
[Multiview-Simnet](./fluid/PaddleRec/multiview_simnet)|Personalized recommendation model|Multi-view model that merges multiple feature views of users and items into one unified model|[A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems](http://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/frp1159-songA.pdf)
## Other Models
Model|Description|Strengths|Reference paper
--|:--:|:--:|:--:
[DeepASR](./fluid/DeepASR/README_cn.md)|Speech recognition system|Configures and trains the acoustic model of a speech recognition system with the Fluid framework, integrating Kaldi's decoder|-
[DQN](./fluid/DeepQNetwork/README_cn.md)|Deep Q-Network|Value-based reinforcement learning algorithm; the first model to successfully combine deep learning with reinforcement learning|[Human-level control through deep reinforcement learning](https://www.nature.com/articles/nature14236)
[DoubleDQN](./fluid/DeepQNetwork/README_cn.md)|DQN variant|Applies the Double Q idea to DQN, addressing over-optimistic value estimates|[Deep Reinforcement Learning with Double Q-Learning](https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/viewPaper/12389)
[DuelingDQN](./fluid/DeepQNetwork/README_cn.md)|DQN variant|Improves the DQN model, raising its performance|[Dueling Network Architectures for Deep Reinforcement Learning](http://proceedings.mlr.press/v48/wangf16.html)
## License
This tutorial is contributed by [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) and licensed under the [Apache-2.0 license](LICENSE).
......
DeepLab: Running the example programs in this directory requires PaddlePaddle Fluid v1.3.0 or above. If your installed PaddlePaddle version is lower, please update it following the instructions in the installation documentation. When running on GPU, the program requires cuDNN v7.
## Code structure
@@ -38,15 +38,16 @@ data/cityscape/
# Preparing pretrained models

To save more GPU memory, we use Group Norm as the normalization method here.
To train a model from scratch, download our initialization model:
```
wget https://paddle-deeplab.bj.bcebos.com/deeplabv3plus_gn_init.tgz
tar -xf deeplabv3plus_gn_init.tgz && rm deeplabv3plus_gn_init.tgz
```
To fine-tune the final trained model, or to use it directly for prediction, download our final model:
```
wget https://paddle-deeplab.bj.bcebos.com/deeplabv3plus_gn.tgz
tar -xf deeplabv3plus_gn.tgz && rm deeplabv3plus_gn.tgz
```
@@ -59,6 +60,7 @@ python ./train.py \
--batch_size=1 \
--train_crop_size=769 \
--total_step=50 \
--norm_type=gn \
--init_weights_path=$INIT_WEIGHTS_PATH \
--save_weights_path=$SAVE_WEIGHTS_PATH \
--dataset_path=$DATASET_PATH
@@ -72,19 +74,25 @@ python train.py --help
```
python ./train.py \
--batch_size=8 \
--parallel=True \
--norm_type=gn \
--train_crop_size=769 \
--total_step=90000 \
--base_lr=0.001 \
--init_weights_path=deeplabv3plus_gn_init \
--save_weights_path=output \
--dataset_path=$DATASET_PATH
```
If you run out of GPU memory, try reducing `batch_size` while scaling `total_step` up proportionally so that their product stays the same. Thanks to the properties of Group Norm, changing `batch_size` does not noticeably affect the results while saving memory; for example, you can set `--batch_size=4 --total_step=180000`.

If you want to train on multiple GPUs, increase `batch_size` and decrease `total_step` by the same factor; for example, if single-GPU training uses `--batch_size=4 --total_step=180000`, training on 4 GPUs uses `--batch_size=16 --total_step=45000`.
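As a quick sanity check of this rule, the tiny script below (illustrative only, not part of this repository) rescales a configuration while keeping the total number of samples seen constant:

```python
# Illustrative helper: rescale (batch_size, total_step) by an integer factor
# while keeping batch_size * total_step -- the number of samples seen -- fixed.
def rescale(batch_size, total_step, factor):
    new_bs, new_steps = batch_size * factor, total_step // factor
    assert new_bs * new_steps == batch_size * total_step
    return new_bs, new_steps

print(rescale(4, 180000, 4))  # -> (16, 45000): the 4-GPU setting above
```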
### Testing
Run the following command to evaluate on the `Cityscape` test dataset:
```
python ./eval.py \
--init_weights=deeplabv3plus_gn \
--norm_type=gn \
--dataset_path=$DATASET_PATH
```
The model file must be specified with the `--model_path` option. The evaluation metric reported by the test script is mean IoU.
@@ -93,16 +101,17 @@ python ./eval.py \
## Experimental results
After training completes, evaluate on the validation set with `eval.py`; this produces:
```
load from: ../models/deeplabv3plus_gn
total number 500
step: 500, mIoU: 0.7881
```
## Other information
|Dataset | norm type | pretrained model | trained model | mean IoU |
|---|---|---|---|---|
|CityScape | batch norm | [deeplabv3plus_xception65_initialize.tgz](https://paddle-deeplab.bj.bcebos.com/deeplabv3plus_xception65_initialize.tgz) | [deeplabv3plus.tgz](https://paddle-deeplab.bj.bcebos.com/deeplabv3plus.tgz) | 0.7873 |
|CityScape | group norm | [deeplabv3plus_gn_init.tgz](https://paddle-deeplab.bj.bcebos.com/deeplabv3plus_gn_init.tgz) | [deeplabv3plus_gn.tgz](https://paddle-deeplab.bj.bcebos.com/deeplabv3plus_gn.tgz) | 0.7881 |
## References
......
@@ -27,6 +27,7 @@ add_arg('verbose', bool, False, "Print mIoU for each step if ver
add_arg('use_gpu', bool, True, "Whether use GPU or CPU.")
add_arg('num_classes', int, 19, "Number of classes.")
add_arg('use_py_reader', bool, True, "Use py_reader.")
add_arg('norm_type', str, 'bn', "Normalization type, should be 'bn' or 'gn'.")
#yapf: enable
@@ -58,6 +59,7 @@ args = parser.parse_args()
models.clean()
models.is_train = False
models.default_norm_type = args.norm_type
deeplabv3p = models.deeplabv3p
image_shape = [1025, 2049]
......
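For context, eval.py above gains a `norm_type` argument that is forwarded to the model through `models.default_norm_type`. Below is a minimal sketch of the kind of switch this enables, assuming Fluid 1.x's `batch_norm`/`group_norm` layers; the helper itself and the default `groups` value are illustrative, not the repository's actual code:

```python
import paddle.fluid as fluid

def norm_layer(input, norm_type='bn', is_test=False, groups=32):
    """Illustrative normalization switch between batch norm and group norm."""
    if norm_type == 'gn':
        # Group Norm statistics are computed per sample, so results do not
        # depend on the batch size -- the property the README relies on.
        return fluid.layers.group_norm(input=input, groups=groups)
    # Batch Norm must run with is_test=True during evaluation.
    return fluid.layers.batch_norm(input=input, is_test=is_test)
```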
@@ -4,7 +4,6 @@ from __future__ import print_function
import os
if 'FLAGS_fraction_of_gpu_memory_to_use' not in os.environ:
    os.environ['FLAGS_fraction_of_gpu_memory_to_use'] = '0.98'
import paddle
import paddle.fluid as fluid
@@ -34,7 +33,7 @@ add_arg('use_gpu', bool, True, "Whether use GPU or CPU.")
add_arg('num_classes', int, 19, "Number of classes.")
add_arg('load_logit_layer', bool, True, "Load last logit fc layer or not. If you are training with different number of classes, you should set to False.")
add_arg('memory_optimize', bool, True, "Using memory optimizer.")
add_arg('norm_type', str, 'bn', "Normalization type, should be 'bn' or 'gn'.")
add_arg('profile', bool, False, "Enable profiler.")
add_arg('use_py_reader', bool, True, "Use py reader.")
parser.add_argument(
@@ -225,7 +224,6 @@ with profile_context(args.profile):
    print("Training done. Model is saved to", args.save_weights_path)
    save_model()
if args.enable_ce:
    gpu_num = fluid.core.get_cuda_device_count()
......
@@ -81,7 +81,7 @@ python train.py \
* **lr**: initialized learning rate. Default: 0.1.
* **pretrained_model**: model path for pretraining. Default: None.
* **checkpoint**: the checkpoint path to resume. Default: None.
* **model_category**: the category of models, ("models"|"models_name"). Default: "models_name".
Or can start the training step by running the ```run.sh```.
@@ -221,6 +221,8 @@ Models are trained by starting with learning rate ```0.1``` and decaying it by
- Released models: not specify parameter names

**NOTE: These are trained by using model_category=models**

|model | top-1/top-5 accuracy(PIL)| top-1/top-5 accuracy(CV2) |
|- |:-: |:-:|
|[ResNet152](http://paddle-imagenet-models.bj.bcebos.com/ResNet152_pretrained.zip) | 78.18%/93.93% | 78.11%/94.04% |
......
@@ -79,7 +79,7 @@ python train.py \
* **lr**: initialized learning rate. Default: 0.1.
* **pretrained_model**: model path for pretraining. Default: None.
* **checkpoint**: the checkpoint path to resume. Default: None.
* **model_category**: the category of models, ("models"|"models_name"). Default: "models_name".

**Data reader notes:** The data readers are defined in ```reader.py``` and ```reader_cv2.py```. In general, the CV2 reader loads data faster, while the PIL reader gives slightly higher accuracy. During the [training stage](#training-a-model), the default augmentation is random cropping plus horizontal flipping, while the [evaluation](#inference) and [inference](#inference) stages default to center cropping. The currently supported augmentations are:
* Rotation
@@ -213,6 +213,8 @@ Models include two kinds: models with parameter names, and models without
- Released models: not specify parameter names

**Note: these pretrained models were trained with model_category=models**

|model | top-1/top-5 accuracy(PIL)| top-1/top-5 accuracy(CV2) |
|- |:-: |:-:|
|[ResNet152](http://paddle-imagenet-models.bj.bcebos.com/ResNet152_pretrained.zip) | 78.18%/93.93% | 78.11%/94.04% |
......
@@ -7,8 +7,6 @@ import time
import sys
import paddle
import paddle.fluid as fluid
#import reader_cv2 as reader
import reader as reader
import argparse
@@ -26,10 +24,21 @@ add_arg('class_dim', int, 1000, "Class number.")
add_arg('image_shape', str, "3,224,224", "Input image size")
add_arg('with_mem_opt', bool, True, "Whether to use memory optimization or not.")
add_arg('pretrained_model', str, None, "Whether to use pretrained model.")
add_arg('model', str, "SE_ResNeXt50_32x4d", "Set the network to use.")
add_arg('model_category', str, "models_name", "Whether to use models_name or not, valid value:'models','models_name'." )
# yapf: enable

def set_models(model_category):
    global models
    assert model_category in ["models", "models_name"
                              ], "{} is not in lists: {}".format(
                                  model_category, ["models", "models_name"])
    if model_category == "models_name":
        import models_name as models
    else:
        import models as models

def eval(args):
@@ -40,6 +49,7 @@ def eval(args):
    with_memory_optimization = args.with_mem_opt
    image_shape = [int(m) for m in args.image_shape.split(",")]

    model_list = [m for m in dir(models) if "__" not in m]
    assert model_name in model_list, "{} is not in lists: {}".format(args.model,
                                                                     model_list)
@@ -63,11 +73,11 @@ def eval(args):
        acc_top5 = fluid.layers.accuracy(input=out0, label=label, k=5)
    else:
        out = model.net(input=image, class_dim=class_dim)
        cost, pred = fluid.layers.softmax_with_cross_entropy(
            out, label, return_softmax=True)
        avg_cost = fluid.layers.mean(x=cost)
        acc_top1 = fluid.layers.accuracy(input=pred, label=label, k=1)
        acc_top5 = fluid.layers.accuracy(input=pred, label=label, k=5)

    test_program = fluid.default_main_program().clone(for_test=True)
@@ -125,6 +135,7 @@ def eval(args):
def main():
    args = parser.parse_args()
    print_arguments(args)
    set_models(args.model_category)
    eval(args)
......
@@ -7,7 +7,6 @@ import time
import sys
import paddle
import paddle.fluid as fluid
import reader
import argparse
import functools
@@ -23,9 +22,19 @@ add_arg('image_shape', str, "3,224,224", "Input image size")
add_arg('with_mem_opt', bool, True, "Whether to use memory optimization or not.")
add_arg('pretrained_model', str, None, "Whether to use pretrained model.")
add_arg('model', str, "SE_ResNeXt50_32x4d", "Set the network to use.")
add_arg('model_category', str, "models_name", "Whether to use models_name or not, valid value:'models','models_name'." )
# yapf: enable

def set_models(model_category):
    global models
    assert model_category in ["models", "models_name"
                              ], "{} is not in lists: {}".format(
                                  model_category, ["models", "models_name"])
    if model_category == "models_name":
        import models_name as models
    else:
        import models as models

def infer(args):
@@ -35,7 +44,7 @@ def infer(args):
    pretrained_model = args.pretrained_model
    with_memory_optimization = args.with_mem_opt
    image_shape = [int(m) for m in args.image_shape.split(",")]

    model_list = [m for m in dir(models) if "__" not in m]
    assert model_name in model_list, "{} is not in lists: {}".format(args.model,
                                                                     model_list)
@@ -85,6 +94,7 @@ def infer(args):
def main():
    args = parser.parse_args()
    print_arguments(args)
    set_models(args.model_category)
    infer(args)
......
#Hyperparameters config
#Example: SE_ResNeXt50_32x4d
python train.py \
--model=SE_ResNeXt50_32x4d \
--batch_size=400 \
--total_images=1281167 \
--class_dim=1000 \
--image_shape=3,224,224 \
--model_save_dir=output/ \
--with_mem_opt=True \
--lr_strategy=cosine_decay \
--lr=0.1 \
--num_epochs=200 \
--l2_decay=1.2e-4 \
--model_category=models_name \
# >log_SE_ResNeXt50_32x4d.txt 2>&1 &

#AlexNet:
#python train.py \
# --model=AlexNet \
@@ -20,23 +23,11 @@ python train.py \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --model_category=models_name \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.01 \
# --l2_decay=1e-4

#MobileNet v1:
#python train.py \
@@ -47,9 +38,11 @@ python train.py \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --model_category=models_name \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1 \
# --l2_decay=3e-5

#python train.py \
# --model=MobileNetV2 \
@@ -58,10 +51,12 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --model_category=models_name \
# --with_mem_opt=True \
# --lr_strategy=cosine_decay \
# --num_epochs=240 \
# --lr=0.1 \
# --l2_decay=4e-5

#ResNet50:
#python train.py \
# --model=ResNet50 \
@@ -71,9 +66,11 @@ python train.py \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --model_category=models_name \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1 \
# --l2_decay=1e-4

#ResNet101:
#python train.py \
@@ -83,44 +80,58 @@ python train.py \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --model_category=models_name \
# --with_mem_opt=True \
# --lr_strategy=piecewise_decay \
# --num_epochs=120 \
# --lr=0.1 \
# --l2_decay=1e-4

#ResNet152:
#python train.py \
# --model=ResNet152 \
# --batch_size=256 \
# --total_images=1281167 \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --model_save_dir=output/ \
# --lr_strategy=piecewise_decay \
# --model_category=models_name \
# --with_mem_opt=True \
# --lr=0.1 \
# --num_epochs=120 \
# --l2_decay=1e-4

#SE_ResNeXt50_32x4d:
#python train.py \
# --model=SE_ResNeXt50_32x4d \
# --batch_size=400 \
# --total_images=1281167 \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --lr_strategy=cosine_decay \
# --model_category=models_name \
# --model_save_dir=output/ \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --l2_decay=1.2e-4

#SE_ResNeXt101_32x4d:
#python train.py \
# --model=SE_ResNeXt101_32x4d \
# --batch_size=400 \
# --total_images=1281167 \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --lr_strategy=cosine_decay \
# --model_category=models_name \
# --model_save_dir=output/ \
# --lr=0.1 \
# --num_epochs=200 \
# --with_mem_opt=True \
# --l2_decay=1.5e-5

#VGG11:
#python train.py \
@@ -129,8 +140,12 @@ python train.py \
# --total_images=1281167 \
# --image_shape=3,224,224 \
# --lr_strategy=cosine_decay \
# --class_dim=1000 \
# --model_category=models_name \
# --model_save_dir=output/ \
# --lr=0.1 \
# --num_epochs=90 \
# --with_mem_opt=True \
# --l2_decay=2e-4

#VGG13:
@@ -138,8 +153,42 @@ python train.py \
# --model=VGG13 \
# --batch_size=256 \
# --total_images=1281167 \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --lr_strategy=cosine_decay \
# --lr=0.01 \
# --num_epochs=90 \
# --model_category=models_name \
# --model_save_dir=output/ \
# --with_mem_opt=True \
# --l2_decay=3e-4

#VGG16:
#python train.py \
# --model=VGG16 \
# --batch_size=256 \
# --total_images=1281167 \
# --class_dim=1000 \
# --lr_strategy=cosine_decay \
# --image_shape=3,224,224 \
# --model_category=models_name \
# --model_save_dir=output/ \
# --lr=0.01 \
# --num_epochs=90 \
# --with_mem_opt=True \
# --l2_decay=3e-4

#VGG19:
#python train.py \
# --model=VGG19 \
# --batch_size=256 \
# --total_images=1281167 \
# --class_dim=1000 \
# --image_shape=3,224,224 \
# --lr_strategy=cosine_decay \
# --lr=0.01 \
# --num_epochs=90 \
# --with_mem_opt=True \
# --model_category=models_name \
# --model_save_dir=output/ \
# --l2_decay=3e-4
@@ -39,7 +39,7 @@ add_arg('lr_strategy', str, "piecewise_decay", "Set the learning rate
add_arg('model', str, "SE_ResNeXt50_32x4d", "Set the network to use.")
add_arg('enable_ce', bool, False, "If set True, enable continuous evaluation job.")
add_arg('data_dir', str, "./data/ILSVRC2012", "The ImageNet dataset root dir.")
add_arg('model_category', str, "models_name", "Whether to use models_name or not, valid value:'models','models_name'." )
add_arg('fp16', bool, False, "Enable half precision training with fp16." )
add_arg('scale_loss', float, 1.0, "Scale loss for fp16." )
add_arg('l2_decay', float, 1e-4, "L2_decay parameter.")
......
@@ -85,13 +85,13 @@ def eval():
        im_info = []
        for data in batch_data:
            im_info.append(data[1])
        results = exe.run(fetch_list=[v.name for v in fetch_list],
                          feed=feeder.feed(batch_data),
                          return_numpy=False)
        pred_boxes_v = results[0]
        if cfg.MASK_ON:
            masks_v = results[1]

        new_lod = pred_boxes_v.lod()
        nmsed_out = pred_boxes_v
@@ -108,6 +108,12 @@ def eval():
    eval_end = time.time()
    total_time = eval_end - eval_start
    print('average time of eval is: {}'.format(total_time / (batch_id + 1)))

    assert len(dts_res) > 0, "The number of valid bbox detected is zero.\n \
        Please use reasonable model and check input data."
    assert len(segms_res) > 0, "The number of valid mask detected is zero.\n \
        Please use reasonable model and check input data."

    with open("detection_bbox_result.json", 'w') as outfile:
        json.dump(dts_res, outfile)
    print("start evaluate bbox using coco api")
......
## Introduction

This tutorial aims to give developers convenient, efficient, PaddlePaddle-based deep learning models for video understanding, video editing, video generation, and related tasks. It currently contains video classification models and will gradually be extended to more scenarios.

The video classification models currently included are:

| Model | Category | Description |
| :--------------- | :--------: | :------------: |
| [Attention Cluster](./models/attention_cluster/README.md) | Video classification | Multimodal attention-cluster feature fusion method proposed at CVPR'18 |
| [Attention LSTM](./models/attention_lstm/README.md) | Video classification | Widely used model; fast, with high accuracy |
| [NeXtVLAD](./models/nextvlad/README.md) | Video classification | Best single model in the 2nd YouTube-8M competition |
| [StNet](./models/stnet/README.md) | Video classification | Joint spatial-temporal video modeling method proposed at AAAI'19 |
| [TSN](./models/tsn/README.md) | Video classification | Classic 2D-CNN-based solution proposed at ECCV'16 |

### Key features

- Contains several leading models for video classification. Attention LSTM, Attention Cluster, and NeXtVLAD are popular feature-sequence models, while TSN and StNet are end-to-end video classification models. Attention LSTM is fast with high accuracy, NeXtVLAD was the best single model in the 2nd YouTube-8M competition, and TSN is the classic 2D-CNN-based solution. Attention Cluster and StNet were developed at Baidu and published at CVPR 2018 and AAAI 2019 respectively; both were used in the first-place entry of the Kinetics-600 competition.
- Provides general scaffolding code suited to video classification tasks, so users can configure a model and run training and evaluation with one command.

## Installation

Running the sample code in this model library requires PaddlePaddle Fluid v1.2.0 or above. If your environment's PaddlePaddle is older, please update it following the [installation documentation](http://www.paddlepaddle.org/documentation/docs/zh/1.3/beginners_guide/install/index_cn.html).

## Data preparation

The video model library uses the Youtube-8M and Kinetics datasets; see the [data notes](./dataset/README.md) for usage details.

## Quick start

The library provides a common train/test/infer framework: pass a model name and model config to `train.py/test.py/infer.py` to launch training or prediction with one command.

Taking StNet as an example:

Single-GPU training:
``` bash
export CUDA_VISIBLE_DEVICES=0
python train.py --model-name=STNET \
        --config=./configs/stnet.txt \
        --save-dir=checkpoints
```

Multi-GPU training:
``` bash
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python train.py --model-name=STNET \
        --config=./configs/stnet.txt \
        --save-dir=checkpoints
```

The library also ships quick training scripts under `scripts/train`; training can be launched with:
``` bash
bash scripts/train/train_stnet.sh
```

- Please adjust the `num_gpus` and `batch_size` settings in the `config` file to match the number of GPUs given by `CUDA_VISIBLE_DEVICES`.

## Library structure

### Code structure

```
configs/
  stnet.txt
  tsn.txt
  ...
dataset/
  youtube/
  kinetics/
datareader/
  feature_readeer.py
  kinetics_reader.py
  ...
metrics/
  kinetics/
  youtube8m/
  ...
models/
  stnet/
  tsn/
  ...
scripts/
  train/
  test/
train.py
test.py
infer.py
```

- `configs`: config file templates for each model
- `datareader`: readers for the Youtube-8M and Kinetics datasets
- `metrics`: evaluation scripts for the Youtube-8M and Kinetics datasets
- `models`: network definitions for each model
- `scripts`: quick training and evaluation scripts for each model
- `train.py`: one-command training script; launch training by specifying a model name and config file
- `test.py`: one-command evaluation script; launch evaluation by specifying a model name, config file, and model weights
- `infer.py`: one-command inference script; launch inference by specifying a model name, config file, model weights, and a file list to run on

## Model Zoo

- Models trained on the Youtube-8M dataset:

| Model | Batch Size | Hardware | cuDNN version | GAP | Download |
| :-------: | :---: | :---------: | :-----: | :----: | :----------: |
| Attention Cluster | 2048 | 8x P40 | 7.1 | 0.84 | [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_cluster_youtube8m.tar.gz) |
| Attention LSTM | 1024 | 8x P40 | 7.1 | 0.86 | [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_lstm_youtube8m.tar.gz) |
| NeXtVLAD | 160 | 4x P40 | 7.1 | 0.87 | [model](https://paddlemodels.bj.bcebos.com/video_classification/nextvlad_youtube8m.tar.gz) |

- Models trained on the Kinetics dataset:

| Model | Batch Size | Hardware | cuDNN version | Top-1 | Download |
| :-------: | :---: | :---------: | :----: | :----: | :----------: |
| StNet | 128 | 8x P40 | 5.1 | 0.69 | [model](https://paddlemodels.bj.bcebos.com/video_classification/stnet_kinetics.tar.gz) |
| TSN | 256 | 8x P40 | 7.1 | 0.67 | [model](https://paddlemodels.bj.bcebos.com/video_classification/tsn_kinetics.tar.gz) |

## References

- [Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification](https://arxiv.org/abs/1711.09550), Xiang Long, Chuang Gan, Gerard de Melo, Jiajun Wu, Xiao Liu, Shilei Wen
- [Beyond Short Snippets: Deep Networks for Video Classification](https://arxiv.org/abs/1503.08909), Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, George Toderici
- [NeXtVLAD: An Efficient Neural Network to Aggregate Frame-level Features for Large-scale Video Classification](https://arxiv.org/abs/1811.05014), Rongcheng Lin, Jing Xiao, Jianping Fan
- [StNet: Local and Global Spatial-Temporal Modeling for Human Action Recognition](https://arxiv.org/abs/1811.01549), Dongliang He, Zhichao Zhou, Chuang Gan, Fu Li, Xiao Liu, Yandong Li, Limin Wang, Shilei Wen
- [Temporal Segment Networks: Towards Good Practices for Deep Action Recognition](https://arxiv.org/abs/1608.00859), Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool

## Release notes

- 3/2019: Added the model library; released five video classification models: Attention Cluster, Attention LSTM, NeXtVLAD, StNet, and TSN.
# Data usage notes

- [Youtube-8M](#youtube-8m-dataset)
- [Kinetics](#kinetics-dataset)

## Youtube-8M dataset

We use the YouTube-8M dataset as updated in 2018. We use the official release and convert its TFRecord files into pickle files for use with PaddlePaddle. The official release provides frame-level and video-level features; only the frame-level features are needed here.

### Downloading the data

Please download the [training set](http://us.data.yt8m.org/2/frame/train/index.html) and [validation set](http://us.data.yt8m.org/2/frame/validate/index.html) via the official YouTube-8M links. Each link lists download addresses for 3844 files; you can also use the official [download script](https://research.google.com/youtube8m/download.html). After downloading, you will have 3844 training files and 3844 validation files (TFRecord format).

Let Code\_Root be the root directory of the video model library; enter dataset/youtube8m

    cd dataset/youtube8m

create tf/train and tf/val under youtube8m

    mkdir tf && cd tf
    mkdir train && mkdir val

and place the downloaded train and validate data there, respectively.

### Format conversion

For PaddlePaddle training, the downloaded TFRecord files must be converted offline into pickle format; please use the conversion script [dataset/youtube8m/tf2pkl.py](./youtube8m/tf2pkl.py).

Create pkl/train and pkl/val under dataset/youtube8m

    cd dataset/youtube8m
    mkdir pkl && cd pkl
    mkdir train && mkdir val

To convert the files (TFRecord -> pkl), enter dataset/youtube8m and run

    python tf2pkl.py ./tf/train ./pkl/train
    python tf2pkl.py ./tf/val ./pkl/val

to convert the train and validate splits respectively. tf2pkl.py takes two arguments: the directory of the source tf files and the directory for the converted pkl files.

Note: reading TFRecord files requires Tensorflow, so install Tensorflow first, or convert the data in an environment that has Tensorflow and then copy it into dataset/youtube8m/pkl. To avoid conflicts with the PaddlePaddle environment, we recommend finishing the conversion elsewhere and copying the data over.
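A rough sketch of what such a conversion step looks like is shown below. It assumes the official frame-level layout (an `id`/`labels` context plus `rgb`/`audio` byte features in a `SequenceExample`) and a per-file conversion; the repository's actual `tf2pkl.py` takes directories and may store a different pkl layout.

```python
# Illustrative TFRecord -> pickle conversion (TF1-era API); the real tf2pkl.py
# shipped with this repository may differ in layout and batching.
import pickle
import sys

import tensorflow as tf  # only needed for this one-off conversion


def convert(tf_file, pkl_file):
    records = []
    for raw in tf.python_io.tf_record_iterator(tf_file):
        ex = tf.train.SequenceExample()
        ex.ParseFromString(raw)
        vid = ex.context.feature['id'].bytes_list.value[0]
        labels = list(ex.context.feature['labels'].int64_list.value)
        # One quantized byte string per frame for each modality.
        rgb = [f.bytes_list.value[0]
               for f in ex.feature_lists.feature_list['rgb'].feature]
        audio = [f.bytes_list.value[0]
                 for f in ex.feature_lists.feature_list['audio'].feature]
        records.append((vid, labels, rgb, audio))
    with open(pkl_file, 'wb') as f:
        pickle.dump(records, f, protocol=2)


if __name__ == '__main__':
    convert(sys.argv[1], sys.argv[2])
```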
### Generating file lists

Enter dataset/youtube8m and run

    ls $Code_Root/dataset/youtube8m/pkl/train/* > train.list
    ls $Code_Root/dataset/youtube8m/pkl/val/* > val.list

This creates two files, train.list and val.list, under dataset/youtube8m; each line holds the absolute path of one pkl file.

## Kinetics dataset

Kinetics is a large-scale video action recognition dataset published by DeepMind, with two versions, Kinetics400 and Kinetics600. We use Kinetics400; the preprocessing steps are as follows.

### Downloading the mp4 videos

Create the directories under Code\_Root

    cd $Code_Root/dataset && mkdir kinetics
    cd kinetics && mkdir data_k400 && cd data_k400
    mkdir train_mp4 && mkdir val_mp4

ActivityNet provides an official download tool for Kinetics; see the [official repo](https://github.com/activitynet/ActivityNet/tree/master/Crawler/Kinetics) to download the Kinetics400 mp4 video collection. Download the Kinetics400 training and validation sets into dataset/kinetics/data\_k400/train\_mp4 and dataset/kinetics/data\_k400/val\_mp4, respectively.

### Preprocessing the mp4 files

To speed up data loading, the mp4 files are decoded into frames offline and packed into pickle files; the dataloader then reads each video's data from its pkl file (this costs extra storage). Each pkl packs (video-id, [frame1, frame2, ..., frameN], label).
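A sketch of the per-video packing step is given below. It uses OpenCV for decoding for brevity, whereas the repository's `video2pkl.py` drives `ffmpeg` and may encode frames differently; the function name is illustrative.

```python
# Illustrative packing of one video into the (video-id, [frame, ...], label)
# pkl layout described above; the real video2pkl.py uses ffmpeg.
import pickle

import cv2


def pack_video(video_id, mp4_path, label, pkl_path):
    frames = []
    cap = cv2.VideoCapture(mp4_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Store JPEG-compressed frames to keep the pkl reasonably small.
        frames.append(cv2.imencode('.jpg', frame)[1].tobytes())
    cap.release()
    with open(pkl_path, 'wb') as f:
        pickle.dump((video_id, frames, label), f, protocol=2)
```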
Create train\_pkl and val\_pkl under dataset/kinetics/data\_k400

    cd $Code_Root/dataset/kinetics/data_k400
    mkdir train_pkl && mkdir val_pkl

Enter the $Code\_Root/dataset/kinetics directory and use the video2pkl.py script for the conversion. First download the file lists of the [train](https://github.com/activitynet/ActivityNet/tree/master/Crawler/Kinetics/data/kinetics-400_train.csv) and [validation](https://github.com/activitynet/ActivityNet/tree/master/Crawler/Kinetics/data/kinetics-400_val.csv) sets.

First generate the dataset label file needed by the preprocessing

    python generate_label.py kinetics-400_train.csv kinetics400_label.txt

then run:

    python video2pkl.py kinetics-400_train.csv $Source_dir $Target_dir 8  # e.g. with 8 processes

- This script depends on `ffmpeg`; please install `ffmpeg` beforehand.

For the train data,

    Source_dir = $Code_Root/dataset/kinetics/data_k400/train_mp4
    Target_dir = $Code_Root/dataset/kinetics/data_k400/train_pkl

For the val data,

    Source_dir = $Code_Root/dataset/kinetics/data_k400/val_mp4
    Target_dir = $Code_Root/dataset/kinetics/data_k400/val_pkl

This decodes the mp4 files and saves them as pkl files.

### Generating the train and validation lists

    cd $Code_Root/dataset/kinetics
    ls $Code_Root/dataset/kinetics/data_k400/train_pkl/* > train.list
    ls $Code_Root/dataset/kinetics/data_k400/val_pkl/* > val.list

This generates the file lists; each line of train.list and val.list is the absolute path of one pkl file.
# Attention Cluster video classification model

---
## Contents

- [Model overview](#model-overview)
- [Data preparation](#data-preparation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Inference](#inference)
- [Reference papers](#reference-papers)

## Model overview

Attention Cluster was the best sequence model in the ActivityNet Kinetics Challenge 2017. It processes pre-extracted RGB, flow, and audio features with attention clusters equipped with a shifting operation; the structure is shown below.

<p align="center">
<img src="../../images/attention_cluster.png" height=300 width=400 hspace='10'/> <br />
Multimodal Attention Cluster with Shifting Operation
</p>

For details, see [Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification](https://arxiv.org/abs/1711.09550).
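As a rough numpy sketch of a single attention unit with the shifting operation (a learnable scale and shift followed by L2 normalization; all shapes and parameter values here are illustrative, not the paper's exact formulation):

```python
import numpy as np


def attention_unit(X, w, alpha, beta, eps=1e-8):
    """One attention unit over frame features X of shape (T, D)."""
    scores = X @ w                            # (T,): one score per frame
    a = np.exp(scores - scores.max())
    a /= a.sum()                              # softmax attention weights
    v = a @ X                                 # attention-weighted sum, (D,)
    v = alpha * v + beta                      # shifting operation
    return v / (np.linalg.norm(v) + eps)      # L2 normalization


# An attention cluster concatenates K independent units per modality.
T, D, K = 100, 1024, 32
X = np.random.randn(T, D).astype('float32')
cluster = np.concatenate([
    attention_unit(X, np.random.randn(D), 1.0, 0.1) for _ in range(K)
])
print(cluster.shape)  # (K * D,)
```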
## Data preparation

Attention Cluster uses the 2nd YouTube-8M dataset; see the [data notes](../../dataset/README.md) for download and preparation.

## Training

Once the data is ready, training can be started in either of two ways:

    python train.py --model-name=AttentionCluster \
            --config=./configs/attention_cluster.txt \
            --save-dir=checkpoints \
            --log-interval=10 \
            --valid-interval=1

    bash scripts/train/train_attention_cluster.sh

- You can download the released [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_cluster_youtube8m.tar.gz) and point `--resume` at the saved weights to fine-tune or otherwise build on it.

**Data reader:** The model reads the pre-extracted `rgb` and `audio` features of the Youtube-8M dataset. For each video, 100 frames are sampled uniformly; this value is set by the `seg_num` parameter in the config file.
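The uniform sampling can be pictured as follows (an illustrative sketch; the repository's reader may break ties or handle short videos differently):

```python
import numpy as np


def uniform_sample(num_frames, seg_num=100):
    """Pick seg_num indices spread evenly over a video of num_frames frames."""
    # One index from the middle of each of seg_num equal spans.
    spans = np.linspace(0, num_frames, seg_num + 1)
    return ((spans[:-1] + spans[1:]) / 2).astype(int)


print(uniform_sample(360)[:5])  # e.g. [ 1  5  9 12 16]
```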
**Model settings:** The main configurable parameters are `cluster_nums` and `seg_num`. With `cluster_nums=32` and `seg_num=100`, a single Nvidia Tesla P40 can run `batch_size=256`.

**Training strategy:**

* Adam optimizer with initial learning\_rate=0.001.
* No weight decay during training.
* Parameters are mainly initialized with MSRA initialization.

## Evaluation

The model can be evaluated in either of two ways:

    python test.py --model-name=AttentionCluster \
            --config=configs/attention_cluster.txt \
            --log-interval=1 \
            --weights=$PATH_TO_WEIGHTS

    bash scripts/test/test_attention_cluster.sh

- When evaluating with `scripts/test/test_attention_cluster.sh`, set the `--weights` argument in the script to the weights you want to evaluate.
- If `--weights` is not given, the script downloads and evaluates the released [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_cluster_youtube8m.tar.gz).

With the following settings:

| Parameter | Value |
| :---------: | :----: |
| cluster\_nums | 32 |
| seg\_num | 100 |
| batch\_size | 2048 |
| nums\_gpu | 7 |

the evaluation accuracy on the 2nd YouTube-8M dataset is:

| Metric | Accuracy |
| :---------: | :----: |
| Hit@1 | 0.87 |
| PERR | 0.78 |
| GAP | 0.84 |

## Inference

Run inference with:

    python infer.py --model-name=attention_cluster \
            --config=configs/attention_cluster.txt \
            --log-interval=1 \
            --weights=$PATH_TO_WEIGHTS \
            --filelist=$FILELIST

- Inference results are stored in `AttentionCluster_infer_result` in `pickle` format.
- If `--weights` is not given, the script downloads and uses the released [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_cluster_youtube8m.tar.gz).

## Reference papers

- [Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification](https://arxiv.org/abs/1711.09550), Xiang Long, Chuang Gan, Gerard de Melo, Jiajun Wu, Xiao Liu, Shilei Wen
# AttentionLSTM video classification model

---
## Contents

- [Model overview](#model-overview)
- [Data preparation](#data-preparation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Inference](#inference)
- [Reference papers](#reference-papers)

## Model overview

Recurrent neural networks (RNNs) are a standard tool for sequence data: they can model the temporal information across consecutive video frames, making them a common baseline for video classification. This model uses a bidirectional long short-term memory network (LSTM) to encode all frame features of a video in order. Unlike traditional methods that directly take the LSTM output at the last time step, it adds an attention layer: the hidden state at each time step receives an adaptive weight, and the final feature vector is their weighted linear combination. The paper implements a two-layer LSTM, whereas this code implements a bidirectional LSTM with attention; for the attention layer, see the [AttentionCluster](https://arxiv.org/abs/1711.09550) paper.

For details, see [Beyond Short Snippets: Deep Networks for Video Classification](https://arxiv.org/abs/1503.08909).
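In numpy terms, the attention pooling described above looks roughly like this (an illustrative sketch; the dimensions and the tanh parameterization are assumptions, not the repository's exact layer):

```python
import numpy as np


def attention_pool(H, W, w):
    """Weighted pooling of LSTM hidden states H, shape (T, D)."""
    scores = np.tanh(H @ W) @ w      # (T,): adaptive score per time step
    a = np.exp(scores - scores.max())
    a /= a.sum()                     # softmax over time
    return a @ H                     # final video feature, shape (D,)


T, D = 100, 2048                     # e.g. concatenated bi-LSTM states
H = np.random.randn(T, D).astype('float32')
feature = attention_pool(H, np.random.randn(D, 256), np.random.randn(256))
print(feature.shape)  # (2048,)
```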
## Data preparation

AttentionLSTM uses the 2nd YouTube-8M dataset; for the data, see the [data notes](../../dataset/README.md).

## Training

### Training from random initialization

Once the data is ready, training can be started in either of two ways:

    python train.py --model-name=AttentionLSTM \
            --config=./configs/attention_lstm.txt \
            --save-dir=checkpoints \
            --log-interval=10 \
            --valid-interval=1

    bash scripts/train/train_attention_lstm.sh

- The AttentionLSTM model was trained on 8 Nvidia Tesla P40 cards with a total batch size of 1024.

### Fine-tuning from a pretrained model

Download the released [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_lstm_youtube8m.tar.gz) first, then set `--resume` in the scripts above to the path of the saved pretrained model.

## Evaluation

The model can be evaluated in either of two ways:

    python test.py --model-name=AttentionLSTM \
            --config=configs/attention_lstm.txt \
            --log-interval=1 \
            --weights=$PATH_TO_WEIGHTS

    bash scripts/test/test_attention_lstm.sh

- When evaluating with `scripts/test/test_attention_lstm.sh`, set the `--weights` argument in the script to the weights you want to evaluate.
- If `--weights` is not given, the script downloads and evaluates the released [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_lstm_youtube8m.tar.gz).

Model parameters:

| Parameter | Value |
| :---------: | :----: |
| embedding\_size | 512 |
| lstm\_size | 1024 |
| drop\_rate | 0.5 |

Metrics:

| Metric | Accuracy |
| :---------: | :----: |
| Hit@1 | 0.8885 |
| PERR | 0.8012 |
| GAP | 0.8594 |

## Inference

Run inference with:

    python infer.py --model-name=attention_lstm \
            --config=configs/attention_lstm.txt \
            --log-interval=1 \
            --weights=$PATH_TO_WEIGHTS \
            --filelist=$FILELIST

- Inference results are stored in `AttentionLSTM_infer_result` in `pickle` format.
- If `--weights` is not given, the script downloads and uses the released [model](https://paddlemodels.bj.bcebos.com/video_classification/attention_lstm_youtube8m.tar.gz).

## Reference papers

- [Beyond Short Snippets: Deep Networks for Video Classification](https://arxiv.org/abs/1503.08909), Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, George Toderici
- [Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification](https://arxiv.org/abs/1711.09550), Xiang Long, Chuang Gan, Gerard de Melo, Jiajun Wu, Xiao Liu, Shilei Wen
@@ -147,4 +147,5 @@ class AttentionLSTM(ModelBase):
        ]

    def weights_info(self):
        return ('attention_lstm_youtube8m',
                'https://paddlemodels.bj.bcebos.com/video_classification/attention_lstm_youtube8m.tar.gz')
# NeXtVLAD video classification model

---
## Contents

- [Algorithm overview](#algorithm-overview)
- [Data preparation](#data-preparation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Inference](#inference)
- [Reference papers](#reference-papers)

## Algorithm overview

NeXtVLAD was the best single model in the 2nd YouTube-8M video understanding competition, reaching a GAP above 0.87 with fewer than 80M parameters. The model offers a way to transform and compress frame-level video features into a feature vector suitable for classifying large video files. Its basic idea builds on NetVLAD: first split the high-dimensional features into groups, then aggregate temporal information with an attention mechanism; this attains high accuracy while using far fewer parameters. For details, see [NeXtVLAD: An Efficient Neural Network to Aggregate Frame-level Features for Large-scale Video Classification](https://arxiv.org/abs/1811.05014).

This repository implements the paper's single-model structure, training on the 2nd YouTube-8M train split and testing on the val split.
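A compact numpy sketch of the grouped, attention-weighted VLAD aggregation at the heart of NeXtVLAD is given below. Shapes follow the paper's notation loosely, and all weights are random placeholders rather than trained parameters; this is an illustration of the aggregation, not the repository's implementation.

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


# M frames, N dims, expansion factor lam, G groups, K clusters.
M, N, lam, G, K = 100, 1024, 2, 8, 64
D = lam * N // G                          # per-group feature dimension
x = np.random.randn(M, N)
W_exp = np.random.randn(N, lam * N)       # expansion FC
W_att = np.random.randn(lam * N, G)       # per-group attention
W_asn = np.random.randn(lam * N, G * K)   # cluster soft-assignment
centers = np.random.randn(K, D)           # learnable cluster centers

xe = x @ W_exp                                 # (M, lam*N) expanded features
alpha = sigmoid(xe @ W_att)                    # (M, G) group attention
assign = softmax((xe @ W_asn).reshape(M, G, K), axis=-1)
assign = assign * alpha[:, :, None]            # attention-weighted assignment
xg = xe.reshape(M, G, D)                       # grouped features
# Aggregate residuals over frames and groups -> (K, D) VLAD descriptor.
vlad = (np.einsum('mgk,mgd->kd', assign, xg)
        - assign.sum(axis=(0, 1))[:, None] * centers)
vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-8  # intra-normalize
vlad = vlad.flatten()
vlad /= np.linalg.norm(vlad) + 1e-8            # final L2 normalization
print(vlad.shape)  # (K * D,) = (16384,)
```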
## 数据准备
NeXtVLAD模型使用2nd-Youtube-8M数据集, 数据下载及准备请参考[数据说明](../../dataset/README.md)
## 模型训练
### 随机初始化开始训练
在video目录下运行如下脚本即可
bash ./scripts/train/train_nextvlad.sh
### 使用预训练模型做finetune
请先将提供的预训练模型[model](https://paddlemodels.bj.bcebos.com/video_classification/nextvlad_youtube8m.tar.gz)下载到本地,并在上述脚本文件中添加--resume为所保存的预模型存放路径。
使用4卡Nvidia Tesla P40,总的batch size数是160。
### 训练策略
* 使用Adam优化器,初始learning\_rate=0.0002
* 每2,000,000个样本做一次学习率衰减,learning\_rate\_decay = 0.8
* 正则化使用l2\_weight\_decay = 1e-5
## 模型评估
用户可以下载的预训练模型参数,或者使用自己训练好的模型参数,请在./scripts/test/test\_nextvald.sh
文件中修改--weights参数为保存模型参数的目录。运行
bash ./scripts/test/test_nextvlad.sh
由于youtube-8m提供的数据中test数据集是没有ground truth标签的,所以这里使用validation数据集来做测试。
模型参数列表如下:
| 参数 | 取值 |
| :---------: | :----: |
| cluster\_size | 128 |
| hidden\_size | 2048 |
| groups | 8 |
| expansion | 2 |
| drop\_rate | 0.5 |
| gating\_reduction | 8 |
The evaluation metrics are as follows:

| Metric | Accuracy |
| :---------: | :----: |
| Hit@1 | 0.8960 |
| PERR | 0.8132 |
| GAP | 0.8709 |
## Model Inference
You can use the downloadable pre-trained parameters or parameters you trained yourself. Set the `--weights` argument in `./scripts/infer/infer_nextvald.sh` to the directory holding the model parameters, then run:
```bash
bash ./scripts/infer/infer_nextvald.sh
```
Inference results are saved to the `NEXTVLAD_infer_result` file in `pickle` format.
## References
- [NeXtVLAD: An Efficient Neural Network to Aggregate Frame-level Features for Large-scale Video Classification](https://arxiv.org/abs/1811.05014), Rongcheng Lin, Jing Xiao, Jianping Fan
# StNet Video Classification Model
---
## Table of Contents
- [Model Introduction](#model-introduction)
- [Data Preparation](#data-preparation)
- [Model Training](#model-training)
- [Model Evaluation](#model-evaluation)
- [Model Inference](#model-inference)
- [References](#references)
## Model Introduction
StNet is the backbone framework that won the ActivityNet Kinetics Challenge 2018. The model released here is StNet built on ResNet50; variants based on other backbones can be configured in the same way. StNet introduces the concept of a "super-image": performing 2D convolutions on super-images models the local spatial-temporal correlations in a video. A temporal modeling block then captures global spatial-temporal dependencies, and finally a temporal Xception block performs long-range temporal modeling on the extracted feature sequence. The overall architecture is shown below:
<p align="center">
<img src="../../images/StNet.png" height=300 width=500 hspace='10'/> <br />
StNet Framework Overview
</p>
For details, see the AAAI 2019 paper [StNet: Local and Global Spatial-Temporal Modeling for Human Action Recognition](https://arxiv.org/abs/1811.01549).
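The "super-image" idea amounts to folding the frames of each segment into the channel dimension so that a plain 2D CNN sees local temporal context. A shape-only NumPy illustration with arbitrary sizes:

```python
import numpy as np

# (batch, seg_num, seg_len, C, H, W): 7 segments of 5 RGB frames each;
# sizes here are arbitrary placeholders, not the repo's config values.
video = np.zeros((2, 7, 5, 3, 224, 224), dtype=np.float32)
n, seg_num, seg_len, c, h, w = video.shape

# Each segment becomes one "super-image" with seg_len*C channels, so a 2D
# convolution over it covers seg_len consecutive frames at once.
super_images = video.reshape(n * seg_num, seg_len * c, h, w)
print(super_images.shape)   # (14, 15, 224, 224)
```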
## Data Preparation
StNet is trained on the Kinetics-400 action recognition dataset released by DeepMind. See the [data documentation](../../dataset/README.md) for download and preparation instructions.
## Model Training
Once the data is ready, training can be started in either of the following two ways:
```bash
python train.py --model-name=STNET \
                --config=./configs/stnet.txt \
                --save-dir=checkpoints \
                --log-interval=10 \
                --valid-interval=1

bash scripts/train/train_stnet.sh
```
- You can download the released [model](https://paddlemodels.bj.bcebos.com/video_classification/stnet_kinetics.tar.gz) and pass its directory via `--resume` for fine-tuning and further development.

**Data reader:** the model reads `mp4` files from Kinetics-400. Each sample is split into `seg_num` segments, `seg_len` frames are sampled from each segment, and each frame is randomly augmented and then resized to `target_size`.
**Training strategy:**
* Momentum optimizer with momentum=0.9
* Weight decay coefficient of 1e-4
* The learning rate is decayed by a factor of 0.1 at 1/3 and 2/3 of the total number of training epochs
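A minimal sketch of this schedule in Fluid; `base_lr`, `steps_per_epoch` and `total_epochs` are placeholders, since the real values come from `configs/stnet.txt` and the dataset size:

```python
import paddle.fluid as fluid

base_lr = 0.01          # placeholder; the real value comes from the config
steps_per_epoch = 1000  # placeholder; depends on dataset and batch size
total_epochs = 60       # placeholder

boundaries = [steps_per_epoch * total_epochs // 3,
              steps_per_epoch * total_epochs * 2 // 3]
values = [base_lr, base_lr * 0.1, base_lr * 0.01]  # x0.1 at 1/3 and 2/3

optimizer = fluid.optimizer.Momentum(
    learning_rate=fluid.layers.piecewise_decay(boundaries=boundaries,
                                               values=values),
    momentum=0.9,
    regularization=fluid.regularizer.L2Decay(1e-4))  # weight decay 1e-4
# optimizer.minimize(loss) once the network's loss is defined
```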
**Note:**
* Train StNet with PaddlePaddle Fluid 1.3 + cudnn 5.1. With cudnn 7.0 or later, batch norm computes abnormal moving mean and moving average statistics; this issue is still being fixed. Pin the cudnn version when installing PaddlePaddle:
```bash
pip install paddlepaddle_gpu==1.3.0.post85
```
Alternatively, download the cuda8.0\_cudnn5\_avx\_mkl whl package from the PaddlePaddle [whl download page](http://paddlepaddle.org/documentation/docs/zh/1.3/beginners_guide/install/Tables.html/#permalink-4--whl-release). For detailed installation instructions, see the [installation guide](http://www.paddlepaddle.org/documentation/docs/zh/1.3/beginners_guide/install/index_cn.html).
## Model Evaluation
The model can be evaluated in either of the following two ways:
```bash
python test.py --model-name=STNET \
               --config=configs/stnet.txt \
               --log-interval=1 \
               --weights=$PATH_TO_WEIGHTS

bash scripts/test/test_stnet.sh
```
- When evaluating with `scripts/test/test_stnet.sh`, set the `--weights` argument in the script to the weights to be evaluated.
- If `--weights` is not specified, the script downloads the released [model](https://paddlemodels.bj.bcebos.com/video_classification/stnet_kinetics.tar.gz) and evaluates it.
With the following parameters:

| Parameter | Value |
| :---------: | :----: |
| seg\_num | 25 |
| seg\_len | 5 |
| target\_size | 256 |
the accuracy on the Kinetics-400 validation set is:

| Metric | Accuracy |
| :---------: | :----: |
| TOP\_1 | 0.69 |
## Model Inference
Run inference with:
```bash
python infer.py --model-name=stnet \
                --config=configs/stnet.txt \
                --log-interval=1 \
                --weights=$PATH_TO_WEIGHTS \
                --filelist=$FILELIST
```
- Inference results are stored in `STNET_infer_result` in `pickle` format.
- If `--weights` is not specified, the script downloads the released [model](https://paddlemodels.bj.bcebos.com/video_classification/stnet_kinetics.tar.gz) and runs inference with it.
## References
- [StNet: Local and Global Spatial-Temporal Modeling for Human Action Recognition](https://arxiv.org/abs/1811.01549), Dongliang He, Zhichao Zhou, Chuang Gan, Fu Li, Xiao Liu, Yandong Li, Limin Wang, Shilei Wen
The corresponding change in the StNet model class registers the released weights:

```diff
@@ -128,6 +128,10 @@ class STNET(ModelBase):
     def pretrain_info(self):
         return ('ResNet50_pretrained', 'https://paddlemodels.bj.bcebos.com/video_classification/ResNet50_pretrained.tar.gz')

+    def weights_info(self):
+        return ('stnet_kinetics',
+                'https://paddlemodels.bj.bcebos.com/video_classification/stnet_kinetics.tar.gz')
+
     def load_pretrain_params(self, exe, pretrain, prog, place):
         def is_parameter(var):
             if isinstance(var, fluid.framework.Parameter):
```
# TSN Video Classification Model
---
## Table of Contents
- [Model Introduction](#model-introduction)
- [Data Preparation](#data-preparation)
- [Model Training](#model-training)
- [Model Evaluation](#model-evaluation)
- [Model Inference](#model-inference)
- [References](#references)
## Model Introduction
Temporal Segment Network (TSN) is a classic 2D-CNN solution for video classification. It targets recognition of long-range actions in videos, replacing dense frame sampling with sparse sampling, which captures global video information while removing redundancy and reducing computation. The per-frame features are averaged into a video-level feature used for classification. The implementation here is the single-stream RGB variant of TSN with a ResNet-50 backbone.
For details, see the ECCV 2016 paper [Temporal Segment Networks: Towards Good Practices for Deep Action Recognition](https://arxiv.org/abs/1608.00859).
## Data Preparation
TSN is trained on the Kinetics-400 action recognition dataset released by DeepMind. See the [data documentation](../../dataset/README.md) for download and preparation instructions.
## Model Training
Once the data is ready, training can be started in either of the following two ways:
```bash
python train.py --model-name=TSN \
                --config=./configs/tsn.txt \
                --save-dir=checkpoints \
                --log-interval=10 \
                --valid-interval=1

bash scripts/train/train_tsn.sh
```
- You can download the released [model](https://paddlemodels.bj.bcebos.com/video_classification/tsn_kinetics.tar.gz) and pass its directory via `--resume` for fine-tuning and further development.

**Data reader:** the model reads `mp4` files from Kinetics-400. Each sample is split into `seg_num` segments, one frame is sampled from each segment, and each frame is randomly augmented and then resized to `target_size`; a sampling sketch follows below.
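The sparse sampling described above can be sketched as follows; `sample_frame_indices` is an illustrative helper, not the repo's actual reader code:

```python
import random

def sample_frame_indices(num_frames, seg_num, training=True):
    """Pick one frame index per equal-length segment (TSN-style sampling)."""
    seg_size = num_frames // seg_num
    indices = []
    for i in range(seg_num):
        start = i * seg_size
        if training and seg_size > 0:
            indices.append(start + random.randrange(seg_size))  # random frame
        else:
            indices.append(min(start + seg_size // 2, num_frames - 1))  # center
    return indices

# e.g. a 300-frame video with seg_num=3 yields one index from each third
print(sample_frame_indices(300, 3, training=False))  # [50, 150, 250]
```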
**Training strategy:**
* Momentum optimizer with momentum=0.9
* Weight decay coefficient of 1e-4
* The learning rate is decayed by a factor of 0.1 at 1/3 and 2/3 of the total number of training epochs
## Model Evaluation
The model can be evaluated in either of the following two ways:
```bash
python test.py --model-name=TSN \
               --config=configs/tsn.txt \
               --log-interval=1 \
               --weights=$PATH_TO_WEIGHTS

bash scripts/test/test_tsn.sh
```
- When evaluating with `scripts/test/test_tsn.sh`, set the `--weights` argument in the script to the weights to be evaluated.
- If `--weights` is not specified, the script downloads the released [model](https://paddlemodels.bj.bcebos.com/video_classification/tsn_kinetics.tar.gz) and evaluates it.
With the following parameters, the accuracy on the Kinetics-400 validation set is:

| seg\_num | target\_size | Top-1 |
| :------: | :----------: | :----: |
| 3 | 224 | 0.66 |
| 7 | 224 | 0.67 |
## Model Inference
Run inference with:
```bash
python infer.py --model-name=TSN \
                --config=configs/tsn.txt \
                --log-interval=1 \
                --weights=$PATH_TO_WEIGHTS \
                --filelist=$FILELIST
```
- Inference results are stored in `TSN_infer_result` in `pickle` format.
- If `--weights` is not specified, the script downloads the released [model](https://paddlemodels.bj.bcebos.com/video_classification/tsn_kinetics.tar.gz) and runs inference with it.
## References
- [Temporal Segment Networks: Towards Good Practices for Deep Action Recognition](https://arxiv.org/abs/1608.00859), Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool
The corresponding change in the TSN model class registers the released weights:

```diff
@@ -132,6 +132,10 @@ class TSN(ModelBase):
     def pretrain_info(self):
         return ('ResNet50_pretrained', 'https://paddlemodels.bj.bcebos.com/video_classification/ResNet50_pretrained.tar.gz')

+    def weights_info(self):
+        return ('tsn_kinetics',
+                'https://paddlemodels.bj.bcebos.com/video_classification/tsn_kinetics.tar.gz')
+
     def load_pretrain_params(self, exe, pretrain, prog, place):
         def is_parameter(var):
             return isinstance(var, fluid.framework.Parameter) and (not ("fc_0" in var.name))
```
Beyond the video models, this pull request also reworks the CTR-DNN example to load persistable checkpoints at inference time:

```diff
@@ -61,14 +61,14 @@ def infer():
     startup_program = fluid.framework.Program()
     test_program = fluid.framework.Program()
     with fluid.framework.program_guard(test_program, startup_program):
-        loss, data_list, auc_var, batch_auc_var = ctr_dnn_model(args.embedding_size, args.sparse_feature_dim)
+        loss, auc_var, batch_auc_var, _, data_list = ctr_dnn_model(args.embedding_size, args.sparse_feature_dim, False)

     exe = fluid.Executor(place)

     feeder = fluid.DataFeeder(feed_list=data_list, place=place)

-    with fluid.scope_guard(inference_scope):
-        [inference_program, _, fetch_targets] = fluid.io.load_inference_model(args.model_path, exe)
+    fluid.io.load_persistables(executor=exe, dirname=args.model_path,
+                               main_program=fluid.default_main_program())

     def set_zero(var_name):
         param = inference_scope.var(var_name).get_tensor()
@@ -80,9 +80,9 @@ def infer():
         set_zero(name)

     for batch_id, data in enumerate(test_reader()):
-        loss_val, auc_val = exe.run(inference_program,
+        loss_val, auc_val = exe.run(test_program,
                                     feed=feeder.feed(data),
-                                    fetch_list=fetch_targets)
+                                    fetch_list=[loss, auc_var])
         if batch_id % 100 == 0:
             logger.info("TEST --> batch: {} loss: {} auc: {}".format(batch_id, loss_val/args.batch_size, auc_val))
```
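The essential contract behind this switch, sketched with placeholder paths, is that `load_persistables` must be given a program with the same structure as the one `save_persistables` was called with:

```python
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
main_prog = fluid.default_main_program()
# ... build the network in main_prog and run the startup program first ...

# Training side: write all persistable variables of the program.
fluid.io.save_persistables(executor=exe, dirname='./ckpt/pass-0',
                           main_program=main_prog)

# Inference side: rebuild the same program, then read the variables back.
fluid.io.load_persistables(executor=exe, dirname='./ckpt/pass-0',
                           main_program=main_prog)
```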
In the network definition, `ctr_dnn_model` gains a `use_py_reader` switch so the same graph can be built for py_reader-based training or feeder-based inference, and it now also returns the input variables:

```diff
@@ -104,7 +104,7 @@ def ctr_deepfm_model(factor_size, sparse_feature_dim, dense_feature_dim, sparse_
     return avg_cost, auc_var, batch_auc_var, py_reader

-def ctr_dnn_model(embedding_size, sparse_feature_dim):
+def ctr_dnn_model(embedding_size, sparse_feature_dim, use_py_reader=True):

     def embedding_layer(input):
         return fluid.layers.embedding(
@@ -126,13 +126,15 @@ def ctr_dnn_model(embedding_size, sparse_feature_dim):
     label = fluid.layers.data(name='label', shape=[1], dtype='int64')

-    datas = [dense_input] + sparse_input_ids + [label]
+    words = [dense_input] + sparse_input_ids + [label]

-    py_reader = fluid.layers.create_py_reader_by_data(capacity=64,
-                                                      feed_list=datas,
-                                                      name='py_reader',
-                                                      use_double_buffer=True)
-    words = fluid.layers.read_file(py_reader)
+    py_reader = None
+    if use_py_reader:
+        py_reader = fluid.layers.create_py_reader_by_data(capacity=64,
+                                                          feed_list=words,
+                                                          name='py_reader',
+                                                          use_double_buffer=True)
+        words = fluid.layers.read_file(py_reader)

     sparse_embed_seq = list(map(embedding_layer, words[1:-1]))
     concated = fluid.layers.concat(sparse_embed_seq + words[0:1], axis=1)
@@ -156,4 +158,4 @@ def ctr_dnn_model(embedding_size, sparse_feature_dim):
     auc_var, batch_auc_var, auc_states = \
         fluid.layers.auc(input=predict, label=words[-1], num_thresholds=2 ** 12, slide_steps=20)

-    return avg_cost, auc_var, batch_auc_var, py_reader
+    return avg_cost, auc_var, batch_auc_var, py_reader, words
```
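The two call sites enabled by this signature change look like the following (taken from the diffs above; `args` is the scripts' parsed CLI arguments):

```python
# Training: keep the py_reader input pipeline (use_py_reader defaults to True).
loss, auc_var, batch_auc_var, py_reader, _ = ctr_dnn_model(
    args.embedding_size, args.sparse_feature_dim)

# Inference: feed batches directly through a DataFeeder, so request the raw
# input variables instead of a py_reader.
loss, auc_var, batch_auc_var, _, data_list = ctr_dnn_model(
    args.embedding_size, args.sparse_feature_dim, False)
```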
The dataset reader's `test` and `infer` creators are updated to match the `_reader_creator` signature, passing an explicit trainer count and id:

```diff
@@ -46,7 +46,7 @@ class CriteoDataset(Dataset):
         return self._reader_creator(file_list, True, trainer_num, trainer_id)

     def test(self, file_list):
-        return self._reader_creator(file_list, False, -1)
+        return self._reader_creator(file_list, False, 1, 0)

     def infer(self, file_list):
-        return self._reader_creator(file_list, False, -1)
+        return self._reader_creator(file_list, False, 1, 0)
```
Finally, the training loop saves persistable variables instead of an inference model:

```diff
@@ -174,7 +174,8 @@ def train_loop(args, train_program, py_reader, loss, auc_var, batch_auc_var,
                 if batch_id % 1000 == 0 and batch_id != 0:
                     model_dir = args.model_output_dir + '/batch-' + str(batch_id)
                     if args.trainer_id == 0:
-                        fluid.io.save_inference_model(model_dir, data_name_list, [loss, auc_var], exe)
+                        fluid.io.save_persistables(executor=exe, dirname=model_dir,
+                                                   main_program=fluid.default_main_program())
                 batch_id += 1
         except fluid.core.EOFException:
             py_reader.reset()
@@ -184,7 +185,8 @@ def train_loop(args, train_program, py_reader, loss, auc_var, batch_auc_var,
         model_dir = args.model_output_dir + '/pass-' + str(pass_id)
         if args.trainer_id == 0:
-            fluid.io.save_inference_model(model_dir, data_name_list, [loss, auc_var], exe)
+            fluid.io.save_persistables(executor=exe, dirname=model_dir,
+                                       main_program=fluid.default_main_program())

     # only for ce
     if args.enable_ce:
@@ -206,7 +208,7 @@ def train():
     if not os.path.isdir(args.model_output_dir):
         os.mkdir(args.model_output_dir)

-    loss, auc_var, batch_auc_var, py_reader = ctr_dnn_model(args.embedding_size, args.sparse_feature_dim)
+    loss, auc_var, batch_auc_var, py_reader, _ = ctr_dnn_model(args.embedding_size, args.sparse_feature_dim)
     optimizer = fluid.optimizer.Adam(learning_rate=1e-4)
     optimizer.minimize(loss)

     if args.cloud_train:
```