From 37ce9ac173bbe47d65344a486b1aab02d3b0569d Mon Sep 17 00:00:00 2001
From: dengkaipeng
Date: Sat, 2 Mar 2019 18:17:05 +0800
Subject: [PATCH] move paper out of table.

---
 fluid/PaddleCV/video/README.md              | 18 +++++++++++++-----
 fluid/PaddleCV/video/models/stnet/README.md |  2 +-
 fluid/PaddleCV/video/models/tsn/README.md   |  2 +-
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/fluid/PaddleCV/video/README.md b/fluid/PaddleCV/video/README.md
index e554979a..cfd96689 100644
--- a/fluid/PaddleCV/video/README.md
+++ b/fluid/PaddleCV/video/README.md
@@ -6,11 +6,11 @@

 | 模型 | 类别 | 描述 |
 | :--------------- | :--------: | :------------: |
-| [Attention Cluster](./models/attention_cluster/README.md) [[论文](https://arxiv.org/abs/1711.09550)] | 视频分类| CVPR'18提出的视频多模态特征注意力聚簇融合方法 |
-| [Attention LSTM](./models/attention_lstm/README.md) [[论文](https://arxiv.org/abs/1503.08909)] | 视频分类| 常用模型,速度快精度高 |
-| [NeXtVLAD](./models/nextvlad/README.md) [[论文](https://arxiv.org/abs/1811.05014)] | 视频分类| 2nd-Youtube-8M最优单模型 |
-| [StNet](./models/stnet/README.md) [[论文](https://arxiv.org/abs/1811.01549)] | 视频分类| AAAI'19提出的视频联合时空建模方法 |
-| [TSN](./models/tsn/README.md) [[论文](https://arxiv.org/abs/1608.00859)] | 视频分类| ECCV'16提出的基于2D-CNN经典解决方案 |
+| [Attention Cluster](./models/attention_cluster/README.md) | 视频分类| CVPR'18提出的视频多模态特征注意力聚簇融合方法 |
+| [Attention LSTM](./models/attention_lstm/README.md) | 视频分类| 常用模型,速度快精度高 |
+| [NeXtVLAD](./models/nextvlad/README.md) | 视频分类| 2nd-Youtube-8M最优单模型 |
+| [StNet](./models/stnet/README.md) | 视频分类| AAAI'19提出的视频联合时空建模方法 |
+| [TSN](./models/tsn/README.md) | 视频分类| ECCV'16提出的基于2D-CNN经典解决方案 |

 ## 主要特点

@@ -75,6 +75,14 @@ bash scripts/train/train_stnet.sh
 | StNet | 128 | 8卡P40 | 5.1 | 0.69 | [model](https://paddlemodels.bj.bcebos.com/video_classification/stnet_kinetics.tar.gz) |
 | TSN | 256 | 8卡P40 | 7.1 | 0.67 | [model](https://paddlemodels.bj.bcebos.com/video_classification/tsn_kinetics.tar.gz) |

+## 参考文献
+
+- [Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification](https://arxiv.org/abs/1711.09550), Xiang Long, Chuang Gan, Gerard de Melo, Jiajun Wu, Xiao Liu, Shilei Wen
+- [Beyond Short Snippets: Deep Networks for Video Classification](https://arxiv.org/abs/1503.08909) Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, George Toderici
+- [NeXtVLAD: An Efficient Neural Network to Aggregate Frame-level Features for Large-scale Video Classification](https://arxiv.org/abs/1811.05014), Rongcheng Lin, Jing Xiao, Jianping Fan
+- [StNet:Local and Global Spatial-Temporal Modeling for Human Action Recognition](https://arxiv.org/abs/1811.01549), Dongliang He, Zhichao Zhou, Chuang Gan, Fu Li, Xiao Liu, Yandong Li, Limin Wang, Shilei Wen
+- [Temporal Segment Networks: Towards Good Practices for Deep Action Recognition](https://arxiv.org/abs/1608.00859), Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool
+
 ## 版本更新

 - 3/2019: 新增模型库,发布Attention Cluster,Attention LSTM,NeXtVLAD,StNet,TSN五个视频分类模型。

diff --git a/fluid/PaddleCV/video/models/stnet/README.md b/fluid/PaddleCV/video/models/stnet/README.md
index f49dfd9c..2b849ae7 100644
--- a/fluid/PaddleCV/video/models/stnet/README.md
+++ b/fluid/PaddleCV/video/models/stnet/README.md
@@ -105,5 +105,5 @@ StNet的训练数据采用由DeepMind公布的Kinetics-400动作识别数据集

 ## 参考论文

-[StNet:Local and Global Spatial-Temporal Modeling for Human Action Recognition](https://arxiv.org/abs/1811.01549), Dongliang He, Zhichao Zhou, Chuang Gan, Fu Li, Xiao Liu, Yandong Li, Limin Wang, Shilei Wen
+- [StNet:Local and Global Spatial-Temporal Modeling for Human Action Recognition](https://arxiv.org/abs/1811.01549), Dongliang He, Zhichao Zhou, Chuang Gan, Fu Li, Xiao Liu, Yandong Li, Limin Wang, Shilei Wen

diff --git a/fluid/PaddleCV/video/models/tsn/README.md b/fluid/PaddleCV/video/models/tsn/README.md
index a21e0488..6b030d9b 100644
--- a/fluid/PaddleCV/video/models/tsn/README.md
+++ b/fluid/PaddleCV/video/models/tsn/README.md
@@ -81,5 +81,5 @@ TSN的训练数据采用由DeepMind公布的Kinetics-400动作识别数据集。

 ## 参考论文

-- [StNet:Local and Global Spatial-Temporal Modeling for Human Action Recognition](https://arxiv.org/abs/1608.00859), Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool
+- [Temporal Segment Networks: Towards Good Practices for Deep Action Recognition](https://arxiv.org/abs/1608.00859), Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, Luc Van Gool
--
GitLab